For those of you who aren’t familiar with the thought experiment:
Mary is a brilliant scientist who is, for whatever reason, forced to investigate the world from a black and white room via a black and white television monitor. She specializes in the neurophysiology of vision and acquires, let us suppose, all the physical information there is to obtain about what goes on when we see ripe tomatoes, or the sky, and use terms like ‘red’, ‘blue’, and so on. She discovers, for example, just which wavelength combinations from the sky stimulate the retina, and exactly how this produces via the central nervous system the contraction of the vocal cords and expulsion of air from the lungs that results in the uttering of the sentence ‘The sky is blue’. […] What will happen when Mary is released from her black and white room or is given a color television monitor? Will she learn anything or not?
There seems to be a paradox. On one hand, Mary knows everything there is to know about seeing color; on the other, our intuition says she'll learn something new once released into the world of color. How can she know everything and then subsequently learn something new? The thought experiment misleads us into this conclusion because our ideas of knowledge and learning are fuzzy.
What does it mean for humans to know something? In a broad sense, we know a system when we have an internal model of that system that enables us to predict its behavior for any set of inputs, sort of like an accurate city map that lets us predict where everything in the city is. We learn something about a system when we get new information that causes us to modify our internal model to maintain its accuracy. If we learned that a new road was built in our city, we'd have to add it to our city map to keep the map accurate. We know everything about a system when no new information could necessitate modifying our internal model to improve its accuracy.
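To make this definition concrete, here is a toy sketch of the city-map analogy in code. It is purely illustrative, and every name in it is invented for the example: "knowing" is having a model whose predictions match the system, and "learning" is updating the model when an observation contradicts it.

```python
# Toy illustration of "knowing a system": an internal model that
# predicts the system's output for any input we care about.
# All names here are invented for this example.

# The real system: a city where each address maps to what's there.
city = {"1 Main St": "library", "2 Main St": "cafe"}

# Our internal model: a map of the city. It omits details (flies on
# sidewalks) yet still answers every question we ask of it.
city_map = dict(city)

def predict(model, address):
    """Use the internal model to predict what's at an address."""
    return model.get(address, "unknown")

# "Knowing everything": no observation forces a model update.
assert all(predict(city_map, a) == city[a] for a in city)

# "Learning": a new road is built, so an observation contradicts
# the model, and we update the model to restore its accuracy.
city["3 Main St"] = "new road"
if predict(city_map, "3 Main St") != city["3 Main St"]:
    city_map["3 Main St"] = city["3 Main St"]  # modify the model

assert predict(city_map, "3 Main St") == "new road"
```

The point of the sketch is only that "learning" is defined relative to the model's predictive accuracy: information that would not change any prediction (the fly on the sidewalk) does not count as something the map needed.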
But like city maps, our internal models are only representations of reality, not exact replicas. They're inherently missing details that exist in reality. A city map wouldn't include a representation of a fly on a sidewalk, and leaving that detail out doesn't make it a worse map of the city. Similarly, our internal models of systems in the world leave certain details out (evolution designed them that way). But it's still meaningful to say we completely know a system even if our models don't represent every detail of reality.
Mary is said to know everything about visual processing. Her knowledge, however, is based on an internal model that doesn't represent every detail of reality. For instance, her model may not represent how the nervous system works at the quantum scale. Despite this, we can still say she knows everything about visual processing: learning about the nervous system at the quantum scale wouldn't require her to modify her visual processing model, because including representations of quantum details wouldn't affect the accuracy of her predictions.
When Mary first encounters a red rose, she'll experience redness for the first time and feel like she has learned something new. Seeing the rose activates a subconscious process that creates a representation of reality and makes that representation available for conscious attention. She has a new experience when she focuses her conscious attention on this subconsciously produced representation of the color red.
How does this new representation of reality, the experience of redness, figure into Mary's understanding of visual processing? The answer is: it doesn't. The new experience doesn't supply new information that requires her to update her visual processing model in order to improve it. In a meaningful sense, Mary didn't learn anything new about visual processing. Her existing models of her brain and of visual processing could have predicted exactly how her brain would change upon seeing a red rose for the first time, without incorporating a representation of her new redness experience into those models.
Mary cannot activate the subconscious process that generates the redness experience simply by having an accurate conscious model of that process. But activating the process and experiencing what it produces isn't a necessary condition for understanding it, just as you can completely know a system with a model that doesn't represent every detail of reality.
When Mary sees a red rose for the first time, she might feel as if she’s learned something new, but her understanding of reality will remain unchanged.