As The Toast searches for its one true Gal Scientist, we will be running a ton of wonderful one-off pieces by female scientists of all shapes and sizes and fields and education levels, which we are sure you will enjoy. They’ll live here, so you can always find them. Most recently: Zombie Tree Swallows at Dawn.
Preface: The Amedi lab, whose 2012 work I’m profiling here, struck again just last week with a similar finding about the extrastriate body area. I hope, if you have already been struck with fresh wonder by their latest, that this post serves to deepen your interest. Kind of like when I discovered RuPaul’s Drag Race in its second season, and quivered with anticipation when I realized there was a whole first season that was still new to me!
In the “I Fucking Love Science” era, my chosen profession has been fetishized as a beacon of objectivity. The internet was rife with bloodlust at the prospect of Bill Nye using reason and logic and evidence to summarily crush creationism not too long ago. “Show me the data” seems to be the rallying cry of an internet hopped up on TED talks, frothing with wonder and demanding clarity.
But data don’t talk. They are a pain in the ass. It turns out, scientists must use their fallible, human storytelling abilities to decide what they mean, and in fact, to decide what to look for in the first place. Sometimes there is consensus in the community (as there absolutely is with things like climate change or evolution), but when you get down to the details, there often isn’t. Scientists find themselves spinning tales, reaching for the narrative that best explains their data.
So. Storytime. Let’s have a kiki. Topic? The visual word form area, affectionately known as the VWFA. It’s a part of the brain that’s particularly active when you’re reading. In fact, you’re using it right now. Across fonts, across languages, across systems of writing, injuring this area would take away your ability to read or even recognize words.
But what’s this area doing when you’re reading? Why does it like words so much? Neuroscientists, modern-day phrenologists that we are, have spent a lot of time trying to pinpoint which chunks of brain matter respond to what, and what those responses are doing for us. What we do is (this is so dumb you’re not going to believe it) show you (or an animal) a bunch of pictures, measure the response in some part of the brain, and whichever kind of picture the neurons in that area like best, wins. We then say that area is “selective” for that kind of picture. So, even assuming selectivity is a good measure of specialization, we still don’t know why an area might come to have this selectivity, or, dare we say, specialization. But because of how the visual system is laid out, you can find brain areas with all kinds of weird preferences. The fusiform face area (FFA) likes, you guessed it, faces. Parahippocampal place area: places. Extrastriate body area: body parts. It’s weird to see perfectly rational sciencey people talking about the representation of things like faces and houses being physically IN the brain, as though the items in question had been beamed there via Wonkavision. And yet, our own best thinking got us here.
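If you like your logic spelled out, the "whichever picture wins" procedure above boils down to something embarrassingly simple. Here's a toy sketch in Python: the numbers are invented, not real fMRI data, and the selectivity index shown is just one common way of contrasting a preferred category against the runner-up.

```python
# Toy sketch of the "selectivity" logic described above.
# The response values are made up; real studies measure fMRI or
# electrode responses while showing pictures from each category.

responses = {
    "faces":  [2.1, 1.9, 2.3],   # hypothetical responses of one brain area
    "places": [0.7, 0.9, 0.8],
    "words":  [0.5, 0.6, 0.4],
}

# Average the response to each kind of picture; the biggest average "wins".
means = {cat: sum(r) / len(r) for cat, r in responses.items()}
preferred = max(means, key=means.get)

# One common selectivity index: (preferred - next best) / (preferred + next best).
# Near 0 means no real preference; near 1 means a strong one.
others = sorted((m for c, m in means.items() if c != preferred), reverse=True)
selectivity = (means[preferred] - others[0]) / (means[preferred] + others[0])

print(preferred)               # this area would be called face-selective
print(round(selectivity, 2))
```

With these made-up numbers the area prefers faces by a wide margin, so we'd crown it "face-selective" and move on, which is exactly the leap the rest of this piece pokes at.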
There has been much ado about what this kind of specialization might mean. Is the FFA innately wired for face perception? Or does its preference for faces reflect evolutionary pressure for us to become experts at reading facial expressions? The VWFA’s status as a specialist is even more tenuous. After all, faces are something we orient to as babies, but words, especially in visual form (reading), come later. It seems, if anything, less likely that we’d have a specialized area for a skill that presumably arose much later, developmentally as well as evolutionarily. But it’s starting to appear that, once you expand your definition of reading, it may be just as fundamental. That, like Matilda, like Belle, like yours truly, the VWFA is a born bookworm, and not merely a historical by-product of our expertise at reading letters with our eyes. And what really convinced me of this, oddly enough, was what it does in the brains of people who have never experienced any visual input at all.
Amir Amedi’s lab at the Hebrew University of Jerusalem is doing amazing work with patients who have been blind since birth, helping them learn to use a sensory substitution device called the vOICe (as in the video above). Using a webcam, it scans the visual scene and translates the visual information in three dimensions (left-to-right, top-to-bottom, and light-to-dark) into three components of sound (time, pitch, and loudness, respectively). Blind patients can use these “soundscapes” to perceive visual information present in the world around them. You can even experience this for yourself by downloading vOICe to your Android or just letting the smooth sounds of the visual forms of the alphabet wash over you.
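To make that three-way mapping concrete, here's a toy sketch of a vOICe-style conversion (this is not the actual vOICe algorithm, just an illustration of the mapping described above): each image column becomes a slice of time, the row a pixel sits in sets its pitch, and its brightness sets its loudness. The image, sample rate, and frequency range are all invented for the example.

```python
import math

# Toy vOICe-style mapping: columns -> time, rows -> pitch, brightness -> loudness.
# All parameters here are made up for illustration.

SAMPLE_RATE = 8000        # audio samples per second
COLUMN_SECONDS = 0.05     # how long each image column sounds

def column_to_samples(column, f_lo=200.0, f_hi=2000.0):
    """Mix one sine wave per pixel row: top rows get high pitch,
    bright pixels get loud; dark pixels (0.0) contribute silence."""
    n_rows = len(column)
    n_samples = int(SAMPLE_RATE * COLUMN_SECONDS)
    samples = [0.0] * n_samples
    for row, brightness in enumerate(column):  # brightness in [0, 1]
        # Top of the image (row 0) maps to the highest frequency.
        freq = f_hi - (f_hi - f_lo) * row / max(n_rows - 1, 1)
        for i in range(n_samples):
            samples[i] += brightness * math.sin(2 * math.pi * freq * i / SAMPLE_RATE)
    return samples

def image_to_soundscape(image):
    """Scan columns left to right, concatenating their audio (the time axis)."""
    n_rows, n_cols = len(image), len(image[0])
    audio = []
    for col in range(n_cols):
        column = [image[row][col] for row in range(n_rows)]
        audio.extend(column_to_samples(column))
    return audio

# A tiny 3x4 "image": a bright diagonal stroke, descending left to right,
# which would sound like a falling pitch sweep.
image = [
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 1.0],
]
audio = image_to_soundscape(image)
print(len(audio))  # 4 columns x 0.05 s x 8000 samples/s = 1600 samples
```

The point of the sketch is just that the mapping is lossless enough to carry shape: a diagonal stroke becomes a falling pitch sweep, and with practice a listener can learn to hear shapes like letters.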
As if this technology weren’t amazing enough, this 2012 study by Ella Striem-Amit and colleagues uses it to shed light on what it might mean for the VWFA to be “wired” for reading. We know that the VWFA responds more to letters than to other kinds of visual stimulation. In blind Braille readers, it prefers Braille words over other kinds of tactile stimulation. In Striem-Amit’s study, blind patients learned to identify letters by listening to soundscapes, and once they’d learned this, the VWFA preferred soundscape letters, as well. But, counterintuitively, it showed no preference for spoken letters. It seems to really become active only when there is some kind of tangible shape in the world that points us to that letter. After all, when someone is speaking to you, chances are you aren’t picturing their words written out before your eyes. But when you read a written letter or a Braille one, or listen to a soundscape of a letter, you’re using your eyes or fingers or ears to get at a shape that communicates the concept of the letter (if you’re not blind, probably by visualizing). Thanks to this study, we know that the VWFA can process information about the shape of words from seemingly any sensory apparatus.
Now, that’s not so weird. If you lose function in one part of the brain, your remaining faculties can sometimes sense that there’s some prime real estate not being used, and that chunk of brain matter can take on new information processing roles. For instance, if you lose a finger, the part of your brain that used to process sensation in that finger becomes extra computing space for the sensations of pushy neighboring fingers. The brain is incredibly plastic, and when part of it isn’t being used, it can get repurposed.
What’s crazy, then, is NOT that the VWFA can take Braille or soundscape inputs, but that it has stubbornly saved a seat for the shape of words, even if they’re not read visually. To find evidence of the Labrador-like depths of its loyalty to word shapes, Striem-Amit and her team also present a case study. They managed to find one participant who, blind since birth, had learned Braille but had spunkily refused to ever waste her time learning the shapes of letters she’d never see. They brought her into the lab, and sure enough, her VWFA lit up when she read Braille letters, but called bullshit when presented with non-letters. It also sat there inert when they played meaningless soundscapes. Then, they taught her to recognize the shapes of written letters by feel, and trained her to use the vOICe to identify those letters using soundscapes. Now that these letters had meaning for her, the VWFA managed to pick them out just as well as Braille letters. Not only is this a convincing magic trick, it is also amazing that this area hadn’t been colonized by some other function after all the years she spent NOT knowing the shapes of letters.
It seems silly, all of a sudden, to debate whether the VWFA really is specialized for words. In fact, what’s more questionable is whether we can rightly call it a visual area. Given the historical contingencies of language, it is hugely surprising that, in questioning the “word” in “visual word form area,” we ended up disproving the “visual” instead. When the VWFA was first named in 2000 by Laurent Cohen and colleagues, critics pointed out that it probably served a mix of reading and non-reading functions. This seems to suggest that people thought, just like the FFA had been accused of being an “expertise” area rather than a face perception area per se, the VWFA was a specialist, but not necessarily for words. Stanislas Dehaene, an author on the 2000 paper that coined the term, wrote in 2011, “Reading acquisition partially recycles a cortical territory evolved for object and face recognition, the prior properties of which influenced the form of writing systems.” So he thinks that not only do our reading abilities grow out of our already-wired face perception abilities, but our writing system itself was probably partly determined by our skill at perceiving shapes.
What I think this means is, the VWFA isn’t just innately wired to take on the wordy part of our visual perception, but also, that the widespread use of letters didn’t bully object-selective brain areas into developing literacy or expertise for them, either. Letters didn’t will the VWFA into existence, it’s the other way around. We, as a species, invented systems of writing that capitalized on perceptual capabilities we already had, ultimately settling on shapes (letters) to communicate meaning through words.
So the next time you find yourself having a kiki with a friend, telling stories just like I’m doing right now, notice how much you’re getting from their words and how much you’re getting from their facial expressions. Notice how your eyes dart around their features. Notice how they’re darting around right now. Maybe the phrase “read you like a book” takes on new meaning for you. Are you reading them like a book, or are you reading these words like you’d read my face?