How the ‘Moth Radio Hour’ helped scientists map out meaning in the brain

This is your brain on stories. By tracking the blood flow in people’s brains as they listened to a storytelling radio show, scientists at UC Berkeley have mapped out where the meanings associated with basic words are encoded in the cortex, creating the first semantic atlas of the brain.

The findings, described in the journal Nature, provide an unprecedented view of language and meaning as it plays out on our neural terrain, and could potentially offer a road map for those looking to help patients with certain types of aphasia or other neurological disorders.

For a long time, researchers thought of language as a primarily left-hemisphere function confined to specific spots in the brain, such as Broca’s area and Wernicke’s area. But those areas are associated not with understanding language but with producing it: speech, in short.

“You could even think of them as pre-motor areas specialized for language,” said senior author Jack Gallant, a computational cognitive neuroscientist at UC Berkeley. “But for semantics, we’re not really even talking about language, we’re talking about the meaning of language. It’s not even words, so it happens at a whole different level.”

The researchers had seven test subjects (including the lead author, Alexander Huth) lie in a functional MRI machine while listening to more than two hours’ worth of stories from the “Moth Radio Hour,” a public radio show in which people tell funny, sad or otherwise poignant autobiographical stories.

------------

FOR THE RECORD

April 28, 8:14 a.m.: A previous version of this article stated that the “Moth Radio Hour” was a production of Public Radio International. It is distributed by the Public Radio Exchange.

------------

“Our subjects love to be in this experiment because they can just lie there and listen to these really interesting stories,” Gallant said. “It’s a million times better than any other experiment we’ve ever done.”

Of course, it’s hard not to laugh at a funny story, and movement can ruin fMRI data. So the researchers 3-D-printed a personalized “head case” for each subject to hold the head still during the scans.

The researchers then used natural language processing tools to extract the meanings of common words in the stories. They compared the time-coded transcripts with the fMRI data, which painstakingly mapped blood flow at about 50,000 different locations in the brain.
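In broad strokes, this kind of analysis works like a giant regression: each moment of the transcript is turned into a list of numbers describing the meanings of the words being heard, and a model learns how each brain location’s blood-flow signal tracks those numbers. The Python sketch below illustrates the general idea with made-up data; the feature sizes, ridge penalty and simulated responses are placeholder assumptions, not the study’s actual pipeline.

```python
# Minimal sketch of a voxelwise encoding model (illustrative only; not the
# authors' actual code). Uses synthetic data in place of real fMRI scans.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

n_timepoints = 300   # fMRI volumes recorded while the story plays
n_features = 50      # dimensions of a word-meaning ("semantic") vector
n_voxels = 1000      # brain locations (the real study used ~50,000)

# X: for each time point, a vector summarizing the meanings of the words
# heard in that window (e.g., averaged word embeddings from the transcript).
X = rng.standard_normal((n_timepoints, n_features))

# Y: measured blood-flow response at every voxel for every time point.
# Here it is simulated as a noisy linear function of the semantic features.
true_weights = rng.standard_normal((n_features, n_voxels))
Y = X @ true_weights + 0.5 * rng.standard_normal((n_timepoints, n_voxels))

# Fit one regularized linear model per voxel (Ridge handles all voxels at once).
model = Ridge(alpha=10.0)
model.fit(X, Y)

# model.coef_ has shape (n_voxels, n_features): each voxel's weight on each
# semantic dimension, i.e., which kinds of meaning that spot responds to.
print(model.coef_.shape)
```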

As it turns out, different regions of the brain responded to different families of related concepts. The word “dog,” hypothetically speaking, might register in areas associated with other animals such as “wolf,” in areas related to how dogs look or smell, or (if you had a dog as a kid) perhaps in areas related to words like “home.”

“Each semantic concept is represented in multiple locations in the brain,” Gallant said. “And each brain location represents sort of a family of selected concepts. So ... there’s a bunch of regions in the brain that respond to dogs.”

The map sheds light on the ways we process meaning through language, coloring parts of the brain in different shades depending on what kind of information they encode. Red, for example, has to do with certain social concepts, while green spots pertain to visual and tactile concepts. The model is available online, and Gallant says he hopes the work will become a handy resource for other researchers.
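One common way to turn such a model into a colored map is to squeeze each brain location’s pattern of weights down to three numbers and read them as a color, so that similar colors mark spots tuned to similar kinds of meaning. The sketch below shows that idea on the toy model above; the use of PCA and min-max scaling here are illustrative assumptions, not necessarily the published procedure.

```python
# Color each voxel by its semantic tuning (illustrative sketch only).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import minmax_scale

# Stand-in for model.coef_ from the previous sketch: (n_voxels, n_features).
weights = np.random.default_rng(1).standard_normal((1000, 50))

# Compress each voxel's 50-dimensional tuning profile to 3 numbers, then
# rescale to the 0-1 range so they can be read directly as an RGB color.
pca = PCA(n_components=3)
rgb = minmax_scale(pca.fit_transform(weights))   # shape: (n_voxels, 3)

# rgb[i] is now a color for voxel i; similar colors mean similar kinds of
# meaning are encoded there, which is what the published map visualizes.
print(rgb.shape, rgb.min(), rgb.max())
```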

“We’re trying to build an atlas just like a world atlas,” he explained. “If I give you a globe, you can do anything with it – you could look at how big the ocean is or what the highest mountain is or what the distance from New York to California is.”

Every brain, of course, is a little different. But on the whole, the scientists were surprised to find that the general layout of the different-colored regions was largely similar across the seven test subjects – implying that each listener was encoding the same meanings in basically the same ways.

But is this shared semantic structure innate, or the result of environmental influences? After all, these test subjects shared both language and culture, which could potentially account for some of these similarities.

Gallant isn’t sure whether that pattern will hold across people who speak a language very different from English, such as Mandarin Chinese or Japanese, or across bilingual people responding to their second language. But these are questions that the scientist said were well worth exploring.

“The first law of neuroscience is ‘the brain is complicated,’” he said.
