Study Reveals Our Minds May Process Language Like Chatbots

17th November 2024 Moriah Aharon

A recent study suggests that the human brain might process language in a way similar to advanced AI language models, using flexible, context-aware patterns instead of fixed rules. Researchers studied brain activity in the inferior frontal gyrus as participants listened to a podcast and found geometric patterns in that activity that closely matched those in AI language models. This similarity allowed researchers to predict how the brain would respond to new words, showing that the brain represents language in a dynamic, context-driven way. These insights could deepen our understanding of how language works in the brain and inspire future improvements in language-processing AI.

A recent study led by Dr. Ariel Goldstein of the Department of Cognitive and Brain Sciences and the Business School at the Hebrew University of Jerusalem, in close collaboration with Google Research in Israel and the New York University School of Medicine, found fascinating similarities in how the human brain and artificial intelligence models process language. The research suggests that the brain, like AI systems such as GPT-2, may use a continuous, context-sensitive embedding space to derive meaning from language, a finding that could reshape our understanding of neural language processing.

Unlike traditional language models based on fixed rules, deep language models like GPT-2 employ neural networks to create “embedding spaces”—high-dimensional vector representations that capture relationships between words in various contexts. This approach allows these models to interpret the same word differently based on surrounding text, offering a more nuanced understanding. Dr. Goldstein’s team sought to explore whether the brain might employ similar methods in its processing of language.
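To make the idea of a contextual embedding concrete, the short Python sketch below asks GPT-2 for the vector it assigns to the word "bank" in two different sentences. This is an illustrative example rather than the study's actual pipeline, and it assumes the open-source Hugging Face transformers library and PyTorch are available.

```python
# Minimal sketch: contextual embeddings from GPT-2 via the Hugging Face
# "transformers" library (an assumption; not the paper's exact pipeline).
# The same surface word ("bank") gets a different vector in each context.
import torch
from transformers import GPT2TokenizerFast, GPT2Model

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")
model.eval()

def last_word_embedding(sentence: str) -> torch.Tensor:
    """Return GPT-2's contextual vector for the final token of `sentence`."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # shape: (1, seq_len, 768)
    return hidden[0, -1]  # vector for the last token in context

river_bank = last_word_embedding("He tied the boat to a post on the river bank")
money_bank = last_word_embedding("She deposited her paycheck at the bank")

similarity = torch.nn.functional.cosine_similarity(river_bank, money_bank, dim=0)
print(f"cosine similarity between the two 'bank' vectors: {similarity.item():.3f}")
# A fixed, rule-based lexicon would give 'bank' a single static vector;
# here the two context-dependent vectors are clearly not identical.
```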

To investigate, the researchers recorded neural activity from the inferior frontal gyrus—a region known for language processing—while participants listened to a 30-minute podcast. By mapping each word to a “brain embedding” in this area, they found that these brain-based embeddings displayed geometric patterns similar to the contextual embedding spaces of deep language models. Remarkably, this shared geometry enabled the researchers to predict brain responses to words that had been held out of the analysis, an approach called zero-shot inference. This implies that the brain may rely on contextual relationships rather than fixed word meanings, reflecting the adaptive nature of deep learning systems.
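The zero-shot step can be illustrated with a hedged sketch: fit a simple linear mapping from contextual word embeddings to brain embeddings on one set of words, then test whether it predicts the brain response to held-out words it never saw. The data below are random placeholders rather than the study's recordings, and scikit-learn's ridge regression stands in for whatever encoding model the authors actually used.

```python
# Hedged sketch of zero-shot inference across embedding spaces, using
# synthetic placeholder data (NOT the study's recordings).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_words, model_dim, brain_dim = 500, 768, 100

# Placeholder stand-ins for GPT-2 contextual embeddings and electrode-based
# "brain embeddings" of the same words; real data would come from recordings.
word_embeddings = rng.standard_normal((n_words, model_dim))
true_map = rng.standard_normal((model_dim, brain_dim)) / np.sqrt(model_dim)
brain_embeddings = word_embeddings @ true_map + 0.5 * rng.standard_normal((n_words, brain_dim))

X_train, X_test, y_train, y_test = train_test_split(
    word_embeddings, brain_embeddings, test_size=0.2, random_state=0
)

# Linear (ridge) mapping from the model's embedding space to the brain's.
mapping = Ridge(alpha=1.0).fit(X_train, y_train)
predicted = mapping.predict(X_test)

# Zero-shot evaluation: does the predicted response for a held-out word match
# that word's actual response better than the other held-out words' responses?
def normalize(a):
    return a / np.linalg.norm(a, axis=1, keepdims=True)

similarities = normalize(predicted) @ normalize(y_test).T
rank_accuracy = (similarities.argmax(axis=1) == np.arange(len(y_test))).mean()
print(f"top-1 matching accuracy on held-out words: {rank_accuracy:.2f}")
```

Accuracy well above chance on words the mapping never saw is the kind of evidence that points to a shared geometry between the two embedding spaces, rather than to memorized word-by-word associations.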

“Our findings suggest a shift from symbolic, rule-based representations in the brain to a continuous, context-driven system,” explains Dr. Goldstein. “We observed that contextual embeddings, akin to those in deep language models, align more closely with neural activity than static representations, advancing our understanding of the brain’s language processing.”

This study indicates that the brain dynamically updates its representation of language based on context rather than depending solely on memorized word forms, challenging traditional psycholinguistic theories that emphasized rule-based processing. Dr. Goldstein’s work aligns with recent advancements in artificial intelligence, hinting at the potential for AI-inspired models to deepen our understanding of the neural basis of language comprehension.

The team plans to expand this research with larger samples and more detailed neural recordings to validate and extend these findings. By drawing connections between artificial intelligence and brain function, this work could shape the future of both neuroscience and language-processing technology, opening doors to innovations in AI that better reflect human cognition.

The research paper, titled “Alignment of brain embeddings and artificial contextual embeddings in natural language points to common geometric patterns,” is now available in Nature Communications.