
EPFL researchers uncover “language neurons” in AI that echo the human brain
20 May 2025

In an era increasingly defined by intelligent machines, researchers at EPFL are peering into the minds of AI models and finding familiar patterns. In a discovery that blurs the boundary between human cognition and artificial computation, they have identified specific units in large language models (LLMs) that mirror the brain’s language network.
For decades, we’ve treated machine intelligence as a black box: complex, opaque, and somehow different from our own minds. But new research from EPFL’s NeuroAI and NLP laboratories suggests otherwise. Like our brains, LLMs may possess networks of specialized “neurons” that allow them to understand and generate language. And, like us, they falter when those networks are disrupted.
Drawing inspiration from neuroscience, the researchers analyzed 18 leading language models to determine whether they contained localized, language-specific processing units. Applying techniques traditionally used to map human cognitive networks, they found that fewer than 1% of a model’s units (roughly 100 artificial neurons) are critical to its linguistic ability. Removing these “language-selective units” rendered the models nearly incoherent.
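How might such a “localizer” work in practice? The study’s exact protocol is not reproduced here, but the general recipe can be sketched in a few lines of Python. Everything below is illustrative: the model (GPT-2), the stimulus sentences, and the simple activation-difference score are placeholder assumptions, not the paper’s actual setup. The idea is to compare unit activations on real sentences versus matched non-word strings, keep the most selective 1% of units, and silence them during generation.

    # Illustrative "localize and lesion" sketch -- NOT the study's exact method.
    # Assumptions: GPT-2 as a stand-in model, made-up stimuli, and a simple
    # activation-difference score in place of the paper's selectivity measure.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

    sentences = ["The cat sat quietly on the warm windowsill.",
                 "She finished the report before the deadline."]
    nonwords = ["Blork fimble dratch queeple vom snibber plarn.",
                "Griddle fonk swazzle trop mimber clard vexun."]

    def mean_activations(texts):
        """Per-unit activation, averaged over tokens and texts: (layers, hidden)."""
        acts = []
        for text in texts:
            batch = tok(text, return_tensors="pt")
            with torch.no_grad():
                out = model(**batch, output_hidden_states=True)
            # Skip the embedding layer; average each layer's states over tokens.
            acts.append(torch.stack(out.hidden_states[1:]).squeeze(1).mean(dim=1))
        return torch.stack(acts).mean(dim=0)

    # A unit counts as "language-selective" here if it responds more strongly
    # to real sentences than to non-word strings.
    selectivity = mean_activations(sentences) - mean_activations(nonwords)

    # Keep the top 1% of (layer, unit) pairs -- about 90 units for GPT-2.
    n_layers, hidden = selectivity.shape
    k = max(1, int(0.01 * selectivity.numel()))
    top = torch.topk(selectivity.flatten(), k).indices.tolist()
    by_layer = {}
    for idx in top:
        by_layer.setdefault(idx // hidden, []).append(idx % hidden)

    # Lesion: forward hooks that zero the selected units' outputs in each block.
    def make_hook(units):
        def hook(module, inputs, output):
            output[0][..., units] = 0.0  # silence the selected hidden units
            return output
        return hook

    hooks = [model.transformer.h[layer].register_forward_hook(make_hook(units))
             for layer, units in by_layer.items()]

    prompt = tok("The weather today is", return_tensors="pt")
    lesioned = model.generate(**prompt, max_new_tokens=20,
                              pad_token_id=tok.eos_token_id)
    print(tok.decode(lesioned[0]))  # output typically degrades toward incoherence

    for h in hooks:
        h.remove()  # restore the intact model

A natural control, not shown here, is to ablate an equally sized random set of units and compare: it is the gap between the two conditions that makes the selective units meaningful.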
A mirror held to machine … and mind
This revelation invites deeper questions. If we can localize a model’s language network, can we do the same for reasoning or social cognition? The EPFL team used similar methods to investigate functions associated with the Theory of Mind and Multiple Demand networks in the human brain. Some AI models appeared to possess those cognitive clusters, while others did not.
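In practice, extending the approach amounts to swapping the contrast stimuli while keeping the selectivity computation unchanged. The pairs below are invented placeholders in the spirit of the localizer tasks used in human neuroscience, and the snippet reuses the hypothetical mean_activations helper from the earlier sketch.

    # Hypothetical contrast pairs for other networks: each maps a network name
    # to a condition that should engage it (first list) and a matched control
    # (second list). These stimuli are illustrative, not the study's materials.
    contrasts = {
        # false-belief stories vs. matched physical-change stories
        "theory_of_mind": (
            ["Anna thinks her keys are in the drawer, but Ben moved them."],
            ["The keys were in the drawer until they slid onto the shelf."]),
        # effortful problem-solving vs. a trivial task, a rough Multiple Demand proxy
        "multiple_demand": (
            ["Compute 17 * 24 - 9, carrying each digit carefully."],
            ["Compute 2 + 2."]),
    }

    # Same recipe as before: per-network selectivity is the activation gap
    # between the engaging condition and its control.
    profiles = {name: mean_activations(a) - mean_activations(b)
                for name, (a, b) in contrasts.items()}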
What could possibly explain this variation? Is it training data, model architecture, or something more elusive? For now, the answers remain unknown. But the questions point to a future in which we may better understand not only how artificial minds function, but how our own do.
Next, the team plans to study multi-modal models that process language, vision, and sound simultaneously. If these systems also develop brain-like specialization, they could become powerful tools for understanding perception, disease, and the very nature of consciousness.
As machines evolve to mirror the architecture of the human mind, we edge toward a profound reversal: the key to understanding ourselves may lie not in biology alone, but in the circuits of our own creations. In decoding artificial intelligence, we begin to glimpse the hidden depths of human thought. At this threshold between code and consciousness, these researchers from Western Switzerland are not merely refining technology; they are opening a new frontier, where neuroscience and machine intelligence converge to illuminate the mysteries of both mind and machine.