The tiny worm Caenorhabditis elegans has a brain just about the width of a human hair. But this animal’s itty-bitty organ coordinates and computes complex actions as the worm forages for food. “When I look at [C. elegans] and consider its brain, I’m really struck by the profound elegance and efficiency,” says Daniela Rus, a computer scientist at MIT. Rus is so enamored with the worm’s brain that she cofounded a company, Liquid AI, to build a new kind of artificial intelligence inspired by it.
Rus is part of a wave of researchers who think that making traditional AI more brainlike could create leaner, nimbler and perhaps smarter technology. “To really improve AI, we need to … incorporate insights from neuroscience,” says Kanaka Rajan, a computational neuroscientist at Harvard University.
Such “neuromorphic” technology probably won’t completely replace regular computers or traditional AI models, says Mike Davies, who directs the Neuromorphic Computing Lab at Intel in Santa Clara, Calif. Rather, he sees a future in which many kinds of systems coexist.
Imitating brains isn’t a new idea. In the 1950s, neurobiologist Frank Rosenblatt devised the perceptron. The machine was a highly simplified model of the way a brain’s nerve cells communicate, with a single layer of interconnected artificial neurons, each performing a single mathematical function.
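Rosenblatt’s machine is simple enough to sketch in code. Below is a minimal single-layer perceptron in Python; the AND-gate task and the exact training settings are standard textbook choices for illustration, not details drawn from this article.

```python
import numpy as np

def perceptron_train(inputs, targets, epochs=50, lr=0.1):
    """Train a single artificial neuron: weighted sum, threshold, adjust on error."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=inputs.shape[1])  # one weight per input
    b = 0.0                                          # bias acts as a firing threshold
    for _ in range(epochs):
        for x, t in zip(inputs, targets):
            y = 1 if x @ w + b > 0 else 0   # the neuron "fires" or stays silent
            w += lr * (t - y) * x           # nudge weights toward the correct output
            b += lr * (t - y)
    return w, b

# Toy task: learn the logical AND of two inputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = perceptron_train(X, y)
print([1 if x @ w + b > 0 else 0 for x in X])  # expected: [0, 0, 0, 1]
```

The whole model is one weighted sum and one threshold, which is why the perceptron could only ever learn very simple patterns.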
Decades later, the perceptron’s basic design helped inspire deep learning, a computing technique that recognizes complex patterns in data using layer upon layer of nested artificial neurons. These neurons pass input data along, manipulating it to produce an output. But this approach can’t match a brain’s ability to adapt nimbly to new situations or learn from a single experience. Instead, most of today’s AI models consume massive amounts of data and energy to learn to perform impressive tasks, such as guiding a self-driving car.
“It’s just bigger, bigger, bigger,” says Subutai Ahmad, chief technology officer of Numenta, a company looking to human brain networks for efficiency. Traditional AI models are “so brute force and inefficient.”
In January, the Trump administration announced Stargate, a plan to funnel $500 billion into new data centers to support energy-hungry AI models. But a model released by the Chinese company DeepSeek is bucking that trend, duplicating chatbots’ capabilities with less data and energy. Whether brute force or efficiency will win out is unclear.
Meanwhile, neuromorphic computing experts have been making hardware, architecture and algorithms ever more brainlike. “People are bringing out new concepts and new hardware implementations all the time,” says computer scientist Catherine Schuman of the University of Tennessee, Knoxville. These advances mainly help with biological brain research and sensor development and haven’t been part of mainstream AI. At least, not yet.
Here are four neuromorphic systems that hold potential for improving AI.
Making artificial neurons more lifelike
Real neurons are complex living cells with many parts. They’re constantly receiving signals from the environment, with their electrical charge fluctuating until it crosses a specific threshold and the cell fires. This activity sends an electrical impulse across the cell and to neighboring neurons. Neuromorphic computing engineers have managed to mimic this pattern in artificial neurons. These neurons, part of spiking neural networks, simulate the signals of an actual brain, creating discrete spikes that carry information through the network. Such a network may be modeled in software or built in hardware.
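One common way to capture that charge-up-and-fire behavior in software is the leaky integrate-and-fire model. The Python sketch below is a generic textbook version with made-up constants, not the neuron model of any particular chip mentioned here.

```python
import numpy as np

def leaky_integrate_and_fire(current, threshold=1.0, leak=0.9):
    """Simulate one spiking neuron: charge builds with input, leaks over time,
    and a discrete spike is emitted whenever the charge crosses the threshold."""
    v = 0.0          # membrane potential (the cell's electrical charge)
    spikes = []
    for i in current:
        v = leak * v + i          # old charge decays, new input accumulates
        if v >= threshold:
            spikes.append(1)      # fire: send a spike to downstream neurons
            v = 0.0               # reset after firing
        else:
            spikes.append(0)      # stay silent; nothing is transmitted
    return spikes

# Weak steady input: the neuron charges up and fires only occasionally.
print(leaky_integrate_and_fire(np.full(20, 0.3)))
```

Because downstream work happens only when a spike actually arrives, most of such a network sits idle most of the time, which is where much of the energy savings comes from.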
Spikes are not modeled in traditional AI’s deep learning networks. Instead, in these models, each artificial neuron is “a little ball with one kind of information processing,” says Mihai Petrovici, a neuromorphic computing researcher at the University of Bern in Switzerland. Each of these “little balls” links to the others through connections called parameters. Usually, every input into the network triggers every parameter to activate at once, which is inefficient. DeepSeek divides traditional AI’s deep learning network into smaller sections that can activate separately, which is more efficient.
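DeepSeek has attributed its efficiency partly to a mixture-of-experts design, in which a router sends each input to only a few small subnetworks. The Python sketch below illustrates that routing idea in miniature; the sizes, random weights and softmax-style gating are illustrative assumptions, not DeepSeek’s actual architecture.

```python
import numpy as np

rng = np.random.default_rng(1)
n_experts, dim, top_k = 8, 16, 2

# Eight small "expert" subnetworks; a router decides which ones to use.
experts = [rng.normal(size=(dim, dim)) for _ in range(n_experts)]
router = rng.normal(size=(dim, n_experts))

def sparse_forward(x):
    scores = x @ router                      # how relevant is each expert?
    chosen = np.argsort(scores)[-top_k:]     # activate only the top 2 of 8
    weights = np.exp(scores[chosen])
    weights /= weights.sum()                 # normalize the gating weights
    # Only the chosen experts compute; the other six stay idle.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, chosen))

out = sparse_forward(rng.normal(size=dim))   # only 2/8 of the experts ran
```

Here only a quarter of the expert parameters do any work on a given input, which is the “smaller sections that can activate separately” idea in code form.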
But real brains and artificial spiking networks achieve efficiency a bit differently. Each neuron isn’t connected to every other one. Also, only if electrical signals reach a specific threshold does a neuron fire and send information to its connections. The network activates sparsely rather than all at once.
Importantly, brains and spiking networks combine memory and processing. The connections “that represent the memory are also the elements that do the computation,” Petrovici says. Mainstream computer hardware, which runs most AI, separates memory and processing. AI processing usually happens in a graphics processing unit, or GPU. A different hardware component, such as random access memory, or RAM, handles storage. This makes for simpler computer architecture. But zipping data back and forth among these components eats up energy and slows down computation.
The neuromorphic computer chip BrainScaleS-2 combines these efficient features. It contains sparsely connected spiking neurons physically built into hardware, and the neural connections store memories and perform computation.
BrainScaleS-2 was developed as part of the Human Brain Project, a 10-year effort to understand the human brain by modeling it in a computer. But some researchers looked at how the tech developed from the project might make AI more efficient. For example, Petrovici trained different AIs to play the video game “Pong.” A spiking network running on the BrainScaleS-2 hardware used a thousandth the energy of a simulation of the same network running on a CPU. But the real test was to compare the neuromorphic setup with a deep learning network running on a GPU. Training the spiking system to recognize handwriting used a hundredth the energy of the conventional system, the team found.
For spiking neural network hardware to be a real player in the AI realm, it has to be scaled up and distributed. Then, it could be “useful to computation more broadly,” Schuman says.
Connecting billions of spiking neurons
The academic teams working on BrainScaleS-2 currently have no plans to scale up the chip, but some of the world’s biggest tech companies, like Intel and IBM, do.
In 2023, IBM released its NorthPole neuromorphic chip, which combines memory and processing to save energy. And in 2024, Intel announced the launch of Hala Point, “the largest neuromorphic system in the world right now,” says computer scientist Craig Vineyard of Sandia National Laboratories in New Mexico.
Despite that impressive superlative, there’s nothing about the system that visually stands out, Vineyard says. Hala Point fits into a luggage-sized box. Yet it contains 1,152 of Intel’s Loihi 2 neuromorphic chips, for a record-setting total of 1.15 billion electronic neurons, roughly the same number as in an owl brain.
Like BrainScaleS-2, each Loihi 2 chip contains a hardware version of a spiking neural network. The physical spiking network also uses sparsity and combines memory and processing. This neuromorphic computer has “fundamentally different computational characteristics” than a regular digital machine, Schuman says.

These features improve Hala Point’s efficiency compared with that of conventional computer hardware. “The realized efficiency we get is definitely significantly beyond what you can achieve with GPU technology,” Davies says.
In 2024, Davies and a team of researchers showed that the Loihi 2 hardware can save energy even while running conventional deep learning algorithms. The researchers took several audio and video processing tasks and modified their deep learning algorithms so they could run on the new spiking hardware. This process “introduces sparsity in the activity of the network,” Davies says.
A deep learning network running on a regular digital computer processes every single frame of audio or video as something completely new. But spiking hardware maintains “some knowledge of what it saw before,” Davies says. When part of the audio or video stream stays the same from one frame to the next, the system doesn’t have to start over from scratch. It can “keep the network idle as much as possible when nothing interesting is changing.” On one video task the team tested, a Loihi 2 chip running a “sparsified” version of a deep learning algorithm used 1/150th the energy of a GPU running the regular version of the algorithm.
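The underlying trick, spending computation only where the input has changed, can be mimicked even on ordinary hardware. This is a simplified Python illustration of that idea, with an arbitrary change threshold and a stand-in for the heavy per-pixel work; it is not Intel’s implementation.

```python
import numpy as np

def process_stream(frames, threshold=0.05):
    """Run the expensive step only on pixels that changed since the last frame."""
    previous = None
    for frame in frames:
        if previous is None:
            changed = np.ones(frame.shape, dtype=bool)   # first frame: all new
        else:
            changed = np.abs(frame - previous) > threshold
        result = np.zeros_like(frame)
        # Stand-in for the heavy per-pixel computation; skipped where static.
        result[changed] = np.tanh(frame[changed])
        previous = frame
        print(f"computed on {changed.mean():.0%} of pixels")

# A mostly static scene: nearly all work is skipped after the first frame.
base = np.random.rand(64, 64)
process_stream([base, base + 0.01, base + 0.02])
```

After the first frame, the tiny changes fall below the threshold and the “network” stays idle, just as Davies describes.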
The audio and video test showed that one kind of architecture can do a good job running a deep learning algorithm. But developers can reconfigure the spiking neural networks inside Loihi 2 and BrainScaleS-2 in numerous ways, coming up with new architectures that use the hardware differently. They can also implement different kinds of algorithms using these architectures.
It’s not yet clear what algorithms and architectures would make the best use of this hardware or offer the biggest energy savings. But researchers are making headway. A January 2025 paper introduced a new way to model neurons in a spiking network, accounting for both the shape of a spike and its timing. This approach makes it possible for an energy-efficient spiking system to use one of the learning methods that has made mainstream AI so successful.
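The paper’s specific method isn’t described here, but the obstacle it addresses is well known: mainstream AI learns through backpropagation, which needs smooth gradients, while a spike is all-or-nothing. One widely used workaround, shown below purely as a sketch of the general problem rather than the January paper’s technique, swaps the spike’s hard step for a smooth stand-in when computing gradients, a so-called surrogate gradient.

```python
import numpy as np

def spike_forward(v, threshold=1.0):
    """Forward pass: the hard, all-or-nothing spike (gradient is zero almost everywhere)."""
    return (v >= threshold).astype(float)

def spike_surrogate_grad(v, threshold=1.0, beta=5.0):
    """Backward pass: pretend the spike was a steep sigmoid, so that
    backpropagation gets a usable, nonzero gradient near the threshold."""
    s = 1.0 / (1.0 + np.exp(-beta * (v - threshold)))
    return beta * s * (1.0 - s)

v = np.array([0.2, 0.9, 1.0, 1.4])
print(spike_forward(v))          # [0. 0. 1. 1.]
print(spike_surrogate_grad(v))   # largest near the threshold, tiny far away
```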
Neuromorphic hardware may be best suited to algorithms that haven’t even been invented yet. “That’s actually the most exciting thing,” says neuroscientist James Aimone, also of Sandia National Labs. The technology has a lot of potential, he says. It could make the future of computing “energy efficient and more capable.”
Designing an adaptable ‘brain’
Neuroscientists agree that one of the most important features of a living brain is the ability to learn on the go. And it doesn’t take a big brain to do this. C. elegans, one of the first animals to have its brain completely mapped, has 302 neurons and around 7,000 synapses that allow it to learn continuously and efficiently as it explores its world.
Ramin Hasani studied how C. elegans learns as part of his graduate work in 2017 and was working to model what scientists knew about the worms’ brains in computer software. Rus learned about this work while out for a run with Hasani’s adviser at an academic conference. At the time, she was training AI models with hundreds of thousands of artificial neurons and half a million parameters to operate self-driving cars.

If a worm doesn’t need a huge network to learn, Rus realized, maybe AI models could make do with smaller ones, too.
She invited Hasani and one of his colleagues to move to MIT. Together, the researchers worked on a series of projects to give self-driving cars and drones more wormlike “brains” that are small and adaptable. The end result was an AI algorithm that the team calls a liquid neural network.
“You can think of this like a new flavor of AI,” says Rajan, the Harvard neuroscientist.
Standard deep learning networks, despite their impressive size, learn only during a training phase of development. When training is complete, the network’s parameters can’t change. “The model stays frozen,” Rus says. Liquid neural networks, as the name suggests, are more fluid. Though they incorporate many of the same methods as standard deep learning, these new networks can shift and change their parameters over time. Rus says that they “learn and adapt … based on the inputs they see, much like biological systems.”
To design this new algorithm, Hasani and his team wrote mathematical equations that mimic how a worm’s neurons activate in response to information that changes over time. These equations govern the liquid neural network’s behavior.
Such equations are notoriously difficult to solve, but the team found a way to approximate a solution, making it possible to run the network in real time. That solution is “remarkable,” Rajan says.
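Hasani and colleagues have published the equations behind what they call liquid time-constant networks. The sketch below is a drastically simplified single-neuron version using a basic Euler solver and made-up constants; Liquid AI’s production models are assumed to be far more sophisticated.

```python
import numpy as np

def ltc_neuron(inputs, tau=1.0, A=1.0, w=0.5, dt=0.1):
    """One liquid time-constant neuron: its effective time constant shifts
    with the input, so the neuron's dynamics adapt as conditions change."""
    x = 0.0                  # the neuron's internal state
    states = []
    for I in inputs:
        f = 1.0 / (1.0 + np.exp(-(w * I + x)))   # input-dependent gate
        # dx/dt = -(1/tau + f) * x + f * A, stepped forward with Euler's method
        x += dt * (-(1.0 / tau + f) * x + f * A)
        states.append(x)
    return states

# The neuron settles differently for calm versus rapidly changing input.
calm = ltc_neuron(np.zeros(50))
busy = ltc_neuron(np.sin(np.linspace(0, 6, 50)) * 3)
```

The key departure from a standard artificial neuron is that the decay term itself depends on the input, so the network keeps adjusting its own dynamics after training, which is the fluid behavior Rus describes.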
In 2023, Rus, Hasani and their colleagues showed that liquid neural networks could adapt to new situations better than much larger conventional AI models. The team trained two types of liquid neural networks and four types of conventional deep learning networks to pilot a drone toward different objects in the woods. When training was complete, they put one of the training objects, a red chair, into completely different environments, including a patio and a lawn beside a building. The smallest liquid network, containing just 34 artificial neurons and around 12,000 parameters, outperformed the largest standard AI network they tested, which contained around 250,000 parameters.
The team started the company Liquid AI around the same time and has worked with the U.S. military’s Defense Advanced Research Projects Agency to test their model flying an actual aircraft.
The company has also scaled up its models to compete directly with regular deep learning. In January, it announced LFM-7B, a 7-billion-parameter liquid neural network that generates answers to prompts. The team reports that the network outperforms conventional language models of the same size.
“I am excited about Liquid AI because I believe it could transform the future of AI and computing,” Rus says.
This approach won’t necessarily use less energy than mainstream AI. Its constant adaptation makes it “computationally intensive,” Rajan says. But the approach “represents a significant step toward more practical AI” that more closely mimics the brain.

Building on human brain structure
While Rus is working off the blueprint of the worm brain, others are taking inspiration from a very particular region of the human brain: the neocortex, a wrinkly sheet of tissue that covers the brain’s surface.
“The neocortex is the brain’s powerhouse for higher-order thinking,” Rajan says. “It’s where sensory information, decision-making and abstract reasoning converge.”
This part of the brain contains six thin horizontal layers of cells, organized into tens of thousands of vertical structures called cortical columns. Each column contains around 50,000 to 100,000 neurons arranged in several hundred vertical minicolumns.
These minicolumns are the primary drivers of intelligence, neuroscientist and computer scientist Jeff Hawkins argues. In other parts of the brain, grid and place cells help an animal sense its position in space. Hawkins theorizes that these cells exist in minicolumns, where they track and model all our sensations and ideas. For example, as a fingertip moves, he says, these columns make a model of what it’s touching. It’s the same with our eyes and what we see, Hawkins explains in his 2021 book A Thousand Brains.
“It’s a bold idea,” Rajan says. Current neuroscience holds that intelligence involves the interaction of many different brain systems, not just these mapping cells, she says.
Though Hawkins’ theory hasn’t reached widespread acceptance in the neuroscience community, “it’s generating a lot of interest,” she says. That includes excitement about its potential uses for neuromorphic computing.
Hawkins developed his theory at Numenta, a company he cofounded in 2005. The company’s Thousand Brains Project, announced in 2024, is a plan for pairing computing architecture with new algorithms.
In some early testing for the project several years ago, the team described an architecture that included seven cortical columns and hundreds of minicolumns but spanned just three layers rather than the six in the human neocortex. The team also developed a new AI algorithm that uses the column structure to analyze input data. Simulations showed that each column could learn to recognize hundreds of complex objects.
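Numenta has described the Thousand Brains idea as many columns independently modeling an object and then voting on what it is. Here is a deliberately toy Python sketch of that voting step; the objects, features and lookup table are invented for illustration and are not the project’s actual algorithm.

```python
from collections import Counter

def column_guess(local_feature):
    """Each simulated "cortical column" senses one local feature and returns
    the objects consistent with it (a hypothetical, hand-built lookup)."""
    models = {
        "rim": ["mug", "bowl"],
        "handle": ["mug", "door"],
        "flat_bottom": ["mug", "bowl", "box"],
    }
    return models.get(local_feature, [])

def vote(features):
    """Columns sense different parts of the object; agreement across columns wins."""
    ballots = Counter()
    for feature in features:            # one feature per column
        for candidate in column_guess(feature):
            ballots[candidate] += 1
    return ballots.most_common(1)[0]

# Three columns touch the rim, the handle and the base: the consensus is "mug".
print(vote(["rim", "handle", "flat_bottom"]))  # ('mug', 3)
```

No single column sees the whole object, but the population converges on an answer quickly, which is the intuition behind the theory’s name.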
The practical effectiveness of this system still needs to be tested. But the idea is that it will be capable of learning about the world in real time, similar to the algorithms of Liquid AI.
For now, Numenta, based in Redwood City, Calif., is using regular digital computer hardware to test these ideas. But in the future, custom hardware could implement physical versions of spiking neurons organized into cortical columns, Ahmad says.
Using hardware designed for this architecture could make the whole system more efficient and effective. “How the hardware works is going to influence how your algorithm works,” Schuman says. “It requires this codesign process.”
A new idea in computing can take off only with the right combination of algorithm, architecture and hardware. For example, DeepSeek’s engineers noted that they achieved their gains in efficiency by codesigning “algorithms, frameworks and hardware.”
When one of these isn’t ready or isn’t available, a good idea may languish, notes Sara Hooker, a computer scientist at the research lab Cohere in San Francisco and author of an influential 2021 paper titled “The Hardware Lottery.” This already happened with deep learning: the algorithms to do it were developed back in the 1980s, but the technology didn’t find success until computer scientists began using GPU hardware for AI processing in the early 2010s.
Too often “success depends on luck,” Hooker said in a 2021 Association for Computing Machinery video. But if researchers spend more time considering new combinations of neuromorphic hardware, architectures and algorithms, they could open up new and intriguing possibilities for both AI and computing.