
Omniscient

AI intelligence briefings, analysis, and commentary — delivered in broadsheet form.



Omniscient Media — made by ForeverBuilt, LLC.
© 2026 ForeverBuilt, LLC. All rights reserved.


AI Research

Vol. 1·Thursday, May 7, 2026

The AI Energy Crisis Has a Living Answer. This Organism Just Proved It Works.


Noah Ogbi

Tips, corrections, or questions? support@omniscient.media





The AI industry's most urgent unsolved problem is not a matter of algorithms or architectures. It is watts. Training the largest frontier models now consumes tens of gigawatt-hours of electricity. The infrastructure buildout to sustain that appetite is straining power grids across three continents. And every serious attempt to find a way out points toward the same uncomfortable answer: the most efficient neural computing hardware that exists is the one evolution spent 500 million years perfecting.

A wave of companies is now trying to build computing systems from actual living neurons: Cortical Labs launched the CL1 commercial biological computer in 2025, housing 200,000 human neurons on a silicon chip and consuming less power than a pocket calculator.[5] FinalSpark, a Swiss startup, has operated a cloud-accessible platform of 16 brain organoids since May 2024, reporting energy consumption roughly one million times lower than comparable silicon processing.[6] Both are early-stage and limited in capability. Both are making the same foundational bet: that neurons, given the right environment, will self-organize into something useful.

Into this context arrived, in March 2026, the most extreme version of that experiment yet. At the Wyss Institute in Boston, researchers created a microscopic living organism with a nervous system it built entirely by itself, in a body that has never existed in the history of life on Earth, with no evolutionary precedent, no genetic instruction for the task, and no blueprint. They called it a neurobot. The question it raises is one that cuts directly to the heart of both wetware computing and AI: does intelligence require a history?

What is a neurobot?

A neurobot is a living cellular robot derived from embryonic skin cells of the African clawed frog (Xenopus laevis), into which neuronal precursor cells have been implanted and allowed to develop freely. The result is a microscopic organism with a self-organized nervous system: functional neurons extending axons and dendrites, forming synaptic connections, generating electrical activity, and altering the creature's behavior. The research was led by Michael Levin at Tufts University and Haleh Fotowat at the Wyss Institute at Harvard, and published in Advanced Science in early 2026.[1]

The biobot lineage began in 2020, when Levin and collaborators at the University of Vermont created the first xenobots: tiny, self-propelled living robots assembled from frog embryo cells, capable of swimming and self-healing. A 2021 follow-on study published in PNAS demonstrated that xenobots could also kinematically self-replicate, gathering loose cells into new functional organisms.[2] In 2023, the same group demonstrated "anthrobots" built from human tracheal cells, which showed an ability to heal neural tissue in vitro.[3] Neurobots are the next iteration: the first biobots to incorporate neural tissue at all, and the first to demonstrate that a nervous system can self-organize outside any evolved body plan.

How do neurobots self-organize?

The construction method is deceptively simple. Skin tissue excised from frog embryos naturally curls into a sphere over roughly 30 minutes. During this brief window, the researchers microsurgically implanted clusters of neural precursor cells into the interior of the forming structure. From there, they did nothing. The neurons did the rest.

Without scaffolding, without genetic modification, and without instruction, the implanted cells differentiated into mature neurons with axons and dendrites, connected to each other, and extended projections toward the non-neuronal cells lining the neurobot's surface, including the multiciliated cells whose beating drives movement.[1] Calcium imaging confirmed these networks were electrically active. Protein markers for synapses were present. By any technical measure, a primitive nervous system had formed spontaneously inside a creature with no evolutionary reason to have one. This is precisely the developmental self-organization that wetware computing platforms like FinalSpark's Neuroplatform try to harness, except that here it happened not in a controlled bioreactor with carefully introduced growth factors, but inside a novel organism that has never existed before.

"Can a nervous system develop at all in a completely novel context that is not the product of millions of years of natural selection?" - Michael Levin, Tufts University

What does the nervous system actually do?

It changes behavior in measurable ways. Compared to standard biobots, neurobots are more elongated in shape, more physically active, and far less likely to move in simple, repetitive circles. Their motion patterns are complex, varied, and individual: each neurobot traces its own distinct path with recurring motifs, rather than following the stereotyped orbits of its non-neural counterparts.[1]

To confirm the neural networks were causally responsible, the team administered pentylenetetrazole (PTZ), a drug that blocks GABA-A receptors and triggers seizure-like states in animals, to both neurobots and standard biobots. The drug made regular biobots less active. Neurobots responded unpredictably: some became more active, some less. The divergence strongly suggests the neural networks were actively mediating the response in a way that plain ciliated tissue could not.[1] This is, in miniature, the kind of adaptive behavior that motivates the entire wetware computing field: a biological system responding to a signal in a manner that differs from any preprogrammed rule.

The unexpected finding: genes for eyes

The most striking result was not behavioral. When the team profiled gene expression across neurobots, biobots, and control structures, they found that neurobots had upregulated not only the expected neural development genes, but also a large cluster of genes associated with the frog visual system: the molecular machinery for building eyes and processing light.[1]

Levin's hypothesis is that these genes are not vestigial noise but part of a broader developmental program that, given more time, could produce photoreceptors. "If they lived longer, would they then also develop photoreceptors?" he asked in the Tufts announcement.[4] Neurobots currently survive only 9 to 10 days, sustained by nutrients stored in the original embryonic cells. Lifespan is already the central engineering challenge for wetware computing platforms: FinalSpark's organoids average 100 days before requiring replacement, a figure the company has spent years improving from an initial baseline of hours.[6] The neurobot's 9-to-10-day window is far shorter, and that constraint may be the only thing standing between the current organism and the capacity to see.

What this means for AI and wetware computing

The immediate applications the researchers cite are in regenerative medicine: biobots built from a patient's own cells could one day navigate the body to clear arterial plaques, deliver targeted therapeutics, or repair spinal cord and retinal damage.[2] Neurobots with light-guided navigation, if the visual system genes lead anywhere, could be steerable in ways their non-neural predecessors are not. That would be a significant practical advance for a technology whose main limitation has always been directed control.

But the deeper implications belong to a different domain. Cortical Labs' CL1, FinalSpark's Neuroplatform, and every other wetware computing project share an implicit assumption: that neurons placed in a sufficiently supportive environment will self-organize into networks capable of computation. What they have not demonstrated, and what the neurobot's existence now complicates, is whether that capacity depends on the neurons' origins. CL1 uses human stem-cell-derived neurons, FinalSpark uses forebrain organoids grown under carefully controlled differentiation protocols. Both draw on cells that carry, in some sense, an inherited cellular memory of what a brain is supposed to be.

The neurobot's neurons were dropped into a body with no evolutionary precedent for neural organization whatsoever. And they organized anyway. That is either a deeply reassuring result for the entire wetware field, suggesting that the drive toward neural self-organization is more robust and substrate-independent than anyone thought, or it raises a harder question about what exactly is doing the organizing.

What Levin's "Diverse Intelligence" thesis adds

Michael Levin's broader research program provides the conceptual scaffolding for why this matters beyond biology or wetware engineering. His Diverse Intelligence framework, elaborated in a 2025 paper co-authored with Karina Kofman in Advanced Intelligent Systems, argues that the field of AI has inherited a set of pre-scientific assumptions about what kinds of systems can be "minds," assumptions shaped by the fact that until very recently, all known minds were products of evolution.[7] Levin contends this is a sampling bias, not a law of nature. The neurobot is a direct experimental probe of that argument: a system with genuine neural organization and genuine behavioral complexity, assembled in a context that no evolutionary pressure ever encountered.

This speaks to a question AI researchers argue about constantly without being able to resolve it empirically: whether cognition is a property of a particular kind of organized information processing, achievable in principle on any suitable substrate, or whether it is irreducibly tied to the specific evolutionary and developmental histories that produced the only minds we have ever studied. Large language models cannot answer that question. A living organism with a self-built nervous system and no ancestors, however, puts pressure on it in a way no computation can.

What the neurobot project is really asking is what rules govern the formation of minds, and whether those rules depend on evolutionary history at all. Its neurons organized in a context that has never existed in the history of life on Earth, and they organized anyway. Whatever that ultimately reveals about the robustness of neural self-organization, the question is, for the first time, no longer merely theoretical.


Sources

  1. Fotowat et al., "Engineered Living Systems With Self-Organizing Neural Networks: From Anatomy to Behavior and Gene Expression," Advanced Science (2026)

  2. Wyss Institute, "Toward autonomous self-organizing biological robots with a nervous system," March 17, 2026

  3. Anthrobots paper, Advanced Science (2023), DOI: 10.1002/advs.202303575

  4. Tufts Now, "Scientists Create Novel Organism With Primitive Nervous System," March 16, 2026

  5. Tom's Hardware, "Human brain cells set to power two new data centers, thanks to 'body-in-the-box' CL1," March 11, 2026

  6. Kutter et al., "The Neuroplatform: a cloud-based platform for biocomputing with neural organoids," Frontiers in Artificial Intelligence (2024), DOI: 10.3389/frai.2024.1376042

  7. Michael Levin, "Artificial Intelligences: A Bridge Toward Diverse Intelligence and Humanity's Future," Advanced Intelligent Systems (2025)