
This is what you’ll learn when you read this story:

- There are more than 325 theories competing to explain consciousness, but very few ways to test them.
- Neuroscientist Erik Hoel wants to eliminate weak theories using a system that would stress-test all of them with substitution arguments: comparing systems that exhibit the same behavior but have different internal structures, and hunting for contradictions.
- Different kinds of brains, including AI, could provide testbeds for this framework.
- However, another scientist says Hoel’s methods may not solve the “hard” problem of consciousness: fully understanding our internal experience.

In a competition for science’s messiest mystery, consciousness would probably take the crown. By some estimates, there are now more than 325 competing theories. Some claim consciousness emerges when information is processed within sufficiently complex systems, as in computational theories of mind and some models of artificial intelligence (AI). Meaning, a future version of your laptop could, in theory, “wake up.” Others argue consciousness is older and deeper than biology itself: a fundamental feature of the universe, as in panpsychism, or something the brain receives rather than creates. The competition has been running for decades, but no single theory has prevailed.

That worries Erik Hoel, PhD, neuroscientist and founder of Bicameral Labs, a research group exploring new ways to study consciousness.

It’s as though 1,000 flowers are blooming, with “no way to differentiate those, no clear way to sort of make progress and push the field forward,” he says. The field is saturated. Anyone can invent a theory—even a chatbot can generate endless ones—but if scientists can’t properly compare or falsify them, he believes, those theories don’t mean much. The field can look productive from the outside, but Hoel says much of it amounts to researchers promoting favored ideas rather than making clear progress.

Hoel studied consciousness under Giulio Tononi and has spent years constructing and critiquing consciousness theories. He is convinced, and worried, that the field is stuck in a perpetual “pre-paradigmatic” phase of science—akin to how biology was fragmented before Charles Darwin transformed taxonomy (the scientific classification of living things) with his theory of evolution. Hoel believes he can move the consciousness field into a “post-paradigmatic” era through a “consciousness-theory-killing machine.”

The machine is conceptual—part framework, part stress-test system—and could, ideally, slash hundreds of rival theories down to size.

But how?

The first step is to stop treating those theories as sacred ideas and start treating them like claims that must survive crash tests. Hoel’s main tool is the “substitution argument.” His background in Integrated Information Theory (IIT)—a model developed by Tononi that links consciousness to how information is integrated in a system—helped shape his focus on how to test such theories.

Imagine one system that sees the color green and says “green.” Now build a second system that behaves exactly the same—same inputs, same outputs, the same report of the color—but uses a very different internal architecture. If a theory says the first system is conscious but the second is not, Hoel asks a pointed question: why? Both systems did the same job, so a theory needs a clear scientific reason for saying one is not conscious. If it can’t provide that reason, Hoel thinks the theory starts to crack.
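To make the shape of that test concrete, here is a minimal Python sketch. Everything in it is a hypothetical illustration (the two systems and the crude “theory” are stand-ins, not Hoel’s actual machinery): two programs with identical input-output behavior but different internals, and a check that flags the theory when its verdicts split without any behavioral difference to justify them.

```python
# Toy substitution argument. All names here are illustrative stand-ins.

def recurrent_system(color: str) -> str:
    """Reports a color via a (pretend) feedback architecture."""
    state = ""
    for _ in range(3):  # stand-in for recurrent processing
        state = color
    return state

LOOKUP = {"green": "green", "red": "red", "blue": "blue"}

def lookup_system(color: str) -> str:
    """Reports a color via a flat lookup table: no internal dynamics."""
    return LOOKUP[color]

def toy_theory_says_conscious(system_name: str) -> bool:
    """A crude 'theory' that grants consciousness by internal structure,
    not behavior. Purely illustrative; no real theory is this blunt."""
    return system_name == "recurrent_system"

inputs = ["green", "red", "blue"]
same_behavior = all(recurrent_system(c) == lookup_system(c) for c in inputs)
verdicts_split = (toy_theory_says_conscious("recurrent_system")
                  != toy_theory_says_conscious("lookup_system"))

if same_behavior and verdicts_split:
    print("Contradiction to resolve: identical behavior, different verdicts.")
```

The point is only the shape of the argument: hold behavior fixed, vary the internals, and force the theory to justify any asymmetry or be cut.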

Any theory that cannot make predictions, survive testing, or risk failure gets cut.

Once those filters are in place, Hoel wants to run the stress tests at scale across brains, animals, neural networks, and AI systems. The latter are central to the plan—not as the answer, but as crash test dummies; machine systems can be rewired, flattened, stretched, or swapped internally in ways human brains cannot, making them ideal testbeds for consciousness theories. One model may use feedback loops. Another may run in a straight feedforward line. One may look biologically realistic. Another may look alien.
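As a toy example of what “same behavior, different architecture” can mean for machine systems, consider this sketch (hypothetical, and far simpler than any real model): one function computes its answer in a single feedforward pass, while the other settles into the same answer through a feedback loop.

```python
import numpy as np

def feedforward(x: float) -> float:
    """One straight pass through a linear layer: y = 2x."""
    W = np.array([[2.0]])
    return (W @ np.array([x])).item()

def recurrent(x: float, steps: int = 60) -> float:
    """A feedback loop whose fixed point is the same answer:
    h <- 0.5*h + x converges to h* = 2x."""
    h = 0.0
    for _ in range(steps):
        h = 0.5 * h + x
    return h

for x in (1.0, -3.0, 0.5):
    assert abs(feedforward(x) - recurrent(x)) < 1e-9  # identical behavior
```

A theory that ties consciousness to feedback, as IIT arguably does, would judge these two systems differently despite their identical outputs; the stress test asks whether it can say why in a principled way.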

If they all produce the same behavior but a theory keeps changing its answer about whether the system is conscious, Hoel sees a red flag. He calls it “logical judo”: build mathematically precise substitutes, expose contradictions, then eliminate weak contenders. The goal is not to crown a winner overnight. Hoel wants fewer flowers—and stronger survivors.


Before labs, algorithms, and philosophical combat, there was a small independent bookstore run by Hoel’s mother, where he worked as a child. Surrounded by shelves, stories, and long afternoons of reading, he first wanted to become a writer. Then, in college, he realized he had an aptitude for science, which eventually led him to consciousness research, including work on how consciousness may shift across different levels of the brain. During graduate school, he wrote a murder mystery built around the science of consciousness. In it, a character wonders whether you could solve the puzzle not by staring directly at awareness, but by tracing its outline.

Hoel compares this image to drawing in negative space—sketching everything around an object until the object finally appears. “Couldn’t you just draw the negative space?” he recalls thinking. Today, that old intuition sits at the heart of Bicameral. Instead of claiming to know exactly what consciousness is, he believes science may now have the tools to hunt it indirectly, using logic, falsifiability, and systematic elimination—until the surviving shape reveals itself.

Not everyone buys the Bicameral approach, though. Some critics argue the bigger risk is mistaking advanced behavior for awareness itself. Among them is Seth Dobrin, PhD, CEO and founder of Arya Labs, who warns that conflating AI intelligence with consciousness is “one of the more dangerous moves happening in AI discourse right now.”

“AI can do real, valuable work in consciousness research. It can synthesize thousands of papers, stress-test competing theories, find where their predictions diverge, and flag which experiments would actually discriminate between them. That accelerates science,” Dobrin says.

But the hard problem is a wholly different challenge. How do physical processes produce inner worlds? How can executing calculations feel like experience on the inside?

“That is not a data problem, and it is not a [computing] problem,” Dobrin says. “Throwing more parameters at it does not make it more tractable.”

Above all, to eliminate a competing theory of consciousness, you need an agreed standard for what counts as a winning answer, Dobrin adds. “We do not have one. The field has not converged on what it is even trying to explain. An algorithm cannot adjudicate that.”

What AI will eventually do (and is already doing) is force researchers to sharpen their theories. “When a model reproduces the behavioral outputs of a conscious system and nobody seriously argues the model is conscious, that exposes how little our current theories actually explain,” Dobrin says.

Interestingly, Hoel largely agrees with this second half of Dobrin’s argument. Even if his theory-crushing framework never explains subjective experience in its deepest sense, he says it could still achieve something the field badly lacks: a systematic way to eliminate weak ideas, expose contradictions, and shrink hundreds of rival theories down to a serious shortlist. “If it fails, we still succeed,” he says.

Besides, he does not paint that process as immediate. When asked whether this kind of winnowing could take years, he replies: “It may take us some years.” He compares the ambition to the Human Genome Project, which mapped the human genome, and LIGO, the observatory that detected gravitational waves.

Most importantly, Hoel believes his approach could still deliver something science has never truly had: the first taxonomy of non-conscious things. Instead of only asking what has awareness, researchers could begin ruling out what almost certainly does not—from simple lookup-table programs to future AI systems that mimic intelligence without inner experience. In his own telling, that alone would mark rare progress in a field built largely on speculation. “Do you know how hard it is to say that something is not conscious?” he says. If researchers ever can, the implications would stretch far beyond Philosophy 101.

Animals, food ethics, and AI could all look very different. “Maybe chickens aren’t conscious. Maybe they’re as non-conscious as rocks,” he says, stressing that he is speaking hypothetically. If a future theory could show anything like that, he believes, debates over vegetarianism would be upended. If AIs one day showed signs of conscious experience, questions of rights and moral status could arrive just as fast.


Stav Dimitropoulos is a Gold and Community Anthem Award–winning journalist, and writes about consciousness, science, and culture for Popular Mechanics, Nature, and the BBC. Her work often explores mind-stretching angles where science meets philosophy. Her debut nonfiction book, Slow, Lazy, Gluttons (Greystone Books, 2026) asks: What if the traits society shames — laziness, darkness, nostalgia, and more — are actually survival superpowers? 
