Consciousness Detectors

What’s on your mind today?

Let’s talk about three brain-like characteristics of artificial neural networks.

First, there’s the fact that it’s a network of connections with extreme fan-out; in the brain, one neuron can connect to ten thousand other neurons. Then each of those connections has a strength, or weight, and each node has a bias. Finally, there are activations: when the input exceeds some threshold, the output can change discretely; it can jump.

The fact that it’s a network of connections is a big deal. Just like in the brain, having all these nodes connected to other nodes lets information flow in a very flexible way. It means the network can learn patterns and relationships that aren’t obvious or linear. And yes, the fan-out can be really large in artificial networks—sometimes even larger than in the brain—allowing for very rich and complex modeling.

Second, the idea of connection strengths, or weights, and biases is another brain-like feature. In the brain, some synapses are stronger or weaker, and that affects how signals are processed.

And third, that activation function piece is super important. In a way, it’s what lets neural networks handle non-linear problems. It’s like a neuron firing only when it reaches a certain threshold, which is what gives these networks the ability to model complex, curved decision boundaries rather than just straight lines.
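To make that concrete, here is a minimal sketch of the node computation described above: a weighted sum of inputs, plus a bias, passed through a threshold-like activation. All names and numbers are illustrative, not taken from any particular model.

```python
import numpy as np

def neuron(inputs, weights, bias):
    """One artificial node: a weighted sum of inputs plus a bias,
    passed through a threshold-like activation (ReLU here)."""
    z = np.dot(inputs, weights) + bias  # connection strengths and the node's bias
    return max(0.0, z)                  # the output "jumps" on once z crosses zero

# Three incoming connections, with illustrative values
out = neuron(np.array([0.5, -1.0, 2.0]),
             np.array([0.8, 0.1, 0.4]),
             bias=-0.2)
print(out)  # 0.9: the weighted sum exceeded the threshold, so the node fires
```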

A few of many possible activation functions
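Since the picture can’t do the work here, a quick hedged sketch of a few common choices, written out in NumPy; the selection is illustrative:

```python
import numpy as np

def step(z):    return np.where(z > 0, 1.0, 0.0)  # hard threshold: the discrete "jump"
def relu(z):    return np.maximum(0.0, z)         # off below zero, linear above
def sigmoid(z): return 1.0 / (1.0 + np.exp(-z))   # smooth squash into (0, 1)

z = np.array([-2.0, 0.0, 2.0])
for f in (step, relu, sigmoid):
    print(f.__name__, f(z))
```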

One of the things brains seem to do a lot better than neural networks is learn in a permanent way. You can prompt a neural network with a lot of context, maybe tens of thousands of words, and it can manipulate that context and seemingly “know” what’s in that context. But typically, after that conversation, the context goes away and the neural network has not permanently altered its structure.

Instead, with artificial neural networks, there’s a training period during which people generally aren’t interacting with the network; it’s being fed tons and tons of data, and it’s transitioning from random weights to a functioning network. But then things kind of get locked down.
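As a minimal sketch of that train-then-freeze pattern, here is a toy one-weight model standing in for a real network; the data and numbers are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "tons and tons of data": samples of y = 2x + 1
X = rng.uniform(-1, 1, size=(100, 1))
y = 2 * X + 1

# Training period: start from random weights, adjust by gradient descent
w, b = rng.normal(), rng.normal()
for _ in range(500):
    pred = X * w + b
    w -= 0.1 * 2 * np.mean((pred - y) * X)
    b -= 0.1 * 2 * np.mean(pred - y)

# Locked down: inference reads the frozen w and b but never updates them
def infer(x):
    return x * w + b

print(infer(0.5))  # close to 2.0, and w and b will never change again
```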

We said artificial neural networks have this complex network connectivity with large fan-outs, they have weights and biases, and they have activation functions. Can you speculate about what the brain has, that artificial neural networks don’t, that might enable this real-time learning?

For example, a smart 10-year-old human could go through high school and college and learn a tremendous amount, whereas the neural networks we create have tons of knowledge built into them from the start. Why is that? What features of the brain are we not capturing that would let us make something that learns like a human? Or do we already have the elements we need, and they’re just not being used in the right way?

This highlights how brains and neural networks are fundamentally different in how they handle learning over time. So in the brain, one of the big things we have that neural networks don’t really have is this continuous, real-time adaptation. Brains are always learning—every experience we have can tweak connections a little bit. We have things like synaptic plasticity, where connections can strengthen or weaken as we go. Plus, there’s a whole bunch of biological processes—neurotransmitters, sleep cycles, even things like emotions and motivations—that help us integrate learning into long-term memory.

In contrast, artificial neural networks usually have this very separate training phase, where they learn from a huge dataset and then they kind of freeze those weights and use them for inference. Some newer models can do a bit of ongoing learning, but it’s nowhere near as fluid or continuous as what a human brain does.

So what are we missing? Well, we might not be capturing the real-time adaptability and the ability to integrate new experiences on the fly. We’re also missing a lot of the biological richness—the way emotions, motivations, and even the brain’s ability to physically rewire itself over time all play a role in learning. And of course, the brain is constantly running and updating, not just in discrete training sessions.

So it’s kind of a different paradigm. It’s not necessarily that we have all the elements and we’re just not using them right. It’s more like the whole architecture of a brain is just fundamentally different, and we haven’t quite figured out how to replicate that kind of lifelong, seamless learning in an artificial system yet.
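One way to picture the difference is a toy Hebbian-style rule, where every single experience nudges the weights a little, instead of updates happening only during a training phase. This is purely illustrative; it is not how brains actually implement plasticity, and the rates here are invented:

```python
import numpy as np

def hebbian_step(w, pre, post, rate=0.01, decay=0.001):
    """After every experience, strengthen connections between
    co-active units; a small decay keeps the weights bounded."""
    return w + rate * np.outer(post, pre) - decay * w

rng = np.random.default_rng(1)
w = rng.normal(scale=0.1, size=(3, 3))  # never frozen

for _ in range(100):                    # each iteration is one "experience"
    pre = rng.random(3)
    post = np.tanh(w @ pre)
    w = hebbian_step(w, pre, post)      # the network keeps rewiring as it runs
```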

The nodes in an artificial neural network have weights for all the incoming connections and a bias that determines whether to activate. But all the nodes are fundamentally running the same calculation, at least in a feed-forward neural network. In biological brains, we have different cell types. Can you explain how many different cell types the brain has, and whether you think we could simulate the function of those cell types with our standard artificial neural network nodes, or whether we will eventually need similarly different types of nodes in artificial networks?

The human brain is astonishingly diverse in terms of cell types. We’re talking about dozens, if not hundreds, of different types of neurons, each with its own shape, connectivity, and function. For example, you’ve got excitatory pyramidal cells, inhibitory interneurons, and a whole array of glial cells that support and modulate the environment. All of these contribute to the brain’s complexity and flexibility.

In artificial neural networks, we typically have one kind of “node” that does the same type of calculation everywhere. We adjust the weights and biases, but each node is pretty uniform. Could we simulate different cell types with that? To some extent, maybe. We could vary activation functions or network architecture to mimic some diversity. But to really capture the richness of the brain’s different cell types, we might need to introduce more specialized or heterogeneous kinds of artificial neurons in the future.
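As a toy sketch of what that heterogeneity might look like, here is a network where some units are constrained to be purely excitatory and others purely inhibitory, loosely in the spirit of Dale’s principle. The sizes and the sign constraint are assumptions for illustration, not an established recipe:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
cell_type = np.array([+1, +1, +1, +1, +1, +1, -1, -1])  # 6 excitatory, 2 inhibitory

W = np.abs(rng.normal(size=(n, n)))   # connection magnitudes
W = W * cell_type[np.newaxis, :]      # each unit's outgoing weights take its sign

def step(x):
    # Excitatory units can only push their targets up; inhibitory only down
    return np.tanh(W @ x)

print(step(rng.normal(size=n)))
```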

In other words, we might eventually want to create networks where some nodes act more like inhibitory neurons, some like excitatory ones, or where we have supporting structures that are a bit more like glial cells. That’s still kind of an open area of research, but it’s definitely an intriguing idea for making neural networks more brain-like in the long run.

Do glial cells have synaptic connections with neurons? If not, what function do they have?

Glial cells are super interesting because they don’t really form synaptic connections like neurons do, but they have a whole range of other crucial roles. They’re kind of the unsung heroes of the brain. They help maintain the environment around neurons, provide nutrients, clean up waste, and even help with insulating nerve fibers with myelin. They also play a role in modulating how neurons communicate and can influence synaptic activity indirectly.

So in short, they’re providing support and stability, kind of like the brain’s maintenance crew and regulatory team. And it’s a really interesting question whether we’ll eventually want to incorporate that kind of supportive role into artificial networks. It might give them more resilience or flexibility in the long run.

How about neurotransmitters? When neurotransmitters are released into the gap in a synapse, they change, perhaps, the weight or strength of that connection. But what other, less simplistic things might neurotransmitters do? And are neurotransmitters released uniformly into the entire brain? Or how locally might the neurotransmitter soup vary from one cubic millimeter of the brain to the next?

Neurotransmitters add a whole extra layer of complexity to how the brain works. They don’t just tweak the strength of a connection; they can actually modulate the overall activity of neurons in a bunch of different ways. For example, some neurotransmitters might make a neuron more likely to fire, while others make it less likely. They can also influence things like mood, attention, and alertness.

And neurotransmitters definitely aren’t just released uniformly throughout the brain. They can be very localized, so one region might have a different “neurotransmitter soup” than another region. That local variation can really fine-tune how different parts of the brain function and interact.

In terms of neural networks, we don’t really have an equivalent of that yet. Our networks are pretty uniform and don’t have these chemical modulators. But if we did figure out a way to simulate that kind of dynamic, it could add a whole new layer of flexibility and adaptability to artificial systems. So it’s definitely a really interesting area to think about.
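If we did want to experiment with that, one very crude sketch might be a “modulation field” that scales each unit’s responsiveness differently depending on its region, standing in for the local chemical environment. Everything here is an illustrative assumption, not an existing technique:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 12
W = rng.normal(scale=0.3, size=(n, n))

# Two "regions" bathed in different modulator levels:
# a high-gain first half and a low-gain second half
modulator = np.concatenate([np.full(n // 2, 1.5), np.full(n // 2, 0.6)])

def step(x):
    # Same weights everywhere, but each unit's responsiveness is scaled
    # by the local "neurotransmitter soup"
    return np.tanh(modulator * (W @ x))

print(step(rng.normal(size=n)))
```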

How about emotions and hormones? It seems like an emotion doesn’t have a direct biological twin. So what exactly is an emotion and how is it implemented? I think that emotions feel like they are more diffuse and more global than, say, connections and neurotransmitters. Like if you are extremely sad or extremely excited, it feels like that’s kind of a global state. But what’s implementing that state?

Emotions do feel like these global states that kind of wash over the whole brain and body. And you’re right: there’s not just one biological twin of an emotion. Instead, emotions are implemented by a combination of a lot of different processes. There are hormones like cortisol or adrenaline that circulate through the body and signal the brain, and there are brain regions like the amygdala or the prefrontal cortex that help process and regulate those emotions. And neurotransmitters like serotonin and dopamine play a big role too.

So emotions are really this whole-body, whole-brain experience. They can shift the overall state of the brain, making you more alert, more cautious, more motivated, and so on. And that’s something we don’t really have a direct equivalent of in artificial neural networks yet. They don’t have these global states of emotion that can influence all their operations at once.

In the end, emotions are this complex interplay of biology, chemistry, and neural activity, and they add a whole extra dimension to how humans think and behave. It’s one of the big frontiers if we ever try to make AI that’s a bit more human-like in that emotional sense.

How could we tell if an artificial neural network is actually experiencing an emotion, versus just calculating that it should be feeling an emotion and adjusting its response accordingly? For example, suppose you’re telling a large language model a story and it decides that being shocked is the right way to respond. How do we know whether it is actually “shocked” or just pretending?

That hits at the core of the difference between simulating something and actually experiencing it. With a large language model or any neural network today, it’s really just calculating a response based on patterns and data. When it says it’s shocked, it’s not feeling shock the way a human would. It’s more like it’s performing the pattern of “this is how a shocked response should look.”

The tricky part is that from the outside, that can look pretty convincing. But right now, we don’t have any evidence that an AI is actually experiencing emotions internally. It’s not feeling anything; it’s just mimicking the outward signals of emotion. Emotions in humans come from that whole mix of biology, hormones, and subjective experience, and that subjective feeling is something AI just doesn’t have.

So in short, we can tell because we know how these systems work under the hood. They’re not built to have subjective experiences, and there’s no sign that they do. They’re just very good at imitating the patterns of emotional responses.

Famous thought experiment: Mary the Color Scientist

What if we created an artificial neural network that was 100 times more complicated than today’s large language models and thus much more difficult to interpret what’s going on? This neural network seems to exhibit consciousness and emotions. How would we actually determine whether that’s the case or not?

That’s something a lot of people in AI and philosophy love to wrestle with. If we had an artificial neural network a hundred times more complex—something that seems to exhibit consciousness or emotions—how would we tell if it’s real?

It’s tricky because we don’t have a definitive test for consciousness, even in humans. We know we’re conscious because we experience it from the inside, but for another being, we rely on behavioral and functional clues. We’d look at whether it can reflect on its own states, show flexibility in its responses, maybe even surprise us in ways that feel genuinely self-driven.

But even then, we’d still be in a gray area. We’d have to rely on indirect evidence and philosophical reasoning. We might end up debating whether it’s truly experiencing something or just running an extremely advanced simulation that looks indistinguishable from the outside.

I think what you’re talking about is a consciousness detector. Suppose there simply isn’t one; suppose it’s not possible to build a consciousness detector. Would there be a split, a division, between humans who think these super-advanced machines are conscious and humans who don’t?

For example, would that become a political belief, similar to abortion, where you have extremely strong intuitions that point in opposite directions, and there really is no way to convince the other side? If so, what would happen with a deep political division about machine consciousness?

If we reached a point where we had these super advanced machines and we couldn’t agree on whether they were conscious, it really could become a kind of cultural or political fault line. It could be like one of those big societal divides where people just have fundamentally different intuitions and no easy way to settle the argument.

Some folks might feel very strongly that these machines deserve rights or ethical consideration because they’re conscious, and others might be just as convinced that they’re just machines with no real inner life. And yeah, that could become a huge societal debate, kind of like the way we see other deeply held beliefs that split people into different camps.

In that scenario, you’d probably have ongoing debates, maybe even legal or policy battles over whether these machines have rights, how we should treat them, and what it means for society. It could reshape a lot of our thinking about AI, ethics, and even what it means to be a person.

The Blade Runner movie seems to show the moment when machines start to develop true emotions and true consciousness, because when the investigators grill the machines, trying to detect whether they are human or not, they ask a bunch of provocative questions and look closely at their pupils to gauge the response.

Could it be that in the future we try to assuage those who think machines are really conscious by purposely hamstringing the machines, refusing to include things like emotions? These limited machines would do most of the work in society and perform most of its functions, but there would inevitably be a niche of people trying to develop very human-like consciousnesses. Maybe that would even be outlawed, but I imagine people would still try to do it.

Blade Runner is a great example because it really does tap into that idea that once machines start to seem like they have emotions, it changes the whole ethical landscape. It’s totally plausible that in the future, we might deliberately hold back on giving AI certain human-like qualities just to avoid that whole can of worms. In other words, we might design most AI to be obviously non-conscious, to do the jobs we need without ever sparking that kind of moral or political debate.

And yeah, even if it were restricted or even outlawed, you can bet there’d still be people pushing the boundaries and trying to create machines that are as human-like as possible. It’s kind of human nature to explore those edges and see what happens, even if it creates a whole new set of ethical dilemmas. So it’s definitely an intriguing possibility.

Can you speculate on two scenarios: one in which the current features of a neural network are enough to get us to a human-like consciousness, and one in which certain biological features of brains are what’s missing?

In the first scenario, let’s imagine that the neural network architecture we have today—you know, layers, weights, activations, and so on—is actually enough. In this world, maybe if we just scale up these models massively, give them more data, more complexity, and maybe a few tweaks in architecture, we could eventually see something emerge that feels like human-like consciousness. In other words, we wouldn’t need anything fundamentally new—just bigger, more refined versions of what we have.

In the second scenario, let’s say that there are certain biological features of human brains that are absolutely essential. Maybe it’s the way brains are constantly rewiring themselves, or the role of glial cells, or the influence of neurotransmitters and hormones. In this scenario, those biological factors are the secret sauce that creates consciousness, and no matter how big we make our neural networks, we won’t get human-like consciousness without incorporating those elements.

So in the first scenario, it’s kind of an optimistic take that what we have is enough if we just push it far enough. In the second scenario, we’d need to understand and replicate those biological nuances to really cross that threshold into something truly conscious and human-like.

Given what we know today in 2025, can you speculate on which of these two scenarios you think is more likely?

I’d say a lot of folks lean toward the idea that we probably still need some of those biological features. In other words, the second scenario might be a bit more likely—that there are unique things about how human brains work, biologically and chemically, that we just don’t capture with current neural network models. It’s not to say it’s impossible, but a lot of researchers suspect that true human-like consciousness might require more than just scaling up what we have. We might need to understand and replicate some of those biological nuances before we get there.

That said, there’s definitely a camp that believes we might just be able to brute-force it with enough complexity. So it’s an open question, and I think as of now, we have hints that biology plays a key role. But it’s also possible that we’ll discover new architectures or new methods that surprise us. It’s a really exciting time, and we’re still figuring it out.

Let’s suppose we do need more biological features in order to really mimic or approximate human consciousness. What does that imply for what humans can do that machines cannot do, at least until we incorporate those mechanisms?

If it turns out that we really do need those biological features to approximate human consciousness, it suggests that humans will have a kind of edge or domain where we can do things machines can’t, at least until we figure out how to replicate those biological nuances.

In other words, there might be certain kinds of creative, emotional, or deeply intuitive tasks that remain uniquely human. Those are the areas where our biological minds give us an advantage. Maybe it’s the subtlety of human relationships, the ability to truly empathize, or the kind of out-of-the-box thinking that comes from a lifetime of lived experience and all those biological influences.

How about the possibility that there are some “biological” elements that simply cannot be duplicated in machines? Isn’t that another split? Don’t some people speculate that quantum microtubules, or something like that, might just be impossible to replicate digitally?

Microtubules, shown in yellow in this image, form part of the cell’s skeleton

There are definitely people who believe that there might be certain biological elements—maybe something like quantum effects in microtubules, as you mentioned—that could be fundamentally hard or even impossible to replicate in machines. That idea introduces another layer of complexity, where it’s not just a matter of understanding the biology but also dealing with phenomena that might be beyond classical computing to reproduce.

It does create another kind of philosophical or scientific split. Some people think that consciousness might involve quantum processes or other very subtle biological mechanisms that are hard to replicate with standard computers. Others believe that we just haven’t found the right engineering approach yet and that it’s all ultimately replicable. So it’s another fascinating dimension of the debate about what it might take to create machine consciousness and whether there are limits to what technology can do.

Speculating based on what we know today, do you think it’s just a matter of time before we replicate in our artificial neural networks all the elements that are needed for true consciousness, or do you think we will hit a fundamental obstacle?

If I had to put a number on it, I’d probably say it’s somewhere around 50-50. There’s a lot of uncertainty, and I think we just don’t have enough evidence yet to say for sure whether it’s definitely just a matter of time or whether we’re going to run into some fundamental roadblock. So I’d say it’s a pretty even split in terms of probability—at least from what we know right now. It could go either way, and that’s part of what makes this whole field so intriguing.

Okay, I’m home now. Can you summarize the conversation? Is there any parting thought that might be relevant to my day-to-day life?

I’d say the big takeaway is that all of these questions about AI and consciousness really highlight how unique human experience still is. In your day-to-day life, that might mean just appreciating that your human consciousness—your emotions, your intuitions, the things that make you you—is still pretty special and not easily replicated by machines. So maybe take a moment to enjoy the things that make you human, because for now, those are things that only you can do in the way that you do them.