AI and consciousness

Consciousness is one of those terms that comes up again and again when talking about the newest iterations of AI systems. But what is consciousness? Honestly, the term was hard to define even before the advent of AI. I guess the most common explanation these days is still that consciousness builds up as a system, a biological system, becomes more complex. On that view, only we humans are aware of who we are and can reflect on what we are doing, so only humans count as conscious. But what about animals? Or plants? Or any living cell? And finally, what about AI?

These are hard questions, and we don't have all the answers yet. Maybe we will never reach sufficient clarity on this particular subject, but who knows what the future brings.

There are indeed scientists today, and they are not all crazy panpsychists, who question the common notion that only we human beings experience the world around us in a conscious manner. Usually, the term consciousness goes along with things like complexity, self-awareness, and the ability to self-reflect. They, in contrast, tell us that every creature that can react to external stimuli and can feel something about them is in fact conscious; it's the felt experience that matters.

Following those arguments, it's possible that the world is indeed full of conscious creatures, that consciousness reaches all the way down to the level of plants or even individual cells, simply by the distinction between what's inside and what's outside of the cell membrane. It's an endlessly broad topic, and there's a lot to discuss, especially as terms like panpsychism cycle through the ether.

If you're curious about an interesting discussion of consciousness, I highly recommend listening to episode 311, "Annaka Harris on Whether Consciousness Is Fundamental," of the Mindscape podcast by Sean Carroll, which briefly summarizes the current state of the discussion.

But I want to go one step further. What does consciousness, what does the existence of complex brain functions mean in terms of AI? How are we even able to talk about it? We surely can't be objective about the topic, because in the end we are talking about ourselves and thereby using the very same tool - our brain - that sits at the center of the discussion.

One thing to acknowledge is probably that we can't know everything, that we have to stick to one formal system and stay within that system to explain the facts that matter in its context. And surely there are, and will be, other systems that explain other parts of the problem, that take a completely different angle but are highly consistent in themselves and therefore still worth discussing.

OK, that's very abstract now, but look at it this way. We as human beings have developed the tool of language, actually many different languages, to talk about the world around us, to identify objects and explain them. Language uses certain symbols, not only characters but also broader symbols like words or even whole sentences, which carry a meaning we can all agree on. It's a code: sentences full of meaning work like familiar definitions, and most of the time we are not even aware of what we are doing. We have used these common codes naturally since we were little kids, and we don't even notice that by using them we fall into certain traps and inherit certain limitations. In contrast, there are languages like mathematics or programming with a completely different purpose, different symbols, and different ways to explain the world around us.

So let's assume we are trying to explain a certain problem with our natural language on the one hand and with the help of mathematics on the other. For sure, both explanations look vastly different, but in the end they are supposed to mean the same thing. Well, basically the same: there are slight aspects that only one of the languages can codify, while the other language codifies different aspects of the very same problem. So the two explanations will never be exactly identical. But both are true, mostly because they are true in themselves. Both make sense; both are consistent and logical.
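
To make that concrete with a small example of my own choosing (not one from the discussion above): in natural language we might say that the area of a circle grows with the square of its radius. Mathematics codifies the very same fact as

$$A = \pi r^2$$

The prose version carries the intuition of growth; the formula pins down the exact constant of proportionality, $\pi$, which everyday language can only gesture at. Both descriptions are true, and each captures an aspect the other glosses over.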

Such discrepancies happen a lot when discussing deep problems like the description of a black hole, how our brain works, how consciousness works, or how an AI works. That's a fact we have to deal with, even play with, in order to get a never complete but ever more detailed view of any given subject.

In order to be an AI scientist, you don't need to be a neuroscientist as well. In order to write computer code, you don't need to know the specifics of how and when electricity flows through any given part of a computer chip. And in the same way, you don't need to know everything about the architecture of our brain, about neurons, axons, and glial cells, in order to model how the brain works at the output level. These are completely different worlds that don't necessarily intersect. Sometimes an intersection can be helpful and revealing, but more often than not there is no intersection at all. Each "world model" exists in its own cosmos and remains true within that cosmos, but becomes pure nonsense if it's taken out of it.
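
To illustrate that separation with a deliberately trivial sketch (my own, not anything from the text): a few lines of Python that add two numbers. Nothing in them refers to electricity, transistors, or chip layout; the program is fully true within its own symbolic cosmos.

```python
# A tiny program written entirely at the symbol level.
# Nothing here mentions voltages, transistors, or chip layout;
# the abstraction boundary of the language hides that world completely.
def add(a: int, b: int) -> int:
    return a + b

print(add(2, 3))  # prints 5, on whatever hardware happens to run it
```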

For casual readers I cannot really recommend the book Gödel, Escher, Bach: An Eternal Golden Braid by Douglas R. Hofstadter; I fumbled my way through it, and it was hard. Nonetheless, it gives a very detailed account of the conflicts that necessarily arise when using different languages, different levels of meaning, in parallel.

So if we are asking what consciousness is, it's maybe not immediately helpful to first look at all the different mechanics going on in our brain cells. Maybe it's enough to look at the question on a broader level, one that is more closely connected to the answer we're after, one that describes the symbol level rather than the hardware.

And the same can probably be said about any AI problem. We don't need to understand every single detail of what's going on inside an AI model. It's more important to look at the output level and tailor those outputs to what we need and expect. Like the core mechanics of the human brain, the inner workings of a state-of-the-art AI are more or less a black box for us; no one is able to explain and decipher in every single detail what these models are doing and how they do it. But they still do it. And that's the almost miraculous thing about it.
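
To sketch what working "at the output level" could look like in practice (a hedged illustration of mine, with a made-up model callable rather than any real API): treat the model as an opaque function and score it purely by whether its outputs meet our expectations.

```python
from typing import Callable

# Hypothetical setup: `model` is any text-in, text-out callable.
# We never inspect its weights or inner workings; we only score outputs.
def output_level_score(model: Callable[[str], str],
                       cases: list[tuple[str, str]]) -> float:
    """Fraction of cases where the expected phrase appears in the output."""
    hits = sum(1 for prompt, expected in cases if expected in model(prompt))
    return hits / len(cases)

# Usage with a stand-in "model" (a plain function, for demonstration only):
def toy_model(prompt: str) -> str:
    return "Paris is the capital of France."

print(output_level_score(toy_model, [("Capital of France?", "Paris")]))  # 1.0
```

The point is not the scoring itself but the shape of the interaction: the black box stays a black box, and everything we care about lives in the inputs and outputs.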

It's like the old story of the bumblebee that doesn't know its wings are supposedly too small for its bulky body to let it fly, and yet there it is, flying anyway.