Will Machine Consciousness Be Achieved in Our Lifetimes?

“There is no threshold that makes us greater than the sum of our parts, no inflection point at which we become fully alive. We can’t define consciousness because consciousness does not exist”.

– HBO’s Westworld.

While viewers wait, patiently or otherwise, for the final season of HBO’s fantasy blockbuster Game of Thrones, the American network and production giant will be happy that the almost two-year gap since the penultimate batch of episodes has seen the ratings slack taken up by a new show – Westworld. The first season attracted viewing figures on a par with Game of Thrones, and the recently released second looks set to improve slightly on that.

Set in a fantasy near-future, the show centres on the futuristic theme park Westworld, inhabited by ‘hosts’. These hosts appear, to all intents and purposes, to be human. In fact, the popularity of Westworld has been built on providing guests with a simulated experience so indistinguishable from reality that they can only tell the difference because they know the hosts are, in fact, robots – robots whose physical bodies are largely the same flesh and blood as ours, and whose artificial intelligence is so advanced that they themselves do not realise they are not human. The hosts’ algorithms include backstories they experience as ‘memories’. They feel emotions and respond to physical stimuli in the same way we do. The only difference is that they were built by, and are controlled by, ‘real’ humans. Checks and balances are in place to keep hosts within the ‘loop’ of the theme park narratives they are part of.
Of course, it wouldn’t be a very interesting show otherwise: some of the hosts start to break out of their ‘loop’.

They become aware of the greater ‘reality’ behind their existence – that they are robots built by humans. Westworld is so popular because it ticks all the boxes of a highly engaging narrative – empathy, suspense, surprise, character identification and so on. And like the Westworld theme park itself, it also has another, deeper level.

That is what seems to have elevated it, in the eyes of viewers, from a popular show to a great show. The viewer is provoked into thinking about the nature of reality, what it is to be human and sentient, and the line between what is ‘real’ and what is not. Even more crucially, it poses the question of whether there really is a line at all.

From Almost-Reality Fantasy to Almost-Reality Reality
Westworld’s plot is also timely. Recent leaps in the Artificial Intelligence sub-categories of Machine Learning and Deep Learning are rapidly expanding what machines can do and how robot intelligence processes information. Importantly, we are not just getting better at programming more intricate, deep and detailed sets of instructions for software to follow. We are getting better at understanding how the human mind processes information, and starting to build rough facsimiles of that architecture out of code. The way our minds process information is integral to how they subsequently arrive at conclusions. Those conclusions are what we ‘know’, and the process of arriving at them is learning.

Machine Learning is the science of how the coded approximation of our own learning architecture is beginning to allow software to ‘learn’. The software doesn’t just follow a matrix of fixed rules but comes to its own conclusions based on data it is exposed to. It then updates the rules it follows based on those data-informed conclusions.
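To make that concrete, here is a minimal sketch of the idea in Python – a toy perceptron whose ‘rules’ (its weights) start empty and are only updated when the data proves them wrong. The dataset, learning rate and update rule are invented purely for illustration:

```python
# Toy training data: (inputs, label) pairs, invented for the example.
data = [((2.0, 1.0), 1), ((1.5, 2.0), 1),
        ((-1.0, -0.5), -1), ((-2.0, -1.5), -1)]

weights = [0.0, 0.0]  # the 'rules' start empty
bias = 0.0
lr = 0.1              # learning rate

for _ in range(10):   # a few passes over the data
    for (x1, x2), label in data:
        score = weights[0] * x1 + weights[1] * x2 + bias
        prediction = 1 if score > 0 else -1
        if prediction != label:        # wrong conclusion: update the rules
            weights[0] += lr * label * x1
            weights[1] += lr * label * x2
            bias += lr * label

print(weights, bias)  # a decision rule inferred from data, not hand-coded
```

After training, the weights encode a decision rule no programmer wrote explicitly – which is the sense in which the software has ‘learned’ from the data rather than followed a fixed matrix of rules.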

Today’s Machine Learning algorithms are still a long way from the human mind. In some ways they are superior – they can store, retain and process far more data, far more efficiently. But there are still significant limitations. Creativity, expression and the ability to instinctively read the hugely complex sets of rules – and the exceptions to them – that humans somehow navigate without us yet fully understanding how are still beyond coded algorithms. But the pace at which our understanding of neuroscience is moving, and our ability to replicate its architecture through code, is impressive. Some say alarming.

From antiquity to today, both Western and Eastern schools of philosophy have wrestled with the question of how sentience and consciousness should be defined. These concepts play an intrinsic role in how we perceive our ‘humanity’ as distinct from other forms of life. This raises the spectre of a theoretical point at which Machine Learning might cross the line into sentience. Popularised by sci-fi, the theoretical point at which AI becomes self-aware and potentially autonomous is referred to as the ‘Singularity’.

There is a school of scientists who believe it won’t be long before we can replicate the architecture of our own minds accurately enough to build code-based AI that exhibits all of the learning ability we have as humans – but without the limits on data storage and processing speed that our own minds are subject to. These scientists believe code-based machine consciousness could feasibly be reached within the next few decades; some think even sooner.

Another school of thought holds that their scientific peers underestimate the complexity of the human mind. They agree that Machine Learning will see impressive advances in capability over the coming years and decades, and that in many applications AI will be more efficient and ‘smarter’ than the organic human brain. However, they argue that the Singularity camp fails to acknowledge the true complexity of the human brain and what ‘sentience’ involves.

What are the scientific arguments for and against Singularity being an imminent reality rather than science fiction? And if it is theoretically possible, what do we have to start seriously thinking about as an international community when it comes to ethics, checks and balances?

Arguments for Foreseeable Machine Consciousness
Sentience is defined as the ability to perceive, feel and experience. This doesn’t logically exclude the possibility of non-organic sentience. We have no fixed criteria for the definition of non-biological sentience, but if we stick to the accepted definition this is not a major issue. The question then becomes whether a machine can be conscious. The subtle difference between sentience and consciousness is awareness of the ‘self’ within surroundings – the ‘what it’s like’ of subjective experience.

For us, being conscious is self-evident, but the problem is that consciousness is intangible. It can’t be picked up on a brain scan or assessed in any other objective way, and we don’t know what physical process gives rise to experience and awareness. From a scientific point of view, however, we must presume that what we define as ‘consciousness’ is a result of the purely physical processes of our brains’ neurology, and not some sprinkling of magic fairy dust.

In determining the realistic possibility of conscious AI evolving, scientists are focusing their efforts on defining consciousness. A 2017 paper by neuroscientists Stanislas Dehaene, Hakwan Lau and Sid Kouider, published in the journal Science, breaks consciousness down into three categories, only one of which AI has so far mastered.

The first kind is unconscious processing. This is what allows us to spot a chess move or recognise a face, and it is the kind of intelligence AI is already getting pretty good at. The second kind is the ability to maintain a range of thoughts at once. This, for example, is what allows us to plan for the longer term, and is something that distinguishes humans from other animals. The third kind is being able to obtain and process information about ourselves, which is what allows us to reflect on and learn from mistakes. AIs are still not very good at the second and third kinds, but progress is rapid, and many researchers expect AI to master them within the next decade or two.

Imagine a machine combining Machine Learning AI with sensory input technology, not specifically programmed to recognise itself but given the data needed to recognise a ‘robot’. If the machine, standing in front of a mirror, instructed itself to raise its arm and ‘saw’ the movement take place, would it develop a sense that the arm in the reflection was its own, and not that of an external robot it was controlling? Many information scientists, including Google director of engineering Ray Kurzweil and Softbank CEO Masayoshi Son, believe so – and that this moment isn’t far away. Both predict it before 2050, with Kurzweil’s bet being 2029 and Son’s 2045. That would be a sense of self and self-awareness which, many would argue, we would have to define as machine consciousness.
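One way to make that thought experiment concrete is contingency detection: compare the motor commands the machine issues with the motion it observes in the mirror. The toy simulation below is a sketch only – the noise model, threshold and ‘mirror’ are all invented for illustration – but it shows how a strong command–observation correlation could let a machine infer that the reflected robot is itself:

```python
import random

def observed_motion(command, is_self):
    """Simulated mirror observation: if the robot in the mirror is
    ourselves, its motion tracks our command (plus sensor noise);
    otherwise it moves independently of us."""
    if is_self:
        return command + random.gauss(0, 0.1)
    return random.gauss(0, 1.0)

def self_match_rate(is_self, trials=200, tolerance=0.3):
    """Fraction of trials in which the observed motion matches the
    motor command we issued."""
    matches = 0
    for _ in range(trials):
        command = random.choice([-1.0, 1.0])   # e.g. lower / raise the arm
        seen = observed_motion(command, is_self)
        if abs(seen - command) < tolerance:
            matches += 1
    return matches / trials

# High contingency between 'what I told my arm to do' and 'what the
# robot in the mirror did' is evidence that the reflection is me.
print("mirror shows me:     ", self_match_rate(is_self=True))    # close to 1.0
print("mirror shows another:", self_match_rate(is_self=False))   # much lower
```

Nothing in this sketch amounts to consciousness, of course; the open question is whether a sufficiently rich version of such a self-model would.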

Arguments Against Foreseeable Machine Consciousness
However, not all scientists agree that machine consciousness will arise out of AI. Giulio Tononi of the University of Wisconsin-Madison leads a team which has developed an influential mathematical theory of consciousness, Integrated Information Theory. Their model holds that a conscious mind integrates information into a whole that cannot be broken back down into independent components. When we see a red triangle, the brain does not register it as a ‘colourless triangle plus a shapeless patch of red’.

Our brains fully integrate information into a whole, whereas computers merely compile information. The example given is that of a digital photo album: the album is easily modified by deleting an individual picture. When our brains create memories that involve images, however, those images are integrated into a complex bank of previous memories, and it is then very difficult to edit one ‘scene’ out of our brain. AI, Tononi argues, cannot completely integrate data in this way.
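The contrast can be sketched in code. Below, the ‘album’ keeps each memory as a separate item that can be deleted cleanly, while the ‘integrated’ store superimposes the same kind of patterns into a single Hopfield-style weight matrix in which every weight mixes contributions from every pattern. The pattern sizes and storage scheme are invented for illustration – this is a loose analogy, not Tononi’s actual mathematics:

```python
import numpy as np

rng = np.random.default_rng(0)

# 'Photo album': memories kept as separate, independent items.
album = [rng.choice([-1, 1], size=16) for _ in range(3)]
del album[1]          # one picture removed cleanly; the others are untouched

# 'Integrated' memory: patterns superimposed into a single weight
# matrix (Hopfield-style outer-product learning).
patterns = [rng.choice([-1, 1], size=16) for _ in range(3)]
W = sum(np.outer(p, p) for p in patterns)

# No single entry of W holds pattern 1 on its own; every weight mixes
# contributions from all three patterns, so 'deleting one scene' has
# no clean analogue here.
contribution = np.outer(patterns[1], patterns[1])
print("weights touched by pattern 1:", np.count_nonzero(contribution))  # all 256
```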

Their paper concludes that if consciousness is accepted to be based on total integration, then computers cannot be conscious.

Conclusion
The answer to the question of whether machine consciousness will become a reality within our lifetimes has to be: we don’t know. That is because we still don’t fully understand where the kind of consciousness we have as humans resides. If anything, the development of AI is likely to help us narrow the field when it comes to understanding our own consciousness. When we are able to create accurate machine facsimiles of our own thought processes, as far as we understand them, the result will show us whether that is where consciousness is to be found, or whether it lies in something more subtle. Right now, we’re still only guessing.

Nonetheless, the possibility of machine consciousness is tangible enough that there is now an urgency for ethical and regulatory frameworks around AI to be agreed internationally. Not least among the questions to settle is an objective standard for how we will recognise and accept machine consciousness if and when it does appear.
