Strong AI and simulating the brain on a computer
(A previous version of this essay was published here 5 years ago.)
During the past few years (2014-2020), artificial intelligence has been mentioned constantly in news stories and movies. This attention was mainly fuelled by the success of one branch of AI known as Machine Learning, and specifically its sub-branch Deep Learning (neural networks). With these successes, and with the technology being named "Artificial Intelligence," many experts have engaged in a global discussion about its dangers as well as its potential.
The most alarmist voices have talked about Terminator scenarios and the risk to the future of global employment, especially the day when most white-collar work will be automated by these AIs. Billionaires like Elon Musk are funding projects to make AI safe for humanity. Famous scientists like the late Stephen Hawking have warned us that the end was near with the advent of the thinking machine. Other computer scientists, like Ray Kurzweil, have talked up the coming of a transhuman state with what he has coined The Singularity. In 2015 the box office heated up with movies like "Ex Machina" and "Avengers: Age of Ultron," both starring smart and evil AI antagonists.
It seems that if one talks to a smart scientist today, she will confirm that Strong AI is going to happen, and that humanity should prepare to deal with its consequences.
Strong AI is the idea that computers will achieve human-like self-awareness or Consciousness. It is contrasted with Weak AI, which only claims that computers will be capable of mimicking intelligent behavior without being conscious. Some don't seem to care about the distinction. At a panel discussion on Consciousness, the famous psychologist Daniel Kahneman gave the example of a future where a robot will be created that mimics intelligent behavior so well that, for all intents and purposes, it won't matter to people whether or not it has Consciousness: we would treat it as if it were conscious.
Does consciousness matter?
I believe that such a distinction is not superfluous but essential, as Consciousness might make all the difference in a being displaying intelligent behavior. Philosophers have theorized models of human beings that lack all forms of Consciousness, known as Philosophical Zombies or p-zombies. They live and act as though they have Consciousness, and if you ask them whether they are conscious, they answer with a resounding "Yes," yet they are not. And the problem of proving that other people are conscious is a hard one. The "Other Minds" problem is a challenge in epistemology (the theory of knowledge) and goes like this: given that I can only observe the behavior of others, how can I know that others have minds?
This might be a reason why the question of Consciousness was mostly ignored by the pioneers of artificial intelligence, who sought instead to determine whether a computer can do the same things as human beings. One of them was Alan Turing, who is credited with creating the theoretical model for any computer, which was named after him: the "Turing machine." In the latter part of his career, he was interested in the philosophical implications of computation, and so he came up with an experiment to determine whether a computer can maintain a conversation. It is known as the Imitation Game. In one version, a human examiner sits in a room, writes questions on cards, and passes them through an opening to another room. She then gets answers back through the opening, written on cards as well. This model applies equally well to modern-day online chat software. The examiner does not know whether she is talking to a computer or a human being; her task is to judge with whom she is talking. If she judges that she is talking to a human being when, in fact, she is conversing with a machine, then the machine has passed the Turing Test. Notice how practical this test is, and how it is strictly concerned with external behavior. Furthermore, since we don't know whether other people are conscious, why not also assume that this machine is?
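To make the protocol concrete, here is a minimal sketch of an imitation-game session as a text chat, written in Python. The canned-reply bot is a hypothetical stand-in for the machine, not a real chatbot:

```python
import random

def canned_bot(question: str) -> str:
    # Hypothetical machine respondent with canned replies
    # (a stand-in for illustration, not a real chatbot).
    return random.choice(["That's an interesting question.",
                          "I'd rather not say.",
                          "Why do you ask?"])

def human_respondent(question: str) -> str:
    # A real person typing at another terminal would answer here.
    return input(f"(hidden human) {question} > ")

def imitation_game(rounds: int = 3) -> None:
    # The examiner never learns in advance which respondent was picked.
    respondent = random.choice([canned_bot, human_respondent])
    for _ in range(rounds):
        question = input("Examiner's question > ")
        print("Answer:", respondent(question))
    verdict = input("Your judgment, human or machine? > ").strip().lower()
    actual = "human" if respondent is human_respondent else "machine"
    if actual == "machine" and verdict == "human":
        print("The machine has passed this round of the test.")
    else:
        print(f"The respondent was a {actual}.")

imitation_game()
```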
Machines, Consciousness and Strong AI – some definitions
At this point, I need to clarify some of my definitions to make sure you can follow my argument. When I speak of machines in the context of this essay, I am referring to a special type of machine that we commonly call a digital computer. This machine can be modeled theoretically as a "Turing Machine." When I speak of Consciousness, I am referring to this experience that I hope all human beings have, and certainly know that I have, of how it feels to be me (or how it feels to be you). It has such qualities as an internal monologue, thinking in thoughts, flashes, and images: this internal movie that we all play in our mind's eye. This is the best I can do for a definition of Consciousness; the concept is notoriously hard to define.
Finally, I will point out that some definitions of Strong AI ignore Consciousness altogether and focus on the idea that the machine can do the same kind of thinking as human beings. I will, however, for the purposes of this essay, restrict my analysis to the definition of Strong AI that encompasses Consciousness.
Two thought experiments about Consciousness
In Philosophy, Consciousness is known as a Hard Problem: it is claimed to be beyond scientific knowledge or understanding. One thought experiment shows the limits of this scientific knowledge. It is known as the problem of Mary the Color Scientist, and it goes as follows:
Mary is a human girl who has never seen color in her life. She lives in a black and white room. Everything in it is black and white. There are no mirrors to see herself, and she wears gloves and black and white clothing. She has a TV in black and white and a computer in black and white. And for her entire life, she has studied colors. She knows everything there is to know about colors from a scientific perspective, but she has never seen them. One day she gets out of her room into the real world, and she sees colors for the first time. The experiment asks us: does she learn something new? This question points to the subjective nature of our conscious experience. It is something innate in us that can't be translated into scientific knowledge: the idea of the subjective experience of the self. This raises the question: how will the computer that passes the Turing test be programmed? Will it have to experience things on its own to be able to fool the examiner into thinking that it is human?
Another thought experiment shows that external behavior is not enough to prove Consciousness. It was presented by the philosopher John Searle, who tries to circumvent the difficulties of building the AI to show that even when the Turing Test is passed, the machine will not be conscious. It is known as the Chinese Room argument and goes like this:
There's a room with an English-speaking human inside. She doesn't understand Chinese, but she is given a book that contains lookup values for Chinese characters. The human knows, however, how to recognize Chinese characters and can look them up in the book. She is passed questions written in Chinese on cards through the opening of the room (the chat window), and she then looks up, for every combination of Chinese words, an appropriate answer in the book. She then writes down the Chinese reply on a card and returns it to the examiner. Now, if this Chinese room were to pass the Turing test, it can't be said to understand Chinese, since the operator in the room doesn't understand Chinese. She is merely following a syntactic system to manipulate Chinese symbols.
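The operator's procedure amounts to nothing more than table lookup. Here is a minimal Python sketch, with two invented question-answer pairs standing in for the vast rule book a real room would need:

```python
# A toy "rule book": Chinese questions mapped to canned Chinese answers.
# The entries are invented placeholders for illustration only.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I am fine, thanks."
    "今天天气怎么样？": "今天天气很好。",    # "How is the weather?" -> "It is nice."
}

def chinese_room(question: str) -> str:
    # The operator matches symbols she does not understand
    # and copies back whatever the book dictates.
    return RULE_BOOK.get(question, "对不起，我不明白。")  # "Sorry, I don't understand."

print(chinese_room("你好吗？"))  # The room "answers" with zero understanding.
```

The output looks fluent, yet no part of the program stores or grasps the meaning of the symbols, which is exactly Searle's point.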
Strong AI counter-attacks
This Chinese Room argument has drawn several attacks from Strong AI proponents. The first is known as the Systems Reply. It says that while the operator herself doesn't understand Chinese, the system composed of the operator and the lookup book does indeed understand Chinese. Notice that in such a system, there is no center of Consciousness; we can't find that familiar core we all have when we understand a language. Removing what was called the Central Meaner was an essential innovation that Strong AI proponent Daniel Dennett came up with to cross the obstacle posed by the Chinese Room to Strong AI.
Another attack on the Chinese Room is the Simulation Argument. It argues that if we can simulate a human brain in a computer, then we would be able to have artificial Consciousness. Searle replies with an analogy based on water pipes. He presents a system in which water pipes activate the search in the lookup table for an answer to a Chinese question, with the operator in the Chinese room activating these pipes. He argues that this system, while analogous to the operations of the brain, is still a syntactic system and would not be capable of Consciousness.
My own take is that if we can simulate the brain in a computer, that brain will be conscious, even if the computer is a hydraulic system. In fact, the idea that the universe is itself a computer simulation is a hypothesis taken seriously by some scientists. So how would a Strong AI opponent, or a Weak AI supporter, answer?
But the universe is a simulation
We start with the basics of computer simulation. For this, we need to understand the Church-Turing thesis, formulated by the American mathematician Alonzo Church and by Alan Turing. It states that "a function on the natural numbers is computable in an informal sense (i.e., computable by a human being using a pencil-and-paper method, ignoring resource limitations) if and only if it is computable by a Turing machine." Computer simulations of the natural world work because they compute the results of laws that are known to be computable on pen and paper, and it is these laws that get simulated inside the computer. So, for instance, a simulation of a nuclear explosion (to test atomic weapons) would involve reproducing the law E = mc² in a computer, with the calculations made from the initial mass, yield, weight, and other natural variables. But there is an essential point about the Church-Turing thesis when it comes to simulation: it requires the functions being simulated to operate on natural numbers. Sadly for the Strong AI proponents, such functions don't dominate our natural world or the laws that govern our brains.
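As a toy illustration of what a computable law looks like inside a computer, here is a sketch of the mass-energy relation in Python; the one-kilogram input is an arbitrary example, not data from any real weapons test:

```python
# Mass-energy equivalence, E = m * c^2, as a pencil-and-paper-computable law.
C = 299_792_458.0  # speed of light in m/s

def energy_from_mass(mass_kg: float) -> float:
    """Energy in joules released by converting `mass_kg` of matter."""
    return mass_kg * C ** 2

print(f"{energy_from_mass(1.0):.3e} J")  # ~8.988e16 J for one kilogram
```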
In 1961, a scientist named Edward Lorenz was trying to simulate the weather. He was working on the computers of that time, and he plugged in the weather equations. The results he got were real numbers, which he had to cap to the precision allowed by the registers he was using. (A computer register is a form of short-term memory that a computer processor uses to make calculations; registers are limited by the number of bits they can store.) What Lorenz discovered was that if he increased the precision of his registers, or used methods to increase their virtual size, he got different results from his simulation. This was the birth of Chaos Theory, which studies systems that are sensitive to initial conditions. It turned out that for the weather, which he was trying to simulate, the small values after the decimal place that Lorenz capped had huge effects on the physics. It is this sensitivity to initial conditions that gave birth to the analogy of a butterfly flapping its wings and creating a hurricane halfway across the world.
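Lorenz's surprise is easy to reproduce today. The sketch below, assuming NumPy is available, integrates his famous three-variable convection system twice with identical equations and an identical starting point, once in 32-bit and once in 64-bit floating point; by the end, the two trajectories disagree completely:

```python
import numpy as np

def lorenz_trajectory(dtype, steps=30_000, dt=1e-3):
    """Integrate the Lorenz system with simple Euler steps at a given precision."""
    sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
    x, y, z = (dtype(v) for v in (1.0, 1.0, 1.0))
    for _ in range(steps):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        # Re-cast after every step so the precision stays capped, as it
        # was in Lorenz's registers.
        x, y, z = dtype(x + dt * dx), dtype(y + dt * dy), dtype(z + dt * dz)
    return float(x), float(y), float(z)

# Same equations, same start; only the register width differs.
print("float32:", lorenz_trajectory(np.float32))
print("float64:", lorenz_trajectory(np.float64))
```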
Another aspect of the natural world that is hard to simulate is the behavior of subatomic particles and the quantum world. Their descriptive equations require the use of imaginary numbers and probability theory.
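As a minimal illustration of those two ingredients, this sketch uses Python's built-in complex numbers for a two-state quantum system and turns arbitrary example amplitudes into measurement probabilities via the Born rule:

```python
import math

# Arbitrary complex amplitudes for a two-state quantum system.
alpha = complex(0.6, 0.0)  # amplitude of state |0>
beta = complex(0.0, 0.8)   # amplitude of state |1> (purely imaginary)

# Born rule: probability is the squared magnitude of the amplitude.
p0 = abs(alpha) ** 2
p1 = abs(beta) ** 2
assert math.isclose(p0 + p1, 1.0)  # probabilities must sum to one
print(f"P(0) = {p0:.2f}, P(1) = {p1:.2f}")  # 0.36 and 0.64
```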
From these two facts, we know that a computer cannot simulate the natural world faithfully in all its complexities unless it has infinite memory, which is not possible in practice. So computer scientists have contented themselves with simulating the approximate behavior of the natural world. They seek to make an approximation that captures the effect they are studying. For instance, a scientist studying the effect of gravity on a falling ball would not need to simulate all the atomic particles that make up the ball. In a lot of cases, the loss of precision from the approximations does not matter, and the full effect is captured: for example, simulating the weather a few days in advance. In others, it does matter: for example, simulating the weather a month or more in advance. The key is that the physics being simulated is known to operate despite the approximations being made.
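The falling ball makes this concrete. A minimal sketch, assuming the ball is a point mass with no air resistance, captures the effect of gravity without simulating a single atom:

```python
# Free fall as a deliberately coarse approximation: the ball is a point
# mass, gravity is constant, and air resistance is ignored entirely.
G = 9.81  # gravitational acceleration in m/s^2

def fall(height_m: float, dt: float = 0.01) -> float:
    """Return the time in seconds for a ball to fall `height_m` meters."""
    position, velocity, t = height_m, 0.0, 0.0
    while position > 0.0:
        velocity += G * dt         # gravity updates the velocity...
        position -= velocity * dt  # ...and velocity updates the position
        t += dt
    return t

# Close to the analytic answer sqrt(2h/g), about 1.43 s for a 10 m drop.
print(f"{fall(10.0):.2f} s")
```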
But the computer can learn
Knowing the laws governing the objects being simulated is vital. One current approach in artificial intelligence involves the use of machine learning. Machine learning is a technique that consists of capturing large amounts of data points from a phenomenon or natural process and then building a function that matches those data points and can be used to predict future behavior. For example, to identify whether a patient has breast cancer, a machine learning system would first gather a lot of data on the sizes and shapes of patients' tumors, distinguishing those with breast cancer from those without. The algorithm learns and builds a model that predicts breast cancer based on the features that were accumulated. Once this prediction function is presented with new data on tumor size and shape for a patient, it can make a diagnosis based on the history that it has seen. Machine learning finds its roots in statistical methods and has yielded a lot of successes in recent years. It also has been able to take advantage of the massive data accumulated on the web, which has led people to call our era the age of Big Data.
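Here is a minimal sketch of that workflow, assuming scikit-learn is installed; it uses the library's bundled breast cancer dataset, whose features describe tumor sizes and shapes, and fits a logistic regression to predict the diagnosis:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Tumor measurements (size, shape, texture...) with benign/malignant labels.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit a function that maps measurements to a predicted diagnosis.
model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)

# The learned function now diagnoses patients it has never seen.
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```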
Machine learning is also one of the techniques being used and advocated to simulate the brain and create artificial intelligence. The idea is to gather a lot of data points from fMRI scans, brain waves, and neuromechanics, and then construct machine learning systems to simulate new brain functionality. Many simulations of the brain that have been in the media recently (like the 1-second simulation of the brain in Japan) use such techniques. Some argue that these will result in more understanding of how our brains work, if not outright artificial Consciousness. However, others are skeptical.
The famous linguist, AI researcher, and political dissident Noam Chomsky criticized, in one interview, the current use of machine learning in AI. He illustrated his point with the following thought experiment: to simulate the weather outside a window today, we can plug in the laws of meteorology to create a simulation that models those laws and check their interactions to make a prediction. Another way to run this simulation would be to take note of how the weather looks outside the window for a year or more, record the variables we can measure from this window, find a statistical inferential framework linking those variables to the weather, and use it to build a model that predicts tomorrow's weather. This second approach, in his view, might succeed in predicting tomorrow's weather, but it would not discover the fundamental science needed to understand the weather. I would add, however, that in some cases this approach might yield wrong results. For instance, assume that one of the variables recorded was what people were wearing outside the window and that this was used in the simulation. Now imagine that on the day being predicted, there were no people outside, and that threw off the simulation.
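A small sketch of that failure mode, with invented data for illustration: a model that leans on a spurious "what people wear" proxy forecasts well while people are around, then breaks on the day the street is empty:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Invented data: cold days make people wear coats, and today's
# temperature mostly carries over to tomorrow.
today = rng.uniform(-5, 30, size=365)          # today's temperature (C)
coats_seen = (today < 10).astype(float)        # people in coats outside?
tomorrow = today + rng.normal(0, 2, size=365)  # tomorrow's temperature

# The model only gets the window-visible proxy, not the physics.
model = LinearRegression().fit(coats_seen.reshape(-1, 1), tomorrow)

with_people = float(model.predict([[1.0]])[0])   # coats visible on a cold day
empty_street = float(model.predict([[0.0]])[0])  # same cold day, nobody out
print(f"with people: {with_people:.1f} C, empty street: {empty_street:.1f} C")
```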
Now, statistics has a lot of science behind it to determine which variables are relevant for inference and which are not, and that is what gets used to create well-behaved machine learning models. Usually, however, there is also some intuition about how the system being modeled works and what laws it obeys.
The continuous brain
What we know of the inner workings of the brain is still limited, but we do know that its signals are both digital (binary signals) and analog (continuous functions with real values). We also know that it emits and reacts to chemical signals known as neurotransmitters, and that it reconfigures its neural pathways, a phenomenon known as neuroplasticity. So in order to claim that we can reproduce a brain through simulation, we should be confident that we can simulate such continuous processes in the natural world. And from what we know about simulation, we can't assume that an approximate simulation of the brain will capture the mechanics of Consciousness. Neither can we infer that by using machine-learning statistical approximations of neural signals, we will capture its intricate workings. For all we know, the complexity and fine-grained interactions in the brain might be the generators of our Consciousness, and an approximation in a computer may fail to generate this complexity and the essence that is Consciousness.
Indeed, a group of the world's top neuroscientists signed a petition in 2014 against the Blue Brain Project, which aims to simulate the human brain. They believe that without the fundamental science revealing the inner workings of the brain, namely the mechanism of Consciousness, the project is doomed to fail, and that this will set back research funding for future projects.
News of human demise is greatly overstated
So, going back to the media reports of the birth of AI: while they make for good movies, there is no reason to assume that we are on the cusp of a singularity that will usher in a new age of thinking machines. Our current knowledge is not even enough to believe that we will ever get there.