Is artificial intelligence intelligent?
Between March 9 and 15, 2016, a historic clash took place in Seoul, the capital of South Korea, between Lee Sedol, 18-time world champion of the millennia-old board game Go, and the AlphaGo computer program, developed by the British firm DeepMind Technologies, now owned by Alphabet, Inc., Google's parent company.
The five-game series was followed by more than 60 million people, a measure of the interest aroused by a program that could master the oldest board game still played today. The simple rules and moves of Go give it a level of abstraction that made it one of the four essential arts to study in ancient China, and it is still taught around the world to develop abstract and strategic thinking.
Lee Sedol after the Google DeepMind Challenge Match. Photo by www.news.cn
A series that began with a confident world champion, whose only doubt was whether he would win 5-0 or 4-1, quickly became a distressing event when AlphaGo won the first game, then the second, then the third. In the fourth game Lee Sedol achieved a victory that the public celebrated as a triumph for all humanity, but he was defeated again in the last game, closing the series 4-1 in favor of AlphaGo (it is worth watching the Netflix documentary or reading the story published in The Atlantic).
Throughout the series AlphaGo deployed a style of play as unusual as it was effective, leading Lee Sedol to declare that it would change the way Go is played. That style reached its climax with the famous move 37 of the second game, so unexpected that it drew comments like "no human would have made that move" and "what was AlphaGo thinking?".
This question arises whenever we witness an impressive performance by a computer program, and its answer is at the heart of current debates about artificial intelligence. When AlphaGo plays, is it thinking? When computers make medical diagnoses, answer questions, paint pictures or write poems, are they thinking? Is artificial intelligence intelligent?
The simplest answer is no. Artificial intelligence programs use sequences of instructions called algorithms to quickly analyze large amounts of information and find patterns, which are then used to make decisions on the same topic. In this way, a classification program analyzes tens of thousands of photos of dogs and cats, determines which features characterize each animal, and then accurately predicts whether a new photo shows a dog or a cat. But that's not thinking, is it?
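The pattern-finding idea can be sketched with a toy nearest-centroid classifier. This is not how production image classifiers work (those learn features from raw pixels with neural networks); here each "photo" is reduced to two hypothetical, hand-picked features, purely to illustrate the train-on-examples, predict-on-new-data loop:

```python
def centroid(samples):
    """Average feature vector of a list of samples: the learned "pattern"."""
    n = len(samples)
    return tuple(sum(s[i] for s in samples) / n for i in range(len(samples[0])))

def train(labeled_samples):
    """Group the examples by label and compute one centroid per class."""
    by_class = {}
    for features, label in labeled_samples:
        by_class.setdefault(label, []).append(features)
    return {label: centroid(group) for label, group in by_class.items()}

def predict(model, features):
    """Assign the class whose centroid is closest to the new sample."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: dist2(model[label], features))

# Hypothetical training data: (ear_pointiness, snout_length) -> label
data = [((0.9, 0.2), "cat"), ((0.8, 0.3), "cat"),
        ((0.3, 0.8), "dog"), ((0.2, 0.9), "dog")]
model = train(data)
print(predict(model, (0.85, 0.25)))  # prints "cat"
```

The feature names and numbers are invented for the example; the point is only that "learning" here reduces to averaging examples and comparing distances, with no reflection involved.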
Object detection and image classification with the Google Coral USB Accelerator. Photo by PyImageSearch
AlphaGo combines three artificial intelligence techniques: a search algorithm that reviews millions of continuations from any board position, deep neural networks that estimate the probability of winning after each candidate move, and reinforcement learning that retains the most successful moves after millions of games of practice (technical details can be found here). In this way, AlphaGo explores the possibilities opened by each alternative, estimates which ones give the greatest chance of winning, and takes advantage of past experience. But that's not how a human being plays, is it?
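The interplay of those three pieces can be sketched on a much smaller game. The sketch below is not AlphaGo's actual algorithm (which uses Monte Carlo tree search and deep networks): one-pile Nim stands in for Go, a one-move lookahead stands in for the search, a value table stands in for the network's win estimate, and averaging the outcomes of random self-play stands in for reinforcement learning. All names are illustrative:

```python
import random

def legal_moves(stones):
    """In this Nim variant you take 1-3 stones; taking the last stone wins."""
    return [m for m in (1, 2, 3) if m <= stones]

def choose(stones, value):
    """One-ply search: prefer the move that leaves the opponent the
    position with the lowest estimated win probability."""
    return min(legal_moves(stones),
               key=lambda m: value.get(stones - m, 0.5))

def self_play(episodes=3000, seed=0):
    """Random self-play; average the outcomes into a value table
    (state -> estimated win probability for the side to move)."""
    rng = random.Random(seed)
    wins, visits = {}, {}
    for _ in range(episodes):
        stones, visited = 10, []
        while stones > 0:
            visited.append(stones)
            stones -= rng.choice(legal_moves(stones))
        reward = 1.0                      # the last mover took the final stone
        for s in reversed(visited):
            wins[s] = wins.get(s, 0.0) + reward
            visits[s] = visits.get(s, 0) + 1
            reward = 1.0 - reward         # alternate sides moving backwards
    return {s: wins[s] / visits[s] for s in visits}

value = self_play()
print(choose(5, value))   # prints 1: leaving 4 stones is a losing position
```

After training, the program "knows" that multiples of 4 are losing positions for the side to move, not because it reasoned about the game, but because those states accumulated poor outcomes during self-play: the same evaluate-estimate-remember loop the paragraph above describes, in miniature.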
Learning from examples and by trial and error are mechanisms shared by humans and many other animals. There are also automatic learning processes that, as Nobel laureate Daniel Kahneman points out, make the human mind operate in two modes: a slow one, which involves reflection, and a fast one, in which instincts, experiences and beliefs give automatic responses. The trial-and-error system helps shape the habits that feed the fast mode.
But a young child doesn't need thousands of photos to learn to tell a dog from a cat, and AlphaGo had to train for the equivalent of centuries to reach the level Lee Sedol acquired in 20 years as a professional. Clearly, the human mind is more efficient. Moreover, human beings are self-aware, a source of creativity and feeling that led Ada Lovelace, daughter of Lord Byron and author of the first published algorithm in history, to write that a computer would never have ideas or intentions of its own.
However, there seems to be no limit to the automatic learning processes that expand the set of activities we perform unconsciously, whatever their complexity: driving a car, speaking another language, playing the piano. Is consciousness then an illusion, the result of automatic processes we have not yet explained? Professor Yuval Noah Harari proposes an unsettling test: try to make your mind go blank. If there really is a consciousness controlling the mind, where do the thoughts that arise without our wanting them come from?
Medical 3D brain model. Photo by Kaique Rocha from Freepik
At this point the discussion moves into the realm of philosophy and personal belief (does the mind reside in the brain?) and on to the limits of our knowledge of the human mind: important questions, but not necessarily crucial for the development of artificial intelligence. While many recent advances in the field are in areas that emulate human intelligence, such as language and vision, and many have improved our understanding of how the mind works, the main goal of the discipline is not to imitate the human mind, since that would restrict its scope and possibilities.
As Professor Michael I. Jordan of the University of California, Berkeley points out, whether or not we come to understand intelligence in the foreseeable future, we face the great challenge of bringing computers and human beings together in ways that improve human life. For that we do not need an intelligence designed in our own image and likeness, but one that, with different approaches, increases our potential.
Did you enjoy this post? Read another of our posts here.