Artificial Intelligence: do we have to reason to think?

A few days ago I received a message asking how the state of Artificial Intelligence (AI) has changed since the expert systems of the 1980s. This is an excellent question, as it not only helps us understand the progress of recent years, but also highlights one of the most important debates in the development of AI.

A characteristic of intelligence is the wide variety of its manifestations: reading and writing, drawing, understanding a joke, doing mathematical operations, writing poems, or speculating about the origin of the universe. All these manifestations fall into two categories: those that involve reasoning, such as solving a riddle, and those that we do without thinking, such as reading and writing.

This distinction has traced the two parallel paths that the progress of AI has followed, known as Symbolic AI and connectionism. Symbolic AI holds that, for a machine to behave intelligently, it needs to reason from a set of principles and rules, represented in symbols that a computer can process. Connectionism, for its part, states that to generate intelligent behavior it is enough for a computer to be able to make connections between data or perceptions, without having knowledge of any kind.

Thus, connectionism holds that mind and behavior emerge from networks of simple interconnected units, while Symbolic AI holds that intelligence requires high-level "symbolic" representations of problems.


Followers of Symbolic AI focused on developing systems to solve problems involving reasoning, such as proving mathematical theorems or playing chess, while connectionists focused on developing systems related to perception, such as image and handwriting recognition.

Initially both currents advanced at a similar pace and made significant progress (read our article on the history of AI). However, the connectionist approach soon ran into technological limits: for an algorithm to find the patterns that allow it to recognize objects or letters in an image, it needs a large number of examples, as well as a large amount of data for each example. These requirements exceeded the capacity of the computers of the time or made the recognition process too slow.

For its part, the symbolic approach yielded results that attracted a lot of attention. Computer programs that translated languages, held simple conversations, played checkers and chess at a professional level, and solved intelligence tests generated high expectations. At what turned out to be the cusp of the symbolic approach, in the 1980s so-called "expert systems" were deployed in various industries in the hope of capturing the knowledge of human experts and making it available to anyone who used the system.

Expert systems were powered by a series of rules of the form if (...) then (...), obtained through interviews with specialists in the field in which the system would operate. The system worked through an interface into which the facts of a situation were fed, or a series of questions were answered; from this data the system searched for the applicable rules and responded with the corresponding conclusions.
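
To make the idea concrete, below is a minimal sketch of how such a rule engine might work. It is an illustration, not a reconstruction of any real system: the symptoms, rules, and conclusions are invented, and real expert systems such as MYCIN chained hundreds of rules.

```python
# Minimal forward-chaining rule engine in the spirit of 1980s expert
# systems. The medical-style facts and rules are invented for illustration.

# Each rule: if every condition is among the known facts,
# then the conclusion is added to the facts.
RULES = [
    ({"fever", "cough"}, "possible_flu"),
    ({"possible_flu", "short_of_breath"}, "refer_to_doctor"),
]

def infer(facts):
    """Apply the rules repeatedly until no new conclusion appears."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)  # the rule "fires"
                changed = True
    return facts

# Feed in the facts of a situation; the engine answers with the
# conclusions that follow: possible_flu, then refer_to_doctor.
print(infer({"fever", "cough", "short_of_breath"}))
```

Note that the engine itself is tiny; all the intelligence lives in the rules, which is exactly why extracting them from human experts became the bottleneck described below.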


Expert systems were successfully used in medicine, chemistry, oil exploration, purchase-order processing, and training, but they were rigid systems limited to specific topics. Making them more flexible and able to handle uncertainty meant generating an enormous number of rules to rule out irrelevant information and impossible facts, as well as to cover all the alternatives of a situation. This, coupled with the difficulty of eliciting the experts' knowledge and transforming it into rules, led to a loss of interest, and their use did not spread.

In this 21st century, what is known as the AI spring has taken place: the availability of big data, large-capacity computers, and powerful algorithms has enabled extraordinary advances in machine learning, the branch of AI in charge of finding patterns in data. These advances have skyrocketed over the past five years with the use of deep neural networks, which have managed to surpass human performance in various applications and are used to automate decisions in more and more activities.

Both machine learning and deep networks are expressions of connectionism, because they do not need rules or domain knowledge to function. From the attributes of thousands of examples, algorithms find patterns that connect each example to the category to which it belongs, and use those patterns to predict the category of new examples based on their attributes (read our article on how they work).
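
As a sketch of this recipe, the snippet below trains a small neural network with scikit-learn (my choice of library; any machine-learning toolkit would do) on the classic Iris flower dataset: the algorithm receives only attributes and category labels, never rules or botanical knowledge.

```python
# A small connectionist classifier: it learns to map attributes to
# categories from labeled examples alone, with no hand-written rules.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)  # flower measurements and species labels
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Fit a small feed-forward network on the training examples.
model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X_train, y_train)

# Predict the category of new, unseen examples from their attributes.
print("accuracy on unseen examples:", model.score(X_test, y_test))
```

Deep networks follow the same pattern with many more layers and millions of examples; the ingredients grow in size, not in kind.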


In this way, connectionism prevails in the AI of the 21st century, while expert systems have been relegated to a few applications. Moreover, deep neural networks not only achieve amazing results in image and sound recognition, but have also proven effective in areas that were assumed to involve reasoning, such as language translation, board and video games, and a large number of activities involving decisions based on data from previous situations.

The ability to accurately replicate human decisions without rules or domain knowledge has sparked a great debate about the nature of our minds: are the connectionists right when they claim that thought and behavior emerge from thousands of simple components of a large network that do not reason? Does consciousness not exist, then? Is building a machine that thinks and is aware just a matter of having enough data and larger networks?

I don't have the answer, but it seems to me that, whatever it is, it does not change the value of the person or the need to preserve their dignity. The remaining problems and obstacles may one day be solved to create a general and even conscious artificial intelligence, but along the way, what we should not lose sight of is that the development of these tools should aim to benefit humanity. To the extent that they do, they are worth having.

