Vulcan and the four seasons, or the history of AI
Creating an artificial intelligence (AI) is an ancient dream of humanity. Already in Homer's Iliad the golden maidens are mentioned: two gold automatons with the appearance of young women who possessed intelligence, voice and strength, and who helped Hephaestus, the god of fire (Vulcan to the Romans), in his palace on Olympus, where he also made tripods with gold wheels that would travel by themselves to the gathering of the gods and return home (lines 368 and 410 of Book XVIII).
Throughout history we find efforts, naturally limited by the technology of their time, to make this dream come true: Leonardo Da Vinci's mechanical knight at the end of the 15th century, the flute player, the tambourine player and the digesting duck of Jacques de Vaucanson in the 18th century, or the chess player of Leonardo Torres y Quevedo in the early 20th century.
Leonardo Da Vinci's automaton. Photo: eldiario.es
These efforts have always been a matter of controversy, owing to the differing positions regarding the nature of the human mind, which range from the mind-body dualism of René Descartes to the mechanism of Thomas Hobbes, who claimed that man is only a machine.
In this way, dreams and expectations, technological advancement and conceptions of the human mind are the three forces whose interaction has historically determined, and still determines, the speed and direction of AI progress.
The modern development of this field of knowledge starts with the introduction of the digital computer, which provided a "brain" in which AI could reside, one with which researchers could experiment and develop practical applications. If we must name a time and place, it would be the summer of 1956 at Dartmouth College in the town of Hanover, New Hampshire, USA. There the "Dartmouth Summer Research Project on Artificial Intelligence" was held, a program attended by ten of the founding figures of this field, who over six weeks discussed papers on topics that remain among the main areas of research: neural networks, logic and reasoning, pattern recognition, proving geometry theorems and a program that played chess.
Attendees of the Dartmouth Artificial Intelligence Project in 1956 (Photo: Margaret Minsky)
The next 15 years were driven by the technological advance represented by the continuous progress of computers, as well as by the growing expectations of academic and defense institutions in several countries, which, upon discovering the potential of AI, supported the opening of research centers and projects in almost every area. Among the advances of this era are programs that could translate, answer simple questions, play checkers and chess at a professional level, solve calculus problems and intelligence tests, and classify images, as well as the first robot that could move and perform tasks.
From the 1970s onward, governments and companies supported numerous projects to develop "expert" systems that, according to their expectations, would replicate human reasoning, answer questions and make recommendations. However, the enormous number of options that had to be analyzed to solve real problems began to exceed the capacity of the most advanced computers, limiting the results. Despite this, there were advances such as robots that executed verbal orders, expert systems for medical diagnosis, commercial applications in banking and mining, and the first autonomous vehicles that could evade obstacles.
By the mid-80s the results achieved no longer matched expectations, which led to disenchantment among defense agencies and companies, which halted their investments and caused the so-called AI winter. However, along with the support, the pressure also decreased, allowing a period of maturation that brought theoretical advances and a more scientific approach while the computing paradigm shifted toward the use of networks. Among the ideas that emerged at this time are the foundations of deep neural networks, as well as algorithms for machine learning and data mining.
Finally, the 21st century brought the explosion of internet connectivity, the increase in computing capacity and the invention of smartphones, advances that made huge amounts of data available to researchers, along with the ability to process them at great speed. Thanks to this, the new ideas could be put into practice and others were generated, giving rise to a spring of artificial intelligence in which it is now companies that invest, and in which we have seen natural language interpretation and image recognition become commonplace, autonomous vehicles already being tested on the streets, and machine learning applications making predictions and recommendations across numerous industries and services.
The progress has been so rapid and profound that we have not had time to assimilate it, so the controversies about the economic, social and ethical impacts of the current and future use of AI have acquired great intensity, with expectations that sometimes seem to exceed the reach of the technology. It is paradoxical that, whereas in the 80s, amid the controversies caused by the progress of artificial intelligence, experts could be heard saying "AI is much more than that!", today it is common to hear them exclaim "AI is much less than that!".
We cannot foresee the future, but what we can say for sure is that, after the passage of the four seasons and almost 2,800 years, the Homeric dream of automatons and autonomous vehicles seems to be becoming reality, not for the gods, but for the men and women of this era, who must find a way to benefit from these tools that, in the vision of the Iliad, seemed exclusive to Olympus.
For more information on the history of AI:
- The Quest for Artificial Intelligence. Nilsson, N. 2009. Cambridge University Press.
- A brief history of AI. Page of the Association for the Advancement of Artificial Intelligence (AAAI).
- An Executive's Guide to AI (Timeline). McKinsey.