Autonomous machines acting and thinking for us

Today, August 1, I had the honor of participating in a panel at the Mexico edition of the "World Legal Summit", an event held simultaneously in 32 cities across 25 countries to bring the legal and technology industries together in a global collaboration effort to identify and propose measures for the sustainable development of technologies related to artificial intelligence. (Information about the event can be found here.)

The event program included three panels that, in my opinion, address the key issues in the application of artificial intelligence tools: identity and governance; cybersecurity and personal data; and autonomous machines. I took part in this last panel, presenting some ideas that I now share.

[Photo: panel on autonomous machines]

Juan Carlos Luna, Daniel Acevedo, Luis Fonseca, Adi Corrales, Christian Palacios

An analysis of the legal implications of the massive use of autonomous machines must include both the devices that act in the physical environment (such as autonomous vehicles of all kinds, drones, or robots) and the systems that automate processes and decisions (from scheduling the Hubble telescope's observations and managing fleet logistics, to granting loans, hiring personnel, and facial recognition).

Both types of machines are built around an intelligent agent: a computer program that receives inputs from the environment (data, images, sounds), evaluates them with a methodology based on artificial intelligence and machine learning algorithms, and selects the action that comes closest to fulfilling its programmed objective.
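To make the idea concrete, here is a minimal sketch in Python of the receive-evaluate-act loop described above. It is an illustration only: the action list, the scoring model, and the example observation are hypothetical stand-ins, not part of any real system discussed at the panel.

```python
# Minimal sketch of an intelligent agent: receive inputs, evaluate alternatives,
# choose the action closest to the programmed objective.
# ACTIONS, ScoringModel, and the observation below are hypothetical placeholders.

ACTIONS = ["turn_left", "turn_right", "go_straight", "stop"]

class ScoringModel:
    """Stand-in for a trained model that scores how well an action serves the objective."""
    def score(self, observation, action):
        # In a real system this would come from a learned policy or value function.
        return observation.get(action, 0.0)

def choose_action(observation, model, actions):
    # Evaluate each alternative and pick the one that best fulfills the objective.
    return max(actions, key=lambda a: model.score(observation, a))

if __name__ == "__main__":
    model = ScoringModel()
    observation = {"go_straight": 0.9, "stop": 0.2}   # inputs from sensors or data
    print(choose_action(observation, model, ACTIONS))  # -> "go_straight"
```

Whether the chosen action then moves a vehicle or simply returns an answer is what separates the two types of machines discussed next.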

The difference is that some machines act in the physical environment to carry out their decision and others do not. An autonomous vehicle or a robot physically acts to reach its destination along the chosen route or to execute its assigned task, while a system used to authorize loans only returns an answer. In both cases, however, the decisions made by an intelligent agent affect the role of those who take part in them or are affected by them, and have a growing impact on people's lives.


Thanks to advances in artificial intelligence and big data, autonomous machines with new capabilities are beginning to be used in environments that are complex along three dimensions: 1) the level of interaction with other agents, including humans; 2) the degree of specialization of the decision, which is no longer simple physical or repetitive work; and 3) the nature of the decision (high stakes) and of the decision maker it supports or replaces (which has shifted from private to public).

There are concerns about the operation of autonomous machines, having to do with how to make the same rules that apply to human beings enforceable against them, and how to preserve people's guarantees and rights.

The concerns cover the three elements of the intelligent agent:

  • Input collection. How to ensure that, in high-impact applications, the data sets used for machine learning are free of discriminatory criteria or biases. The goal is to detect and reduce human and social biases, not to preserve them, amplify them, or create new ones.
  • Best alternative selection. The rise of deep neural networks has produced highly accurate autonomous machines whose decision process cannot be explained, an unacceptable situation in critical applications or in acts of authority. Likewise, an incorrectly defined objective function can lead algorithms to unwanted behaviors (a classic example is a robot vacuum cleaner programmed to maximize the volume of dust collected, which decides that the best strategy is a cycle of vacuuming, dumping, and re-vacuuming; see the sketch after this list).
  • Impact of decisions. Decision making by autonomous machines requires reviewing the responsibilities established by law or custom, setting rules for their interaction with human beings, and establishing special measures for their use by governments or authorities.
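As a toy illustration of the vacuum-cleaner example above, the following Python sketch shows how a naively specified objective that rewards the total volume of dust collected makes "dump and re-vacuum" look optimal, while an objective that rewards dust actually removed from the floor (minus a small cost per cycle) does not. The numbers and strategy names are assumptions made purely for illustration.

```python
# Toy illustration of a misspecified objective function.
# A vacuum robot rewarded for total dust collected prefers to dump and re-collect
# the same dust; a corrected objective removes that incentive.

DUST_ON_FLOOR = 1.0  # one unit of dust available (hypothetical value)

def total_collected(cycles):
    """Naive objective: reward every unit vacuumed, even dust dumped and re-vacuumed."""
    return DUST_ON_FLOOR * cycles

def dust_removed_minus_cost(cycles):
    """Corrected objective: dust actually removed, minus a small cost per cycle."""
    removed = DUST_ON_FLOOR if cycles >= 1 else 0.0
    return removed - 0.05 * cycles

strategies = {"clean_once": 1, "dump_and_repeat": 10}

best_naive = max(strategies, key=lambda s: total_collected(strategies[s]))
best_fixed = max(strategies, key=lambda s: dust_removed_minus_cost(strategies[s]))

print(best_naive)  # -> "dump_and_repeat": the misspecified objective rewards the loop
print(best_fixed)  # -> "clean_once": the corrected objective does not
```

The point is not the arithmetic but the design choice: the behavior of the machine is only as acceptable as the objective function it was given.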


Resolving these concerns is essential to win society's support for the use of autonomous machines, so some regulation is necessary, not to limit innovation, but to establish conditions that maximize benefits and minimize risks.

Regulation should aim to ensure principles of fairness, transparency, verifiability, security, and control in the construction of the intelligent agent, as well as acceptable behaviors resulting from equally acceptable objective functions, under criteria at least as strict as those demanded of a human being and proportional to the importance of the decisions taken.

The use of autonomous machines to assist or replace government decisions deserves special attention. Rules should be promoted so that the decision to use an autonomous machine is the result of public discussion, with mechanisms to ensure the transparency and verifiability of the data sets used, a preference for interpretable models when accuracy is comparable, and the continued validity of citizens' means of legal defense.
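A hedged sketch of the "prefer interpretable models at comparable accuracy" rule, using scikit-learn and synthetic data purely for illustration; the tolerance value is an assumption, not a recommendation, and a real deployment would compare validated models on the actual data and legal criteria at stake.

```python
# Sketch: choose the interpretable model unless the black box is clearly more accurate.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic data standing in for, e.g., a benefit-eligibility decision.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

interpretable = LogisticRegression(max_iter=1000).fit(X_train, y_train)
black_box = RandomForestClassifier(random_state=0).fit(X_train, y_train)

acc_interpretable = accuracy_score(y_test, interpretable.predict(X_test))
acc_black_box = accuracy_score(y_test, black_box.predict(X_test))

TOLERANCE = 0.01  # assumed threshold for "same level of accuracy"
if acc_black_box - acc_interpretable <= TOLERANCE:
    chosen = interpretable   # prefer the model whose decisions can be explained
else:
    chosen = black_box       # the accuracy gap would have to justify the opacity
print(type(chosen).__name__, acc_interpretable, acc_black_box)
```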

The new autonomous machines have the potential to generate enormous benefits, profoundly transforming areas of human activity and creating new ones. However, the positive effects will come only if society and government establish conditions for their proper design and deployment, along with measures so that as many people as possible have access to their benefits. The goal is not only to promote their use but also to remain alert, mitigating or resolving risks and disruptions. In this way, the combination of machine and human being will be, as it has been throughout history, a source of new spaces for growth and well-being for humanity.

In this task, forums such as the World Legal Summit are essential, not only to raise public awareness, but also to generate proposals and actions that guide governments and industry.
