Artificial Intelligence: Diffuse apocalypse, tangible risks

The arrival of a self-aware Artificial Intelligence (AI) capable of self-improvement is still distant, and with it the threat posed by a machine that surpasses human beings in every domain. However, the risks and unintended effects of AI and machine learning (ML) are real and tangible, as applications of these technologies spread faster than our understanding of their implications, or the rules needed to maximize their benefits and mitigate their risks.

Lawsuits over discrimination in all kinds of algorithmic decisions, fatal accidents involving autonomous vehicles, and applications that generate fake texts, images and videos all attest to concrete negative effects. At the same time, governments' use of these tools to increase surveillance of the population, make decisions about economic support, or launch social scoring systems is disturbing. All this has led to the emergence of institutions that study these implications in detail and raise their voices on issues of concern.

Along these lines, December saw the publication of the AI Now 2019 Report, one of the most influential reports on the short-term social and economic implications of AI technologies. It is published by the AI Now Institute at New York University, a research institute dedicated to understanding the social implications of AI. The institute was founded in 2017 by Kate Crawford, an academic and researcher at Microsoft, and Meredith Whittaker, an academic, activist and researcher at Google until July 2019, when she resigned over differences with the company.


Among the year's most important developments, the report highlights the following:

  • The spread of AI systems for worker surveillance and management is increasing the power asymmetry between employers and workers.
  • It is the action of civil society groups, rather than institutional ethics statements and policies, that has pressured companies and governments to set up barriers against harmful uses of AI.
  • Efforts are being made to regulate AI systems, but they are outpaced by governments' adoption of these tools for surveillance and control.
  • AI systems continue to amplify disparities through techniques such as affect recognition which, despite lacking a sound scientific basis, are being deployed in classrooms and job interviews.
  • Investment in and development of AI have effects in areas ranging from patient rights and climate change to geopolitics and inequality between countries.

On the other hand, the document identifies new issues of concern:

  • Private automation of public infrastructure. The hiring of private companies to implement AI systems in public infrastructure raises concerns about conflicts of interest and the inadvertent privatization of public spaces and government functions.
  • Built-in bias. Recent controversies in the AI industry draw attention to problems of discrimination within companies, reflecting a biased culture that is transmitted to the systems and algorithms they design.
  • AI and the climate crisis. Despite industry announcements of measures to reduce its environmental impact, the absence of public validation mechanisms, the use of models that demand ever more computing power, and the industry's ties to the energy sector raise concern.
  • Faulty scientific foundations. The use of systems without a solid scientific basis, such as affect recognition or attempts to diagnose psychological conditions from social media data, continues to spread.
  • Health. Risks are seen in the proliferation of partnerships between corporations and health institutions to share data for training AI models, as well as in the emphasis on risk prediction, which could restrict access to health care or stigmatize people.
  • Practices in ML development. The report highlights the need to integrate the perspectives of the social sciences and humanities into design decisions for high-impact AI systems, as well as protection mechanisms against malicious designs and data contamination.


Image by LoggaWiggler from Pixabay

To address these risks, the report formulates twelve recommendations:

  1. Regulators should prohibit the use of affect recognition in important decisions that affect people's lives and access to opportunities.
  2. Government and businesses should halt all use of facial recognition in sensitive social and political contexts until the risks are fully studied and appropriate regulations are in place.
  3. The AI industry needs to make significant structural changes to address systemic racism, misogyny and lack of diversity in businesses.
  4. Research into AI biases must go beyond technical corrections to address public policy issues and the consequences of AI use.
  5. Governments must require public disclosure of the AI industry's climate impact, as is done for the automotive and aviation industries.
  6. Workers should have the right to challenge exploitative and invasive use of AI for labour management, and unions can help.
  7. Employees of technology companies should have the right to know what they are building and to challenge unethical or harmful uses of their work.
  8. States should develop expanded biometric privacy laws that regulate both public and private actors, protecting against the unauthorized collection and use of biometric data and against the gray and black markets that sell such data.
  9. Lawmakers should regulate the integration of public and private surveillance infrastructures. This requires transparency, accountability and oversight, as well as public input and debate on public-private partnerships, contracts and procurement.
  10. Algorithmic impact assessments should include the impact of AI on climate, health and geographical displacement.
  11. ML researchers should consider potential risks and harms, as well as better document the origin of their models and data.
  12. Lawmakers should require informed consent for the use of any personal data in health-related AI.


Image by Arek Socha from Pixabay

Turning the advances of AI and ML into a better life for humans means remaining vigilant that, along the way, people's dignity and freedom are preserved and that no asymmetries are created that deepen inequality. We cannot trust governments and companies to decide what is right, nor can we assume these issues do not concern us: the ease of implementing these technologies, and the image of modernity their adoption conveys, mean they are already in use in many countries, including Mexico.

In this sense, it is encouraging to see that civil society action is driving changes of direction in various parts of the world. That is why in countries like Mexico, where the adoption of AI is still incipient, it is important to stay informed and to spark discussion about the projects undertaken by public and private institutions, in order to forge an AI-enabled future in which human beings expand our possibilities for personal and professional development.

And so we close the year 2019 at IF Future Intelligence. We wish you the very best in the company of your family and friends this holiday season and in the coming year 2020.


Did you enjoy this post? Read another of our posts here.

 
 
