Artificial Intelligence: Diffuse apocalypse, tangible risks
Lawsuits over discrimination in all kinds of algorithmic decisions, fatal accidents involving autonomous vehicles, and applications that generate fake texts, images and videos are concrete examples of negative effects. At the same time, governments' use of these tools to expand population surveillance, make decisions about economic support, or launch social scoring systems is disturbing. All of this has led to the emergence of institutions that study these implications in detail and raise their voices on issues of concern.
Along these lines, the AI Now 2019 Report, one of the most influential reports on the short-term social and economic implications of AI technologies, was published in December. It is produced by the AI Now Institute at New York University, a research institute dedicated to understanding the social implications of AI. The institute was founded in 2017 by Kate Crawford, an academic and researcher at Microsoft, and Meredith Whittaker, an academic, activist and researcher at Google until July 2019, when she resigned over differences with the company.
Among this year's most important developments, the report highlights the following:
- The spread of AI systems for worker surveillance and management is increasing the power asymmetry between employers and workers.
- The action of civil society groups, and not institutional ethical statements and policies, has pressured companies and governments to establish barriers to the harmful use of AI.
- Efforts are being made to regulate AI systems, but they are outpaced by governments' adoption of these tools for surveillance and control.
- AI systems continue to amplify disparities through techniques such as affect recognition which, despite lacking a sound scientific basis, are being deployed in classrooms and job interviews.
- AI investment and development have effects in areas ranging from patient rights and climate change to geopolitics and inequality between countries.
On the other hand, the document mentions the identification of new issues of concern:
- Private automation of public infrastructure. Governments' contracting of companies to implement AI systems in public infrastructure raises concerns about conflicts of interest and the inadvertent privatization of public spaces and government functions.
- Built-in bias. Recent controversies in the AI industry draw attention to problems of discrimination within companies, reflecting a biased culture that carries over into the systems and algorithms they design.
- AI and climate crisis. Despite industry announcements of measures to reduce its environmental impact, the absence of public validation mechanisms, the use of models that demand ever more computing capacity, and the industry's ties to the energy sector raise concern.
- Faulty scientific foundations. The use of systems without a solid scientific basis, such as affect recognition or attempts to diagnose psychological conditions from social media data, is spreading.
- Health. Risks are perceived in the proliferation of corporate partnerships with health institutions to share data for training AI models, as well as in the emphasis on risk prediction, which could restrict access to health care or stigmatize people.
- Machine learning development practices. The report highlights the need to integrate perspectives from the social sciences and humanities into the design decisions behind high-impact AI systems, as well as the need for protection mechanisms against malicious designs and data contamination.
To address these risks, the report formulates twelve recommendations:
- Regulators should prohibit the use of affect recognition in important decisions that affect people's lives and access to opportunities.
- Governments and businesses should halt all use of facial recognition in sensitive social and political contexts until the risks are fully studied and appropriate regulations are in place.
- The AI industry needs to make significant structural changes to address systemic racism, misogyny and lack of diversity in businesses.
- Research into AI biases must go beyond technical corrections to address public policy issues and the consequences of AI use.
- Governments must demand public disclosure of the climate impact of the AI industry, as is done for the automotive and air industries.
- Workers should have the right to contest exploitative and invasive use of AI for labor management, and unions can help them do so.
- Employees of technology companies should have the right to know what they are building and to challenge unethical or harmful uses of their work.
- States should develop expanded biometric privacy laws that regulate both public and private actors, protecting against unauthorized collection and use of biometric data and against the gray and black markets that sell such data.
- Lawmakers should regulate the integration of public and private surveillance infrastructures. Transparency, accountability and oversight are needed, along with public outreach and debate on public-private partnerships, contracts and procurement.
- Algorithmic impact assessments should include the impact of AI on climate, health and geographical displacement.
- ML researchers should consider potential risks and harms, as well as better document the origin of their models and data.
- Lawmakers should require informed consent for the use of any personal data in health-related AI.
This is how we close this year 2019 in IF Future Intelligence. We wish you the best in the company of your family and friends for these holidays and the coming year 2020.