Based on materials from the AI Now Institute
The AI Now Institute, based at New York University, has published its third annual report on the state of artificial intelligence in 2018. The report includes ten recommendations that offer guidance for governments, researchers, and anyone else working in the industry.
2018 was a dramatic year for AI: from Facebook's alleged role in the ethnic cleansing in Myanmar and Cambridge Analytica's election manipulation attempts, to Google's secret, censored search engine for China and the anger of Microsoft employees over the company's contracts with US Immigration and Customs Enforcement (ICE), which was separating migrant families. These are just a few high-profile headlines among hundreds of similar ones.
At the heart of this avalanche of AI scandals is the question of accountability: who is responsible when AI systems harm us? How do we assess that harm, and how do we remedy it? Where should we intervene, and what additional research is needed to make sure the intervention works? So far there are few answers to these questions, and the existing regulatory mechanisms fall far short of what the field requires. As the pervasiveness, complexity, and scale of these systems grow, so does concern about the lack of accountability and oversight, including adequate legal safeguards.
Building on its 2016 and 2017 reports, the AI Now Institute has put these questions at the center of its 2018 report, which offers a set of practical recommendations to help govern these powerful technologies.
Here are the ten recommendations.
1. Governments should regulate AI by expanding the powers of sector-specific agencies to oversee and monitor these technologies in their areas of application.
AI systems are being deployed rapidly, without adequate governance, oversight, or accountability. Domains such as health, education, criminal justice, and welfare each have their own histories, regulatory frameworks, and specific risks. Generic national standards and certification models will need to be supplemented with more detailed, sector-specific regulation. What is needed is an approach that focuses not on the technology itself but on its application in a particular domain; the US Federal Aviation Administration and the National Highway Traffic Safety Administration are good examples of this approach.
2. Facial recognition and emotion recognition need strict regulation to protect the public interest.
Such regulation should include national laws with strong oversight, clear limits, and transparency for the public. People should have the right to reject the use of these technologies in both the public and private spheres; a mere public notice of their use is not enough. The threshold for any consent must be high, given the dangers of constant, pervasive surveillance. Emotion recognition deserves particular attention: it is an offshoot of facial recognition that claims to detect things like personality, feelings, mental health, and employee 'engagement' from images or video. These claims are not backed by solid scientific evidence, and the technology is being applied irresponsibly and unethically, often in ways reminiscent of pseudosciences such as phrenology and physiognomy. Applied to hiring, insurance, education, or policing, emotion recognition poses very serious risks at both the personal and societal level.
3. The artificial intelligence industry urgently needs new approaches to governance.
As the report shows, the internal governance structures at most technology companies fail to ensure accountability for AI systems. Government regulation is important, but leading AI companies also need internal accountability mechanisms that go beyond mere ethics guidelines. These should include rank-and-file employee representation on the board of directors, external ethics advisory boards, and independent monitoring and transparency measures. Third-party experts should be able to audit critical systems and publish their findings, and companies need to make their AI infrastructure understandable and transparent, including how AI is deployed and applied.
4. Companies building artificial intelligence for the public sector must waive trade secrecy and other legal claims that make accountability impossible.
Vendors and developers who build AI and automated decision-making systems for government use must agree to waive trade secrecy and other legal claims that would prevent their software from being fully audited and understood. Corporate secrecy rules obstruct legal safeguards and contribute to the 'black box effect', producing systems that are impenetrable and unaccountable. This makes it hard to assess bias, contest decisions, and correct errors. Anyone procuring technology for use in the public sector should require vendors to waive these claims before entering into any agreement.
5. Technology companies must protect conscientious objectors, employee organizing, and ethical whistleblowers.
Self-organization and pushback by technology workers have become a force for accountability and ethical decision-making. Tech companies need to protect workers' rights to organize, to blow the whistle, and to make ethical choices about their work. This means clear policies that accommodate and protect conscientious objectors, give workers the right to know what they are working on, and allow them to decline such work without retaliation. Employees who raise ethical concerns must also be protected, as must those who make disclosures in the public interest.
6. Consumer protection agencies should apply truth-in-advertising laws to AI products and services.
With the hype around AI growing steadily, the gap between marketing promises and actual product capabilities is only widening, increasing the risks for both individual consumers and business customers, sometimes with grave consequences. As with other products and services that can significantly influence or exploit people, AI vendors should be held to high standards and made to live up to their promises, especially when the scientific evidence behind those promises is inadequate and the long-term consequences are unknown.
7. Technology companies must commit to addressing discrimination in their workplaces.
Tech companies and the AI field as a whole have focused on training and hiring a more diverse workforce. While this is important, such a focus risks overlooking what happens to people once they are hired, including discrimination inside the workplace. Companies should address these workplace problems and examine the links between exclusionary cultures and the products they build, which can end up perpetuating discriminatory practices. This shift in focus needs to be accompanied by practical action on recruitment as well.
8. Fairness, accountability, and transparency in artificial intelligence require a detailed account of the entire 'supply chain'.
For meaningful accountability, we need to better understand and track the building blocks of an AI system and the 'full supply chain' it relies on: the origin and use of training data, test data, models, APIs, and other infrastructure components across the product life cycle. Accounting for this full supply chain of AI systems is a prerequisite for more rigorous auditing. The full supply chain also includes the true environmental and labor costs of these systems: energy use, labor in developing countries that produces training data, and the clickworkers who maintain and develop AI systems.
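To make the idea of a traceable supply chain a bit more concrete, here is a minimal sketch of what a machine-readable provenance record for an AI system's components might look like. It is not taken from the report: the SupplyChainRecord and DatasetRecord structures and every field name are illustrative assumptions, chosen only to show the kind of information an auditor would need.

```python
# Illustrative sketch only: a machine-readable "supply chain" record for an AI
# system. The structure and field names are assumptions made for this example,
# not anything proposed by the AI Now Institute.
from dataclasses import dataclass, field
from typing import List


@dataclass
class DatasetRecord:
    name: str           # internal dataset identifier
    source: str         # where the data came from (vendor, scrape, users, ...)
    collected_by: str   # who collected or labeled it (in-house, clickworkers, ...)
    license: str        # terms under which the data may be used


@dataclass
class SupplyChainRecord:
    system_name: str
    training_data: List[DatasetRecord] = field(default_factory=list)
    test_data: List[DatasetRecord] = field(default_factory=list)
    models: List[str] = field(default_factory=list)   # pretrained models, checkpoints
    apis: List[str] = field(default_factory=list)     # third-party services relied on
    energy_kwh_estimate: float = 0.0                   # rough training energy footprint

    def audit_summary(self) -> str:
        """A short human-readable summary an external auditor could start from."""
        return (
            f"{self.system_name}: {len(self.training_data)} training dataset(s), "
            f"{len(self.test_data)} test dataset(s), {len(self.models)} model(s), "
            f"{len(self.apis)} external API(s), "
            f"~{self.energy_kwh_estimate:.0f} kWh training energy"
        )


# Example usage with made-up values.
record = SupplyChainRecord(
    system_name="resume-screening-v2",
    training_data=[DatasetRecord("resumes-2017", "internal ATS export",
                                 "outsourced annotators", "internal use only")],
    models=["open-source language model (fine-tuned)"],
    apis=["third-party face-detection API"],
    energy_kwh_estimate=1200.0,
)
print(record.audit_summary())
```

Even a simple record like this makes the hidden dependencies visible: who labeled the data, which third-party services the system relies on, and roughly what it cost to train.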
9. More funding and support are needed for litigation, labor organizing, and community participation on AI accountability issues.
The people most at risk of harm from AI systems are often the least able to contest the consequences. We need greater support for mechanisms of legal redress and for civic participation. That means supporting public advocates who represent people cut off from social assistance by algorithmic decision-making, supporting civil society organizations and labor unions that defend groups at risk of job loss and exploitation, and sustaining an infrastructure for public participation.
10. The study of artificial intelligence in universities should not be limited to computer science and engineering.
AI began as an interdisciplinary field but has narrowed over the decades into a technical discipline. As AI systems are increasingly applied in social domains, the field needs to broaden its disciplinary scope and draw on expertise from the social sciences and humanities. Because AI is ultimately deployed within society, its study cannot be confined to computer science and engineering, which leaves students poorly equipped to examine social life. Widening the disciplinary spectrum of AI research would bring greater attention to social contexts and to the potential harms of using AI systems in human society.
Of course, being an official report, a set of recommendations like this is bound to be, on the one hand, rather bureaucratic and, on the other, somewhat divorced from reality. The clauses on combating discrimination sound wonderful (though they bear a clear imprint of specifically American concerns), as does the idea that companies should disclose important information to essentially anyone who asks, yet both are hardly realistic. Nevertheless, much of it gives pause for thought. Regulation, as always, is not keeping pace with advances in technology, and in the future this could lead to far more problems than one might expect today.
Do you see ways of solving these problems? Would you add your own items to the list? Which points seem important to you, and which far-fetched? Share your thoughts in the comments!