Artificial intelligence doesn't have to be evil. We just need to teach it to be good.
This year brought sensational reports about DeepMind, Google's artificial intelligence division, or rather about the 'highly aggressive' behavior of its agents when left to their own devices. Google researchers pitted intelligent AI agents against one another over 40 million rounds of a fruit-gathering computer game. At a certain point the apples ran out, and the agents began attacking each other to eliminate the competition, reflecting the worst of human impulses.
It's not hard to find other examples of AI 'adopting' bad behavior, such as Microsoft's infamous bot Tay. Launched on Twitter in early 2016, Tay was supposed to 'learn' through interaction with users. Almost from the start, though, it was hit with a flurry of racist, anti-Semitic, and misogynistic comments. Learning from this environment, Tay began issuing provocative responses of its own, including the notorious claim that September 11 was 'Bush's handiwork' and that 'Hitler would have been much better than this ape in power.' The developers shut the project down just 16 hours after Tay's launch.
The example is simple, but it points to a real challenge. Of course, billions of people express their thoughts, feelings, and experiences on social media every day. But training an AI platform on social media data to replicate a 'human' user experience is risky. It is akin to raising a child on a constant diet of Fox News or CNN with no input from parents or social institutions. Either way, the end result can be a monster.
The reality is that social media data, however accurately it reflects the digital footprint we leave, may bear little relation to the actual state of affairs, and what it does reflect is not always pleasant. Some posts project inflated ambition; others hide behind a veil of anonymity and display an ugliness rarely seen in real life.
By and large, social data is not a snapshot of who we really are or who we ought to be. However useful social media profiles may be for training AI, they come with no sense of ethics and no moral framework for evaluating that information. Which behaviors, out of the whole spectrum of human experience we share on Twitter and Facebook, should be modeled, and which should be avoided? What is right and what is wrong? Where is good and where is evil?
A software layer for religion and ethics in AI
Understanding how to embed an ethical component into AI is not a new challenge. As early as the 1940s, Isaac Asimov was hard at work drafting his laws of robotics. Law #1: "A robot may not injure a human being or, through inaction, allow a human being to come to harm." But these problems are no longer science fiction. There is an urgent need for moral guidelines for the AI systems with which we increasingly share our lives. The stakes rise further as AI begins to create its own AI without human intervention, as is already happening with Google's AutoML. Tay turned out to be a relatively harmless nuisance. Tomorrow's Tay could easily be drafting strategies for large corporations or heads of state. Which rules should it follow, and which should it ignore? Science has no clear answer yet; one cannot be extracted from a heap of social data, nor obtained with even the best analytical tools, no matter how large the sample.
But answers can be found in the Bible. And in the Koran, the Torah, the Bhagavad Gita, and the Buddhist sutras. They are hidden in the works of Aristotle, Plato, Confucius, Descartes, and other philosophers, ancient and modern. For thousands of years we have been devising rules of human behavior: basic principles that would, ideally, let us live in peace and prosperity. The most powerful of these prescriptions have crossed the millennia with only minor changes, proof of their enduring relevance. More importantly, these teachings rest on similar tenets of moral and ethical behavior, from the golden rule and the sanctity of life to the value of honesty and the virtue of generosity.
As AI develops, we need a corresponding flowering of religious, philosophical, and humanistic thought more than ever. In many ways, what the most disruptive technologies ultimately become depends on how effectively we apply the wisdom that has come down to us through the centuries. The approach need not be doctrinal or tied to any particular philosophy. But for AI to function well, an ethical foundation is required. Information alone is not enough. AI needs a religion of sorts: a code that does not change with context or training set.
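To make the idea of such an unchanging code concrete, here is a minimal sketch of what a fixed rule layer around an AI agent might look like. Everything in it (the Action type, the example rules, the vet function) is a hypothetical illustration, not an existing API; the point is only that the rules are hard constraints that no amount of training data can override.

```python
# A minimal sketch of a fixed ethical rule layer around an AI agent.
# All names here are hypothetical illustrations, not an existing API.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Action:
    """A proposed action, annotated with the properties the rules inspect."""
    description: str
    harms_human: bool = False
    is_deceptive: bool = False

# Rules are plain predicates: unlike a learned model, they do not
# change with context or training set.
Rule = Callable[[Action], bool]

FIXED_RULES: List[Rule] = [
    lambda a: not a.harms_human,   # Asimov-style: never harm a person
    lambda a: not a.is_deceptive,  # honesty as a hard constraint
]

def vet(action: Action) -> bool:
    """Allow an action only if it passes every fixed rule."""
    return all(rule(action) for rule in FIXED_RULES)

# Usage: the learned model proposes, the ethics layer disposes.
proposed = Action("publish generated reply", is_deceptive=True)
print(vet(proposed))  # False: blocked no matter how highly the model scored it
```

The hard part, of course, is not the plumbing but deciding what goes into the rule list, which is exactly where those millennia of religious and philosophical thought come in.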
Instead of parents and priests, responsibility for this ethical education will increasingly fall on the shoulders of software developers and scientists. Ethics has traditionally not been part of a computer engineer's education, and that will have to change. When algorithms carry ethical consequences, an understanding of the hard sciences alone is not enough. As AI researcher Will Bridewell has emphasized, it is imperative that the developers of the future be "aware of the ethical status of their work and understand the social implications of their designs." He even raises the possibility of studying the ethical principles of Aristotle and Buddhism to "develop intuition about moral and ethical behavior."
At a deeper level, responsibility rests with the organizations that employ these developers, the industry they belong to, the governments that regulate these fields, and, ultimately, all of us. For now, public policy and regulation around AI remain in their infancy, where they exist at all. But the voices of those who care about the problem are already being heard. OpenAI, the company founded by Elon Musk and Sam Altman, seeks to keep these questions in view. Technology leaders have formed the Partnership on AI to explore ethical issues. Watchdogs such as AI Now are emerging.
What we are looking for is an ethical framework that governs exactly how AI translates information into decisions, and that makes those decisions fair, sound, and representative of the best side of humanity.
And this is not a fantasy; a similar scenario awaits us in the very near future. It is worth emphasizing that in the case of Google's 'highly aggressive' fruit-gathering AI, the researchers ultimately changed the context. The algorithms were deliberately tweaked to create a collaborative environment, and in the end it was the agents that learned to work together who triumphed. The lesson is this: AI can reflect the best aspects of our nature if we show it how.
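To see the mechanics of that lesson, here is a toy illustration of how changing the reward structure can shift agents from competition to cooperation. This is not DeepMind's actual experiment code, and the payoff numbers are invented for the example.

```python
# Toy reward functions for a two-agent apple-gathering game.
# Invented for illustration; not DeepMind's actual experiment code.

def competitive_reward(my_apples: int, their_apples: int) -> float:
    # Each agent is rewarded only for its own harvest, so when apples
    # are scarce, attacking the rival is the profitable strategy.
    return float(my_apples)

def cooperative_reward(my_apples: int, their_apples: int) -> float:
    # Rewarding the joint harvest makes sabotage self-defeating:
    # hurting the other agent now lowers your own return as well.
    return 0.5 * (my_apples + their_apples)

# Same harvests, different incentives:
print(competitive_reward(3, 0))  # 3.0 -> hoarding pays
print(cooperative_reward(3, 0))  # 1.5
print(cooperative_reward(2, 2))  # 2.0 -> sharing now pays more
```

Under the cooperative reward, an agent that attacks its rival shrinks the total harvest and therefore its own payoff, so learning algorithms converge on collaboration instead.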
Posted by Ryan Holmes, CEO of Hootsuite
Slowly but surely, science fiction is moving from the pages of books into everyday life. Artificial intelligence will inevitably come face to face with the intelligence of its creator, and one can only hope that by then AI will already have been 'taught' not to harm humans. Otherwise we get SkyNet, with everything that implies. On the other hand, the danger may lie not with the technology itself: there will always be people ready to use a breakthrough for harm and for selfish ends of their own. And that goes far beyond stealing a crypto-kitten. So the same old confrontation between good and evil awaits us ahead, only at a new level.