Based on materials from theverge.com
One of the most impressive moments of the latest Google I/O conference was a phone call to a hair salon to book an appointment. The call was made not by a person but by the Google Assistant, which did an excellent job: it asked the right questions, paused in the right places, and even uttered a perfectly appropriate, realistic 'mmm'.
The audience was amazed. But the most impressive part was that the person on the other end of the line seemed to have no idea they were talking to an artificial intelligence. This is undoubtedly a new technological milestone for Google. But it also brings with it a tangle of ethical and social problems.
For example, should Google tell people that they are talking to a machine? Could technology that mimics humans erode our trust in what we see and hear? And will it become a kind of technological privilege, where those who own it can offload all these tedious negotiations onto a machine, while those who answer the calls (mostly low-paid service workers) are left dealing with a robot?
In other words, viewers saw a typical Google demo: one that inspires admiration and worry in equal measure.
[Embedded video: 'Google Assistant will be able to make actual phone calls for you' – Circuit Breaker, May 8, 2018]
But let's start with the basics. From the stage, Google said little about how the feature, called Duplex, actually works, but the accompanying blog post makes the context clear. First and foremost, Duplex is not some futuristic 'smart talker' capable of open-ended conversation. As Google explains, it can operate only in 'closed domains': situations where the conversation is functional, with a clear limit on what is likely to be said. 'Want to book a table? How many people? What day? What time? Okay, thanks, goodbye.' Nothing complicated, right?
Mark Riedl, an associate professor at the Georgia Institute of Technology who specializes in AI and computational narrative, told The Verge that he thought the Google Assistant would work 'well enough' in such formulaic situations. “Dialogue that goes off script is a big challenge,” Riedl said. “But there are also a lot of tricks to cover up an AI's misunderstanding or to steer the conversation back on track.”
One of Google's demos shows perfectly how such tricks work. The AI managed to get past repeated moments of misunderstanding, but it did so by rephrasing and repeating questions, a standard move for computer programs designed to talk to people.
Excerpts from these conversations demonstrate how 'smart' the system is, but if you analyze what was actually said, you realize you are dealing with programmed patterns. Google's blog post details some of the tricks Duplex uses: confirmations ('Friday next week, the 18th'), syncs ('Can you hear me?'), and interruptions ('Number 212-' – 'Sorry, could you please repeat that from the beginning?').
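To make that concrete, here is a minimal illustrative sketch of a closed-domain booking dialogue built from exactly these scripted moves. It is a toy, not Google's design: Duplex itself is a neural network trained on real phone conversations, and every name below is invented.

    # Toy model of a closed-domain dialogue: a fixed set of slots to fill,
    # plus canned "repair" moves (confirmation, sync check, restart).
    SLOTS = ["day", "time", "party size"]

    def confirm(slot, value):
        # Confirmation: echo the value back ("Friday next week, the 18th")
        return f"So that's {value} for the {slot}, correct?"

    def next_utterance(filled, last_input_ok=True, line_quiet=False):
        """Pick the next line to say from a fixed menu of moves."""
        if line_quiet:
            return "Can you hear me?"  # sync check
        if not last_input_ok:
            # restart after a garbled or interrupted reply
            return "Sorry, could you please repeat that from the beginning?"
        for slot in SLOTS:
            if slot not in filled:
                return f"What {slot} would you like?"
        return "Great, you're all booked. Thanks, goodbye!"

    booking = {"day": "Friday the 18th", "time": "7pm"}
    print(next_utterance(booking, last_input_ok=False))  # asks for a repeat
    booking["party size"] = 4
    print(confirm("day", booking["day"]))                # confirms a slot
    print(next_utterance(booking))                       # closes the call

However constrained, a fixed menu of moves like this is enough to carry an entire reservation call, which is part of why the demo sounds so fluent.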
It's important to note that Google calls Duplex an 'experiment'. This is not a finished product, and there is no guarantee it will ever be widely rolled out in its current form, or at all. For now, Duplex handles only three scenarios: reserving a table at a restaurant, booking a hair salon appointment, and asking businesses about their opening hours. It will be available to a limited (and undisclosed) number of users this summer.
There is one more important detail: if the conversation goes wrong, a human steps in. According to Google, Duplex has a 'self-checking ability' that lets it recognize when a conversation has moved beyond its reach. In such cases it signals a human operator, who can complete the call. That sounds a lot like Facebook's personal assistant M: the company promised it would serve users with AI, but ended up quietly handing an unknown share of the work to people instead. (Facebook shut down that part of the service in January.)
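The handoff mechanism is easy to picture. Here is a hedged sketch, again with invented names and an invented threshold (Google has not published how Duplex decides to escalate): track a confidence score for each turn, and alert a human rather than improvise once it drops too low.

    # Illustrative "self-checking" loop with a human fallback.
    # The scoring, threshold, and operator interface are all assumptions.
    CONFIDENCE_THRESHOLD = 0.6

    def understand(utterance):
        """Stand-in for speech understanding: returns (intent, confidence)."""
        known = {"book a table": ("booking", 0.95), "what time": ("hours", 0.90)}
        for phrase, result in known.items():
            if phrase in utterance.lower():
                return result
        return ("unknown", 0.20)  # off-script input scores low

    def handle_turn(utterance, notify_operator):
        intent, confidence = understand(utterance)
        if confidence < CONFIDENCE_THRESHOLD:
            # The call has left the closed domain: signal a human
            # operator who can finish the conversation.
            notify_operator(utterance)
            return "One moment, please."
        return f"(continues script for intent: {intent})"

    alert = lambda u: print("OPERATOR ALERT:", u)
    print(handle_turn("I'd like to book a table for four", alert))
    print(handle_turn("Can I ask about your pricing history?", alert))

The interesting design question is the same one Facebook faced with M: how often the fallback fires in practice determines whether this is an AI with human backup, or humans with an AI front end.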
All these details give a clearer picture of what Duplex can do, but they don't answer the question of what effects its use will have. And as the first company to demonstrate this kind of technology, Google bears responsibility for the problems it may create.
The obvious question is whether the company should notify people that they are talking to a robot. Google's Yossi Matias told CNET that this is likely to happen. Google's answer to The Verge's question was firmer: the company definitely has a responsibility to keep people informed. Why this was never said from the stage is unclear.
Many experts in the field agree, but how exactly do you tell someone they are talking to an AI? If the Assistant opens the call with 'Hello, I am a robot', the other end may simply hang up. Less direct methods are possible: limiting how realistic the AI's voice sounds, for example, or playing a special tone during the call. Google says it hopes a set of social norms will evolve naturally that lets people recognize when a caller is an AI rather than a human.
We should make AI sound different from humans for the same reason we put a smelly additive in normally odorless natural gas. https://t.co/2dYmeb70AC
– Travis Korte (@traviskorte) May 8, 2018
Joanna Bryson, an associate professor at the University of Bath who studies AI ethics, told The Verge that Google clearly has an obligation to disclose this information. If robots can freely pass themselves off as humans, the scope for abuse becomes incredibly wide, from scam phone calls to more serious fraud. Bryson argues that letting companies police themselves is clearly not enough, and that new laws are needed to protect society. “Until we start to regulate this, any company less responsible than Google can profit from this technology. Google may be doing the right thing, but not everyone will follow suit.”
And if this technology becomes widespread, it will have other, less obvious effects that no law can address. In an article for The Atlantic, Alexis Madrigal suggests that small talk, whether on the phone or on the street, carries subtle social value. He quotes urbanist Jane Jacobs, who wrote that “everyday public contact at the local level” creates a “web of public respect and trust.” What do we lose if we give people yet another way to avoid social interaction?
Another effect of letting AI handle these conversations may be that we become ruder on calls. If we can't tell humans from machines by voice, won't we start treating every conversation with suspicion? We might find ourselves cutting off real people with 'Shut up and let me talk to a human!' And if booking a restaurant table becomes effortless, will we start making reservations carelessly, without thinking about whether we will actually show up? (Google says it will limit the number of calls a business can receive from the Assistant each day, as well as the number of calls the Assistant itself can make, to keep the service from being used for spam.)
There are no obvious answers to these questions, but as Bryson rightly points out, Google is at least doing the world a favor by drawing attention to the technology. It is not the only company working on such a service, and it will not be the only one to use it. “It's very important that they demonstrate this,” says Bryson. “It's important that they create demos and videos that let people see what's going on [...] What we really need is an informed society.”
In other words, we need to discuss all of this among ourselves before the robots begin to speak for us.