When Google unveiled its Duplex “AI” earlier this week, it sparked massive ethical concerns. Belatedly, the company seems to have recognized the problem.

The point of the new virtual assistant is to conduct phone calls on behalf of the user, making appointments and so on. Duplex is the culmination of a lot of Google’s work on machine-learning and natural-language technology—in other words, it sounds and comes across like a real person.

When Google CEO Sundar Pichai demonstrated the service at the company’s I/O developer conference, playing recordings of interactions between Duplex and actual people at a hair salon and a restaurant, the demo rightly wowed a lot of people. But it also outraged many. The problem was that those on the other end of the line apparently had no idea they were talking to a robot—good tech; bad ethics.

As prominent sociologist Zeynep Tufekci put it: “Google Assistant making calls pretending to be human not only without disclosing that it’s a bot, but adding ‘ummm’ and ‘aaah’ to deceive the human on the other end with the room cheering it… horrifying. Silicon Valley is ethically lost, rudderless and has not learned a thing.” Yesterday, Google finally responded by saying it was “designing this feature with disclosure built-in, and [will] make sure the system is appropriately identified.”