Critics have been urging companies involved in the creation of artificial intelligence to develop a code of ethics before it’s too late. Now Google is complying, following backlash over its work with the U.S. Pentagon on a system to analyze military drone footage.

Still, without any independent oversight, there’s little binding Google to its word. The need for oversight is particularly pressing with regard to militarized A.I., or autonomous weapons systems. What sets this category of weapons apart is its autonomy: combat drones, for example, that could eventually replace human-piloted fighter planes; robotic tanks that can operate on their own; and guns that are capable of firing themselves.

The argument in favour of this lethal breed of A.I. is that human operators aren’t put at risk, whether the weapons are guns at border crossings or planes and tanks on the front lines of a conflict.

But the risk of accidental casualties when a machine makes life-or-death decisions has many concerned, as does the potential for the technology to fall into the wrong hands, such as those of dictators or terrorists.

The United Nations last year discussed the possibility of instituting an international ban on “killer robots” following an open letter signed by more than 100 leaders from the artificial intelligence community. The signatories warned that the use of these weapons could lead to a “third revolution in warfare,” likening it to a Pandora’s box: hard to close once opened. Google has been a major player in the development of A.I.
