Artificial intelligence (AI) is progressing in leaps and bounds, unleashing a legal and ethical debate worldwide as it threatens to change human society forever. For example, if a driverless car is involved in a fatal accident, it is the algorithm's operator who faces "product liability" rules.
In conventional war, AI raises the ethically chilling prospect of machines killing humans. A further fear, with major implications, is that today's proto-AI technologies will evolve into true AI super-intelligence so rapidly that there will be no time to research the pros and cons.
As apprehensions of a "hyper-war scenario" build, the main challenge remains how to preserve the human factor in AI and prevent a drastic erosion of military security as AI-enabled combat changes the dimensions of warfare. Every country today needs to re-evaluate its defense mechanisms and reinterpret its geostrategic defenses to fit the modern use of artificial intelligence.
Discussing the risks of "hyper-war," August Cole, senior fellow at the Atlantic Council, predicts that "the decision-making speed of machines is going to eclipse the political and civilian ability." Because most AI algorithms are dual-use in nature, they can also be adapted for security purposes, and preparing for a "hyper-war" will soon be a priority. The US and China have already announced that they intend to harness AI for military use.
Recognising the military significance of AI, Russian President Vladimir Putin has called it the future for all mankind, one that will bring "colossal opportunities" and "threats that are difficult to predict." Declaring that the country that leads in artificial intelligence will rule the world, Putin judged the gravest threat to be the one to nuclear stability.

To explore the possibility of nuclear mishaps caused by AI, the RAND Corporation started a project known as Security 2040. One of its researchers, engineer Andrew Lohn, says: "This isn't just a movie scenario. Things that are relatively simple can raise tensions and lead us to some dangerous places if we are not careful." The fear, essentially, is that computer miscalculations could lead to nuclear annihilation if machines taught to think and learn like humans suddenly go haywire and spiral into a 'Terminator' kind of nightmare.

Read more from atimes.com…