Once the AI genie got out of the bottle, it wasn’t going back in. In April, Google employees very publicly protested the company’s participation in a Pentagon program that used AI to interpret images and improve the targeting of drone strikes.

Certainly, the risks and ethical concerns of AI weapons are real, as with any new technology of war. Most opponents of AI weapons point to the ethical problem of a computer algorithm both selecting and eliminating human targets without any human control over the process.

However, the risks associated with AI weapons stretch beyond the ethics of war. Some have pointed to crisis instability if AI weapons were to proliferate throughout the world.

If two states in a crisis have access to weapons capable of such rapid destruction, the first-mover advantage will likely push them toward war rather than away from it. The first-mover advantage is often cited as a cause of World War I: rapid troop mobilization and advances in weaponry led military planners to believe that whoever moved first would gain an insurmountable advantage.

Therefore, if you believe your adversary is preparing to move, you have a strong incentive to move before they do. AI could create similar incentives. Read more from nationalinterest.org…
