Inside a Peacekeeper missile silo at Vandenberg Air Force Base, California. Photo: DoD

In 1983, Soviet Lieutenant Colonel Stanislav Petrov sat in a bunker near Moscow, watching monitors and waiting for an attack from the US.

If he saw one, he would report it up the chain and the Soviet Union would retaliate with nuclear hellfire. One September night, the monitors warned him that missiles were headed toward the Soviet Union.

But Petrov hesitated. He thought it might have been a false alarm. “I had a funny feeling in my gut,” Petrov later told the Washington Post.

“I didn’t want to make a mistake. I made a decision, and that was it.”

Whether a machine would have shown the same restraint is the question at the heart of a new report from the RAND Corporation, the nonpartisan policy think tank.

RAND wanted to know what would happen if the Petrovs of the world were no longer in the room watching for missiles. “Artificial intelligence may be strategically destabilizing not because it works too well,” the report reads, “but because it works just well enough to feed uncertainty.”

The supposed benefits of machine learning are vast.
