Today, we get an answer of sorts thanks to the work of John Kingston at the University of Brighton in the UK, who maps out the landscape in this incipient legal field. His analysis raises some important issues that the automotive, computing, and legal worlds should be wrestling with in earnest, if they are not already.

At the heart of this debate is whether an AI system could be held criminally liable for its actions. Kingston says that Gabriel Hallevy at Ono Academic College in Israel has explored this issue in detail.

Criminal liability usually requires an action and a mental intent (in legalese, an actus reus and mens rea). Kingston says Hallevy explores three scenarios that could apply to AI systems. The first, known as perpetrator-via-another, applies when an offense has been committed by a mentally deficient person or an animal, who is therefore deemed to be innocent.

But anybody who has instructed the mentally deficient person or animal can be held criminally liable: a dog owner who instructs the animal to attack another individual, for example.

That has implications for those designing intelligent machines and those who use them. “An AI program could be held to be an innocent agent, with either the software programmer or the user being held to be the perpetrator-via-another,” says Kingston. Read more from technologyreview.com…