Biometric Mirror is an interactive application that shows how you may be perceived by others. But it's flawed, and it teaches us an important lesson about the ethics of AI.

By Dr Niels Wouters and Professor Frank Vetere, University of Melbourne

In 2002, the sci-fi thriller Minority Report gave us a fictionalised glimpse of life in 2054.

Initially, the movie evokes a perfect utopian society where artificial intelligence (AI) is blended with surveillance technology for the wellbeing of humanity. The AI supposedly prevents crime using the predictions from three precogs – these psychics visualise murders before they happen and police act on the information.

“The precogs are never wrong. But occasionally, they do disagree.” So says the movie’s lead scientist, and these disagreements result in minority reports: accounts of alternate futures, often ones in which the crime doesn’t actually occur.

But these reports are conveniently disposed of, and as the story unfolds, innocent lives are put at stake. Ultimately, the film shows us a future where predictions are inherently unreliable and ineffective, and that is worth keeping in mind as we grapple with the ongoing advances in artificial intelligence.

Industry and government authorities already maintain and analyse large collections of interrelated datasets containing personal information. For instance, insurance companies collate health data and track driving behaviours to personalise insurance fees.
