Calo and co. are looking at the intersection of adversarial examples (blind spots in machine learning systems that make it trivial to trick them into miscategorizing their input, mistaking one face for another, or a stop sign for a sign telling a car to speed up, or thinking a turtle is a rifle) and the Computer Fraud and Abuse Act, a ridiculously overbroad anti-hacking law inspired by a panic over the 1984 movie WarGames (seriously) that gives prosecutors almost unlimited authority to attack security researchers. There is a case to be made that the CFAA could apply to each of these scenarios.
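
To make the "adversarial example" idea concrete, here is a minimal, purely illustrative sketch of the fast gradient sign method (FGSM) applied to a toy linear classifier in NumPy. The classifier, its weights, and the perturbation budget `eps` are all assumptions invented for this example and are not taken from the paper; real attacks on image or speech models work the same way in principle, only against far larger networks.

```python
# Illustrative only: a toy linear "stop sign vs. not stop sign" classifier,
# attacked with the fast gradient sign method (FGSM). All weights and inputs
# here are hypothetical; nothing is taken from the paper discussed above.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
w = rng.normal(size=100)       # hypothetical learned weights
x = w / np.linalg.norm(w)      # an input the model confidently scores as "stop sign"

def predict(x):
    """Probability the model assigns to the 'stop sign' class."""
    return sigmoid(w @ x)

# FGSM: move each input dimension a small step eps in the direction that
# most lowers the score. For a linear model the gradient w.r.t. x is just w,
# so the perturbation is -eps * sign(w). Each individual change is tiny,
# yet the classifier's output flips.
eps = 0.15
x_adv = x - eps * np.sign(w)

print(f"original prediction:    {predict(x):.3f}")      # close to 1.0 ("stop sign")
print(f"adversarial prediction: {predict(x_adv):.3f}")  # pushed well below 0.5
```

The same gradient-following idea, scaled up to deep networks and other input modalities, is what makes attacks like the adversarial sound and the defaced stop sign in the scenarios below possible.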

The adversarial sound in the first scenario could constitute the “transmission” of a “command” to a “protected computer,” i.e., the victim’s phone. Assuming the revelation of the victim’s location leads to physical harm, perhaps in the form of violence by the perpetrator, the damage requirement of the CFAA would be satisfied.

Similarly, by defacing the stop sign, the malicious competitor can be said to have caused the transmission of “information” — from the stop sign to the car — that led to a public safety risk. In both instances, had the attacker broken into the phone or car by exploiting a security vulnerability and altered the firmware or hardware to cause the exact same harm, the CFAA would almost certainly apply.

On the other hand, a perhaps equally strong case could be made that the CFAA does not apply. In neither scenario does the defendant circumvent any security protocols or violate terms of service.

The transmission of an adversarial sound seemingly does not cause damage without authorization to a protected computer. Rather, it causes damage to a person through an authorized mechanism — voice control — of a protected computer.
