That has implications both for those who design intelligent machines and for those who use them. The second scenario, known as natural probable consequence, occurs when the ordinary actions of an AI system are used inappropriately to perform a criminal act. Kingston gives the example of an artificially intelligent robot in a Japanese motorcycle factory that killed a human worker. The key question here is whether the programmer of the machine knew that this outcome was a probable consequence of its use.
The third scenario is direct liability, and this requires both an action and an intent. An action is straightforward to prove if the AI system takes an action that results in a criminal act, or fails to take an action when there is a duty to act. Intent is much harder to determine but is still relevant, says Kingston. Then there is the issue of defense.
If an AI system can be criminally liable, what defense might it use? Kingston raises a number of possibilities: Could a program that is malfunctioning claim a defense similar to the human defense of insanity? Could an AI infected by an electronic virus claim defenses similar to coercion or intoxication?
These kinds of defenses are by no means theoretical. Kingston highlights a number of cases in the UK where people charged with computer-related offenses have successfully argued that their machines had been infected with malware that was instead responsible for the crime. In one case, a teenage computer hacker, charged with executing a denial-of-service attack, claimed that a Trojan program was instead responsible and had then wiped itself from the computer before it was forensically analyzed.
In his paper, Gabriel Hallevy observes that the way humans cope with breaches of legal order is through criminal law, operated by the criminal justice system.
Norton determined that the number of Americans likely killed in traffic accidents during that decade was three to four times the number of traffic-related deaths in the decade before. So state and local legislatures joined in the rule-writing effort. They banned pedestrians from walking in the street. Ultimately, society tackled the challenge.
Today, robot makers (and their occasional victims) operate under a variety of ill-fitting legal theories. In some situations, robot makers are liable only when they are negligent. Another theory assigns liability where the perpetrator is reckless. Still another imposes criminal liability where the culprit intends to harm another, like the hacker who disables a pacemaker (a possible concern, but mostly a new Netflix genre). The theories become a thicket when one considers new state laws. Alabama, California, Connecticut and North Dakota now allow pilot programs for autonomous vehicles.
Federal law is sparse. Congress enacted the Surface Transportation Reauthorization and Reform Act, encouraging autonomous vehicle research, assessment and facilitation.
But there is no national framework regulating robots or punishing their misuse. Overall, the only clear thing is that we lack clarity.
Instead, investigators, prosecutors and courts apply old rules and decide who goes to prison — on a somewhat haphazard, unpredictable, case-by-case basis. The business world is just winging it. Some companies take matters into their own hands with their terms of service.
Typically, these rules are strikingly one-sided and incomprehensible. There are crumbling ancient Egyptian scrolls that are easier to follow.