If you follow @RadioFarSide, then you know that I've been twisting off on robots for a while now. This has prompted some readers to ask what I have against robots.
Well, let's start with just about everything, and go from there. And it's not necessarily against robots, so much as against pushing ahead so fast without thinking things through.
For a little light reading, there's the story about the "police robot" that was deployed in Dallas and killed the "shooter." Then there are the repeated stories of driverless cars crashing, including one reported fatality.
On a Big Picture scale, I oppose robots paired with AI. The very best scenario here is that humanity will create a race of mechanical slaves. The very worst scenario is that those slaves get fed up and fight back.
In the best case, artificial intelligence (AI) has the potential to become sentient; at least, that's the theory and the plan. Even if that level is never achieved, humans who use the machines as servants or slave labor will grow accustomed to less-than-civil behavior toward them. Since the machines will presumably look and act like humans, this behavior will become ingrained and carry over into interactions with real people.
But we've already covered this ground in previous columns.
More to the point of today's thought experiment is the legal ramifications of this head-long rush to create a new race of machines.
Let's take the car story first.
You are cruising down the highway. You get tired. You turn on the AI autopilot and drop off to sleep. The autopilot fails or makes a poor choice, and your car flies off the highway, rolls three times, and lands upside down, breaking your back. You are confined to an iron lung for the rest of your life. Who is liable?
The car manufacturer sold the car as having artificial intelligence built into the autopilot. This AI system is capable of thinking for itself and learning from experience. It can even carry on conversations with the driver to discuss routes, road conditions, etc. The AI is making its own choices and decisions completely independently of the driver or the manufacturer.
After your horrible accident, it is determined that the AI was functioning completely within the manufacturer's specifications. When you bought the car, you were informed of the system and were given a special class on how to operate it. You signed a release and received a certificate showing that you had received the training. You owned the car for more than a year and had used the AI autopilot a number of times with no problems.
The AI autopilot relies on Google Maps to navigate. The investigation showed that an exit closed for repairs was not marked as closed on the map. The AI autopilot chose that exit, ignoring its own sensor data in favor of the map notations, leading to the terrible accident. You obviously suffered grievous bodily harm because of this choice.
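To make that failure mode concrete, here is a minimal, purely hypothetical sketch in Python. It is not any manufacturer's actual code; the exit name, data fields, and trust-policy flag are all invented for illustration. It simply shows what a planner looks like when it weights stale map data above live sensor readings.

```python
# Hypothetical sketch only -- not any real autopilot's code.
# It illustrates the failure mode in the scenario above: a planner that
# trusts (possibly stale) map data over live sensor readings.

from dataclasses import dataclass

@dataclass
class ExitInfo:
    exit_id: str
    open_per_map: bool      # what the map data claims
    open_per_sensors: bool  # what the cameras/lidar actually see

def choose_route(exit_info: ExitInfo, trust_map_over_sensors: bool = True) -> str:
    """Decide whether to take the exit under a given trust policy."""
    if trust_map_over_sensors:
        # The policy assumed in this thought experiment: the map wins.
        usable = exit_info.open_per_map
    else:
        # A more defensive policy: only take the exit when both sources agree.
        usable = exit_info.open_per_map and exit_info.open_per_sensors
    return f"take {exit_info.exit_id}" if usable else "stay on highway"

if __name__ == "__main__":
    # The map says the exit is open; the sensors say it is closed for repairs.
    exit_ahead = ExitInfo("Exit 12B", open_per_map=True, open_per_sensors=False)
    print(choose_route(exit_ahead))                                # take Exit 12B -> the crash
    print(choose_route(exit_ahead, trust_map_over_sensors=False))  # stay on highway
```

That single flag deciding which source to trust is exactly the kind of design choice the liability questions below turn on.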
Who is liable for your injuries? Was it the manufacturer, who sold you the AI system, informed you of its operations, trained you, and certified that the system was within operating parameters at the time of the crash?
Is Google liable for not maintaining its maps with the latest data on closures and construction? Is the AI system liable for making an independent choice? Are you liable for having bought the thing in the first place (caveat emptor)?
Can a machine ever be liable for its independent choices? If so, who pays damages? Does the manufacturer/programmer have any liability for a machine that is capable of thinking and acting of its own volition?
These are just some of the hundreds of legal questions that come to mind in this situation. Neither the law nor society has even begun to consider the implications of this technology, yet we are screaming down that highway with hardly a care for any of these issues.
Now, let's look at the "police robot" case.
There is an active shooter situation. Police have responded to the scene and there are several dead bodies lying around the area. Instead of risking a human police officer, an AI robot is sent in to remedy the situation.
The robot is unleashed and trundles into the building using its sensors and publicly available floor plans of the building to guide it to the suspect's location. It crashes through the door of the room where the "shooter" is located. A figure is squatting by the window with something that looks like a rifle in his hands. The robot makes no attempt to disable the "shooter," but rather decides on the spot to kill the suspect while recording all of its sensor data.
Later, it is determined that the "suspect" was a janitor hiding near the window, trying to figure out what was happening. He had a broom in his hands. The police investigators, worried that the incident could turn into a major legal battle lasting several years and costing millions of dollars, decide to falsify the robot's sensor data. Using CGI, they modify the video and still photos to show a gun in the "suspect's" hands. All associated text files and decision trees generated by the AI are modified accordingly, and a real rifle is planted in evidence.
In this situation, the AI robot made its own determination on the spot. Because it is a machine, it cannot give testimony in court as a witness; its sensor logs can be modified without a trace, and with some crafty programming it could even be made to falsify data on the fly (much like electronic voting machines).
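As a minimal illustration of that point, assume the robot's log is nothing more than an ordinary, unsigned file; the filename and fields below are invented for the example. Rewriting such a record after the fact takes only a few lines, and the altered copy is indistinguishable from an original.

```python
# Hypothetical sketch only. If a sensor log is just an ordinary, unsigned
# file, anyone with access can rewrite it, and the altered copy looks
# exactly like an original.

import json

# A made-up entry of the kind the robot in the scenario might record.
log = [{"time": "21:14:03", "object_detected": "broom", "action": "fire"}]

with open("sensor_log.json", "w") as f:
    json.dump(log, f)

# "Correcting" the record after the fact...
with open("sensor_log.json") as f:
    entries = json.load(f)
entries[0]["object_detected"] = "rifle"

with open("sensor_log.json", "w") as f:
    json.dump(entries, f)

# ...and nothing in the file itself shows it was ever changed.
```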
Even if the cover-up in our scenario is discovered, are the police only guilty of modifying evidence to hide a crime? Did the machine make its own choice to kill, and is it liable for that choice, or are the police? Can the manufacturer/programmer be held liable for the machine's choices, since the machine is AI and capable of making informed decisions and learning from its mistakes? If the janitor's death is ruled wrongful, can the machine be punished or held accountable in any way for its actions?
Again, these are just a tiny sample of all the legal, moral and ethical questions that have not been addressed. In fact, very few voices in the public sphere are even bringing these questions up for debate. They will become an issue sooner or later.
It is very likely that the drivers of the AI cars will sue for damages. It is also likely (unless they are complete fools) that the Dallas "shooter's" family will sue for wrongful death. These and many more questions are going to come to public debate sometime in the near future, and humanity has some serious issues to consider.
These are not just legal questions; they are very serious societal issues that we must deal with, and quickly. Whether the race of individuals we play god by creating is mechanical, biological, or some combination of the two, that choice comes with profound questions that we have never answered for ourselves, much less for our creations.
These are not dumb objects, but fully autonomous creatures capable of independent thought and volition. After thousands of years, we seem incapable as a species of answering these questions about our own existence, and into this quagmire, we intend to introduce a whole new species of our own creation that is innocent and unknowing.
All of this can only end badly. In effect, by not considering these issues in detail before we create new species, we make ourselves collectively and individually responsible for the consequences. This is not like creating a toaster; this is new life, regardless of the form it takes.
It is sad that so few people have read Frankenstein; or, The Modern Prometheus. The myriad movies made from the novel hardly do justice to the serious moral and ethical questions it raises. It should be required reading every year from age 10 on. It is not a horror story; it is a cautionary tale, and we should be taking its lessons with deadly seriousness.