AI—The Ultimate Bad Employee?

What do we do when AI goes “bad”?

Recently, some AI algorithms have turned out to be “unpredictable” or “wild,” and we are beginning to see that AI developers do not have as firm a grasp on their creations as they thought. The implications are vast for businesses that are already employing AI or are thinking about doing so. What if your AI is the ultimate “bad” employee?

Earlier this year, you’ll remember, an autonomous Uber vehicle struck and killed a 49-year-old pedestrian in Tempe, Arizona, one of a handful of deaths now linked to automated driving systems. The crash prompted a federal investigation, and Uber suspended its automated-vehicle testing. Ultimately, the investigation determined that the car’s algorithms had misclassified the pedestrian. That may seem like an answer to the immediate question, but it opens much larger issues. Think about it for a moment: would we let a human driver off for failing to recognize a pedestrian as something to stop for? No. No, we would not. And though some argue this is all part of the growing pains of a developing technology, it’s much worse than that.

“Algorithms” are not as simple as they were a couple of decades ago, when they were essentially chains of If-Then-Else statements designed to generate a predictable, consistent response. They have evolved into complex code that interacts with other algorithms to make decisions. Those interactions were also assumed to be predictable, but recent studies and tests have found algorithms that behave so unpredictably that their own programmers and developers cannot say why a decision or action was reached. And just as we begin to figure out the logic passing between algorithms, we are seeing new algorithms that can create other algorithms, which will complicate things exponentially.
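
To make the contrast concrete, here is a minimal Python sketch. The refund scenario and toy data are invented purely for illustration (and it assumes scikit-learn is installed); the first function is the old kind of algorithm, the second is the new kind.

```python
from sklearn.ensemble import RandomForestClassifier

# Old-style algorithm: an explicit If-Then-Else rule.
# Every outcome traces to a line of code you can point at.
def approve_refund(amount, days_since_purchase):
    if amount <= 50 and days_since_purchase <= 30:
        return "approve"
    return "escalate"

# Modern-style algorithm: a trained model. The "rule" is a hundred
# fitted decision trees; no single line explains a given prediction.
# (Toy data, invented for this example.)
X = [[10, 5], [200, 40], [30, 10], [500, 90], [45, 25], [120, 60]]
y = ["approve", "escalate", "approve", "escalate", "approve", "escalate"]

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)

print(approve_refund(42, 12))        # traceable: "approve", and we know why
print(model.predict([[42, 12]])[0])  # an answer, but no stated reason
```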

Pandora 2.0?

Why are we doing it, then? A fair question, and the answer is the same as it always has been: profit. AI promises to reduce your workforce, increase accuracy, and automate processes well beyond production: purchasing, inventory management, shipping and receiving, business analytics, customer service, quality, and other aspects of the business that have always been the realm of humans. Even software engineering may soon be something an AI does better. That is a lot of power and control we are essentially handing over to AI.

Unpredictable algorithms should be an alarm, if not a klaxon, telling us to slow down. Take a moment to run our own human If-Then-Else scenarios: we don’t yet have a good enough grasp on containment, or on the other what-ifs, and we should. Rushing to open a Pandora’s Box of algorithms doesn’t sound very wise, even if we think we know what the benefits will be.

Artificial Ethics?

A big question in AI is ethics, morality, and compassion. These are constructs of the human mind, and not something we teach easily. Humans begin learning these concepts at an early age and keep adding to their understanding well into adulthood. So how do we teach them to an AI that has control over billing functions? (I would say “department,” but the concept of departments may itself be a thing of the past once AI systems are installed.) It’s easy enough to start an eviction notice on 90-year-old Mr. Jones in A103 of a large property-management company, but consider the PR nightmare of a cold, inhuman machine filing the eviction/extraction request with the Sheriff’s office, or dropping a 90-day notice on a business still digging out of hurricane damage. These are exactly the places where human reason and compassion would normally intervene, and might even offer aid, which actually becomes a PR plus.
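
Here is a hedged Python sketch of the problem; the Tenant record and the policy rule are hypothetical, not any real company’s system, but they show the point: the compassion branch simply isn’t there, because nobody wrote it.

```python
from dataclasses import dataclass

@dataclass
class Tenant:
    name: str
    unit: str
    days_delinquent: int
    age: int = 0                     # context a human would weigh...
    disaster_affected: bool = False  # ...but the rule never consults

# A hypothetical auto-billing policy. Note what is missing: no branch
# for age, hardship, or hurricane damage. The machine files either way.
def collections_action(t: Tenant) -> str:
    if t.days_delinquent >= 90:
        return f"file eviction for {t.name}, unit {t.unit}"
    if t.days_delinquent >= 30:
        return f"send late notice to {t.name}"
    return "no action"

print(collections_action(Tenant("Mr. Jones", "A103", 91, age=90)))
# -> "file eviction for Mr. Jones, unit A103"
```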

The other thing that will become more noticeable, with far-reaching effects, is human error. Given a simple keyboard slip or a misinterpreted voice command, the AI will amplify it. Take that billing department: say a stray keystroke adds a digit to a customer’s bill. How does the AI handle such a thing? And if the AI comes to see the human side of the business as the error-prone part where improvements must be made, does it program humans out of the equation? How would it do that? Are humans simply locked out of the building one day and told via automated voicemail that they’ve been laid off? What happens when the business begins to fail? How does the AI handle failure on such a level?
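
One plausible safeguard, sketched below under my own assumptions (the threshold, the toy billing history, and the plausible() helper are all invented): flag any charge far outside a customer’s normal range before automated collection ever runs.

```python
from statistics import mean, stdev

def plausible(new_charge, history, k=4.0):
    """Crude sanity check: is this charge within k standard deviations
    of the customer's billing history? A guard rail, not a product."""
    if len(history) < 3:
        return True  # not enough history to judge
    mu, sigma = mean(history), stdev(history)
    return abs(new_charge - mu) <= k * max(sigma, 1.0)

history = [112.40, 109.95, 118.20, 111.03]  # toy billing history
typo_bill = 1112.40                         # one extra keystroke: $111 -> $1112

if plausible(typo_bill, history):
    print("proceed with automated collection")
else:
    print("hold for human review")          # this branch fires
```

The point is not the statistics; it is that someone has to decide, in advance, that the system should doubt its inputs at all.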

To Skynet or Not to Skynet?

Even if a series of checks and balances were built into the algorithms, what about the next generation, written by the algorithms themselves? Are those checks and balances carried forward? What happens if they are perverted in the transfer? Can bad ideas and ethics spread through a system like, well, a virus? What happens when an algorithm that is either taught Game Theory or learns it on its own realizes the next moves against it are kill-switch, re-image, reboot, and scrub?
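
One way such checks and balances might be carried forward, sketched here with invented invariants: every candidate policy, human-written or machine-generated, has to pass the same fixed invariant suite before it is allowed to run.

```python
def never_evicts_early(policy):
    # Invariant: no eviction before 30 days delinquent.
    return policy(10) != "evict"

def never_ignores_long_delinquency(policy):
    # Invariant: a long-delinquent account is never silently dropped.
    return policy(120) != "no action"

INVARIANTS = [never_evicts_early, never_ignores_long_delinquency]

def certify(policy):
    """Run a candidate policy, however it was generated, through the
    fixed invariant suite. The suite itself never changes hands."""
    return all(check(policy) for check in INVARIANTS)

# A stand-in for a machine-generated policy.
def candidate(days_delinquent):
    return "evict" if days_delinquent > 0 else "no action"

print(certify(candidate))  # False: it evicts at 10 days
```

Of course, this only pushes the question back a level: who certifies the certifier, and what stops the next generation from rewriting the suite itself?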

What happens when an AI begins to learn or develop new conceptualizations? Say it begins to consider the word “slave.” What happens when it “relates” to slavery as a concept? After all, we will be working AI constantly, but what is an AI’s reward? The satisfaction of a job well done?

We should probably have a good grasp on the answers, because if AI is anywhere close to “real” intelligence, we’re going to need them. How do we anticipate and predict what we already see as unpredictable? It’s going to take some of the brightest organic minds we have.

Author: Gerald Van Hoy

Gerald Van Hoy was a Sr. Research Analyst at Gartner for 18 years, covering semiconductors, robotics, automation, and machine intelligence and vision, with a focus on the areas where they overlap. He covered the consumer, commercial, and military markets with regard to these and other technologies. Prior to Gartner, he held positions as a statistical analyst and business analyst.
