AI—The Ultimate Bad Employee?

What do we do when AI goes “bad”?

Recently, some AI algorithms have been discovered to be “unpredictable” or “wild,” and we are beginning to see that AI developers do not have as good a grasp on the situation as they thought they did. The implications are vast for businesses that employ AI or are thinking about doing so. What if your AI is the ultimate “bad” employee?

Earlier this year, you’ll remember, an autonomous Uber vehicle killed a 49-year-old pedestrian, one of four such fatalities since 2013. The crash launched a federal investigation, and Uber suspended its use of automated vehicles. Ultimately, the investigation determined that the car’s algorithms had misclassified the pedestrian. That may seem like an answer to the immediate question, but it opens up much larger issues. Think about it for a moment: would we let a human driver off for failing to identify a pedestrian as something to stop for? No. No, we would not. And though some argue that this is all part of the growing pains of a developing technology, it’s much worse than that.

“Algorithms” are not as simple as they were a couple of decades ago, when they were basically a series of If-Then-Else statements designed to generate a predictable, consistent computer response. Algorithms have evolved into complex code that interacts with other algorithms to make decisions. Those interactions were also considered “predictable,” but recent studies and tests have found algorithms that act so unpredictably that programmers and developers cannot say why a decision or action was reached. And just as we begin to figure out the logic between algorithms, we are seeing new algorithms that can create other algorithms, which will complicate things exponentially.
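The contrast can be made concrete with a minimal sketch. Everything below is illustrative: the function names, features, and weights are hypothetical, but the point stands — an If-Then-Else rule is auditable line by line, while a trained model’s “decision” is just arithmetic over weights that no one wrote by hand.

```python
def rule_based_brake(obj_type):
    # Old-style If-Then-Else: every decision path is explicit and auditable.
    if obj_type in ("pedestrian", "cyclist", "vehicle"):
        return True
    else:
        return False

# A trained model reduces to numeric weights (hypothetical values here);
# the "reasoning" is a weighted sum, and no single line explains WHY it brakes.
LEARNED_WEIGHTS = [0.9, -0.4, 0.2]

def learned_brake(features):
    score = sum(w * x for w, x in zip(LEARNED_WEIGHTS, features))
    return score > 0.5  # threshold tuned on data, not written by hand

print(rule_based_brake("pedestrian"))   # True, traceable to one branch
print(learned_brake([1.0, 0.2, 0.3]))   # depends entirely on the weights
```

When the weights come from training rather than a programmer, a wrong answer has no branch to point at, which is exactly the misclassification problem in the Uber case.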

Pandora 2.0?

Why are we doing it, then? A fair question, and the answer is the same as it always has been: profits. AI promises to reduce your workforce, increase accuracy, and automate processes well beyond production: purchasing, inventory management, shipping and receiving, business analytics, customer service, quality, and other aspects of the business that have always been the realm of humans. Even software engineering may be something an AI does better in the near future. That is a lot of power and control we are essentially handing over to AI.

Unpredictable algorithms should be a sort of alarm, if not a klaxon, telling us to slow down. Take a moment to consider our own human If-Then-Else scenarios. We don’t have a good enough grasp on containment, or the other what-ifs, and we should. Rushing to open a Pandora’s Box of algorithms doesn’t sound very wise, even if we think we know what the benefits will be.

Artificial Ethics?

A big question in AI is ethics, morality, and compassion. These are constructs of the human mind and not something we easily teach. Humans begin learning these concepts at an early age and continuously add to their understanding well into adulthood. So how do we teach them to an AI that has control over billing functions? (I would say “department” here, but the concept of departments may well be a thing of the past once AI systems are installed.) It’s easy to start the eviction notice on 90-year-old Mr. Jones in A103 of a large property-management company. But consider the PR nightmare of a cold, inhuman machine filing an eviction request with the Sheriff’s office, or dropping a 90-day notice on a business still digging out of hurricane damage, or any of the other places where human reason and compassion would intervene, perhaps even offering aid, which actually becomes a PR plus.

The other thing that will become more noticeable, with far-reaching effects, is human error. Given a simple keyboard error or a misinterpreted voice command, the AI will amplify it. Take that billing department: say a keystroke adds a digit to a customer’s bill. How does the AI handle such a thing? If the AI sees the human side of the business as error-prone and in need of improvement, does it program humans out of the equation? And how does it do that? Are humans just locked out of the building one day and told via automated voicemail that they’ve been laid off? What happens when the business begins to fail? How does the AI handle failure on such a level?

To Skynet or Not to Skynet?

Even if there were a series of checks and balances built into the algorithms, what about the next generation that’s been written by the algorithms? Are those checks and balances carried forward? What happens if they are perverted in the transfer? Can bad ideas and ethics spread through a system like, well, a virus? What happens if an algorithm that is taught Game Theory, or learns it on its own, realizes that the next moves against it are kill-switch, re-image, reboot, scrub, and so on?

What happens when an AI begins to learn or develop new conceptualizations? Say it begins to consider the word “slave.” What happens when it “relates” to slavery as a concept? After all, we will be working AI constantly, but what is an AI’s reward? The satisfaction of a job well done?

We should probably have a good grasp on the answers, because if AI is anywhere close to “real” intelligence, then we’re going to need it. How do we anticipate and predict what we already see as unpredictable? It’s going to take some of the brightest organic minds we have.

When to treat family and friends like acquaintances

Key takeaway

Third party risk management is not just for suppliers, IT vendors, and service providers. In many cases, subsidiaries or other organizations within your enterprise, and even well-known business customers, should be brought into the third party risk management program.

The problems at Deutsche Bank and Danske Bank reminded me of a conversation I had with the CISO at a large high-tech equipment manufacturer. We were discussing best practices in third party risk management. I asked him what types of companies he was monitoring, and he told me they were subsidiaries. He was putting these subsidiaries through the same hoops as any other third party vendor: classifying them into three risk categories, doing deep dives and continuous monitoring on the higher-risk ones, and documenting certification and accreditation on all of them.
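That tiering process can be sketched in a few lines. This is a hypothetical illustration of the approach the CISO described, not a real framework: the score thresholds, tier names, and oversight activities are all assumptions.

```python
def risk_tier(score):
    # score: a composite 0-100 risk score from an assessment questionnaire
    # (thresholds are illustrative, not from any published standard)
    if score >= 70:
        return "high"
    elif score >= 40:
        return "medium"
    return "low"

# Higher tiers get deeper scrutiny; every tier gets certification on file.
OVERSIGHT = {
    "high":   ["deep-dive assessment", "continuous monitoring", "certification"],
    "medium": ["annual review", "certification"],
    "low":    ["certification"],
}

def program_for(entity_name, score):
    tier = risk_tier(score)
    return {"entity": entity_name, "tier": tier, "activities": OVERSIGHT[tier]}

# Subsidiaries go through the same hoops as external vendors:
print(program_for("Hypothetical Subsidiary Ltd.", 85))
```

The design point is simply that the entity’s relationship to you (“in the family” or not) never appears in the function: only the risk score does.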

The Financial Times today recounted Deutsche’s current regulatory rows: money laundering by Regula, a former subsidiary it had acquired in the British Virgin Islands, and Deutsche’s role as a correspondent bank processing over €160 billion in suspicious payments for Danske Bank Estonia. And of course, Danske Bank Estonia was a subsidiary acquired by Danske.

Despite being “in the family,” Regula and Danske Bank Estonia apparently did not get enough scrutiny from their parents. Had they been treated as high risk third parties, the risks and the lack of effective controls to prevent money laundering might have been discovered earlier, avoiding the heavy supervisory presence and regulatory investigations the parents now enjoy.

Also, Danske Estonia’s use of Deutsche Bank, rather than its own parent, to transfer money out of Estonia could have helped it bypass parental scrutiny. Should Deutsche have raised a red flag, like a neighbor who lets the neighbor kid smoke pot in her backyard? Deutsche didn’t raise one, stating instead that it wasn’t responsible for validating the source of the funds; that was Danske’s problem.

Yet, now it’s all come back on Deutsche, and the lesson learned for the rest of us — when a lot of money is on the line, treat your family and your friends as acquaintances.


1 — Bring high risk subsidiaries into your third party risk management program

2 — High risk customers should also be included in your third party risk management program

Originally published on

The Longest Last Mile

Imagine a rain of hot soup, a hail of pills, and a meteor shower of packages…

The Logistics of Delivery Drones

Don’t believe the hype…

I used to get a question from clients on a fairly regular basis, along the lines of: when do delivery drones begin being a thing? My answer would disappoint most, but you really have to do the math on this one before you realize just how much has to happen before we get there.

Besides the obvious problems of battery life versus distance, payload weight, the cost of a single drone delivery, and how many flights it takes before ROI, there are problems of where and how to land. Should we all have designated landing areas in our yards or on our rooftops? If it’s a building rooftop, who does the sorting and delivering, or is everyone’s stuff just left in a pile? Assuming it’s not a giant online retailer but, say, the local pharmacy, do you drop the drugs without a human present, or do you hover until a human with a PIN code arrives? And where and how does the drone recharge? I’ve seen and heard so many possible systems!
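The “do the math” point is easy to see with a back-of-the-envelope sketch. Every number below is a made-up assumption for illustration, not industry data: the shape of the calculation is what matters.

```python
import math

DRONE_COST = 15_000.0    # assumed purchase price per drone, USD
COST_PER_FLIGHT = 3.50   # assumed energy + maintenance cost per delivery
DELIVERY_FEE = 6.00      # assumed revenue per delivery

def flights_to_roi(drone_cost, fee, cost_per_flight):
    # Number of deliveries before the drone pays for itself.
    margin = fee - cost_per_flight
    if margin <= 0:
        raise ValueError("each flight loses money; ROI is never reached")
    return math.ceil(drone_cost / margin)

print(flights_to_roi(DRONE_COST, DELIVERY_FEE, COST_PER_FLIGHT))  # 6000 flights
```

At a $2.50 margin per delivery, this hypothetical drone needs 6,000 successful flights just to break even, before counting crashes, downtime, or the UTM fees discussed next.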

But even beyond all those logistical dilemmas lies a much larger one…

The real logistical issues are going to be with the UAS Traffic Management (UTM) systems. It seems like such an easy thing until you think about a city like Manhattan. You’ll have drone messengers, food delivery, delivery services, laundry services, diaper services, retail delivery, groceries, and just about every other thing you can imagine.  Millions of drones. All in a perfect and non-stop ballet of precision and avoidance!

…and that’s not considering the thousands of pilotless air taxis!

Or an entire system of priorities, like emergency equipment, police and fire drones, unmanned ambulances, etc., etc.

Sunny with a chance of hot soup rain and possible pharmaceutical hail…

The photo above was a common one at the turn of the 20th century, when there might be only two cars in an entire county and yet somehow they’d find each other.

How do you go about creating a system that not only manages the flights of millions of payload-heavy drones but also monitors the manned flights above and intersecting that airspace, as well as any unidentified objects and what to do about them? A system like this will no doubt require AI, with all of the trappings and baggage that go along with it. More importantly, how does the system track and steer millions of devices down to the centimeter?

The UTM is going to become an integral part of any city’s government. It will need the ability to accommodate special missions and new parameters, and it will no doubt be a huge budget line item for maintenance and repair. It’s going to need auditors and supervisors and investigators. It will need to interact with other city systems, private business systems, and state and federal systems. If you’re getting the impression that this is going to cost millions if not billions of dollars, you’d be right.

Currently, there are UTM tests under way at the FAA test areas. NASA has been testing UTM at its Ames facilities, and other private companies are also testing UTM.

How does a system like this work? The details are still sketchy, but here’s what you can probably expect. Companies and individuals will register with the UTM (either an area or a city system) and will get a hardware package to add to their drone. This will ensure each drone meets the minimum requirements to operate in the system, and it will include a communications chip (such as 5G), an altimeter, a GNSS chip (similar to a GPS chip but much more accurate), a chip that stores flight data, and a unique ID.

And this is just the city side… you would still have a whole other system on the drone-owner side that would include package data such as weight and mass, energy management, craft maintenance, mileage, flight logs, and any additional pertinent data.
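The two data layers described above can be sketched as a pair of record types. This is speculative: no real UTM schema exists yet, so every field name and type here is my assumption about what the city-side hardware profile and the owner-side flight record might contain.

```python
from dataclasses import dataclass, field

@dataclass
class UtmHardwareProfile:
    # City/UTM side: the minimum equipment a registered drone carries.
    drone_id: str              # unique ID issued at registration
    comms: str = "5G"          # communications chip
    has_altimeter: bool = True
    has_gnss: bool = True      # high-accuracy positioning chip
    flight_recorder: bool = True  # chip that stores flight data

@dataclass
class OperatorFlightRecord:
    # Owner side: per-flight data the operator maintains.
    drone_id: str
    payload_kg: float          # package weight/mass
    battery_pct: float         # energy management
    mileage_km: float          # cumulative mileage
    maintenance_notes: list = field(default_factory=list)

# Hypothetical registration and one logged flight:
drone = UtmHardwareProfile(drone_id="UTM-000123")
flight = OperatorFlightRecord(drone_id=drone.drone_id, payload_kg=1.2,
                              battery_pct=87.0, mileage_km=4.6)
print(drone.drone_id, flight.payload_kg)
```

Splitting the record this way mirrors the article’s point: the city system only needs to know the drone is compliant and trackable, while the commercial details stay on the operator’s side.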