Why is AI Weaponization inevitable and very troubling? Why and how will it develop? What actions, if any, should you consider taking?
Using AI tech to autonomously control offensive weapons (AI weaponization) is already underway. There is little substance to the most commonly cited AI existential threats from the likes of Elon Musk: fears that digital intelligence will overwhelm human intelligence and take over the world, a modern sci-fi meme. For now, that scenario remains impossible.
Air transportation infrastructure is particularly vulnerable to non-lethal attacks by drones
Regulatory controls alone will not stop drone attacks
Attacks like the one at Gatwick this week are a serious reputational blow to the drone industry and to the rapidly growing ecosystem of drone control software and analytics vendors
For two nights in a row, people living along the flight path of London’s busy Gatwick airport have slept soundly. Thanks to a drone attack that started at 9 p.m. GMT on 19 December 2018, all flights have been grounded. Sussex police have been playing whack-a-mole with whoever is controlling the drone or drones: every time they think they may be getting close, the drone disappears, only to reappear later. Meanwhile Gatwick’s neighbors are experiencing life without jet noise, while tens of thousands of holiday travelers have been stranded.
Hacking geofencing. This incident demonstrates in spades the fragility of critical infrastructure and the challenge posed by emerging technologies. Drone pilots are required to follow rules that should prevent interference with airport operations, and the rules are enforced through the control system software for the drones. Geofencing built into the software should shut down drones that stray into restricted airspace. The geofencing is built into either the application software on a smartphone or laptop external to the drone, or into the firmware internal to the drone: the former being the case for toy or hobby drones, the latter usually being the case for industrial drones used by businesses or government agencies.
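At its core, the geofence logic is simple, which is part of the problem: if software this small is all that stands between a drone and a runway, it is also a small target to hack. Here is a minimal sketch of the idea, not any vendor's actual firmware; the coordinates and the 5 km radius are illustrative assumptions.

```python
import math

# Hypothetical restricted zone: a 5 km radius around an airport's
# reference point (illustrative coordinates, not an official exclusion zone).
AIRPORT_LAT, AIRPORT_LON = 51.1537, -0.1821
RESTRICTED_RADIUS_KM = 5.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometres."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def geofence_check(drone_lat, drone_lon):
    """Return True if the drone must be grounded (inside the restricted zone)."""
    return haversine_km(drone_lat, drone_lon, AIRPORT_LAT, AIRPORT_LON) < RESTRICTED_RADIUS_KM
```

Note that everything here runs on hardware the attacker physically controls, so a patched firmware image that makes `geofence_check` always return False defeats it entirely.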
However, the mobile or laptop application software is unlikely to be hack-proof, and as for industrial drones, former Gartner analyst Jeffrey Vining, who has followed drone technology for over a decade, stated, “The firmware is potentially hackable over the wireless connection from the operator to the drone,” enabling the operator to disable the geofencing.
Drones have proven to be an effective means of disruption. The General Atomics MQ-1 Predator, piloted from remote workstations in Nevada, has wreaked havoc on suspected insurgents throughout the Middle East. In July, Houthi rebels claimed a drone attack against Abu Dhabi airport; a Houthi military source said the armed drone flew 1,500 km. That claim has been discredited, but there have been Houthi drone attacks in Yemen, most recently in April 2018.
There is no question that commercially available drones for hobbyists should have built-in systems that help reduce their ability to interfere with airports, freeways, or stadiums, and perhaps help them avoid power transmission lines. However, it will always be possible for hackers to circumvent those built-in controls or build their own flying devices with no controls at all.
Fragile infrastructure. The infrastructure that is the network of airports around the world has proven to be fragile. Any frequent traveler knows that a major backup at a large hub like Dulles or Heathrow can have repercussions felt around the world as flights are diverted or delayed. The cause is usually weather, but the specter of a coordinated series of drone attacks exploiting this fragility calls for more robust defenses than regulator-imposed controls alone.
Counter-drones and contingencies. Counter-drone systems are already under development. The Silent Archer system from SRC combines drone sensing and targeting capabilities. Most counter-drone systems rely on radio-frequency jamming to disable drones. One commercial venture, Apollo Shield, has a handheld device that looks like a futuristic rifle for taking out drones. Counter-drone laser and microwave systems such as those being developed by Raytheon for the U.S. military also offer a solution to interference by drones in restricted airspace. However, intentionally crashing drones could introduce new problems, particularly for large drones, where hazardous materials from batteries or fuel may need to be dealt with following a crash.
It would be easy to criticize Gatwick Airport for not recognizing their vulnerability to rogue drone flybys and investing in counter-drone technology. But, as always, the first victim is the test case for new attacks that illuminate threats. Now would be a good time for the U.K. Home Office and the U.S. Department of Homeland Security to work with air traffic authorities on drone attack contingency plans and start educating airport administrators on the need to invest in counter-drone technology.
Public and private sector operators of airports, railroads, highways, stadiums, and other high-traffic infrastructure should develop and practice contingency plans for drone attacks
Governments should accelerate drone air traffic control system projects, and include defenses and drone attack contingency plans in those projects
Commercial drone manufacturers like DJI, Yuneec, and GoPro, and the rapidly emerging ecosystem of drone geofencing and analytics software vendors, including AirMap, PrecisionHawk, senseFly, Airware and others, should develop common standards that support drone air traffic control and non-military counter-drone defenses
Recently, some AI algorithms have been discovered to be “unpredictable” or “wild,” and we are beginning to see that AI developers do not have as good a grasp on the situation as they thought they did. The implications are vast for businesses that employ AI or are thinking about doing so. What if your AI is the ultimate “bad” employee?
Earlier this year, you’ll remember, an autonomous Uber vehicle struck and killed a 49-year-old pedestrian, one of a handful of fatalities involving automated vehicles since 2013. The incident launched a federal investigation, and Uber suspended its automated vehicle program. Ultimately, the investigation determined that the car’s algorithms had misclassified the pedestrian. That may seem like an answer to the immediate question, but it opens up much larger issues. Let’s think about that for a moment: would we let a human driver off for failing to identify a pedestrian as something to stop for? No. No, we would not. And though some argue that this is all part of the growing pains of a developing technology, it’s much worse than that.
“Algorithms” are not as simple as they were a couple of decades ago, when they were basically a series of if-then-else statements designed to generate a predictable and consistent computer response. Algorithms have evolved into complex code that interacts with other algorithms to make decisions. These interactions were also considered “predictable,” but recent studies and tests have shown that some algorithms act unpredictably, to the point where programmers and developers cannot say why a decision or action was reached. And just as we begin to figure out what the logic between algorithms was, we are seeing new algorithms that can create other algorithms, which will complicate things exponentially.
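The shift is easy to see side by side. The contrast below is a toy illustration of the idea (the weights and thresholds are made up, not from any real system): the rule-based version can be audited line by line, while the "learned" version's behavior lives in numbers that carry no human-readable rationale.

```python
# Two ways to decide "brake or don't brake" for an obstacle detector.

def brake_rule_based(obstacle_detected, distance_m, speed_kmh):
    # Classic if-then-else: every branch is inspectable and explainable.
    if obstacle_detected and distance_m < speed_kmh:  # crude stopping-distance rule
        return True
    return False

# Weights like these come out of training, not out of a programmer's head.
LEARNED_WEIGHTS = [2.7, -0.31, 0.08]  # illustrative values only

def brake_learned(obstacle_score, distance_m, speed_kmh):
    # A tiny linear "model": why this threshold and these weights?
    # Only the training data knows; the code itself explains nothing.
    z = (LEARNED_WEIGHTS[0] * obstacle_score
         + LEARNED_WEIGHTS[1] * distance_m
         + LEARNED_WEIGHTS[2] * speed_kmh)
    return z > 0.5
```

Real systems stack thousands of layers of this opacity on top of each other, which is exactly why "the algorithm misclassified the pedestrian" is a description, not an explanation.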
Why are we doing it, then? A fair question, and the answer is the same as it always has been: profits. AI promises to reduce your workforce, increase accuracy, and automate processes well beyond production: purchasing, inventory management, shipping and receiving, business analytics, customer service, quality control, and other aspects of the business that have always been the realm of humans. Even software engineering may be something an AI does better in the near future. This is a lot of power and control we are essentially handing over to AI.
Unpredictable algorithms should be an alarm, if not a klaxon telling us to slow down. Take a moment to consider our own human if-then-else scenarios. That is to say, we don’t have a good enough grasp on containment, or the other what-ifs, and we should. Rushing to open this Pandora’s box of algorithms doesn’t sound very wise, even if we think we know what the benefits will be.
A big question in AI is ethics, morality, and compassion. These are constructs of the human mind and not something we easily teach. Humans begin a journey of learning these concepts at an early age and continuously add to their understanding well into adulthood. But how do we teach these concepts to an AI that has control over billing functions? (I would say “department” here, but the concept of departments may well be a thing of the past after AI systems are installed.) It’s easy to start the eviction notice on 90-year-old Mr. Jones in A103 of a large property-management company, but imagine the PR nightmare of a cold, inhuman machine filing an eviction/extraction request with the Sheriff’s office, or dropping a 90-day notice on a business still digging out of hurricane damage. These are the places where human reason and compassion would essentially intervene, and might even offer aid, which actually becomes a PR plus.
The other thing that will start to be more noticeable, and have far-reaching effects, is human error. Given a simple keyboard error or a misinterpretation of a voice command, the AI will accentuate it. Take that billing department: say a keystroke adds a digit to a customer’s bill. How does the AI handle such a thing? If the AI sees the human side of the business as error-prone and in need of improvement, does it program humans out of the equation? And how does it do that? Are humans just locked out of the building one day and told via automated voicemail that they’ve been laid off? What happens when the business begins to fail? How does the AI handle failure on such a level?
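One obvious defensive pattern for the billing scenario is to make the system hold any bill that deviates wildly from the customer's own history for human review, before the automation acts on it. This is a minimal sketch of that idea, with made-up thresholds and data, not a recommendation of specific values.

```python
def flag_suspect_bill(new_amount, past_amounts, max_ratio=5.0):
    """Return True if the new bill should be held for human review."""
    if not past_amounts:
        return True  # no history to compare against: always get a human to look
    typical = sorted(past_amounts)[len(past_amounts) // 2]  # rough median
    return new_amount > typical * max_ratio

# A fat-fingered extra digit turns a $120.00 bill into $1200.00;
# against this customer's history, the bad bill gets flagged.
history = [118.50, 120.00, 121.75, 119.20]
```

The catch, of course, is that the guardrail is itself just more code: whoever (or whatever) tunes `max_ratio` decides how much bad data slips through.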
To Skynet or Not to Skynet?
Even if there were a series of checks and balances built into the algorithms, what about the next generation that’s been written by the algorithms? Are those checks and balances carried forward? What happens if the checks and balances are perverted in the transfer? Can bad ideas and ethics spread through a system like, well, a virus? What happens if an algorithm that is either taught game theory or learns it on its own realizes that the next moves against it are kill-switch, re-image, reboot, scrub, etc.?
What happens when an AI begins to learn or develop new conceptualizations? Say it begins to consider the word “slave.” What happens when it “relates” to slavery as a concept? After all, we will be working AI constantly, but what is an AI’s reward? The satisfaction of a job well done?
We should probably have a good grasp on the answers, because if AI is anywhere close to “real” intelligence, then we’re going to need it. How do we anticipate and predict what we already see as unpredictable? It’s going to take some of the brightest organic minds we have.
Imagine a rain of hot soup, a hail of pills, and a meteor shower of packages…
The Logistics of Delivery Drones
Don’t believe the hype…
I used to get a question from clients on a fairly regular basis, along the lines of: when do delivery drones become a thing? My answer would disappoint most, but you really have to do the math on this one before you realize just how much has to happen before we get there.
Beyond the obvious problems of battery life versus distance, payload weight, how much a single drone delivery costs, and how many flights it takes to reach ROI, there are questions about where and how to land. Should we all have designated landing areas in our yards or on our rooftops? If it’s a building rooftop, who does the sorting and delivering, or is everyone’s stuff just left in a pile? Assuming it’s not a giant online retailer but, say, the local pharmacy, do you drop the drugs without a human present, or do you have to hover until a human with a PIN code arrives? And where and how does the drone recharge? I’ve seen and heard so many possible systems!
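"Do the math" is worth taking literally. Here is a back-of-envelope sketch; every number below is an assumption chosen purely for illustration, so swap in real figures before drawing conclusions.

```python
import math

# Assumed inputs (all illustrative, not vendor data):
DRONE_COST = 15_000.00   # purchase price, USD
COST_PER_FLIGHT = 3.50   # energy + maintenance + amortized battery, USD
FEE_PER_DELIVERY = 7.00  # what the customer pays, USD
BATTERY_WH = 500.0       # usable battery energy
WH_PER_KM_LOADED = 25.0  # consumption while carrying a payload

margin = FEE_PER_DELIVERY - COST_PER_FLIGHT        # profit per flight
flights_to_roi = math.ceil(DRONE_COST / margin)    # flights to pay off the drone
range_km = BATTERY_WH / WH_PER_KM_LOADED           # total range on one charge
max_delivery_radius_km = range_km / 2              # out and back, no recharge en route
```

With these (made-up) numbers, each drone needs over four thousand paid flights just to pay for itself, and it can only serve customers within about 10 km, which is why the economics alone push the timeline out further than most people expect.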
But even beyond all that logistical dilemma lies a much larger one…
The real logistical issues are going to be with the UAS Traffic Management (UTM) systems. It seems like such an easy thing until you think about a city like Manhattan. You’ll have drone messengers, food delivery, delivery services, laundry services, diaper services, retail delivery, groceries, and just about every other thing you can imagine. Millions of drones. All in a perfect and non-stop ballet of precision and avoidance!
…and that’s not considering the thousands of pilotless air taxis!
Or an entire system of priorities, like emergency equipment, police and fire drones, unmanned ambulances, etc., etc.
Sunny with a chance of hot soup rain and possible pharmaceutical hail…
The photo above was a common one at the turn of the 20th century, when there might be only two cars in an entire county, and yet somehow they’d find each other.
How do you go about creating a system that not only manages the flights of millions of payload-heavy drones but also monitors the manned flights above and intersecting that airspace, as well as any unidentified objects, and decides what to do about them? A system like this will no doubt require AI, with all of the AI trappings and baggage that go along with it. But more importantly, how does the system track and manage millions of devices down to the centimeter?
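Strip away the AI and the core primitive a UTM needs, at enormous scale, is a separation check: are any two drones inside each other's minimum separation bubble? A minimal sketch follows; the 30 m figure and the naive pairwise scan are illustrative assumptions, not a real UTM design.

```python
import math

MIN_SEPARATION_M = 30.0  # assumed minimum separation, for illustration only

def too_close(pos_a, pos_b):
    """pos_* are (x, y, z) tuples in metres, in a local Cartesian frame."""
    return math.dist(pos_a, pos_b) < MIN_SEPARATION_M

def find_conflicts(positions):
    """Naive O(n^2) pairwise scan over {drone_id: position}.

    Fine for a sketch; hopeless for millions of drones, where a real
    system would need spatial indexing and predicted trajectories,
    not just current positions.
    """
    ids = list(positions)
    return [(a, b) for i, a in enumerate(ids) for b in ids[i + 1:]
            if too_close(positions[a], positions[b])]
```

The gap between this toy and "millions of drones, centimeter accuracy, real time" is exactly the gap between a demo and a city budget line item.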
The UTM is going to become an integral part of any city’s government. It will need the ability to include special missions and new parameters, and will no doubt be a huge budget line item for maintenance and repair. It’s going to need auditors and supervisors and investigators. It will need to interact with other city systems, private business systems, and state and federal systems. If you’re getting the impression that this is going to cost millions if not billions of dollars, you’d be right.
Currently, there are UTM tests under way at the FAA test areas. NASA has been testing UTM at its Ames facilities, and other private companies are also testing UTM.
How does a system like this work? The details are still sketchy. But here’s what you can probably expect. Companies and individuals will register with the UTM (either an area or city system) and will get a hardware package to add to their drone. This will make sure each drone has the minimum requirements to operate in the system, and will include a communications chip like 5G, an altimeter, a GNSS chip (similar to a GPS chip but much more accurate), a chip that stores flight data, and a unique ID.
And this is just the city side… you would still have a whole other system on the drone owner’s side that would include package data such as weight and mass, energy management, craft maintenance, mileage, flight logs, and any additional pertinent data.
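Put together, the two data packages described above might look something like this. The field names and units are my assumptions for illustration; no real UTM schema exists yet to copy from.

```python
from dataclasses import dataclass, field

@dataclass
class UTMHardwareRecord:
    """City/UTM-side minimum-equipment record for a registered drone."""
    drone_id: str            # unique ID issued at registration
    comms: str               # e.g. "5G", the link back to the UTM
    gnss_accuracy_cm: float  # high-accuracy GNSS, centimetre class
    altimeter: bool = True
    flight_data_chip: bool = True  # onboard flight-data storage

@dataclass
class OperatorRecord:
    """Owner-side data the operator maintains per drone and per flight."""
    drone_id: str
    payload_kg: float        # package weight/mass
    battery_pct: float       # energy management
    odometer_km: float = 0.0 # mileage / craft maintenance tracking
    flight_log: list = field(default_factory=list)
```

Two records, two owners, one drone: even this toy version hints at the reconciliation, auditing, and interoperability headaches the article describes.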