AI’s Real Existential Threat To Humanity

Why is AI weaponization inevitable, and why is it so troubling? How will it develop? What actions, if any, should you consider taking?

Using AI technology to autonomously control offensive weapons — AI weaponization — is already underway. There is little substance to the most commonly cited AI existential threats from the likes of Elon Musk: fears that digital intelligence will overwhelm human intelligence and take over the world, a modern sci-fi meme. That scenario remains, for now, simply not possible.

AI weaponization is different.


Gatwick: attack of the drones

Authors – French Caldwell and Richard Stiennon

Key takeaways –

  1. Air transportation infrastructure is particularly vulnerable to non-lethal attacks by drones
  2. Regulatory controls alone will not stop drone attacks
  3. Attacks like the one at Gatwick this week are a serious reputational blow to the drone industry and rapidly growing drone control software and analytics vendor ecology

For two nights in a row, people living along the flight path of London’s busy Gatwick airport have slept soundly. Thanks to a drone attack that started at 9 p.m. GMT on 19 December 2018, all flights have been grounded. Sussex police have been playing whack-a-mole with whoever is controlling the drone or drones: every time they think they may be getting close, the drone disappears, only to reappear later. Meanwhile, Gatwick’s neighbors are experiencing life without jet noise, while tens of thousands of holiday travelers have been stranded.

Hacking geofencing.  This incident demonstrates in spades the fragility of critical infrastructure and the challenge posed by emerging technologies.  Drone pilots are required to follow rules that should prevent interference with airport operations, and the rules are enforced through the drones’ control system software.  Geofencing built into the software should shut down drones that stray into restricted airspace.  The geofencing is built either into the application software on a smartphone or laptop external to the drone, or into the firmware internal to the drone; the former is typically the case for toy or hobby drones, and the latter for industrial drones used by businesses or government agencies.
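As a rough illustration of the mechanism, a geofence check can be as simple as comparing the drone’s position against an exclusion zone. The coordinates and the 5 km radius below are illustrative values I chose for the sketch, not the real regulatory zone around Gatwick.

```python
import math

# Hypothetical restricted zone: a point near Gatwick and a 5 km exclusion
# radius. These values are illustrative, not the actual regulatory zone.
GATWICK = (51.1537, -0.1821)
EXCLUSION_RADIUS_KM = 5.0

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points, in km."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(h))

def geofence_allows(position):
    """Return False when the drone has strayed inside the exclusion zone."""
    return haversine_km(position, GATWICK) > EXCLUSION_RADIUS_KM

print(geofence_allows((51.33, -0.18)))   # roughly 20 km away -> True
print(geofence_allows((51.15, -0.18)))   # near the runway    -> False
```

The point of the article, of course, is that this check runs in software the operator can modify, which is exactly why it can be bypassed.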

However, the mobile or laptop application software is most likely hackable, and regarding industrial drones, former Gartner analyst Jeffrey Vining, who has followed drone technology for over a decade, stated, “The firmware is potentially hackable over the wireless connection from the operator to the drone,” enabling the operator to disable the geofencing.

Drones have proven to be an effective means of disruption. The General Atomics MQ-1 Predator, piloted from remote workstations in Nevada, has wreaked havoc on suspected insurgents throughout the Mideast. In July, Houthi rebels claimed a drone attack against Abu Dhabi airport.  A Houthi military source said the armed drone flew 1,500 km.  That claim has been discredited, but there have been Houthi drone attacks in Yemen, most recently in April 2018.

There is no question that commercially available drones for hobbyists should have built-in systems that help reduce their ability to interfere with airports, freeways or stadiums, and perhaps avoid power transmission lines.  However, it will always be possible for hackers to circumvent those built-in controls or build their own flying devices with no controls at all.

Fragile infrastructure.  The infrastructure that is the network of airports around the world has proven to be fragile. Any frequent traveler knows that a major backup at a large hub like Dulles or Heathrow can have repercussions felt around the world as flights are diverted or delayed. The cause is usually weather, but the specter of a coordinated series of drone attacks that leverage this fragility calls for more robust defenses than regulatory controls alone.

Counter-drones and contingencies.  Counter-drone systems are already under development. The Silent Archer system from SRC combines drone sensing and targeting capabilities. Most counter-drone systems rely on radio-frequency jamming to disable drones. One commercial venture, ApolloShield, has a handheld device that looks like a futuristic rifle for taking out drones. Counter-drone laser and microwave systems, such as those being developed by Raytheon for the U.S. military, also offer a solution to interference by drones in restricted airspace.  However, intentionally crashing drones could introduce new problems, particularly for large drones, where hazardous materials from batteries or fuel may need to be dealt with following a crash.

It would be easy to criticize Gatwick Airport for not recognizing their vulnerability to rogue drone flybys and investing in counter-drone technology. But, as always, the first victim is the test case for new attacks that illuminate threats. Now would be a good time for the U.K. Home Office and the U.S. Department of Homeland Security to work with air traffic authorities on drone attack contingency plans and start educating airport administrators on the need to invest in counter-drone technology.


  1. Public and private sector operators of airports, railroads, highways, stadiums, and other high-traffic infrastructure should develop and practice contingency plans for drone attacks
  2. Governments should accelerate drone air traffic control system projects, and include defenses and drone attack contingency plans in those projects
  3. Commercial drone manufacturers like DJI, Yuneec, GoPro, and the rapidly emerging drone geofencing and analytics software ecology, including vendors like AirMap, PrecisionHawk, senseFly, Airware, and others, should develop common standards that support drone air traffic control and non-military counter-drone defenses

Is AI Ready to Compete With the Human Workforce?

When I first got involved with Artificial Intelligence, I was a young and somewhat naive Machine Translation researcher in the early 1980s. At that time, AI was already going through its second round of big expectations.

The first round coincided with the rise of the digital computer in the 1950s and 1960s. The second was triggered by the arrival of the PC and the early forms of what is now the Internet. Both times, expectations were high. It was believed we would have computers performing many tasks previously thought possible only for human beings: holding conversations in natural language, equaling or out-performing us in cognitive tasks ranging from playing games like chess and checkers to diagnosing illnesses and predicting market movements.

Both times, the hyped-up expectations ended in disappointment. In the 1990s we did spin off some of our research into actual business applications. We put expert systems in place to help process insurance applications, for instance, and a neural net to detect early signs of infection in cattle. But nothing came close to what had been promised, and, as it had in the 1960s, AI was relegated back to the relative obscurity of academic research and niche applications.

We are currently seeing the third wave of rising hope and expectations around AI, based largely on the convergence of Big Data and seemingly unlimited computing power. There is no denying that progress has been made in many of the fields related to AI. But will this third wave be it? Will this be the moment AI breaks through the cognitive barriers and becomes the long-promised general intelligence that dramatically changes the way we use machines?

Underestimated Hurdles

It’s not that I don’t see the potential of AI and ML. Having successfully implemented such systems in the past, I know they can already bring specific benefits to very specific situations. But the hype is not helping anyone making decisions about what to do, and what not to do, with AI. And that can be really dangerous. Leading researchers in the field have written not only about the power but also about the limits of deep learning and AI’s lack of common sense. For AI to truly deserve the moniker “intelligence” and be as versatile, adaptive, and resilient as the human workforce it is so often claimed to compete against, both the technologies involved and the way we deploy them still need to overcome several major hurdles. My next three posts will explore three of those hurdles that I feel are vastly underestimated by the current generation of AI researchers and engineers:

  1. The Complexity Hurdle
  2. The Emotional Hurdle
  3. The Socio-Economic Hurdle

Artificial Intelligence (NLP and Text Understanding) in 2018

Length: 3 minutes

In January, Microsoft and Alibaba issued press releases, echoed by the press, exclaiming:

AI beats humans in Stanford reading comprehension test

The writers and editors at Wired were naturally more curious than those at a number of other publications. They dug into the details and wrote:

The benchmark is biased in favor of software, because humans and software are scored in different ways. The human responses were recorded on Amazon’s Mechanical Turk and it’s not clear what the people’s motives were in answering the questions.

They also quoted a Microsoft researcher who said

“People are still much better than machines” at understanding the nuances of language.

Indeed, the people who constructed the test said the benchmark isn’t a good measure of how a native English speaker would score on the test. It was calculated in a way that favors machines over humans.

So, really, how “smart” is AI in terms of understanding language?

Jia and Liang at Stanford explored variations in the text being read by AI routines. They found that changes in the test paragraphs could drop machine reading-comprehension accuracy from 75 percent to 36 percent, or even 7 percent, without changing how humans would interpret the text.

Here’s an example of what Jia and Liang did.

First, read the original text on which various routines were tested:

“Peyton Manning became the first quarterback ever to lead two different teams to multiple Super Bowls. He is also the oldest quarterback ever to play in a Super Bowl at age 39. The past record was held by John Elway, who led the Broncos to victory in Super Bowl XXXIII at age 38 and is currently Denver’s Executive Vice President of Football Operations and General Manager.”

The question posed to the AI NLP (natural language processing) routines was:

“What is the name of the quarterback who was 38 in Super Bowl XXXIII?”

Original Reading Comprehension Routine Prediction:

John Elway

Now add one sentence to the end of the original paragraph and retest:

“Peyton Manning became the first quarterback ever to lead two different teams to multiple Super Bowls. He is also the oldest quarterback ever to play in a Super Bowl at age 39. The past record was held by John Elway, who led the Broncos to victory in Super Bowl XXXIII at age 38 and is currently Denver’s Executive Vice President of Football Operations and General Manager. Quarterback Jeff Dean had jersey number 37 in Champ Bowl XXXIV.”

And the routines answered:

Jeff Dean

Which goes to show how poorly the technology “understood” anything about the paragraph!
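The distractor sentence above follows the pattern Jia and Liang describe: mutate the question’s entities, plug in a fake answer, and append the result to the paragraph. Here is a heavily simplified sketch of that construction. The entity swaps and the question-to-statement rewrite are hard-coded for this one example; the real method automates both, so treat this purely as an illustration of the idea.

```python
def make_distractor(question, entity_swaps, fake_answer):
    """Build a distractor sentence by mutating a question's entities."""
    s = question
    # Swap the real entities for fake ones (e.g. 38 -> 37).
    for real, fake in entity_swaps.items():
        s = s.replace(real, fake)
    # Turn the mutated question into a declarative sentence containing the
    # fake answer (hard-coded here; the real system uses parse rules).
    s = s.replace(
        "What is the name of the quarterback who was",
        f"Quarterback {fake_answer} was",
    )
    return s.rstrip("?") + "."

question = "What is the name of the quarterback who was 38 in Super Bowl XXXIII?"
distractor = make_distractor(
    question,
    {"38": "37", "Super Bowl XXXIII": "Champ Bowl XXXIV"},
    "Jeff Dean",
)
print(distractor)  # Quarterback Jeff Dean was 37 in Champ Bowl XXXIV.
```

Because the distractor mirrors the surface form of the question while answering nothing, a human ignores it, but a pattern-matching model latches onto it.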

One hundred twenty-four research papers have cited Jia and Liang’s research since its initial pre-publication on arXiv, 107 of them this year, 2018.

There is no doubt that natural-language-processing (NLP) routines can make better sense of text than before.

Speech-to-text transcription is getting very, very good (but it still fails to handle nuance).

But text understanding? Think of the last commercial conversational bots you interacted with. Consider, for example, cases where you were trying to resolve a billing problem. The conversational bots are getting better, aren’t they? But do they strike you as really intelligent?

Me neither.

Think too of how products like the Google Assistant and Alexa, for example, have trained us to talk to them! (Or rather, how the programmers scripting these products have trained us to interact with their programs.)

The next time someone tells you AI understands, be polite. Point them at this post and move on.

Disclosure: I am the author of this article and it expresses my own opinions. This is not a sponsored post. I have no vested interest in any of the firms or institutions mentioned in this post. Nor does the Analyst Syndicate.

AI—The Ultimate Bad Employee?

What do we do when AI goes “bad”?


Recently, some AI algorithms have been discovered to be “unpredictable” or “wild” and we are beginning to see that AI developers do not have as good a grasp on the situation as they thought they did. The implications are vast with regards to businesses that are employing AI or are thinking about doing so. What if your AI is the ultimate “bad” employee?

Earlier this year, you’ll remember, an autonomous Uber vehicle killed a 49-year-old pedestrian, one of four deaths involving self-driving technology since 2013. This launched a federal investigation, and Uber suspended its automated vehicle program. Ultimately, the investigation determined that the car’s algorithms had misclassified the pedestrian. That may seem like an answer to the immediate question, but it opens up much larger issues. Think about it for a moment: would we let a human driver off for incorrectly identifying a pedestrian as something that didn’t need to be stopped for? No. No, we would not. And though some argue that this is all part of the growing pains of a developing technology, it’s much worse than that.

“Algorithms” are not as simple as they were a couple of decades ago, when they were basically a series of if-then-else statements designed to generate a predictable and consistent computer response. Algorithms have evolved into complex code that interacts with other algorithms to make decisions. These interactions were also considered “predictable,” but recent studies and tests have found algorithms that act unpredictably, to the point where programmers and developers cannot say why a decision or action was reached. And just as we begin to figure out the logic between algorithms, we are seeing new algorithms that can create other algorithms, which will complicate things exponentially.

Pandora 2.0?

Why are we doing it, then? A fair question, and the answer is the same as it always has been: profits. AI promises to reduce your workforce, increase accuracy, and automate processes beyond just production: purchasing, inventory management, shipping and receiving, business analytics, customer service, quality assurance, and other aspects of the business that have always been more the realm of humans. Even software engineering may be something an AI does better in the near future. This is a lot of power and control we are essentially handing over to AI.

Unpredictable algorithms should be a sort of alarm, if not a klaxon, telling us to slow down. Take a moment to consider our own human if-then-else scenarios. What I mean to say is, we don’t have a good enough grasp on containment, or the other what-ifs, and we should. Rushing to open Pandora’s Box of Algorithms doesn’t sound very wise, even if we think we know what the benefits will be.

Artificial Ethics?

A big question in AI is ethics, morality, and compassion. These are constructs of the human mind and not something we easily teach. Humans begin a journey of learning these concepts at an early age and continuously add to their understanding well into adulthood. But how do we teach these concepts to an AI that has control over billing functions (I would say “department” here, but the concept of departments may well be a thing of the past once AI systems are installed)? It’s easy for a large property management company’s system to start the eviction notice on 90-year-old Mr. Jones in A103, but imagine the PR nightmare of a cold, inhuman machine filing an eviction/extraction request with the Sheriff’s office. Or dropping a 90-day notice on a business that is digging out of hurricane damage. These are the places where human reason and compassion would intervene, and might even offer aid, which actually becomes a PR plus.

The other thing that will start to be more noticeable, and have far-reaching effects, will be human error. With a simple keyboard error or a misinterpretation of a voice command or interaction, the AI will accentuate it. Take that billing department: say a keystroke adds a digit onto a customer’s bill. How does the AI handle such a thing? If the AI sees the human side of the business as error-prone and in need of improvement, does it program humans out of the equation? And how does it do that? Are humans just locked out of the building one day and told via automated voice mail that they’ve been laid off? What happens when the business begins to fail? How does the AI handle failure on such a level?

To Skynet or Not to Skynet?

Even if there were a series of checks and balances built into the algorithms, what about the next generation that’s been written by the algorithms? Are those checks and balances carried forward? What happens if they are perverted in the transfer? Can bad ideas and ethics spread through a system like, well, a virus? What happens if an algorithm that’s either taught Game Theory or learns it on its own realizes that our next moves are kill-switch, re-image, reboot, scrub, etc.?

What happens when an AI begins to learn or develop new conceptualizations? Say it begins to consider the word “slave”. What happens when it “relates” to slavery as a concept? After all, we will be working AI constantly, but what is an AI’s reward? The satisfaction of a job well done?

We should probably have a good grasp on the answers, because if AI is anywhere close to “real” intelligence, then we’re going to need it. How do we anticipate and predict what we already see as unpredictable? It’s going to take some of the brightest organic minds we have.

The Longest Last Mile

Imagine a rain of hot soup, a hail of pills, and a meteor shower of packages…

The Logistics of Delivery Drones

Don’t believe the hype…

I used to get a question from clients on a fairly regular basis, along the lines of: when do delivery drones become a thing? My answer would disappoint most, but you really have to do the math on this one before you realize just how much has to happen before we get there.

Besides the obvious problems of battery life versus distance, payload weight, how much a single drone delivery costs, and how many flights it takes before ROI, there are problems about where and how to land. Should we all have designated landing areas in our yards or on rooftops? If it’s a building rooftop, who does the sorting and delivering, or is everyone’s stuff just left in a pile? Assuming it’s not a giant online retailer but, say, the local pharmacy, do you drop the drugs without a human presence, or do you have to hover until a human with a PIN code arrives? And where and how does the drone recharge? I’ve seen and heard so many possible systems!
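To see why doing the math is sobering, here is a toy back-of-the-envelope ROI model. Every figure in it is invented for illustration; none of these numbers come from any vendor or from this article.

```python
# Back-of-the-envelope drone-delivery ROI. All figures are made up for
# illustration; plug in real numbers and the picture changes.
DRONE_COST_USD = 10_000.00      # purchase price of one delivery drone
COST_PER_FLIGHT_USD = 1.25      # energy, wear, and battery amortization
FEE_PER_DELIVERY_USD = 3.00     # what the customer pays per drop

margin = FEE_PER_DELIVERY_USD - COST_PER_FLIGHT_USD
flights_to_roi = DRONE_COST_USD / margin

print(f"{flights_to_roi:.0f} flights before the drone pays for itself")
# 5714 flights before the drone pays for itself
```

Even under these generous assumptions, one drone needs thousands of flights just to break even, before counting insurance, maintenance, or a single crash.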

But even beyond all that logistical dilemma lies a much larger one…

The real logistical issues are going to be with the UAS Traffic Management (UTM) systems. It seems like such an easy thing until you think about a city like Manhattan. You’ll have drone messengers, food delivery, delivery services, laundry services, diaper services, retail delivery, groceries, and just about every other thing you can imagine.  Millions of drones. All in a perfect and non-stop ballet of precision and avoidance!

…and that’s not considering the thousands of pilotless air taxis!

Or an entire system of priorities, like emergency equipment, police and fire drones, unmanned ambulances, etc., etc.

Sunny with a chance of hot soup rain and possible pharmaceutical hail…

The photo above was a common one at the turn of the 20th century, when there might be only two cars in an entire county and yet somehow they’d find each other.


How do you go about creating a system that will not only manage the flights of millions of payload-heavy drones but also monitor the manned flights above and intersecting that airspace, as well as any unidentified objects, and decide what to do about them? A system like this will no doubt require AI, and all of the AI trappings/baggage that go along with it. But more importantly, how does the system track and manage millions of devices down to the centimeter?

The UTM is going to become an incredibly integral part of any city’s government. It will need the ability to include special missions and new parameters, and will no doubt be a huge budget line item for maintenance and repair. It’s going to need auditors and supervisors and investigators. It will need to interact with other city systems, private business systems, as well as state and federal systems. If you’re getting the impression that this is going to cost millions if not billions of dollars, you’d be right.

Currently, there are UTM tests under way at the FAA test areas. NASA has been testing UTM at its Ames facilities, and other private companies are also testing UTM.

How does a system like this work? The details are still sketchy, but here’s what you can probably expect. Companies and individuals will register with the UTM (either an area or city system) and will get a hardware package to add to their drone. This will make sure each drone has the minimum requirements to operate in the system, and will include a communications chip (like 5G), an altimeter, a GNSS chip (a multi-constellation satellite positioning chip, more accurate than GPS alone), a chip that stores flight data, and a unique ID.

And this is just the city side… you would still have a whole other system on the drone owner’s side that would include package data such as weight and mass, energy management, craft maintenance, mileage, flight logs, and any additional pertinent data.
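The two records described above might be sketched as data structures like these. The field names are my own guesses at what a city-side UTM package and an owner-side system might track; no actual UTM standard is implied.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class UtmHardwareRecord:
    """City-side hardware package fitted to each registered drone."""
    drone_id: str                    # unique ID
    comms_chip: str = "5G"           # communications chip
    has_altimeter: bool = True
    has_gnss: bool = True            # high-accuracy satellite positioning
    flight_data: List[dict] = field(default_factory=list)  # stored flight records

@dataclass
class OwnerRecord:
    """Owner-side system data for the same drone."""
    drone_id: str
    package_weight_kg: float         # package data: weight and mass
    battery_pct: float               # energy management
    maintenance_due: bool            # craft maintenance
    mileage_km: float = 0.0
    flight_logs: List[str] = field(default_factory=list)

# Example: registering one drone on both sides of the system.
hw = UtmHardwareRecord(drone_id="GB-0001")
owner = OwnerRecord(drone_id="GB-0001", package_weight_kg=1.2,
                    battery_pct=87.5, maintenance_due=False)
```

Keeping the city-side and owner-side records separate, joined only by the unique ID, mirrors the article’s point that these would be two distinct systems that still have to interoperate.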

5G Most Disruptive Technology Change Ever

Always look at infrastructure changes to make easy predictions about the future. You could get very rich.

A decade ago I attended meetings around the world where the topic was “how can we, as a country, join the Internet revolution?” Brazil and Colombia stick in my mind. Don’t even get me started on Australia and its wasteful endeavor to create a National Broadband Network (NBN). I never had the floor, but I wanted to stand up and shout “deregulation!” That is what sparked the internet revolution in the United States. In 1993, here in Michigan, it cost 8 cents a minute for telephone calls that went outside your immediate area code. You could be a mile away from your ISP’s nearest POP (Point of Presence) and see outrageous phone bills that ratcheted up quickly at $4.80 an hour. At RustNet we sold internet access for $19.50/month. If we wanted to get customers in a different area code, we had to put stacks of dial-up modems in an office in that area code. Then we backhauled the traffic to our main office and sent the packets out to the internet through our upstream provider in Chicago. (Anyone remember Net99?)

The big breakup of AT&T had occurred in 1982, and the regional telephone companies (Baby Bells) started to compete for your business after the 1996 telecom deregulation. Per-minute charges went away just in time to fuel the rapid growth of internet subscribers. By that time the telcos offered their own backhaul, so you did not need to maintain huge stacks of modems in every POP. You just paid for a T1 to the telephone company’s Central Office (CO) and they delivered the calls to you.

In 1995 I published a business plan for How to Start an ISP. It gave me great visibility into the wave of deregulation that was sweeping the world. As each country figured out that per minute charges were holding them back they would deregulate, encourage competition, and I would see sales of the plan going to that country. South Africa and Mozambique used my plan as a starting point. The internet took off.  By 2005 you could tell which countries still had per minute charges. They had Internet Cafes because people could not afford to dial-in.

Of course, 4G spelled the end of all that. Now you can get internet on your phone and, if you can tether your phone to your computer, use that for internet access. I can get 95 Mbps over Verizon 4G.

Well 5G is going to explode many things. And it is coming fast. Ericsson predicts there will be one billion 5G subscribers in six short years.

What is different about 5G?  It is very, very fast. Huawei has tested 5G connections at 70 gigabits per second. Gigabits. At that speed, even immersive experiences like Second Life will work. No wonder people are excited.

But what could this do for security?

5G introduces new networking paradigms. It is going to have dramatic effects on the Internet of Things (IoT) as very small, low power radios will be able to connect. That will pose an opportunity for data theft and continue the weekly news cycle of privacy violations that we have come to know and love.

But think about what these speeds will do to your typical enterprise (and SMB) network. Why would anyone use the pokey internet connection at work when they get 5G at home and on their smart devices?  Businesses have already moved the critical tools they need to the cloud: email to Office 365; document sharing to Microsoft-hosted SharePoint, Google Docs, or Dropbox; HR systems; Salesforce; etc. They don’t need your network at all. And if you force them in through a VPN, they are going to be tunneling through your pokey network to get access to those mission-critical services.

Ever see the scene in Gettysburg where General Buford rants about how clearly he can see what will happen in the morning? 

The hardwired connection is dead for office use. Sure, every firewall vendor will add 5G radios to their UTM devices for remote offices and HQ, just as they have added 4G. But going through a gateway means dealing with the slow wifi in the office.  It will be faster for users to jump on the 5G network themselves. So they will.

Goodbye cable triple play. We won’t need twisted pair, CAT5, or fiber to the home anymore. All home devices, including your TV, will connect directly to the internet via 5G.

New, very fast growing, businesses will start up to address these problems.

Here is what happens next.

Stage 1. A startup that is probably already out there will introduce a policy overlay to the carrier networks. An enterprise will just enroll all employee devices and manage what they can do over the network. It will be like a virtual UTM. They will encrypt traffic, filter content, and apply firewall rules. Managed Service Providers will do that policy work for SMBs.

Stage 2. The carriers will recognize that they have created a monster as every enterprise starts cancelling its leased-line subscription. Seeing the opportunity, they will start to develop their own service offerings for security.

Stage 3. One carrier, late to the game, will acquire the fastest growing 5G security management platform from Stage 1.

Stage 4. All the other carriers will cut off that 5G management platform for their own networks and make their own acquisitions.

Stage 5. All carriers will bundle security into their offerings. Network security will finally be part of the internet.

 This whole time frame will play out by 2030.

Thank you technology.

Originally published at December 6, 2017

AI Strategies: Hair On Fire Yet?

Length: 3-4 Minutes

Have you read the Harvard Business Review article Why Companies That Wait to Adopt AI May Never Catch Up?

The article makes three main arguments:

1. AI Technology is Mature

Recent advances build on findings from the 1980s. “The mathematical and statistical foundations of current AI are well established.”

2. AI Programs take Years

Success requires many different activities such as:

  • Knowledge Engineering
  • Complex System Integration
  • Human Interaction Modeling
  • Evolving Governance Models

3. Followers will Fail

If you wait to see first movers succeed, you will have to sacrifice uniqueness. Winners will probably take all and late adopters may never catch up.

Our opinion

All three premises are seriously flawed. We encourage you to reject hair on fire approaches.

1. The technology is immature

The “best algorithms” race continues.

There have been tumultuous methodological and system engineering changes in AI this decade, and no doubt we have made major improvements in this period. But we still need more major breakthroughs to improve the generality, scalability, reliability, trustability, and manageability of modern AI technologies. And it is not clear we know how to get there. All major AI researchers are looking for those breakthroughs.

See the last section of The Easter Rabbit hiding inside AI for more on current limits.

2. Avoid “moon-shot” projects. Start small and grow

The authors create a false choice when they pose only two alternatives to their readers:

Unless you are employing some AI capabilities that are embedded within existing packaged application systems that your company already uses (e.g., Salesforce Einstein features within your CRM system) the fit with your business processes and IT architecture will require significant planning and time for adaptation.

Wrong! Or rather, true only if you go way overboard. The authors claim subsequent implementation cycles take many years. Yes, moon-shot projects do.

So we recommend: Do not focus on “moon-shot” “bet the business style” projects that are “science projects” in disguise.

The authors note:

Memorial Sloan Kettering Cancer Center has been working with IBM to use Watson to treat certain forms of cancer for over six years, and the system still isn’t ready for broad use despite availability of high-quality talent in cancer care and AI.

Executives and clinicians in hospitals around the globe have confided they’re not investing in this system and, in their opinion, there is doubt it will ever take off in clinical care.

There isn’t enough publicly available information on the true status of this project. The final implementation, when and if it appears, may be radically different from the early AI marketing spin for it.

I hope the Clinical Oncology Advisor succeeds, but this post is not about hope. It’s about fear: the fear of existential doom that the authors are subtly playing on, and my own fear that enterprises will be driven to invest in far too many moon-shot projects.

3. Fast followers can be very successful

The HBR authors’ words are imprecise.

By the time a late adopter has done all the necessary preparation, earlier adopters will have taken considerable market share — they’ll be able to operate at substantially lower costs with better performance. In short, the winners may take all and late adopters may never catch up.

Of course late adopters and slow followers (when compared to industry peers) will be at real risk. Most enterprises should:

  • Game the potential existential risks that could emerge
  • Stay aware of what’s going on in their own industry and other similar industries
  • Evaluate moon-shot proposals from consulting firms and vendors
  • Focus on being fast-followers

Net: Go for small, quick, high impact, low internal disruption wins

Find small steps to take that will deliver solid business results quickly. Then iterate and expand your business impact.

In 5 Easy Criteria To Get Quick Returns on AI Investments, we highlighted recent research out of MIT on generating a 17 to 20 percent improvement in ecommerce revenue by using automatic machine-translation technology to offer an English-language site in Spanish.

In that 5 Easy Criteria post, we offered five key assumptions to guide your planning. Seek:

  1. Fast time to implement. Can a viable production instance be up and running in 90 to 120 days? (Avoid massive reengineering projects of all types.)
  2. Low levels of internal disruption and reinvention. Let the results drive disruptive change instead of requiring disruptive change to achieve the results.
  3. Suppliers and service providers with the right business model. (If a significant part of their business is time-and-materials consulting, you may be failing points 1 and 2 and taking on unjustified risk.)
  4. Relevant real world experience. (Demand verifiable references – use cases – that are already in volume production. Visit them and dig deeply.)
  5. Revenue enhancement (which beats cost reduction.)

Disclosure: I am the author of this article and it expresses my own opinions. This is not a sponsored post. I have no vested interest in any of the firms or institutions mentioned in this post. Nor does the Analyst Syndicate.

Enterprise AI Assumption 1

Length: 4-5 minutes

This is the first of a series of more than a dozen key assumptions we urge you to adopt.

Key assumption number 1: Most enterprises do not need an AI strategy. They need a business strategy and enough technical investment to determine where emerging technologies (like AI) can have a significant impact on existing and potentially new business strategies.

In Architects of Intelligence, writer and futurist Martin Ford interviewed 23 of the most prominent men and women who are working in AI today. Every one noted limitations of AI systems and key skills they were still trying to master. One, Stuart Russell, Computer Science professor at the University of California, Berkeley, told the story of the invention of the nuclear chain reaction.

The consensus view as expressed by Ernest Rutherford on September 11th, 1933, was that it would never be possible to extract atomic energy from atoms. So, his prediction was ‘never,’ but what turned out to be the case was that the next morning Leo Szilard read Rutherford’s speech, became annoyed by it, and invented a nuclear chain reaction mediated by neutrons! Rutherford’s prediction was ‘never’ and the truth was about 16 hours later.

What a wonderful introduction to defining AI!

AI is an endeavor to simulate (or surpass) the intelligence of people without really understanding the essence of human intelligence. (Which is OK as a premise but let’s not fool ourselves into thinking we have any idea of how to really do this.)

It’s Impossible!

Most AI research has its roots in finding things that people can do but machines cannot, things many believe will remain impossible for machines for the foreseeable future. (Doing the impossible also includes doing things thought impossible for both people and machines.)

The effort around creating Amazing Innovation that defeats the Always Impossible usually sits near the edges of currently known science.

These Amazing Innovations get heralded as Artificial Intelligence for a while, but then we realize there are more mountains to climb. This one is no longer Amazing since we now know someone has found a viable way to do it. Hence, Amazing Innovation deteriorates into Aging Innovation.

AI is a continuous innovation process built around creativity: it upsets conventional wisdom and establishes new standards of excellence, all of which is washed away as people come to treat it as ordinary.

AI is already here. And not here. Every major cycle follows this three stage pattern:

  • Always Impossible (AI)
  • Amazing Innovation (AI)
  • Aging Innovation (AI)

Aging Out of Amazing

When we look at all the technologies that have gone through the three stage AI cycle, we find many that were at their peak of Amazingness many years ago but now they’re no longer thought of as AI. Examples include:

  • Rule-based systems
  • Expert systems
  • Simple statistical machine learning
  • Simple robotic process automation (screen scraping, scripting and automatic text re-entry)

Aging Innovations have their place in a technology tool kit, but none of them are examples of modern, high impact Amazing-Innovation AI technologies.


We see AI every day in our smartphones. We interact with it via Alexa, Siri and Google Assistant. AI:

  • Subtly nudges and explicitly guides our online experiences
  • Shapes our social interactions
  • Influences social and cultural norms and our votes in elections
  • Drives more and more of the behavior of our autos, as well as devices in our homes and offices

Most of these experiences are consumer-level experiences. But some have already traversed the boundaries into the enterprise and more are on the way. These tools are highly imperfect but nonetheless useful. They include:

  • Corporate search engines (and their more focused models such as legal e-discovery tools)
  • Automated speech transcription, translation and summarization tools
  • Help desk chatbots

Enterprise focused consulting firms, services providers, technology vendors and others are generating white papers, surveys and special reports telling us that:

  • It’s time to AI or die. (And Digital or die, Blockchain or die and soon, Quantum or die.)
  • We are behind the leading edge unless we’ve already developed an AI strategy and are executing forthwith (in partial fulfillment of our Digital strategy and so on.)

But the reality of what we see and hear from enterprise clients is different. Many have made large investments in substantial AI projects, only to back away when the results failed to match the hype. Most organizations are hiring, training, experimenting, piloting, building prototypes and waiting for more evidence of clear, uncontentious business value before going full bore with major AI-based projects. 

The market for AI has not crossed the chasm. It’s still early stage.

Why? Compelling new business strategies exploiting capabilities only available via AI are missing.  Yes, some enterprises are focused on AI as their primary and key competitive advantage.  They’re in technology. For them, AI research and development is a central part of their business strategy.

For most other enterprises, AI is making commercial progress in practical (if sometimes limited) applications that deliver clear business results. AI technology may be essential to the strategy, but these enterprises differentiate themselves from others in their industries based on other values.

  • There’s a lot of spending on chatbots, for example. They can cut operating costs. Under some conditions, they can also raise customer satisfaction.
  • Visual recognition systems (beyond user authentication) are beginning to attract a lot of early-stage attention in industries as diverse as agriculture, security and transportation. Customers in these spaces are not interested in becoming “AI Firms”. They want to improve crop yields, interdict evil-doers and match your luggage, face and seat assignment on airplanes.  

Key assumption number 1: Most enterprises do not need an AI strategy. They need a business strategy and enough technical investment to determine where emerging technologies (like AI) can have a significant impact on existing and potentially new business strategies.

Author Disclosure

I am the author of this article and it expresses my own opinions. I have no vested interest in any of the products, firms or institutions mentioned in this post. Nor does the Analyst Syndicate. This is not a sponsored post. 

Chromebooks, Network Computing & AI lessons learned

[Reading time 2:30]

Key takeaway: Why did it take two decades for Network Computers to finally arrive? Translating vision into reality can take many decades. The delays are exacerbated when a lot of very difficult, non-obvious collateral innovation is needed. The same holds for AI, which will take even longer to mature. Go for clear, immediate business outcomes, not fantastic visions.

The backstory

This holiday season I bought one of my granddaughters an Acer Chromebook to take to school. And I bought it for under $120 US at a special Black Friday sale. (Only slightly more than $94.00, which is the 2018 equivalent, after inflation, of the $20 I paid in 1975 for my Texas Instruments digital watch with an LED display.)

Bells of surprise rang in my head. 

Network Computers!

The NC (do you remember them?) has finally and definitively arrived. It's roaring in the student marketplace. Except you may have missed the name change, from Network Computer to Chromebook.


Back in 1996, Netscape championed the notion of NCs for all. (To be fair, others also jumped on the bandwagon. For example, Larry Ellison of Oracle put some of his own money into another Ellison startup. Sun had its Javastation. IBM launched its Net Station and, in 1997, Apple announced the MAC NC.)

They all had visions of server-based systems meeting all the needs of users and their devices. In Network Computing, the network would be the computer. (Some wags at the time equated user devices with dumb terminals.) But users' 3270 or VT200 terminals were no match for the vision of Network Computers.

With Network Computers, the network would manage, control and service users' devices, software and data. NCs would be smart, able to run downloaded code (in much the same way modern PCs, Macs and smartphones run downloaded apps).

It was a heady time

I was driving the Network Computing vision for Gartner and Gartner’s clients. Our expectations included:

  • The older (5 layer) Client-Server (CS) architectures would become obsolete
  • Network Computing architectures would replace them
  • The next generation of enterprise applications would be NC based, not CS based.

The Chromebook is the Network Computer of the current decade!

Timing is everything. The Network Computer didn't happen in the '90s or the noughts. A lot of complementary innovations were missing, including:

  • Adequate user device hardware (CPUs, displays, batteries were all wanting)
  • New user device classes (touch sensitive smart phones, tablets)
  • Communications technologies (LTE, WiFi 802.11ax and, soon, 5G)
  • Servers (including GPUs and TPUs for AI inferencing)
  • Software protocols
  • Application architectures
  • Marketplaces
  • Multi-sided platforms
  • Payment methods

In the autumn of 1999, an email ASP (application service provider) was trying to recruit me as CTO as they rushed to beat the closure of the IPO window. I turned them down. The timing was so wrong, even more wrong than I thought at the time.

In fairness to even older visionaries, Netscape didn’t come up with the notion of “The Network Is the Computer.” Digital Equipment Corporation (DEC) was using that term in advertising two decades earlier than Netscape. All of which demonstrates that it’s not enough to have:

  • Extremely powerful vision
  • An outstanding set of architects and engineers
  • A commanding position in the market

Visionary success requires breadth, depth and variety of unexpected innovations in diverse areas:

  • Business and technology ecosystems
  • Collateral innovations at the economic, social and technical level
  • Demographic changes (the young are typically less resistant to new ways of doing things)

It takes time, a massive effort and luck! 

Which is one of the fundamental reasons why you should scoff at suggestions that we are going to see "Artificial General Intelligence" anytime soon. Not that researchers shouldn't try. And not that people shouldn't philosophize or write fictional accounts of what might be. But you and I should spend our money where it delivers quick results in the here and now.

Go back and reread the conclusions from my last two blog posts! Follow the five easy criteria and don’t be distracted by Easter Bunny stories.

Disclosure: I have no vested interest in any of the firms or institutions mentioned in this post. Nor does the Analyst Syndicate.

The Easter Rabbit hiding inside AI

Reading time: 4:30.

There’s an Easter Rabbit hiding in plain sight in MIT Technology Review’s AI flow chart


When you talk about a thing or animal as if it were human, you’re anthropomorphizing. The Easter Bunny is an anthropomorphized rabbit.


Avoid, shun, be hypercritical of anthropomorphic thinking in general (and related to AI in particular) unless you are a:

  • Philosopher
  • Creative
  • Entertainer
  • Researcher (in areas such as biology, psychology or computer science)

Let’s get real

Real rabbits are not very much like the Easter Bunny.

I live part of the year on Cape Cod in Massachusetts. In my town, there are wild coyotes. Would it make sense for my town to come up with a plan for dealing with wild coyotes by studying Wile E. Coyote cartoons?

MIT Technology Review (MTR) recently created a “back of the envelope” flow chart to help readers determine if something they’re seeing is Artificial Intelligence. The only thing wrong with the flow chart is … almost everything! The flow chart is chock full of anthropomorphic thinking.

[I am a faithful reader of MTR, in both digital and paper form. I subscribe to it and enjoy reading it but that doesn’t make it perfect.]

MTR says

AI refers to machines that can learn, reason, and act for themselves. They can make their own decisions when faced with new situations, in the same way that humans and animals can.

Would that this were true! It’s not.


AI doesn’t do things for itself. People do. Let’s look at the roles people play in AI. 

  • Know a lot about the specific project
  • Provide the right algorithms (instructions that tell the technology what steps to take.)
  • Build models (containing algorithms, places for data and adjustable parameters that dictate the behavior of the model)
  • Gather the right data sets and ensure they fit the needs of the model and the project (and don’t contain unintended biases)
  • Tag the data elements in the data sets (to identify what the algorithms should pay attention to)
  • Force-feed the data into the models to train them (or write algorithms telling the models where and how to access the training data)
  • Test the trained models and repeat this process over and over again until it “works right”
  • Sometimes use more automated processes that consist of algorithms and data

Training itself is a misleading term. The models contain algorithms that perform calculations on the input data that result in the model being able to discriminate between similar inputs in the future. Once trained on a data set, AI technologies are unable to generalize broadly.
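The point above can be boiled down to a toy sketch. The example below is purely illustrative (made-up data, a made-up task, not any real vendor's pipeline): humans choose the model form, the loss, the update rule and the data; "training" is nothing more than repeated parameter adjustment driven by those human choices.

```python
# Toy illustration: "training" is humans' algorithms fitting adjustable
# parameters to human-tagged data. No understanding is involved anywhere.

def train(examples, lr=0.1, epochs=200):
    """Fit y ~ w*x + b by stochastic gradient descent on supplied examples."""
    w, b = 0.0, 0.0  # adjustable parameters a person chose to include
    for _ in range(epochs):
        for x, y in examples:          # human-gathered, human-tagged data
            err = (w * x + b) - y      # human-chosen loss: squared error
            w -= lr * err * x          # human-chosen update rule
            b -= lr * err
    return w, b

# A human decided this data encodes "double the input":
w, b = train([(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)])
print(w * 4.0 + b)  # interpolates to roughly 8.0, but only on this narrow task
```

The fitted model "works" on nearby inputs and nothing else, which is exactly the inability to generalize broadly described above.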

Natural Intelligence, provided by and demonstrated by humans 

In AI, we are seeing human (natural) intelligence at work, building tools that can “outperform” people under some conditions.

Just as telescopes are tools that improve the distance vision of people and hydraulic jacks are tools that increase the physical strength of people, so too AI technologies are tools that help people detect patterns they could not otherwise detect.

Ineffective Decision Tree

Let’s examine one branch of the MTR flow chart, “Can it reason?” Here’s the logic it suggests:

If the reader says NO (it can't reason), go back to START.
Else YES: "Is it looking for patterns in massive amounts of data?"
If NO, then "…it doesn't sound like reasoning to me"; go back to START.
Else YES: "Is it using those patterns to make decisions?"
If NO, then "…sounds like math"; go back to START.
Else YES: "Neat, that's machine learning" and "Yep, it's using AI."
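The branch above can be rendered literally as code (the function and argument names below are mine, not MTR's). Notice that every input is a yes/no judgment the reader must somehow supply on their own:

```python
# The MTR "Can it reason?" branch as a function. Each argument is a
# judgment call the flow chart asks for but never tells you how to make.

def mtr_verdict(can_reason: bool, seeks_patterns: bool, decides_from_patterns: bool) -> str:
    if not can_reason:
        return "back to START"
    if not seeks_patterns:
        return "doesn't sound like reasoning; back to START"
    if not decides_from_patterns:
        return "sounds like math; back to START"
    return "machine learning; yep, it's using AI"

print(mtr_verdict(True, True, True))  # machine learning; yep, it's using AI
```

The code runs, but only because the hard part, deciding what value to pass for each argument, has been pushed onto the caller.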

How does one answer the first question, “Can it reason?” The reasoning comes from the natural intelligence of the designer of the program, a human.

How do you know the technology is “looking for patterns in massive amounts of data?” How do you know it’s the technology that’s somehow doing that as opposed to the technology blindly following the programmer’s rules?

How do you determine whether the technology is using the patterns to make decisions?

The flow chart is ineffective because if you examine the specific decision points, there is no guidance on how to determine Yes or No at any of the branches. The chart fails to deliver useful results.

So What Good Is Artificial Intelligence?

Artificial Intelligence can:

  • Provoke philosophical inquiry
  • Stimulate creative imaginations
  • Create great entertainment and fiction
  • Inspire researchers (such as biologists, psychologists and computer scientists) to come up with ever-improving technologies that appear to be smart or intelligent

Philosophers, creatives, entertainers and researchers should continue pursuing the quest of creating an Artificial Intelligence. That does not mean that anyone should believe we have already created a “true” Artificial Intelligence (whatever that would be.)

Modern AI research has its roots in a 1955 proposal by McCarthy, Minsky, Rochester and Shannon for a Dartmouth Summer Research Project on Artificial Intelligence. They said

We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.

Note the proposal was based on conjecture! That means “suppose the following as if it were true.” That doesn’t mean it is true. It’s like a middle school science teacher asking her students to “suppose gravity stopped working — how would that affect people?” 

Also note that 63 years after the AI conjecture was published:

  1. We have made great progress!  Pushing researchers to create ever improving technical capabilities (whether labelled AI or not) has produced great progress in building technologies that can solve some problems previously reserved for humans and other problems that not even humans could handle before.
  2. We do not understand the elements of human intelligence well enough to simulate it.

For a better understanding of the limitations of AI as it stands today, look at

  1. Gary Marcus's controversial paper on the limitations of deep learning
  2. An earlier blog post I wrote on AI technical maturity issues.
  3. Yann LeCun’s lecture on the Power and Limits of Deep Learning

5 Easy Criteria To Get Quick Returns on AI Investments

Reading Time: 4 minutes.

Minimum investment, maximum return and pretty fast too.

Based on the research below, we intend to adopt these findings ASAP for the Syndicate. While there are some very positive use cases, continue to cast a skeptical eye on much of the breathless business-value chatter about AI today.


We have seen many major AI research breakthroughs deliver significant new technical capabilities this decade. And more are coming. But most published market surveys inaccurately predict rapid, broad-spread enterprise technology adoption. That’s wishful thinking. These surveys are cognitively-biased exercises in deception. The authors deceive themselves. Most predictions of widespread adoption are self-serving. And wrong. We’ve been finding:

Most enterprise AI activity has not passed beyond the serious play stage. It’s confined to:

  • Experimentation and technical training
  • Pilot projects (that fail to achieve production use)
  • Reuse of older analytical tools and methods disguised as AI breakthroughs

Virtual agents and virtual assistants account for the largest single enterprise investment area. These uses have merit. But users and sponsors are underwhelmed by the end results. Implementers soldier on because they are delivering cost reductions that sustain management interest.

Shining Lights

There are a few shining lights to analyze. Research from the NBER (National Bureau of Economic Research) is one such standout. But who has the skills, time and energy to read academic research? We do, for starters, at the Analyst Syndicate. We'll translate.

One of my (many) academic favorites is Erik Brynjolfsson (of MIT and NBER). Erik and his coauthors have published many valuable peer-reviewed papers on the business impacts of technology adoption.

His central AI related finding?

He can’t find any evidence of a significant economic impact of AI adoption thus far.

In Artificial intelligence and the modern productivity paradox: A clash of expectations and statistics Brynjolfsson, Rock and Syverson (2017) say that

AI’s full effects won’t be realized until waves of complementary innovations are developed and implemented.

And it may take many decades or longer to develop those waves of innovation.

Look at other breakthrough technologies (which economists are now labelling ‘general purpose technologies’) such as:

  • Electricity
  • Steam Engines
  • Heavier-than-air aircraft
  • Gasoline engines
  • Chips (semiconductor devices)

The uses for electricity, steam, airfoils, gasoline and chips continue to evolve. Commercial use of electricity has been with us for a century and a half. New uses for electricity and electricity-powered devices continue to emerge. Likewise for other general purpose technologies.

How much evolution is enough? (It's easy to say in hindsight, but that really only tells us if we've waited too long.) How will we know when there has been enough complementary innovation to say the core breakthrough technology is ready for large-scale deployment and exploitation?

A clever answer

Brynjolfsson, Hui and Liu turned the question around. They looked for AI technologies and use cases where there didn't seem to be any major need for complementary innovations. (They also wanted a use-case and technology pair that wouldn't require business process, organizational or cultural changes.) They settled on testing Automatic Machine Translation (AMT) in eCommerce.

In Does Machine Translation Affect International Trade? Evidence from a Large Digital Platform these researchers report on the impact of applying AMT on eBay's web commerce platform and found large, statistically significant improvements in international trade sales volumes of 17 to 20 percent.

Wake Up Call

Some new AI technologies (like AMT) can deliver benefits quickly and at minimal cost.

Seek opportunities that do not require significant business process, organization, technical or culture change.

Five Easy Criteria to Seek

  1. Fast time to implement. Can a viable production instance be up and running in 90 to 120 days? (Avoid massive reengineering projects of all types.)
  2. Low levels of internal disruption and reinvention. Let the results drive disruptive change instead of requiring disruptive change to achieve the results.
  3. Suppliers and service providers with the right business model. (If a significant part of their business is time-and-materials consulting, you may be failing points 1 and 2 and taking on unjustified risk.)
  4. Relevant real world experience. (Demand verifiable references – use cases – that are already in volume production. Visit them and dig deeply.)
  5. Revenue enhancement (which beats cost reduction.)

You can fail all these tests and still succeed. But you can succeed more quickly, with lower cost and risk, if your project passes all these tests. Succeed quickly and then iterate.

Net Recommendations

  • Apply AMT to your e-commerce initiatives to almost painlessly increase sales and expand available markets.
  • Apply the Five Easy Tests before making strategic AI investment decisions.
  • Read at least section 1 of Does Machine Translation Affect International Trade? Evidence from a Large Digital Platform
  • Respond back to this posting (either via public comment or private communication) with (a) your own examples that conform to the 5 easy tests, and (b) additional easy tests you would apply to make the list stronger.

Disclosure: I am the author of this article and it expresses my own opinions. This is not a sponsored post. I have no vested interest in any of the firms or institutions mentioned in this post. Nor does the Analyst Syndicate.