Is AI Ready to Compete With the Human Workforce?

When I first got involved with Artificial Intelligence, I was a young and somewhat naive Machine Translation researcher in the early 1980s. At that time, AI was already going through its second round of big expectations.

The first time coincided with the rise of the digital computer in the 1950s and 1960s. This second time was triggered by the arrival of the PC and the early forms of what is now the Internet. Both times, expectations were high. It was believed we would soon have computers performing many tasks previously thought possible only for human beings. Computers would hold conversations in natural language and equal or outperform us in cognitive tasks ranging from playing games like chess and checkers to diagnosing illnesses and predicting market movements.

Both times, the hyped-up expectations ended in disappointment. In the 1990s we did spin off some of our research into actual business applications. We deployed expert systems to help process insurance applications, for instance, and a neural net to detect early signs of infection in cattle. But nothing came close to what had been promised, and, as in the 1960s, AI was relegated back to the relative obscurity of academic research and niche applications.

We are currently seeing the third wave of rising hope and expectations around AI, based largely on the convergence of Big Data and seemingly unlimited computing power. There is no denying that progress has been made in many of the fields related to AI. But will this third wave be it? Will this be the moment AI breaks through the cognitive barriers and becomes the long-promised general intelligence that dramatically changes the way we use machines?

Underestimated Hurdles

It's not that I don't see the potential of AI and ML. Having successfully implemented such systems in the past, I know they can already bring specific benefits to very specific situations. But the hype is not helping anyone who must decide what to do, and what not to do, with AI. And that can be really dangerous (see: https://www.technologyreview.com/s/612072/artificial-intelligence-is-often-overhypedand-heres-why-thats-dangerous/). Leading researchers in the field have written not only about the power but also about the limits of deep learning and AI's lack of common sense (see: https://www.youtube.com/watch?v=0tEhw5t6rhc, https://arxiv.org/pdf/1801.00631.pdf and medium.com/@GaryMarcus/in-defense-of-skepticism-about-deep-learning-6e8bfd5ae0f1). For AI to truly deserve the moniker "intelligence" and be as versatile, adaptive and resilient as the human workforce it is often claimed to compete against, both the technologies involved and the way we deploy them still need to overcome several major hurdles. My next three posts will explore three of those hurdles that I feel are vastly underestimated by the current generation of AI researchers and engineers:

  1. The Complexity Hurdle
  2. The Emotional Hurdle
  3. The Socio-Economic Hurdle

The Easter Rabbit hiding inside AI

Reading time: 4:30.

There’s an Easter Rabbit hiding in plain sight in MIT Technology Review’s AI flow chart

Anthropomorphizing

When you talk about a thing or animal as if it were human, you’re anthropomorphizing. The Easter Bunny is an anthropomorphized rabbit.

Net:

Avoid, shun, be hypercritical of anthropomorphic thinking in general (and related to AI in particular) unless you are a:

  • Philosopher
  • Creative
  • Entertainer
  • Researcher (in areas such as biology, psychology or computer science)

Let’s get real

Real rabbits are not very much like the Easter Bunny.

I live part of the year on Cape Cod in Massachusetts. In my town, there are wild coyotes. Would it make sense for my town to come up with a plan for dealing with wild coyotes by studying movies of Wile E. Coyote?

MIT Technology Review (MTR) recently created a “back of the envelope” flow chart to help readers determine if something they’re seeing is Artificial Intelligence. The only thing wrong with the flow chart is … almost everything! The flow chart is chock full of anthropomorphic thinking.

[I am a faithful reader of MTR, in both digital and paper form. I subscribe to it and enjoy reading it but that doesn’t make it perfect.]

MTR says

AI refers to machines that can learn, reason, and act for themselves. They can make their own decisions when faced with new situations, in the same way that humans and animals can.

Would that this were true! It’s not.

Reality

AI doesn’t do things for itself. People do. Let’s look at the roles people play in AI. 

  • Know a lot about the specific project
  • Provide the right algorithms (instructions that tell the technology what steps to take)
  • Build models (containing algorithms, places for data, and adjustable parameters that dictate the behavior of the model)
  • Gather the right data sets and ensure they fit the needs of the model and the project (and don't contain unintended biases)
  • Tag the data elements in the data sets (to identify what the algorithms should pay attention to)
  • Force-feed the data into the models to train them (or write algorithms telling the models where and how to access the training data)
  • Test the trained models and repeat this process over and over again until it "works right"
  • Sometimes use more automated processes that consist of algorithms and data
Training itself is a misleading term. The models contain algorithms that perform calculations on the input data, adjusting parameters until the model can discriminate between similar inputs in the future. Once trained on a data set, AI technologies cannot generalize much beyond it.
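To make those roles concrete, here is a minimal sketch of the workflow in Python using scikit-learn. It is my own illustration, not anything from MTR: the toolkit choice, names and toy data are all assumptions. Every step is something a person decided.

```python
# A minimal sketch (my own, with toy data) of the human-driven workflow
# described above, using scikit-learn as one of many possible toolkits.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# People gather the data set and tag each element with the label
# the algorithm should pay attention to.
X = [[0.1, 0.9], [0.8, 0.2], [0.2, 0.8], [0.9, 0.1], [0.3, 0.7], [0.7, 0.3]]
y = [1, 0, 1, 0, 1, 0]

# People decide how to split the data for training and testing.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, stratify=y, random_state=0
)

# People choose the algorithm and its adjustable parameters.
model = LogisticRegression()

# "Training" is a calculation that adjusts the model's parameters
# so it can discriminate between similar inputs later.
model.fit(X_train, y_train)

# People test the trained model and repeat until it "works right".
print(model.score(X_test, y_test))
```

At no point does the model "decide" anything on its own; every choice above, from the labels to the test, came from a human.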

Natural Intelligence, provided and demonstrated by humans

In AI, we are seeing human (natural) intelligence at work, building tools that can “outperform” people under some conditions.

Just as telescopes are tools that improve the distance vision of people, and hydraulic jacks are tools that increase the physical strength of people, so too AI technologies are tools that help people detect patterns they could not otherwise detect.

Ineffective Decision Tree

Let’s examine one branch of the MTR flow chart, “Can it reason?” Here’s the logic it suggests:

If the reader says NO (it can't reason), go back to START.
If YES, the next question is: "Is it looking for patterns in massive amounts of data?"
If NO, then "…it doesn't sound like reasoning to me"; go back to START.
If YES, the next question is: "Is it using those patterns to make decisions?"
If NO, then "…sounds like math"; go back to START.
If YES, then "Neat, that's machine learning" and "Yep, it's using AI."
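Rendered as code, the branch looks like this. (This is my own sketch of the chart's logic, not something MTR published.) Note that every input is a Boolean the chart never tells you how to compute:

```python
# The MTR "Can it reason?" branch as a function. The logic runs fine;
# the problem is that nothing tells you how to set any of its inputs.
def is_it_using_ai(can_reason: bool,
                   looks_for_patterns_in_massive_data: bool,
                   uses_patterns_to_make_decisions: bool) -> str:
    if not can_reason:
        return "Back to START: it can't reason."
    if not looks_for_patterns_in_massive_data:
        return "Back to START: ...it doesn't sound like reasoning to me."
    if not uses_patterns_to_make_decisions:
        return "Back to START: ...sounds like math."
    return "Neat, that's machine learning. Yep, it's using AI."
```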

How does one answer the first question, “Can it reason?” The reasoning comes from the natural intelligence of the designer of the program, a human.

How do you know the technology is “looking for patterns in massive amounts of data?” How do you know it’s the technology that’s somehow doing that as opposed to the technology blindly following the programmer’s rules?

How do you determine whether the technology is using the patterns to make decisions?

The flow chart is ineffective because, at every decision point, it offers no guidance on how to determine Yes or No. The chart fails to deliver useful results.

So What Good Is Artificial Intelligence?

Artificial Intelligence can

  • Provoke philosophical inquiry
  • Stimulate creative imaginations
  • Create great entertainment and fiction
  • Inspire researchers (such as biologists, psychologists and computer scientists) to come up with ever-improving technologies that appear to be smart or intelligent

Philosophers, creatives, entertainers and researchers should continue pursuing the quest to create an Artificial Intelligence. That does not mean anyone should believe we have already created a "true" Artificial Intelligence (whatever that would be).

Modern AI research has its roots in a 1955 proposal by McCarthy, Minsky, Rochester and Shannon for a Dartmouth Summer Research Project on Artificial Intelligence. They said

We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.

Note the proposal was based on conjecture! That means “suppose the following as if it were true.” That doesn’t mean it is true. It’s like a middle school science teacher asking her students to “suppose gravity stopped working — how would that affect people?” 

Also note that 63 years after the AI conjecture was published:

  1. We have made great progress! Pushing researchers to create ever-improving technical capabilities (whether labelled AI or not) has produced technologies that can solve some problems previously reserved for humans, and other problems that not even humans could handle before.
  2. We do not understand the elements of human intelligence well enough to simulate it.

For a better understanding of the limitations of AI as it stands today, look at

  1. Gary Marcus's controversial paper on the limitations of deep learning
  2. An earlier blog post I wrote on AI technical maturity issues.
  3. Yann LeCun’s lecture on the Power and Limits of Deep Learning

5 Easy Criteria To Get Quick Returns on AI Investments

Reading Time: 4 minutes.

Minimum investment, maximum return and pretty fast too.

Based on the research below, we intend to adopt these findings ASAP for the Syndicate. While there are some very positive use cases, continue to cast a skeptical eye on much of the breathless business-value chatter about AI today.

Context

We have seen many major AI research breakthroughs deliver significant new technical capabilities this decade. And more are coming. But most published market surveys inaccurately predict rapid, widespread enterprise technology adoption. That's wishful thinking. These surveys are cognitively biased exercises in deception. The authors deceive themselves. Most predictions of widespread adoption are self-serving. And wrong. We've been finding:

Most enterprise AI activity has not passed beyond the serious play stage. It’s confined to:

  • Experimentation and technical training
  • Pilot projects (that fail to achieve production use)
  • Reuse of older analytical tools and methods disguised as AI breakthroughs

Virtual agents and virtual assistants account for the largest single enterprise investment area. These uses have merit. But users and sponsors are underwhelmed by the end results. Implementers soldier on because they are delivering cost reductions that sustain management interest.

Shining Lights

There are a few shining lights to analyze. Research from the NBER (National Bureau of Economic Research) is one such standout. But who has the skills, time and energy to read academic research? We do, for starters, at the Analyst Syndicate. We'll translate.

One of my (many) academic favorites is Erik Brynjolfsson (of MIT and NBER). Erik and his coauthors have published many valuable peer-reviewed papers on the business impacts of technology adoption.

His central AI-related finding?

He can’t find any evidence of a significant economic impact of AI adoption thus far.

In Artificial intelligence and the modern productivity paradox: A clash of expectations and statistics, Brynjolfsson, Rock and Syverson (2017) say that

AI’s full effects won’t be realized until waves of complementary innovations are developed and implemented.

And it may take many decades or longer to develop those waves of innovation.

Look at other breakthrough technologies (which economists are now labelling ‘general purpose technologies’) such as:

  • Electricity
  • Steam Engines
  • Heavier-than-air aircraft
  • Gasoline engines
  • Chips (semiconductor devices)

The uses for electricity, steam, airfoils, gasoline and chips continue to evolve. Commercial use of electricity has been with us for well over a century, and new uses for electricity and electricity-powered devices continue to emerge. Likewise for other general purpose technologies.

How much evolution is enough? (It's easy to say in hindsight, but that really only tells us if we've waited too long.) How will we know when there has been enough complementary innovation to say the core breakthrough technology is ready for large-scale deployment and exploitation?

A clever answer

Brynjolfsson, Hui and Liu turned the question around. They looked for AI technologies and use cases where there didn't seem to be any major need for complementary innovations. (They also wanted a use-case-and-technology pair that wouldn't require business process, organizational or cultural changes.) They settled on testing Automatic Machine Translation (AMT) in eCommerce.

In Does Machine Translation Affect International Trade? Evidence from a Large Digital Platform, these researchers report on the impact of applying AMT on eBay's web commerce platform: large, statistically significant increases in international sales volumes of 17 to 20 percent.

Wake Up Call

Some new AI technologies (like AMT) can deliver benefits quickly and at minimal cost.

Seek opportunities that do not require significant business process, organizational, technical or cultural change.
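To see how little surrounding change AMT adoption can require, here is a sketch of dropping translation into an existing product-listing flow. The eBay study used eBay's in-house machine translation; Google Cloud Translation stands in here purely as an illustration, and the function and field names are my own assumptions.

```python
# Illustrative only: a stand-in translation service (Google Cloud
# Translation) added to an existing listing pipeline. No business
# process, organizational or cultural change required.
from google.cloud import translate_v2 as translate

client = translate.Client()  # assumes API credentials are already configured


def localize_listing(title: str, description: str, target_language: str) -> dict:
    """Translate an existing listing's text; the rest of the flow is untouched."""
    return {
        "title": client.translate(
            title, target_language=target_language)["translatedText"],
        "description": client.translate(
            description, target_language=target_language)["translatedText"],
    }
```

A listing pipeline would call localize_listing once per target market before publishing; everything upstream and downstream stays as it was.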

Five Easy Criteria to Seek

  1. Fast time to implement. Can a viable production instance be up and running in 90 to 120 days? (Avoid massive reengineering projects of all types.)
  2. Low levels of internal disruption and reinvention. Let the results drive disruptive change instead of requiring disruptive change to achieve the results.
  3. Suppliers and service providers with the right business model. (If a significant part of their business is time-and-materials consulting, you may be failing points 1 and 2 and taking on unjustified risk.)
  4. Relevant real world experience. (Demand verifiable references – use cases – that are already in volume production. Visit them and dig deeply.)
  5. Revenue enhancement (which beats cost reduction).

You can fail all these tests and still succeed. But you can succeed more quickly, with lower cost and risk, if your project passes all these tests. Succeed quickly and then iterate.

Net Recommendations

  • Apply AMT to your e-commerce initiatives to almost painlessly increase sales and expand available markets.
  • Apply the Five Easy Tests before making strategic AI investment decisions.
  • Read at least section 1 of Does Machine Translation Affect International Trade? Evidence from a Large Digital Platform.
  • Respond back to this posting (either via public comment or private communication) with (a) your own examples that conform to the 5 easy tests, and (b) additional easy tests you would apply to make the list stronger.

Disclosure: I am the author of this article and it expresses my own opinions. This is not a sponsored post. I have no vested interest in any of the firms or institutions mentioned in this post. Nor does the Analyst Syndicate.