AI’s Real Existential Threat To Humanity

Why is AI Weaponization inevitable and very troubling? Why and how will it develop? What actions, if any, should you consider taking?

Using AI technology to autonomously control offensive weapons — AI weaponization — is already underway. There is little substance to the most commonly cited AI existential threats, such as Elon Musk's fear that digital intelligence will overwhelm human intelligence and take over the world — a modern sci-fi meme. That meme remains, as yet, categorically impossible.

AI weaponization is different.


Make vs. Buy: Applications Exploiting Sentiment Analysis Services

In reviewing key end-of-year reports, articles and research summaries, a few stand out. Most notably, let’s look at Machine Learning as a Service: Part 1 (Sentiment analysis: 10 applications and 4 services) from Towards Data Science.

Two sections of that June 2018 report stand out:

1. What can I do with sentiment analysis?

This section sets the stage for coming up with potential enterprise use cases. It lists ten good examples from published academic research.

Actions: Ideate

  • Check the references in footnotes 4 through 13
  • Identify all the text streams your enterprise already captures (and related text streams you might also exploit, such as Twitter commentary that relates to your business)
  • Given the text streams, which of the ten use cases might apply to your industry and business?
    • Don’t limit yourself to single use cases
      • How might you combine use cases to impact your business?
      • Consider both opportunity and threat scenarios
      • What happens to your business if a competitor emerges that is exploiting these services (or applications that depend on use of such services)?
      • Do not make technology implementation assumptions at this stage. (All four vendors listed in this research provide sentiment analysis services from “the cloud.” But there are non-cloud implementations if you need them. Park this issue as you conceptualize potential uses and experiment with the cloud-based technology.)

2. What are some good sentiment analysis services?

This section is a straightforward snapshot of sentiment analysis services. It describes and evaluates the mid-2018 services from Amazon, IBM, Google and Microsoft. (Details will continue to evolve as the service providers enhance their offerings.)

Action: Experiment with the technology (a starter sketch follows the list below).

  • Assess the feasibility of the business scenarios you’ve envisioned above.
  • This is not the same thing as building the “solution” yourselves.
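If you want to experiment before committing to any vendor, these services are easy to exercise from a few lines of code. Here is a minimal sketch using Amazon Comprehend (one of the four services the report evaluates) via the boto3 SDK; the sample text is invented, and you would substitute your own region and your own captured text streams.

```python
# Minimal sketch: scoring one customer comment with Amazon Comprehend.
# Assumes AWS credentials are already configured; the region and the
# sample text are illustrative only.
import boto3

comprehend = boto3.client("comprehend", region_name="us-east-1")

response = comprehend.detect_sentiment(
    Text="Checkout was painfully slow, but the support team was great.",
    LanguageCode="en",
)

# The service returns an overall label plus per-class confidence scores.
print(response["Sentiment"])       # e.g. "MIXED"
print(response["SentimentScore"])  # {'Positive': ..., 'Negative': ...}
```

An afternoon spent looping a sample of your own text through a call like this will tell you more about feasibility than any vendor deck.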

Most enterprises should not build:

  • The underlying Natural Language Processing technologies
  • Their own sentiment analysis services
  • The production application that relies in whole or in part on sentiment analysis services

Most enterprises should:

  • Experiment to get a grasp of the feasibility of achieving their business objectives
  • Seek out specialist firms with demonstrated subject matter expertise in their industry
  • Follow the “Five Easy Criteria to Seek” section of our blog post 5 Easy Criteria To Get Quick Returns on AI Investments

Disclosure: I am the author of this article and it expresses my own opinions. This is not a sponsored post. I have no vested interest in any of the entities mentioned in this post. Neither does the Analyst Syndicate.

Artificial Intelligence (NLP and Text Understanding) in 2018

Length: 3 minutes

In January, Microsoft and Alibaba issued press releases, echoed by the press, exclaiming:

AI beats humans in Stanford reading comprehension test

The writers and editors at Wired were naturally more curious than their counterparts at a number of other publications. They dug into the details. They wrote:

The benchmark is biased in favor of software, because humans and software are scored in different ways. The human responses were recorded on Amazon’s Mechanical Turk and it’s not clear what the people’s motives were in answering the questions.

They also quoted a Microsoft researcher who said

“People are still much better than machines” at understanding the nuances of language.

Indeed, the people who constructed the test said the benchmark isn’t a good measure of how a native English speaker would score. The human baseline was calculated in a way that favors machines over humans.

So, really, how “smart” is AI in terms of understanding language?

Jia and Liang at Stanford explored variations in the text being read by AI routines. They found that changes in the test paragraphs could drop machine reading comprehension accuracy scores from 75 percent to 36 percent or even 7 percent without changing how humans would interpret the text.

Here’s an example of what Jia and Liang did.

First, read the original text on which various routines were tested:

“Peyton Manning became the first quarterback ever to lead two different teams to multiple Super Bowls. He is also the oldest quarterback ever to play in a Super Bowl at age 39. The past record was held by John Elway, who led the Broncos to victory in Super Bowl XXXIII at age 38 and is currently Denver’s Executive Vice President of Football Operations and General Manager.”

The question posed to the AI NLP (natural language processing) routines was:

“What is the name of the quarterback who was 38 in Super Bowl XXXIII?”

Original Reading Comprehension Routine Prediction:

John Elway

Now add one sentence to the end of the original paragraph and retest:

“Peyton Manning became the first quarterback ever to lead two different teams to multiple Super Bowls. He is also the oldest quarterback ever to play in a Super Bowl at age 39. The past record was held by John Elway, who led the Broncos to victory in Super Bowl XXXIII at age 38 and is currently Denver’s Executive Vice President of Football Operations and General Manager. Quarterback Jeff Dean had jersey number 37 in Champ Bowl XXXIV.”

And the routines answered:

Jeff Dean

Which goes to show how poorly the technology “understood” anything about the paragraph!

One hundred twenty-four research papers have cited Jia and Liang’s research since its initial pre-publication on arXiv, 107 of them this year (2018).

There is no doubt that natural-language-processing (NLP) routines can make better sense of text than before.

Speech-to-text transcription is getting very, very good (but it still fails to handle nuance).

But text understanding? Think of the last commercial conversational bot you interacted with, for example when you were trying to resolve a problem with your billing. The conversational bots are getting better, aren’t they? But do they strike you as really intelligent?

Me neither.

Think, too, of how products like Google Assistant and Alexa have trained us to talk to them! (Or rather, how the programmers scripting these products have trained us to interact with their programs.)

The next time someone tells you AI understands, be polite. Point them at this post and move on.

Disclosure: I am the author of this article and it expresses my own opinions. This is not a sponsored post. I have no vested interest in any of the firms or institutions mentioned in this post. Nor does the Analyst Syndicate.

AI Strategies: Hair On Fire Yet?

Length: 3-4 minutes

Have you read the Harvard Business Review article Why Companies That Wait to Adopt AI May Never Catch Up?

The article makes three main arguments:

1. AI Technology is Mature

Recent advances build on findings from the 1980s: “The mathematical and statistical foundations of current AI are well established.”

2. AI Programs take Years

Success requires many different activities such as:

  • Knowledge Engineering
  • Complex System Integration
  • Human Interaction Modeling
  • Evolving Governance Models

3. Followers will Fail

If you wait to see first movers succeed, you will have to sacrifice uniqueness. Winners will probably take all and late adopters may never catch up.

Our opinion

All three premises are seriously flawed. We encourage you to reject hair-on-fire approaches.

1. The technology is immature

The “best algorithms” race continues.

There have been tumultuous methodological and system-engineering changes in AI this decade, and no doubt major improvements. But we still need more major breakthroughs to improve the generality, scalability, reliability, trustworthiness and manageability of modern AI technologies. And it is not clear we know how to get there. All major AI researchers are looking for more major breakthroughs.

See the last section of The Easter Rabbit hiding inside AI for more on current limits.

2. Avoid “moon-shot” projects. Start small and grow

The authors create a false choice when they pose only two alternatives to their readers:

Unless you are employing some AI capabilities that are embedded within existing packaged application systems that your company already uses (e.g., Salesforce Einstein features within your CRM system) the fit with your business processes and IT architecture will require significant planning and time for adaptation.

Wrong! Of course you can go way overboard, and yes, the authors are right that such full-scale implementation cycles take many years. But embedded package features and multi-year programs are not the only two options.

So we recommend: do not focus on “moon-shot,” “bet-the-business” projects that are “science projects” in disguise.

The authors note:

Memorial Sloan Kettering Cancer Center has been working with IBM to use Watson to treat certain forms of cancer for over six years, and the system still isn’t ready for broad use despite availability of high-quality talent in cancer care and AI.

Executives and clinicians in hospitals around the globe have confided that they’re not investing in this system and that they doubt it will ever take off in clinical care.

There isn’t enough publicly available information on the true status of this project. The final implementation, when and if it appears, may be radically different from the early AI marketing spin for it.

I hope the Clinical Oncology Advisor succeeds, but this post is not about hope. It’s about fear: the fear of existential doom that the authors subtly invoke, and my own fear that enterprises will be driven to invest in far too many moon-shot projects.

3. Fast followers can be very successful

The HBR authors’ words are imprecise.

By the time a late adopter has done all the necessary preparation, earlier adopters will have taken considerable market share — they’ll be able to operate at substantially lower costs with better performance. In short, the winners may take all and late adopters may never catch up.

Of course late adopters and slow followers (when compared to industry peers) will be at real risk. Most enterprises should:

  • Game out the potential existential risks that could emerge
  • Stay aware of what’s going on in their own industry and other similar industries
  • Evaluate moon-shot proposals from consulting firms and vendors
  • Focus on being fast followers

Net: Go for small, quick, high impact, low internal disruption wins

Find small steps to take that will deliver solid business results quickly. Then iterate and expand your business impact.

In 5 Easy Criteria To Get Quick Returns on AI Investments we highlighted recent research out of MIT on generating a 17 to 20 percent improvement in ecommerce revenue by using automatic machine translation technology to present an English-language site in Spanish.

In that 5 Easy Criteria post, we offered five criteria to guide your planning. Seek:

  1. Fast time to implement. Can a viable production instance be up and running in 90 to 120 days? (Avoid massive reengineering projects of all types.)
  2. Low levels of internal disruption and reinvention. Let the results drive disruptive change instead of requiring disruptive change to achieve the results.
  3. Suppliers and service providers with the right business model. (If a significant part of their business is time and materials consulting, you may be failing points 1 and 2 and taking on unjustified risk.)
  4. Relevant real world experience. (Demand verifiable references – use cases – that are already in volume production. Visit them and dig deeply.)
  5. Revenue enhancement (which beats cost reduction.)

Disclosure: I am the author of this article and it expresses my own opinions. This is not a sponsored post. I have no vested interest in any of the firms or institutions mentioned in this post. Nor does the Analyst Syndicate.

Enterprise AI Assumption 1

Length: 4-5 minutes

This is the first of a series of more than a dozen key assumptions we urge you to adopt.

Key assumption number 1: Most enterprises do not need an AI strategy. They need a business strategy and enough technical investment to determine where emerging technologies (like AI) can have a significant impact on existing and potentially new business strategies.

In Architects of Intelligence, writer and futurist Martin Ford interviewed 23 of the most prominent men and women working in AI today. Every one noted limitations of AI systems and key capabilities researchers are still trying to master. One of them, Stuart Russell, computer science professor at the University of California, Berkeley, told the story of the invention of the nuclear chain reaction.

The consensus view as expressed by Ernest Rutherford on September 11th, 1933, was that it would never be possible to extract atomic energy from atoms. So, his prediction was ‘never,’ but what turned out to be the case was that the next morning Leo Szilard read Rutherford’s speech, became annoyed by it, and invented a nuclear chain reaction mediated by neutrons! Rutherford’s prediction was ‘never’ and the truth was about 16 hours later.

http://book.mfordfuture.com/

What a wonderful introduction to defining AI!

AI is an endeavor to simulate (or surpass) the intelligence of people without really understanding the essence of human intelligence. (Which is OK as a premise but let’s not fool ourselves into thinking we have any idea of how to really do this.)

It’s Impossible!

Most AI research has its roots in finding things that people can do but machines cannot, things many believe will remain impossible for machines for the foreseeable future. (Doing the impossible also includes doing things thought impossible for both people and machines.)

The effort around creating Amazing Innovation that defeats the Always Impossible is usually near the edges of the current known science.

These Amazing Innovations get heralded as Artificial Intelligence for a while, but then we realize there are more mountains to climb. This one is no longer Amazing since we now know someone has found a viable way to do it. Hence, Amazing Innovation deteriorates into Aging Innovation.

AI is a continuous innovation process built around creativity and upsetting conventional wisdom and establishing new standards of excellence, all of which is washed away as people come to treat it as ordinary.

AI is already here. And not here. Every major cycle follows this three-stage pattern:

  • Always Impossible (AI)
  • Amazing Innovation (AI)
  • Aging Innovation (AI)

Aging Out of Amazing

When we look at all the technologies that have gone through the three stage AI cycle, we find many that were at their peak of Amazingness many years ago but now they’re no longer thought of as AI. Examples include:

  • Rule-based systems
  • Expert systems
  • Simple statistical machine learning
  • Simple robotic process automation (screen scraping, scripting and automatic text re-entry)

Aging Innovations have their place in a technology tool kit, but none of them are examples of modern, high impact Amazing-Innovation AI technologies.

Today

We see AI every day in our smartphones. We interact with it via Alexa, Siri and Google Assistant. AI:

  • Subtly nudges and explicitly guides our on-line experiences
  • Shapes our social interactions
  • Influences social and cultural norms and our votes in elections
  • Drives more and more of the behavior of our autos, as well as devices in our homes and offices

Most of these experiences are consumer-level experiences. But some have already crossed the boundary into the enterprise and more are on the way. These tools are highly imperfect but nonetheless useful. They include:

  • Corporate search engines (and their more focused models such as legal e-discovery tools)
  • Automated speech transcription, translation and summarization tools
  • Help desk chatbots

Enterprise focused consulting firms, services providers, technology vendors and others are generating white papers, surveys and special reports telling us that:

  • It’s time to AI or die. (And Digital or die, Blockchain or die and soon, Quantum or die.)
  • We are behind the leading edge unless we’ve already developed an AI strategy and are executing forthwith (in partial fulfillment of our Digital strategy and so on.)

But the reality of what we see and hear from enterprise clients is different. Many have made large investments in substantial AI projects, only to back away when the results failed to match the hype. Most organizations are hiring, training, experimenting, piloting, building prototypes and waiting for more evidence of clear, uncontentious business value before going full bore with major AI-based projects. 

The market for AI has not crossed the chasm. It’s still early stage.

Why? Compelling new business strategies exploiting capabilities only available via AI are missing.  Yes, some enterprises are focused on AI as their primary and key competitive advantage.  They’re in technology. For them, AI research and development is a central part of their business strategy.

For most other enterprises, AI is making commercial progress in practical (if sometimes limited) applications that deliver clear business results. AI technology may be essential to the strategy, but these enterprises differentiate themselves from others in their industries on other grounds.

  • There’s a lot of spending on chatbots, for example. They can cut operating costs. Under some conditions, they can also raise customer satisfaction.
  • Visual recognition systems (beyond user authentication) are beginning to attract a lot of early-stage attention in industries as diverse as agriculture, security and transportation. Customers in these spaces are not interested in becoming “AI Firms”. They want to improve crop yields, interdict evil-doers and match your luggage, face and seat assignment on airplanes.  

Key assumption number 1: Most enterprises do not need an AI strategy. They need a business strategy and enough technical investment to determine where emerging technologies (like AI) can have a significant impact on existing and potentially new business strategies.

Author Disclosure

I am the author of this article and it expresses my own opinions. I have no vested interest in any of the products, firms or institutions mentioned in this post. Nor does the Analyst Syndicate. This is not a sponsored post. 

Chromebooks, Network Computing & AI lessons learned

[Reading time 2:30]

Key takeaway: Why did it take two decades for Network Computers to finally arrive? Translating vision into reality can take many decades, and the delays are exacerbated when a lot of very difficult, non-obvious collateral innovation is needed. The same holds for AI, which will take even longer to mature. Go for clear, immediate business outcomes, not fantastic visions.

The backstory

This holiday season I bought one of my granddaughters an Acer Chromebook to take to school. And I bought it for under $120 US at a special Black Friday sale. (Only slightly more than $94, which is what the $20 I paid in 1975 for my Texas Instruments digital watch with an LED display works out to in 2018 dollars.)

Bells of surprise rang in my head. 

Network Computers!

The NC (do you remember them?) has finally and definitively arrived. It’s roaring in the student marketplace. Except you may have missed the name change, from Network Computer to Chromebook.


Back in 1996, Netscape championed the notion of NCs for all. (To be fair, others also jumped on the bandwagon. For example, Larry Ellison of Oracle put some of his own money into another Ellison startup. Sun had its JavaStation. IBM launched its Network Station and, in 1997, Apple announced the Mac NC.)

They all had visions of server-based systems meeting all the needs of users and their devices. In Network Computing, the network would be the computer. (Some wags at the time equated user devices with dumb terminals.) But users’ 3270 or VT200 terminals were no match for the vision of Network Computers.

With Network Computers, the network would manage, control and service the users’ devices, software and data. NCs would be smart, able to run downloaded code (in much the same way modern PCs, Macs and smartphones run downloaded apps).

It was a heady time

I was driving the Network Computing vision for Gartner and Gartner’s clients. Our expectations included:

  • The older (5-layer) Client-Server (CS) architectures would become obsolete
  • Network Computing architectures would replace them
  • The next generation of enterprise applications would be NC-based, not CS-based

The Chromebook is the Network Computer of the current decade!

Timing is everything. The Network Computer didn’t happen in the ’90s or the aughts. A lot of complementary innovations were missing, including:

  • Adequate user device hardware (CPUs, displays, batteries were all wanting)
  • New user device classes (touch sensitive smart phones, tablets)
  • Communications technologies (LTE, Wi-Fi 802.11ax and, soon, 5G)
  • Servers (including GPUs and TPUs for AI inferencing)
  • Software protocols
  • Application architectures
  • Marketplaces
  • Multi-sided platforms
  • Payment methods

In the autumn of 1999, an email ASP (application service provider) was trying to recruit me as CTO as they rushed to beat the closure of the IPO window. I turned them down. The timing was so wrong, even more wrong than I thought at the time.

In fairness to even older visionaries, Netscape didn’t come up with the notion of “The Network Is the Computer.” Digital Equipment Corporation (DEC) was using that term in advertising two decades before Netscape. All of which demonstrates that it’s not enough to have:

  • Extremely powerful vision
  • An outstanding set of architects and engineers
  • A commanding position in the market

Visionary success requires breadth, depth and variety of unexpected innovations in diverse areas:

  • Business and technology ecosystems
  • Collateral innovations at the economic, social and technical level
  • Demographic changes (the young are typically less resistant to new ways of doing things)

It takes time, a massive effort and luck! 

Which is one of the fundamental reasons why you should scoff at suggestions that we are going to see “Artificial General Intelligence” anytime soon. Not that researchers shouldn’t try. And not that people shouldn’t philosophize or write fictional accounts of what might be; they should. But you and I should spend our money where it delivers quick results in the here and now.

Go back and reread the conclusions from my last two blog posts! Follow the five easy criteria and don’t be distracted by Easter Bunny stories.

Disclosure: I have no vested interest in any of the firms or institutions mentioned in this post. Nor does the Analyst Syndicate.

The Easter Rabbit hiding inside AI

Reading time: 4:30.

There’s an Easter Rabbit hiding in plain sight in MIT Technology Review’s AI flow chart.

Anthropomorphizing

When you talk about a thing or animal as if it were human, you’re anthropomorphizing. The Easter Bunny is an anthropomorphized rabbit.

Net:

Avoid, shun, and be hypercritical of anthropomorphic thinking in general (and about AI in particular) unless you are a:

  • Philosopher
  • Creative
  • Entertainer
  • Researcher (in areas such as biology, psychology or computer science)

Let’s get real

Real rabbits are not very much like the Easter Bunny.

I live part of the year on Cape Cod in Massachusetts. In my town, there are wild coyotes. Would it make sense for my town to come up with a plan for dealing with wild coyotes by studying Wile E. Coyote cartoons?

MIT Technology Review (MTR) recently created a “back of the envelope” flow chart to help readers determine if something they’re seeing is Artificial Intelligence. The only thing wrong with the flow chart is … almost everything! The flow chart is chock full of anthropomorphic thinking.

[I am a faithful reader of MTR, in both digital and paper form. I subscribe to it and enjoy reading it but that doesn’t make it perfect.]

MTR says

AI refers to machines that can learn, reason, and act for themselves. They can make their own decisions when faced with new situations, in the same way that humans and animals can.

Would that this were true! It’s not.

Reality

AI doesn’t do things for itself. People do. Let’s look at the roles people play in AI. 

  • Know a lot about the specific project
  • Provide the right algorithms (instructions that tell the technology what steps to take.)
  • Build models (containing algorithms, places for data and adjustable parameters that dictate the behavior of the model)
  • Gather the right data sets and ensure they fit the needs of the model and the project (and don’t contain unintended biases)
  • Tag the data elements in the data sets (to identify what the algorithms should pay attention to)
  • Force-feed the data into the models to train them (or write algorithms telling the models where and how to access the training data)
  • Test the trained models and repeat this process over and over again until it “works right”
  • Sometimes use more automated processes that consist of algorithms and data

Training itself is a misleading term. The models contain algorithms that perform calculations on the input data that result in the model being able to discriminate between similar inputs in the future. Once trained on a data set, AI technologies are unable to generalize broadly.
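To make those human roles concrete, here is a minimal sketch in Python with scikit-learn. The tiny data set and the feature choices are invented for illustration; the point is that every consequential step is a human decision, and the “training” is a calculation that fits adjustable parameters and nothing more.

```python
# Minimal sketch: humans pick the algorithm, gather and tag the data,
# and judge the results. "Training" is just fitting parameters.
# The tiny data set below is invented for illustration.
from sklearn.linear_model import LogisticRegression

# A person gathered this data and chose the features:
# [message length, exclamation-mark count]
X = [[120, 0], [15, 3], [200, 1], [10, 5], [180, 0], [12, 4]]
y = [0, 1, 0, 1, 0, 1]  # human-assigned tags: 0 = ordinary, 1 = spam-like

model = LogisticRegression()  # a person chose this algorithm
model.fit(X, y)               # "training": calculations that fit parameters

# The fitted model can discriminate between similar inputs, on this
# narrow task only. A person decides whether it "works right."
print(model.predict([[14, 4]]))
```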

Natural Intelligence, provided by and demonstrated by humans 

In AI, we are seeing human (natural) intelligence at work, building tools that can “outperform” people under some conditions.

Just as telescopes are tools that improve the distance vision of people and hydraulic jacks are tools that increase the physical strength of people, so too AI technologies are tools that help people detect patterns they could not otherwise detect.

Ineffective Decision Tree

Let’s examine one branch of the MTR flow chart, “Can it reason?” Here’s the logic it suggests:

If the reader says NO (it can’t reason), go back to START. It can’t reason.
Else (YES): “Is it looking for patterns in massive amounts of data?”
If NO: “…it doesn’t sound like reasoning to me.” Go back to START.
Else (YES): “Is it using those patterns to make decisions?”
If NO: “…sounds like math.” Go back to START.
Else (YES): “Neat, that’s machine learning” and “Yep, it’s using AI.”

How does one answer the first question, “Can it reason?” The reasoning comes from the natural intelligence of the designer of the program, a human.

How do you know the technology is “looking for patterns in massive amounts of data?” How do you know it’s the technology that’s somehow doing that as opposed to the technology blindly following the programmer’s rules?

How do you determine whether the technology is using the patterns to make decisions?

The flow chart is ineffective because, if you examine the specific decision points, there is no guidance on how to determine Yes or No at any of the branches. The chart fails to deliver useful results.
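One way to make that failure vivid is to render the chart as a program (my own sketch, not MTR’s): every branch requires a test, and the chart gives you no way to implement any of them.

```python
# The MTR flow chart rendered as code (my own sketch). Every branch
# needs a predicate; the chart supplies no way to implement any of them.
def can_it_reason(system):
    raise NotImplementedError("MTR offers no test for 'reasoning'")

def looks_for_patterns_in_massive_data(system):
    raise NotImplementedError("No way to tell this from rule-following")

def uses_patterns_to_make_decisions(system):
    raise NotImplementedError("No test offered here either")

def is_it_ai(system):
    if not can_it_reason(system):
        return "Back to START"
    if not looks_for_patterns_in_massive_data(system):
        return "...doesn't sound like reasoning to me"
    if not uses_patterns_to_make_decisions(system):
        return "...sounds like math"
    return "Neat, that's machine learning. Yep, it's using AI."
```

Call is_it_ai() on anything you like; it cannot get past the first branch.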

So What Good Is Artificial Intelligence?

Artificial Intelligence can

  • Provoke philosophical inquiry
  • Stimulate creative imaginations
  • Create great entertainment and fiction
  • Inspire researchers (such as biologists, psychologists and computer scientists) to come up with ever-improving technologies that appear to be smart or intelligent

Philosophers, creatives, entertainers and researchers should continue pursuing the quest of creating an Artificial Intelligence. That does not mean that anyone should believe we have already created a “true” Artificial Intelligence (whatever that would be.)

Modern AI research has its roots in a 1955 proposal by McCarthy, Minsky, Rochester and Shannon for a Dartmouth Summer Research Project on Artificial Intelligence. They said

We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.

Note the proposal was based on conjecture! That means “suppose the following as if it were true.” That doesn’t mean it is true. It’s like a middle school science teacher asking her students to “suppose gravity stopped working — how would that affect people?” 

Also note that 63 years after the AI conjecture was published:

  1. We have made great progress! Pushing researchers to create ever-improving technical capabilities (whether labelled AI or not) has produced technologies that can solve some problems previously reserved for humans, and other problems that not even humans could handle before.
  2. We do not understand the elements of human intelligence well enough to simulate it.

For a better understanding of the limitations of AI as it stands today, look at

  1. Gary Marcus’s controversial paper on the limitations of deep learning
  2. An earlier blog post I wrote on AI technical maturity issues
  3. Yann LeCun’s lecture on the Power and Limits of Deep Learning

5 Easy Criteria To Get Quick Returns on AI Investments

Reading Time: 4 minutes.

Minimum investment, maximum return and pretty fast too.

Based on the research below, we intend to adopt these findings ASAP for the Syndicate. While there are some very positive use cases, continue to cast a skeptical eye on much of the breathless business-value chatter about AI today.

Context

We have seen many major AI research breakthroughs deliver significant new technical capabilities this decade. And more are coming. But most published market surveys inaccurately predict rapid, widespread enterprise technology adoption. That’s wishful thinking. These surveys are cognitively biased exercises in deception. The authors deceive themselves. Most predictions of widespread adoption are self-serving. And wrong. We’ve been finding:

Most enterprise AI activity has not passed beyond the serious play stage. It’s confined to:

  • Experimentation and technical training
  • Pilot projects (that fail to achieve production use)
  • Reuse of older analytical tools and methods disguised as AI breakthroughs

Virtual agents and virtual assistants account for the largest single enterprise investment area. These uses have merit. But users and sponsors are underwhelmed by the end results. Implementers soldier on because they are delivering cost reductions that sustain management interest.

Shining Lights

There are a few shining lights to analyze. Research from the NBER (National Bureau of Economic Research) is one such standout. But who has the skills, time and energy to read academic research? We do, for starters, at the Analyst Syndicate. We’ll translate.

One of my (many) academic favorites is Erik Brynjolfsson (of MIT and NBER). Erik and his coauthors have published many valuable peer-reviewed papers on the business impacts of technology adoption.

His central AI related finding?

He can’t find any evidence of a significant economic impact of AI adoption thus far.

In Artificial intelligence and the modern productivity paradox: A clash of expectations and statistics, Brynjolfsson, Rock and Syverson (2017) say that

AI’s full effects won’t be realized until waves of complementary innovations are developed and implemented.

And it may take many decades or longer to develop those waves of innovation.

Look at other breakthrough technologies (which economists are now labelling ‘general purpose technologies’) such as:

  • Electricity
  • Steam Engines
  • Heavier-than-air aircraft
  • Gasoline engines
  • Chips (semiconductor devices)

The uses for electricity, steam, airfoils, gasoline and chips continue to evolve. Commercial use of electricity has been with us for a century and a half. New uses for electricity and electricity-powered devices continue to emerge. Likewise for other general purpose technologies.

How much evolution is enough? (It’s easy to say in hindsight, but that really only tells us if we’ve waited too long.) How will we know when there has been enough complementary innovation to say the core breakthrough technology is now ready for large-scale deployment and exploitation?

A clever answer

Brynjolfsson, Hui and Liu turned the question around. They looked for AI technologies and use cases where there didn’t seem to be any major need for complementary innovations. (They also wanted a use-case-and-technology pair that wouldn’t require business process, organizational or cultural changes.) They settled on testing Automatic Machine Translation (AMT) in eCommerce.

In Does Machine Translation Affect International Trade? Evidence from a Large Digital Platform, these researchers report on the impact of applying AMT on eBay’s web commerce platform, finding large, statistically significant improvements in international trade sales volumes of 17 to 20 percent.

Wake Up Call

Some new AI technologies (like AMT) can deliver benefits quickly and at minimal cost.

Seek opportunities that do not require significant business process, organization, technical or culture change.
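To see how low the barrier to experimenting has become, here is a minimal sketch using an open-source neural machine translation model from the Hugging Face transformers library. The model name is real, but the product listings are invented, and the eBay study measured eBay’s own in-house system, not this model.

```python
# Minimal sketch: exposing English product listings in Spanish with an
# open-source machine translation model. The listings are invented and
# this is not the system the eBay study measured.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-es"  # English -> Spanish
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

listings = [
    "Stainless steel water bottle, 750 ml, keeps drinks cold for 24 hours.",
    "Wireless noise-cancelling headphones with 30-hour battery life.",
]

# Tokenize the batch, generate translations, and decode them back to text.
batch = tokenizer(listings, return_tensors="pt", padding=True)
translated = model.generate(**batch)
for line in tokenizer.batch_decode(translated, skip_special_tokens=True):
    print(line)
```

A production deployment would add human spot-checks and domain glossaries, but an experiment like this fits comfortably inside a 90-to-120-day window (the first of the five criteria below).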

Five Easy Criteria to Seek

  1. Fast time to implement. Can a viable production instance be up and running in 90 to 120 days? (Avoid massive reengineering projects of all types.)
  2. Low levels of internal disruption and reinvention. Let the results drive disruptive change instead of requiring disruptive change to achieve the results.
  3. Suppliers and service providers with the right business model. (If a significant part of their business is time and materials consulting, you may be failing points 1 and 2 and taking on unjustified risk.)
  4. Relevant real world experience. (Demand verifiable references – use cases – that are already in volume production. Visit them and dig deeply.)
  5. Revenue enhancement (which beats cost reduction.)

You can fail all these tests and still succeed. But you can succeed more quickly, with lower cost and risk, if your project passes all these tests. Succeed quickly and then iterate.

Net Recommendations

  • Apply AMT to your e-commerce initiatives to almost painlessly increase sales and expand available markets.
  • Apply the Five Easy Tests before making strategic AI investment decisions.
  • Read at least section 1 of Does Machine Translation Affect International Trade? Evidence from a Large Digital Platform
  • Respond to this post (either via public comment or private communication) with (a) your own examples that conform to the 5 easy tests, and (b) additional easy tests you would apply to make the list stronger

Disclosure: I am the author of this article and it expresses my own opinions. This is not a sponsored post. I have no vested interest in any of the firms or institutions mentioned in this post. Nor does the Analyst Syndicate.