When I first got involved with Artificial Intelligence, in the early 1980s, I was a young and somewhat naive Machine Translation researcher. At that time, AI was already going through its second round of big expectations.
The first time coincided with the rise of the digital computer in the 1950s and 1960s. This second time was triggered by the arrival of the PC and the early forms of what is now the Internet. Both times, expectations were high. It was believed we would have computers performing many tasks previously thought possible only for human beings. Computers would hold conversations in natural language and equal or outperform us in cognitive tasks ranging from playing games like chess and checkers to diagnosing illnesses and predicting market movements.
Both times, the hyped-up expectations ended in disappointment. In the 1990s we did spin off some of our research into actual business applications. We deployed expert systems to help process insurance applications, for instance, and a neural net to detect early signs of infection in cattle. But nothing came close to what had been promised, and as it had in the 1960s, AI was relegated back to the relative obscurity of academic research and niche applications.
We are currently seeing the third wave of rising hope and expectations around AI, based largely on the convergence of Big Data and seemingly unlimited computing power. There is no denying that progress has been made in many of the fields related to AI. But will this third wave be it? Will this be the moment AI breaks through the cognitive barriers and becomes the long-promised general intelligence that dramatically changes the way we use machines?
It’s not that I don’t see the potential of AI and ML. Having successfully implemented such systems in the past, I know they can already bring specific benefits to very specific situations. But the hype is not helping anyone make decisions about what to do and not do with AI. And that can be really dangerous (see: https://www.technologyreview.com/s/612072/artificial-intelligence-is-often-overhypedand-heres-why-thats-dangerous/). Leading researchers in the field have written not only about the power but also about the limits of deep learning and AI’s lack of common sense (see: https://www.youtube.com/watch?v=0tEhw5t6rhc, https://arxiv.org/pdf/1801.00631.pdf and medium.com/@GaryMarcus/in-defense-of-skepticism-about-deep-learning-6e8bfd5ae0f1). For AI to truly deserve the moniker “intelligence” and be as versatile, adaptive and resilient as the human workforce it is often claimed to compete against, both the technologies involved and the way we deploy them still need to overcome several major hurdles. My next three posts will explore three of those hurdles that I feel are vastly underestimated by the current generation of AI researchers and engineers:
- The Complexity Hurdle
- The Emotional Hurdle
- The Socio-Economic Hurdle