AI’s Real Existential Threat To Humanity

Why is AI Weaponization inevitable and very troubling? Why and how will it develop? What actions, if any, should you consider taking?

Using AI tech to autonomously control offensive weapons (AI weaponization) is already underway. There is little substance to the most commonly cited AI existential threats from the likes of Elon Musk, fears that digital intelligence will overwhelm human intelligence and take over the world, a modern sci-fi meme. That scenario remains, for now, categorically impossible.

AI weaponization is different.

World Wars I and II demonstrated the impact of large-scale industrialization of warfare. Those horrors will be dwarfed by wars fought with autonomous weapons. The threat from AI weaponization is much greater than the existential threat from nuclear weapons because it will incorporate all kinds of weapons, not just nuclear ones.

The process of AI weaponization is unstoppable.

The knowledge and means for doing it are far more accessible than those for the earlier weapons of mass, mechanical destruction.

  • Most of the basic science is not hidden. It’s available to anyone on the Internet.
  • The fundamental AI development platforms are already available as services from cloud-based providers.
  • Edge hardware for local (isolated) implementation inside autonomous weapons systems is already emerging from manufacturers.
  • Special weapons aren’t necessarily needed. A commercial computer vision system could ensure a machine gun fires bullets at (human) targets with maximum speed, efficiency, effectiveness and economy.

Various organizations are planning, prototyping and producing weapon systems that can be operated with no “humans in the loop”: weapons that, once released, will be able to autonomously accomplish their missions.

  • Most large governments are already working with industry and academia to marry AI with weapons and surveillance systems. They are not disclosing much of what’s going on; sunshine rules don’t apply. They’re looking to keep planning, investing and throttling up. (See the US military example below.)
  • Smaller nations and other entities (including rogue organizations) will be able to build their own. The knowledge and means are accessible and affordable, and far less complex to acquire than nuclear weapons and delivery systems.
  • Larger nations will seek to cut their expenditures on the development, testing and manufacturing of Weaponized AI by allowing sales to selected smaller nations.

The US military is looking to AI to control weapons and counter hostile fire. The current, conservative approach is to keep a human in the loop but, according to military.com (quoting Bruce Jette, assistant secretary of the Army for Acquisition, Logistics and Technology, ASA(ALT)), that requirement may unacceptably throttle the military’s response in a number of different situations:

“Let’s say you fire a bunch of artillery at me, and I can shoot those rounds down, and you require a man in the loop for every one of the shots. There are not enough men to put in the loop to get them done fast enough.

“So how do we put not just the AI hardware and architecture and software in the background? How do I do proper policy so we [ensure] weapons don’t get to fire when they want and weapons don’t get to fire with no constraints, but instead we properly architect a good command-and-control system that allows us to be responsive and benefit from the AI and the speed of some of our systems? We are trying to structure an AI architecture that will become enduring and will facilitate our ability to allocate resources and conduct research and implementation of AI capabilities throughout the force.”

So the intent here is to maintain a proper level of control over autonomous weapons systems. That intent will be preserved by many actors in peacetime. But what of renegade states and organizations?

The underlying rationale for using autonomous weapons systems is to allow fewer humans to do more damage. Going autonomous for a period of time means humans are not involved all the time, and those time windows can stretch from milliseconds to months or more. Intentions corrode.

In the fog of war (and terrorist activity), orderly intent will give way to a relaxation of constraints. When the choice is losing a key battle or war versus loosening the controls on weapons systems capable of fully autonomous operation, history suggests entities will not select the losing option.

Ordinary people have suddenly developed fears of their pictures being used, without authorization, to train AI applications to surveil their activities (as in the ten-year challenge controversy). Employees of Google and shareholders of Amazon (and others) have gone public with demands that those companies not sell facial recognition technology or services to governments.

The AI Genie is out of the bottle

Major institutions are racing ahead, seeking further AI tech breakthroughs. Throttling the basic science and engineering work that firms such as Google, Amazon, Microsoft, IBM, Baidu, Alibaba and Nvidia are doing doesn’t make sense because this work serves many purposes, not just AI weaponization. In the same light, it makes no sense to try to throttle technical advances in AI coming from:

  • Academic research
  • The free flow of research findings
  • Venture capital investors
  • Other entities (such as startups) advancing the state of the science
  • Entities consuming and exploiting AI in search of business or social outcomes

We may choose to regulate (or not) specific uses of this stream of technical innovation, but such regulations would operate at the level of specific use cases, not general science and engineering principles. And once the principles are freely available, it may be impossible to ban clandestine deployment of Weaponized AI.

Social, legal, ethical and governance issues will turn out to be more difficult to deal with than the technologies themselves.

What should organizations (entities of all kinds) and individuals do?

In particular, you? We might be tempted to discuss possible political, social and governmental solutions, but it’s not clear that any are likely to alleviate the problem (except perhaps to slow the slide; see below).

Selected Key Actions

  1. Do not be distracted by talk about Artificial General Intelligence and Super Digital Intelligences taking over the planet. Weaponized AI is the existential threat to be concerned about.
  2. Contribute to the discussion about Weaponized AI:
  • What will the big 10 of AI (Google, Amazon, Microsoft, IBM, Baidu, Alibaba, Nvidia, et al.) do and not do?
  • What strategies should enterprises other than the big 10 adopt? Will it matter?
  • What actions should these enterprises take, and when, in areas such as internal policies, stakeholder communication, purchasing plans and investment approaches (R&D, M&A, consortia, …)?
  • What might we learn from the impacts of the Nuclear Non-Proliferation Treaty and the Anti-Ballistic Missile Treaty?
  • Are there other trends that fit into this one? (E.g., fragmentation of the Internet into a collection of national networks; rising xenophobia around the world; populist cries against the increasing power of dominant enterprises)
  • Is there a timeline for how this might develop? What signposts would suggest the problem is continuing to escalate?
  • Are there ongoing processes that could provide some safety cushion and slow what feels like an inexorable path forward and downward?

What do you recommend?

Author: Tom Austin

Clearing the fog and misdirection in emerging technologies worldwide. I founded TAG after over 40 years at Gartner, Inc. and DEC. Sought out by the press and executives worldwide. Most recently, I led Gartner, Inc.’s Artificial Intelligence research (2012-2018). Deep background in brain sciences (biological constraints on learning).
