In the popular ‘Terminator’ film franchise, an army of autonomous robots threatens to overpower and destroy mankind. Though Arnold Schwarzenegger first uttered “I’ll be back” a few decades before the U.S. military fielded its own rudimentary autonomous weapons, this science-fiction classic remains on the minds of top Pentagon officials.

Gen. Paul Selva, vice chairman of the Joint Chiefs of Staff, is calling for an international debate over what he calls the “Terminator Conundrum.”

“We have proven that we can build and field unmanned underwater vehicles, unmanned surface vessels, unmanned wheeled vehicles, and remotely piloted air vehicles,” Selva said. “We can actually build ‘autonomous’ vehicles in every one of those categories.

“That gets us to the cusp of a question about whether or not we are willing to have unmanned autonomous systems that can launch on an enemy. What happens when that thing can inflict mortal harm and is empowered by artificial intelligence?”

It’s no mystery why pursuing advanced technology is at the top of the Pentagon’s agenda. The use of unmanned drones has given the U.S. military a huge advantage: drone pilots can now devastate ISIS compounds without risking American lives. But there will come a point when the technology is advanced enough that drone pilots are no longer necessary, because drones will be able to fly themselves. If the exploits of the Terminator and Skynet taught us anything, it’s that crossing that line carries ethical complications and potentially dire consequences.

In July 2015, Stephen Hawking, Elon Musk and 1,000 other scientists signed an open letter asking militaries around the world to ban AI weaponry. Their main concern was preventing a “global AI arms race” and keeping autonomous weapons out of the hands of evildoers.

“It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators,” the letter warns. “There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.”

And so we have the Terminator Conundrum. Should the United States use every weapon and technology at its disposal to protect its citizens? Or should it take a hard stance against AI weaponry in order to, perhaps, protect the world?

Let us know what you think in the comments.

[Flight Global]

[Time]