Authored by Jay Syrmopoulos via ActivistPost.com,

United States Army Secretary Mark Esper recently revealed that the military has a strategic vision of utilizing autonomous and semi-autonomous unmanned vehicles on the battlefield by 2028.

“I think robotics has the potential of fundamentally changing the character of warfare. And I think whoever gets there first will have a unique advantage on the modern battlefield,” Esper said during a Brookings Institution event.

“My ambition is by 2028, to begin fielding autonomous and certainly semi-autonomous vehicles that can fight on the battlefield,” he added. “Fight, sustain us, provide those things we need and we’ll continue to evolve from there.”

In a preview of the U.S. Army’s strategic vision, released on June 6, Esper said the integration of these forces would become a critical strategic component, quoting from the document:

The Army of 2028 will be able to deploy, fight, and win decisively against any adversary, anytime, and anywhere … through the employment of modern manned and unmanned ground combat systems, aircraft, sustainment systems, and weapons.

When Esper was reportedly asked about concerns regarding autonomous robots being a threat to humanity, he replied in jest, “Well, we’re not doing a T-3000 yet,” referencing the Terminator movie series about self-aware AI threatening the existence of humanity.

Of course, while Esper jokes about the threat of autonomous killer robots, polymath inventor Elon Musk clearly takes such a threat much more seriously, as evidenced by his comments at the South by Southwest (SXSW) conference and festival on March 11, in which he said that “AI is far more dangerous than nukes.”

“I’m very close to the cutting edge in AI and it scares the hell out of me,” Musk told the SXSW crowd. “Narrow AI is not a species-level risk. It will result in dislocation… lost jobs… better weaponry and that sort of thing. It is not a fundamental, species-level risk, but digital super-intelligence is.”

“I think the danger of AI is much bigger than the danger of nuclear warheads by a lot. Nobody would suggest we allow the world to just build nuclear warheads if they want, that would be insane. And mark my words: AI is far more dangerous than nukes,” Musk added.

As The Free Thought Project reported last month, the Pentagon reportedly plans to spend more than $1 billion over the next few years developing advanced robots for military applications that are expected to complement soldiers on the battlefield, and potentially even replace some of them.

While the Army's development of this technology sounds like a move toward the better weaponry Musk described, rather than a digital super-intelligence, the creation of fully autonomous unmanned weapons systems clearly has implications should some type of “digital super-intelligence” emerge in the future.

Esper attempted to allay fears by noting that...
