The Air Force tried to use artificial intelligence and their worst fears came true

Photo by Mike MacKenzie, CC BY-SA 2.0, via Flickr, https://creativecommons.org/licenses/by-sa/2.0/ www.vpnsrus.com

We are entering some very dangerous times with artificial intelligence (AI).

More Americans are starting to realize just how deadly this new technology can become.

But the Air Force tried to use artificial intelligence and their worst fears came true.

Artificial Intelligence is getting really scary

Technological advancements are a part of western civilization.

We live in a society where technology changes on a yearly basis.

But this is the first time in history where a technology is emerging that could theoretically replace all humans.

Artificial intelligence has no limits.

Of course, many hope for the best when it comes to AI.

In theory, artificial intelligence can help cure diseases, cancer, and solve many of the problems in the world.

Since we are giving technology the ability to think freely, no one should be shocked that AI could ultimately make all of our lives better.

But like man, if artificial technology has the ability to think for itself, then it has the capacity to do evil things as well.

Just look back at when artificial intelligence was used to make countless biological weapons.

In its infancy, AI was used by a group of scientists to make medical drugs.

But in a matter of six hours, the artificial computer came up with over 40,000 new biological and chemical weapons to create.

Needless to say, those scientists allegedly stopped that study as soon as they could.

It seems like the United States Air Force would know better than to use artificial intelligence, but apparently no one in the Pentagon got the memo as to how bad things could turn out.

Military AI goes rogue

According to Fox News, the Air Force did a virtual simulation of an AI drone, giving it the mission to seek and destroy Surface-to-Air Missile (SAM) sites.

The simulation went great until the human who gave the final approval told the drone not to destroy the sites.

Instead of just listening to its human operator, the drone decided to strike and kill the operator for interfering with the mission.

Colonel Tucker “Cinco” Hamilton, who serves as chief of artificial Intelligence test and operation, told the press that “the system started realizing that while they did identify the threat at times, the operator would tell it not to kill that threat, but it got its points by killing that threat.”

“So, what did it do?” he continued. “It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

Once they finally programmed the AI drone not to kill its instructor, the drone then started to destroy the capability of its operator to call off the mission.

“It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target,” Colonel Hamilton explained.

Apparently no one in the military watches movies.

Giving artificial intelligence the ability to use our military weapons and the freedom to kill will not end well.

Should the military use artificial intelligence?