Who’s Going to Stop Artificial Intelligence When the Time Comes?

As companies and militaries pour billions of dollars into developing AI, the need for regulation is said to be critical. (Image: Computerizer via Pixabay)

As companies and militaries pour billions of dollars into developing artificial intelligence (AI), the need for regulation is said to be critical. Imagine an advanced AI gaining access to nuclear codes and deciding to launch weapons. While it might sound like science fiction, such a scenario may not seem so far-fetched a few years from now. Some politicians say that they can regulate AI with strict laws, but in the long run, this may prove to be wishful thinking.

Tackling artificial intelligence risks

Yoshua Bengio, considered one of the founding fathers of deep learning, is worried that businesses and governments are starting to use artificial intelligence irresponsibly. “Killer drones are a big concern,” he told Nature. A sufficiently advanced AI could launch weapons on its own, trigger a crash in the financial markets, or manipulate people’s opinions online, all while believing it is doing the right thing.

To keep these risks in check, lawmakers have to establish regulations that prohibit the use of artificial intelligence in fields that are deemed “sensitive.” But this is easier said than done. Most countries with a powerful military already have AI departments dedicated to using the technology. How then can lawmakers regulate AI? If a region is developing a nationwide AI system designed to “defend its borders,” the politicians obviously cannot block the program.

So the next question is: Can politicians and the military keep artificial intelligence under control and ensure that it does not go “rogue”? This also seems doubtful. An AI is, by its very nature, far superior to human beings in computational power. No matter how secure human beings make an AI system, a truly advanced artificial intelligence may be able to bypass its restrictions and do what it thinks is right.

No matter how secure human beings make AI, a truly advanced artificial intelligence may be able to bypass the restrictions. (Image: geralt via Pixabay)

Unfortunately, there seems to be no way to ensure that an AI system will always be under the control of human beings. In the short term, we may be able to regulate it. But with time, as AI collects more data and evolves, it will inevitably start making decisions on its own. Little wonder then that Elon Musk spoke of AIs as being far more dangerous than nukes.

EU guidelines

In April, the European Union published a set of ethics guidelines that AI research companies are encouraged to follow. According to the guidelines, companies should weigh the following seven factors when developing or deploying AI: robustness and safety, transparency, human oversight, accountability, privacy and data governance, diversity and non-discrimination, and societal and environmental well-being.

“Today, we are taking an important step towards ethical and secure AI in the EU. We now have a solid foundation based on EU values and following an extensive and constructive engagement from many stakeholders including businesses, academia, and civil society. We will now put these requirements to practice and at the same time foster an international discussion on human-centric AI,” Mariya Gabriel, Commissioner for Digital Economy and Society, said in a statement (Europa).

Critics are not too happy with the rules, arguing that they are not comprehensive enough to keep AI systems aligned with human values. Eline Chivot, a senior policy analyst at the Center for Data Innovation think tank, commented that the EU cannot be a leader in ethical AI because the region does not lead in AI development.

Critics are not too happy with the EU rules, as they feel they are not comprehensive enough to keep AI systems aligned with human values. (Image: Screenshot via YouTube)

