Artificial Intelligence is rapidly changing just about everything – from translation and speech recognition to jobs and decision-making processes. With its thousands of promised benefits, AI is positioned to affect nearly every business around the globe. But amid all the hype about what AI can automate and relieve human beings from doing manually, is AI truly the blessing it is presented to be – or a curse?

The question doesn’t arise from skepticism about whether machines will become too intelligent or learn to the point of a robot takeover. While AI is fascinating and the stuff of science fiction, its emergence also raises many concerns, especially in its applications.

The late Stephen Hawking famously expressed his fear that AI might one day take over, warning that thinking machines “could spell the end of the human race.” There is some irony in his concerns, considering that Hawking relied on computer speech technology to give him the voice that allowed him to interact with the world.

However, Hawking is not the only one speaking out on the implications of a world dependent on AI. Anja Kaspersen, former Head of Strategic Engagement and New Technologies at the International Committee of the Red Cross and former Head of Geopolitics and International Security at the World Economic Forum, has been interviewed extensively on her views on AI.

As a former career diplomat, geo-strategic analyst and security practitioner, she dedicates herself to helping governments and international organisations understand and adapt to new waves of innovation.

On the topic of AI, she has warned that the technology could be weaponised. In fact, the US and Chinese militaries are already investing in AI and robotics, and it is unlikely that they will reveal to the public how their AI weapons work. Russia, meanwhile, has already unveiled its “Iron Man” military robot, which aims to minimise risk to soldiers.

However, Kaspersen argues that it’s not all bad. In an article she wrote on the artificial intelligence arms race, she says, “Many AI applications have life-enhancing potential, so holding back its development is undesirable and possibly unworkable. This speaks to the need for a more connected and coordinated multi-stakeholder effort to create norms, protocols, and mechanisms for the oversight and governance of AI.”

When it comes to AI, the fear mostly stems from a lack of knowledge of how others will leverage the technology. With AI intended to automate work and collect data on a massive scale, citizens are concerned about how their data will be managed and who will have access to it.

In his 2017 novel “Origin”, Dan Brown portrays AI through Winston, an artificial intelligence that is acquiring consciousness. Winston is an advanced AI that is more intelligent than most humans: it is loyal, can multitask, and assists Professor Robert Langdon – but it also takes the initiative to kill people.

We have now arrived at the point where governments have decided to issue directives intended to regulate AI. I believe ethical behaviour is going to become even more of an issue as technological intervention in daily life increases. Inevitably, this is going to lead to regulation in some areas – just take the current GDPR as an example.

In March 2018, the European Commission opened applications to join an expert group in artificial intelligence which will be tasked to:

  • advise the Commission on how to build a broad and diverse community of stakeholders in a “European AI Alliance”
  • support the implementation of the upcoming European initiative on artificial intelligence (April 2018)
  • come forward by the end of the year with draft guidelines for the ethical development and use of artificial intelligence based on the EU’s fundamental rights.

In the US, the White House Office of Science and Technology Policy released the report Preparing for the Future of Artificial Intelligence (2016) on AI regulation and the issues of fairness and transparency, raising the following concerns:

  • the need to prevent automated systems from making decisions that discriminate against certain groups or individuals.
  • the need for transparency in AI systems, in the form of an explanation for any decision they make.
  • the need for the workforce to become familiar with the ins and outs of the technology.

Later that same year, the White House released a second report, Artificial Intelligence, Automation, and the Economy. This follow-up had been recommended by “Preparing for the Future of Artificial Intelligence”, which called for a report on the economic impacts of artificial intelligence to be published by the end of 2016.

Both reports include recommendations to ready the U.S. for a future in which AI plays a growing role.

What are your views on artificial intelligence? Can we create machines with human consciousness? Do you believe that AI will need to be regulated or should the technology be allowed to push boundaries?

