Should a self-driving car be allowed to decide whether to run over a child crossing the street in order to save the person inside the vehicle, or to swerve into a ditch instead, killing its owner?

Should AI decide for us which products we see online, based on our purchasing data?

Should AI be restricted from ‘working’ in certain professions that require empathy, such as nursing or law?

These are the issues of ethics and AI that currently occupy the media. But do we have reason to fear AI’s encroachment, or is our own input into these systems the greater cause for concern?

Human influence in AI development


It’s important that we don’t forget the powerful role we have to play in these formative years of AI development. Crowdsourcing responses to moral dilemmas has become an important way of teaching AI to behave ethically: systems are designed to learn from the values of the average person.
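To make that idea concrete, here is a minimal sketch in Python of how ‘learning from the average person’ can reduce to simple vote aggregation over crowdsourced responses. The dilemma, the answers and the function names here are invented for illustration; they do not belong to any real system.

from collections import Counter

# Hypothetical crowd answers to one moral dilemma ("swerve" vs "stay"),
# in the style of Moral Machine-type surveys.
responses = ["swerve", "stay", "swerve", "swerve", "stay"]

def majority_value(votes):
    # The 'average person's' value is simply the most common answer.
    winner, count = Counter(votes).most_common(1)[0]
    return winner, count / len(votes)

choice, support = majority_value(responses)
print(f"Crowd-derived rule: {choice} ({support:.0%} agreement)")
# A system trained this way inherits whatever biases the crowd holds.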

But could that in itself be a problem?

As Han, a humanoid robot developed by Hanson Robotics, points out, “humans are not necessarily the most ethical creatures”. In creating these complex, intelligent systems, how can we override the bias and prejudice evident in human systems? And what will happen once we have laid this imperfect groundwork for machines, when we haven’t adequately examined our own flaws and limitations?

Then there is the problem of human intent.

Even well-meaning AI developments can have negative consequences. Take Stanford University researchers’ use of deep neural networks to detect sexual orientation from human faces, or the NamePrism app, which analyses names to infer ethnicity and nationality.

The idea behind both tools was to hold up the black mirror and show discrimination in action. But these systems can easily be replicated, and simply by flipping the end goal from protection to discrimination, they become dangerous.

What we can do


Considering the pace of technological development, it seems obvious to many that we should build ‘newer and better’ as soon as we can. But if we are to keep control of these systems, we must continue to hold AI to high standards.

One way to start is to insist on a long period of testing an idea, and of consulting experts across the humanities and sciences on its potential, before it is applied globally. This is key to understanding the widespread problems we may face if we unleash an underdeveloped idea.

AI development, after all, has the capacity to wreak global havoc. This is why AI experts, including Elon Musk, have written an open letter to the UN requesting a ban on ‘killer robots’.

“Once this Pandora’s box is opened,” says the letter, “it will be hard to close.”

Establishing an ethical structure is essential to guide our approach to developing AI. Giving AI specific goals that can be scaled up gradually will help us to assess these systems while the ethical debates around them catch up.

If it is true that AI will eventually overtake us and shape the future according to its own preferences, then we must learn how to align those preferences with the best possible qualities of humanity.

Kirill Eremenko is the founder of the data science academy SuperDataScience and the author of the new book Confident Data Skills, published by Kogan Page, priced £14.99. The book will help you master the fundamentals of working with data and supercharge your career.


Prepared and edited by @EdinaZejnilovic, Journalism Student at DCU


