Ethics in AI is about the unexpected and unwanted behaviour of the algorithms and training data used in AI systems. Simply put: is the artificial intelligence we use doing the right thing? And what is right?
3 perspectives on Ethics
The answer can come from different perspectives. The three main perspectives on Ethics in AI are:
- humans design, develop and use (or misuse) AI. What is their moral behaviour?
- machines are trained with a goal in mind. This training for a specific job, on existing data, is called machine learning or deep learning. Is their behaviour as intended or expected? Or is there a bias in the algorithms or the training data?
- singularity: is it possible that AI becomes smarter than humans? And what will be the consequences of that?
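The second perspective, bias in training data, can be made concrete with a minimal sketch. The scenario below is hypothetical: a trivial "majority class" model trained on skewed loan-approval records looks accurate overall, while failing completely on the under-represented group.

```python
from collections import Counter

# Hypothetical loan-approval history: 95 approvals, only 5 rejections recorded.
training_labels = ["approve"] * 95 + ["reject"] * 5

# A naive model simply learns the most frequent outcome in its training data.
majority_label = Counter(training_labels).most_common(1)[0][0]

def naive_model(_applicant):
    # Ignores the applicant entirely and predicts the majority outcome.
    return majority_label

# On similarly skewed test data the model looks accurate...
test_labels = ["approve"] * 95 + ["reject"] * 5
accuracy = sum(naive_model(None) == y for y in test_labels) / len(test_labels)
print(f"Overall accuracy: {accuracy:.0%}")  # 95%

# ...but it gets every "reject" case wrong: the bias is hidden in the data.
reject_recall = sum(
    naive_model(None) == y for y in test_labels if y == "reject"
) / 5
print(f"Recall on 'reject' cases: {reject_recall:.0%}")  # 0%
```

This is why a single accuracy number can hide biased behaviour: the metric must be checked per group, not just on average.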
Human Design example
Social media platforms like YouTube and Facebook are designed to make money from advertisements. The goal is to keep users on the platform as long as possible, so they will see more ads.
Side effects such as addiction and unwanted profiling are now hot topics. Questions arise like: were the Brexit vote and the election of Donald Trump in 2016 the result of voters being influenced by social media? What kinds of algorithms and advertisements were used in those campaigns?
Machine Bias example
Imagine the design of an AI system with the goal of eliminating cancer in the world. After a lot of training, the algorithm finds a solution that really rules out cancer: kill everyone on the planet!
This example shows that AI can solve problems, but we always need what is called a "human in the loop" to check the results and the side effects.
There are several examples where things really went wrong. Take the predictive policing initiative in England, where AI identified the areas with the most crime. When the police started extra surveillance in those areas, they found more criminality. The result was unexpected: recorded crime rates went up instead of down. The side effect arose because more surveillance finds more crime, as it would in any area with more police surveillance!
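The feedback loop behind this predictive policing example can be sketched in a few lines. This is a toy simulation under stated assumptions (not the real system): detected crime is proportional to the true crime rate times the patrol level, and each year patrols are reallocated to wherever the most crime was *detected*.

```python
# Two areas with the SAME underlying crime rate, but unequal starting patrols.
true_crime_rate = {"area_A": 10, "area_B": 10}
patrols = {"area_A": 2, "area_B": 1}

for year in range(3):
    # Assumption: detected crime = true rate x patrol level.
    detected = {a: true_crime_rate[a] * patrols[a] for a in patrols}
    print(f"Year {year}: detected = {detected}")
    # Reallocate: the area with the most *detected* crime gets another patrol.
    hotspot = max(detected, key=detected.get)
    patrols[hotspot] += 1

# Detected crime in area_A keeps climbing year after year,
# even though the true crime rate never changed in either area.
```

The simulation shows how the system amplifies its own starting bias: area_A ends up with ever more patrols and ever more detected crime, while area_B, with an identical true crime rate, is increasingly ignored.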
The US tax office started using a very expensive AI system for detecting tax fraud. Unfortunately the system made mistakes, and people were accused of tax fraud while in reality nothing was wrong. This led to severe human problems: lawsuits, high penalties, loss of jobs and homes, and even cases of people committing suicide.
The civil servants soon found out the system was not performing well, but for a long time they covered up the facts. They were afraid of losing their jobs if they had to tell management that their very expensive system was in fact underperforming.
Medical doctors can detect an illness on a photo or an MRI scan. But some AI systems can now do the work of 12 doctors at the same time, and do better at detecting an illness. The same is true for lawyers studying old cases when looking for arguments to build their current cases.
In fact, medical doctors and lawyers are now greatly assisted by powerful AI systems for single tasks. This gives them the ability to focus on more complex tasks and have more time for interaction with their clients.
An example where AI not only does the research but also generates the result is the writing of articles. Many Wikipedia articles are written with the help of AI. And a system called GPT-3 can write complete articles based on some keywords as input.
AI is here to stay and is already influencing our daily lives. When we leave for work we use our navigation and trust it to find the fastest route through traffic jams. Meanwhile we listen to music on Spotify, where algorithms suggest songs based on our preferences.
When you communicate with your friends on Facebook or use LinkedIn for your work-related contacts, algorithms decide what appears on your timeline, while most of the information is never shown. Are these systems working in your best interest? Or in the interest of the companies delivering these free services?
With every system or service you should ask yourself: what goals does this system want to achieve? And how do its owners make (or save) money with it?
You can read more about designing and investigating AI systems with ethics and fairness in mind here.