AI Ethics: dealing with discrimination and bias

Many algorithms used in AI are black box models. We do not know what happens inside these black boxes. How do we deal with unexpected and unwanted discrimination and bias?

AI Uses Statistics

In many cases AI is based on machine learning: an algorithm uses mathematical models to achieve a certain goal. We have to keep in mind that these goals are chosen by people; the AI only makes an effort to achieve them.

Goals can be:

  • trying to explain a current situation by looking at the data
  • making a prediction about a future situation based on historical data

In all cases AI is not telling the truth; it makes assumptions based on the training data. Most assumptions hold for a certain percentage of the outcomes that occur, and not every possible outcome will actually have occurred in the data.

The algorithm makes a prediction based on a statistical model and input data. Both the model and the data are chosen by the developer. This is done with the best intentions (we hope), but it can create side effects, namely discrimination and bias, because of the assumptions made during the development process and the training of the algorithm.
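
A minimal sketch of how this happens. The data and labels below are invented, but the point is general: a model fitted to skewed historical data reproduces that skew in its predictions.

```python
# Hypothetical example: past decisions favoured group 1, so the label is
# correlated with group membership. A model fitted to this data inherits
# that correlation as an assumption.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)               # protected attribute (0 or 1)
skill = rng.normal(0, 1, n)                 # legitimate feature
label = ((skill + 0.8 * group + rng.normal(0, 0.5, n)) > 0.4).astype(int)

X = np.column_stack([group, skill])
model = LogisticRegression().fit(X, label)  # "fit the model to the data"

# The fitted model now assigns different positive rates per group,
# even though skill is distributed identically in both groups.
pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: positive rate {pred[group == g].mean():.2f}")
```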

How to avoid bias and discrimination?

In the design phase, it is possible to add design principles to avoid discrimination and bias.

  • work with domain experts. A combination of experts in the field where the AI system will be used and data scientists who know how to develop the system is essential. A good result starts with asking the right questions in the design phase.
  • work in a diverse team. A team with people of different genders and cultural backgrounds helps to understand the many situations the AI system can encounter in the future.
  • test on a small scale. If something goes wrong, you can detect it at an early stage, before many people are involved or affected (see the sketch after this list).
  • scale up. Only when the model works well, scale up bit by bit and check whether the system keeps behaving as expected.
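
A sketch of what such a small-scale test could look like: compare the rate of positive outcomes per group in the pilot before scaling up. The data, names, and threshold here are illustrative; the 0.8 cut-off is the common "four-fifths" rule of thumb, not a law of nature.

```python
# Small-scale test: compute disparate impact on pilot predictions before
# scaling up. All values below are invented pilot data.
import numpy as np

def disparate_impact(pred, group):
    """Ratio of positive-outcome rates: unprivileged (0) / privileged (1)."""
    return pred[group == 0].mean() / pred[group == 1].mean()

pred  = np.array([1, 0, 0, 1, 0, 1, 1, 0, 1, 1])   # pilot decisions
group = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])   # group membership

di = disparate_impact(pred, group)
print(f"disparate impact: {di:.2f}")
# A common rule of thumb flags values below 0.8: investigate before scaling up.
```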

Human-in-the-Loop

When the system is in a live environment, keep the human in the loop. This means involving humans to check whether what the AI system is doing makes sense, and whether it keeps making sense as the system learns from new data.
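
One common way to implement this (a sketch, not the only pattern): automate only the confident predictions and route borderline cases to a human reviewer. The model, the threshold, and the review function below are all placeholders.

```python
# Human-in-the-loop sketch: defer low-confidence predictions to a human.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X, y)     # stand-in for the live model

def ask_human_reviewer(x):
    # Placeholder: a real system would enqueue the case in a review tool.
    print(f"deferred to human review: {x}")
    return None

def decide(x, threshold=0.9):
    proba = model.predict_proba(x.reshape(1, -1))[0]
    if proba.max() >= threshold:
        return int(proba.argmax())         # confident: automated decision
    return ask_human_reviewer(x)           # borderline: human in the loop

print(decide(np.array([2.0, 2.0])))        # clear case, decided automatically
print(decide(np.array([0.01, -0.01])))     # borderline case, deferred
```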

XAI: Explainable AI

Instead of the black box models mentioned above, eXplainable AI aims to use models and present results that can be understood by humans.

Most AI systems have input data and output data that we can investigate, but it is not clear what happens inside the box. And when you do not know the goals of the algorithm, it is difficult to measure its performance.

In eXplainable AI, or XAI, the developers of the model explain the goals of the algorithm, as well as the models and training data used. Using data to train an algorithm is called fitting the model to the data.
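
As an illustration of what "explaining the model" can mean in practice, the sketch below uses permutation importance from scikit-learn: it measures how much the model's score drops when one feature is shuffled. The feature names and data are invented.

```python
# One XAI technique: permutation importance reveals which inputs actually
# drive the predictions, instead of leaving the model a black box.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 3))                   # income, age, noise
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # noise column is irrelevant

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, imp in zip(["income", "age", "noise"], result.importances_mean):
    print(f"{name}: {imp:.3f}")
# The irrelevant "noise" feature scores near zero, making the model's
# actual behaviour visible to humans.
```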

Self eXplainable AI

One step further is the capability of the algorithm itself to explain what it is trying to achieve. This is called Self eXplainable AI, or SXAI.

AI Fairness 360

IBM is working on what they call Trusted AI. AI Fairness 360 is an open source toolkit from IBM. It can help a development team examine, report, and mitigate discrimination and bias in machine learning models throughout the AI application lifecycle.

As this is a toolkit, the developers still have to learn how to use it wisely.

[Figure: model of the AI Fairness 360 toolkit]
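
A hedged sketch of how the toolkit can be used. The dataset, column names, and group definitions below are invented; the calls follow the aif360 Python package.

```python
# Sketch with AI Fairness 360 (pip install aif360): measure bias in a
# toy dataset and apply one of the toolkit's mitigation algorithms.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Invented numeric data: 'sex' is the protected attribute, 'hired' the label.
df = pd.DataFrame({
    "sex":   [0, 0, 0, 0, 1, 1, 1, 1],
    "score": [5, 7, 6, 4, 6, 8, 7, 5],
    "hired": [0, 1, 0, 0, 1, 1, 1, 0],
})
dataset = BinaryLabelDataset(favorable_label=1, unfavorable_label=0,
                             df=df, label_names=["hired"],
                             protected_attribute_names=["sex"])

priv, unpriv = [{"sex": 1}], [{"sex": 0}]
metric = BinaryLabelDatasetMetric(dataset, unprivileged_groups=unpriv,
                                  privileged_groups=priv)
print("disparate impact:", metric.disparate_impact())  # 1.0 means parity

# Mitigation example: reweigh training examples so both groups carry
# comparable weight before the model is fitted.
rw = Reweighing(unprivileged_groups=unpriv, privileged_groups=priv)
reweighed = rw.fit_transform(dataset)
print("instance weights:", reweighed.instance_weights)
```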

Cape Privacy

Input data for machine learning algorithms can contain privacy-sensitive information. Cape Privacy is a data privacy toolbox for collaborative data science and machine learning. It can remove privacy-sensitive data while keeping a dataset that is still useful for creating overviews during training of the AI model.
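
Cape Privacy ships its own policy-driven API; the sketch below only illustrates the underlying idea with plain pandas: drop direct identifiers, tokenize quasi-identifiers, and coarsen detail, so the dataset stays usable for aggregate overviews. All column names and values are invented.

```python
# Generic illustration (not the Cape Privacy API) of de-identifying a
# dataset before it is used for training or shared with collaborators.
import hashlib
import pandas as pd

df = pd.DataFrame({
    "name":   ["Alice", "Bob"],
    "email":  ["alice@example.com", "bob@example.com"],
    "age":    [34, 41],
    "income": [52000, 61000],
})

# Drop direct identifiers entirely.
df = df.drop(columns=["name"])

# Tokenize quasi-identifiers: a stable hash keeps rows linkable for
# aggregation without exposing the original value.
df["email"] = df["email"].map(
    lambda v: hashlib.sha256(v.encode()).hexdigest()[:12])

# Coarsen numeric detail: age bands instead of exact ages.
df["age"] = (df["age"] // 10) * 10

print(df)
```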

Ethics Bounty Program

The Ethics Bounty Program can be compared with a Bug Bounty Program in software. Wikipedia defines the latter as:

A bug bounty program is a deal offered by many websites, organizations and software developers by which individuals can receive recognition and compensation for reporting bugs, especially those pertaining to security exploits and vulnerabilities.

In the same way we can describe the Ethics Bounty Program as:

The Ethics Bounty Program is a deal offered by many websites, organizations and AI system developers by which individuals can receive recognition and compensation for reporting ethical issues in AI systems, especially those pertaining to discrimination and bias in algorithms and input data.