Bias and discrimination can be ‘hard-wired’ into AI

The British Computer Society (BCS) has told the Committee on Standards in Public Life (CSPL) that a lack of diversity in teams developing artificial intelligence (AI) can lead to in-built bias and discrimination in its decisions.

The comments are included in a report published by CSPL which examines whether the existing frameworks and regulations around machine learning are sufficient to ensure high standards of conduct are upheld as technologically assisted decision-making is adopted widely across the public sector.

According to Dr Bill Mitchell OBE, Director of Policy at BCS: “Lack of diversity in product development teams is a concern as non-diverse teams may be more likely to follow practices that inadvertently hard-wire bias into new products or services.” A related problem is that sampling errors in training data can also produce discriminatory outcomes.

This has been shown in a number of areas already. For example, in late 2018 it was reported that Amazon had scrapped a machine-learning algorithm that had been designed to eliminate gender bias in its recruiting processes; trained on data from existing (generally male) successful candidates, the algorithm “learned” the very discriminatory behaviour it was designed to eliminate.

Even more worrying were the reports in 2019 that US law enforcement agencies had been trialling AI with very mixed results. For instance, the Los Angeles Police Department stopped using its “chronic offender” database, designed to monitor people considered at high risk of committing violent crimes, following many claims about its inaccuracy and a damning audit which pointed out that “the majority of people identified as chronic offenders had few, if any, actual contacts with the police.”

The old computing adage “garbage in, garbage out” is significant here: if AI algorithms learn from flawed data, they will produce flawed results. Dr Mitchell puts it like this: “if you put poor, partial, flawed data into a computer it will mindlessly follow its programming and output poor, partial, flawed computations. If we allow AI systems to learn from ‘garbage’ examples, then we will end up with a statistical-inference model that is really good at producing ‘garbage’ inferences.”
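
To make this concrete, here is a minimal, purely illustrative sketch in Python. It is not the Amazon system or any real recruiting tool; the feature names and data are invented. It simply shows that a model trained on historically biased hiring decisions reproduces that bias in its learned parameters.

```python
# Illustrative sketch only: hypothetical data showing how a model trained on
# historically biased hiring decisions reproduces that bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Features: a "skill" score (what we would like the model to rely on)
# and gender (0 = male, 1 = female), which ought to be irrelevant.
skill = rng.normal(size=n)
gender = rng.integers(0, 2, size=n)

# Historical labels: past recruiters favoured skill but also penalised women,
# so the training data itself encodes the discrimination.
hired = (skill - 1.0 * gender + rng.normal(scale=0.5, size=n)) > 0

model = LogisticRegression().fit(np.column_stack([skill, gender]), hired)

# The learned coefficient on gender comes out strongly negative: the model
# has "learned" the historical bias rather than eliminated it.
print("coefficients (skill, gender):", model.coef_[0])
```

Nothing about the algorithm has gone “wrong” here; it has faithfully modelled the garbage it was given.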

Alex Guillen, Technology Strategist at Insight, makes a similar point. "Regardless of how AI is used in the public sector, it needs to be treated as an employee – and given the training and information it needs in order to do its job." He also makes an important point about accountability: "Helping public sector workers make more informed decisions, or act faster, will not only improve public services. It will help satisfy the 69 percent of people polled in the report who said they would be more comfortable with public bodies using AI if humans were making the final judgement on any decision."

The ethical issues surrounding the use of AI go beyond the need to ensure that AI systems are fair and accountable. I propose an ethical framework for AI and other forms of digital technology, containing the following principles:

  • Fairness: An AI system should produce results that are fair and equitable to all who use or are affected by the system; in other words, it should not produce biased outputs

  • Accountability: There should always be a human being of appropriate seniority who can be held accountable for the decisions an AI system makes; it should always be possible for people faced with “Computer says No” to have a human being to appeal to

  • Transparency: The way that an AI system reaches its conclusions should be traceable; this will inevitably be a challenge for machine-learning systems, which should therefore lay down an audit trail of how decisions change based on the outcomes of previous decisions (a simple sketch of such an audit trail follows this list)

  • Autonomy: The decisions that AI systems reach should not undermine the ability of individual humans to take their own autonomous decisions about their lives; people should always be allowed a choice beyond the outputs of an AI system

  • Safety: AI systems should be designed to preserve the physical, mental and emotional safety of the people who use them or are affected by them

  • Privacy: AI systems should be designed so that they do not allow personal data to be stolen or misused in any way

  • Security: AI systems should be proof against tampering that might cause them to fail to comply with any of these principles
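
By way of illustration only, the sketch below shows one way a system might lay down the audit trail described under Transparency. The function, field names and the eligibility-model example are hypothetical and do not refer to any real public-sector system.

```python
# Minimal sketch of a decision audit trail (all names are hypothetical):
# every automated decision is written to an append-only log so that a human
# reviewer can later trace how and why a given conclusion was reached.
import json
import time
import uuid

AUDIT_LOG = "decisions.log"

def record_decision(model_version: str, inputs: dict, output, explanation: str) -> str:
    """Append one decision record and return its id for later appeals."""
    record = {
        "decision_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "explanation": explanation,
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record["decision_id"]

# Example: a hypothetical benefits-eligibility model records enough context
# for a human of appropriate seniority to review or overturn the decision.
decision_id = record_decision(
    model_version="eligibility-model-2.3",
    inputs={"applicant_age": 34, "household_income": 18500},
    output="refer_to_caseworker",
    explanation="income below threshold but incomplete employment history",
)
print("logged decision", decision_id)
```

A record like this also supports the Accountability principle: the decision id gives the person affected something concrete to quote when they appeal to a human being.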

This ethical framework is suitable not only for AI but also for other digital technologies such as virtual reality and even brain-computer interfaces. And it is important that developers accept the need for some form of ethical framework when developing new applications. Without such guidelines, it is hard to know whether the risks that digital technology poses to individuals, and indeed to the whole of humanity, will be adequately managed.

This article was originally published in Business Reporter
