An ethical framework for emerging technologies

Emerging technologies such as AI present enormous opportunities for humanity. But they may also come with substantial risks, some of which may as yet be hard to detect. Agreeing an ethical framework to guide the development of applications in an area of emerging technology seems sensible: such a framework won’t eliminate risks, but it could well reduce their likelihood and potential impact.

Ethics vs compliance

An ethical framework is a set of principles that can provide a solid base for the development of applications that are consistent with the accepted social norms and moral principles of the society in which they are developed. In the UK at least, these include honesty, fairness and human rights. In other words, ethical frameworks are about “doing good”, or perhaps more accurately “not doing harm”.

In business, ethics involves how an organisation and its employees conduct themselves. It’s not necessarily the same as compliance (with regulations or standards): you can be compliant and still be unethical, for instance. And in an area of emerging technology there may well not be regulations and standards available to comply with. At this stage, all an organisation has to guide it may be an ethical framework.

Ethics for technology

If an ethical framework is to be useful in an area of emerging technology, it needs to be accepted prior to any business activity that uses the technology. It’s needed when the initial business case for a new product or service that uses an emerging technology is being developed. It’s needed during the development phases. And it’s needed when the final product or service is rolled out, or when it is bought by a third party. It’s not something that can be bolted on as an afterthought.

You might argue that any product or service in an emerging area of technology needs to be “ethical by design”. Facebook might look very different today had it taken this approach, for instance. Admittedly it would probably be rather smaller, but profit is generally not an excuse for unethical behaviour. And Cambridge Analytica would probably still be operating successfully.

An example framework: Artificial intelligence

Obidos Consulting is proposing an ethical framework for the development and implementation of AI technology that contains nine fundamental principles:

Fairness. Any AI application needs to be fair to people who use it or who are impacted by it. It must be free of bias or discrimination, whether intended or not. This means, for instance, that any data used to “train” it must be representative of the population the application will serve, and not an incomplete data set that would produce skewed results.
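
As an illustration of the kind of check this principle implies, the sketch below compares the make-up of a training set against a reference population and flags under- or over-represented groups. It is a minimal sketch in Python; the attribute name, reference shares and tolerance are hypothetical examples, not part of any proposed standard.

    # A minimal sketch of a training-data representativeness check.
    # The attribute name and reference shares below are hypothetical.

    from collections import Counter

    def representation_gaps(records, attribute, reference_shares, tolerance=0.05):
        """Compare each group's share of the training data against a
        reference population share; flag groups that deviate by more
        than `tolerance` (absolute difference in proportion)."""
        counts = Counter(r[attribute] for r in records)
        total = sum(counts.values())
        gaps = {}
        for group, expected in reference_shares.items():
            observed = counts.get(group, 0) / total
            if abs(observed - expected) > tolerance:
                gaps[group] = (observed, expected)
        return gaps

    # Example: a training set heavily skewed towards younger applicants.
    training = ([{"age_band": "18-34"}] * 700
                + [{"age_band": "35-64"}] * 250
                + [{"age_band": "65+"}] * 50)
    print(representation_gaps(training, "age_band",
                              {"18-34": 0.30, "35-64": 0.50, "65+": 0.20}))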

Accountability. The developers of an AI system should not be able to blame the system they have developed if things go wrong. A legal entity - an organisation or an individual - must have clear accountability.

Transparency. It must be possible, and ideally simple, to understand how the AI system operates, for instance how it makes decisions. Because AI systems can “learn”, the way they make decisions could very quickly become obscure unless some form of decision audit trail is maintained, making it possible to track how previous decisions have influenced current ones.
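
One simple way to keep such an audit trail is to record each decision, together with its inputs and the model version that produced it, in an append-only log. The sketch below assumes a JSON-lines file; the field names and the choice to hash the inputs are illustrative assumptions only.

    # A minimal sketch of a decision audit trail using an
    # append-only JSON-lines log. Field names are illustrative.

    import hashlib
    import json
    from datetime import datetime, timezone

    def log_decision(log_path, model_version, inputs, decision, explanation):
        """Append one decision record so that past decisions can be
        traced and their influence on later behaviour examined."""
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_version": model_version,
            # Hash the inputs so the record is tamper-evident without
            # necessarily storing raw personal data in the log itself.
            "input_hash": hashlib.sha256(
                json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
            "decision": decision,
            "explanation": explanation,
        }
        with open(log_path, "a") as log:
            log.write(json.dumps(record) + "\n")
        return record

    # Hypothetical example: logging one credit decision.
    log_decision("decisions.jsonl", "credit-model-v3",
                 {"income": 32000, "term_months": 36},
                 "declined", "debt-to-income ratio above threshold")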

Security. AI systems must be secure against tampering, especially if they are being used on tasks that may affect individual people’s lives. This means (using cyber security’s “CIA” model) that they must be:

  • Confidential: Their outputs and any data they hold must be available only to authorised people

  • Protected in their integrity: Their algorithms and the data they hold must be safe from interference and alteration by unauthorised people

  • Available: They must be secure from attempts to prevent them from undertaking their legitimate activities, and the data and outputs they provide must be available to authorised users

Agility. The world of AI is fast moving. Any system needs feedback loops so that the way it operates can be improved over time or changed to take account of changed circumstances. This is especially true of an AI system that uses data from the past to project actions into the future, because the past does not always indicate what the future will look like. Systems should not be built with the assumption that they will always operate in the same way.
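
A feedback loop of this kind can be as simple as comparing recent outcomes against the system’s historical performance and flagging it for review when the two diverge. The sketch below illustrates the idea; the window size and drift threshold are hypothetical choices, not recommended values.

    # A minimal sketch of a feedback loop: flag the model for review
    # when its recent hit rate falls well below its historical baseline.
    # The window size and threshold are illustrative assumptions.

    def needs_review(outcomes, baseline_accuracy, window=100, max_drop=0.10):
        """`outcomes` is a list of booleans (True = the prediction
        proved correct), most recent last. Return True when accuracy
        over the latest window has drifted below the baseline."""
        recent = outcomes[-window:]
        if len(recent) < window:
            return False  # not enough fresh evidence yet
        recent_accuracy = sum(recent) / len(recent)
        return recent_accuracy < baseline_accuracy - max_drop

    # Example: a model that was 90% accurate historically but has
    # slipped to 75% over the last 100 cases.
    history = [True] * 75 + [False] * 25
    print(needs_review(history, baseline_accuracy=0.90))  # True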

Diligence. During development and implementation, anyone associated with building or managing the system should take due care to ensure that all relevant factors that could cause harm to others are taken into account. For instance, potential threats to people’s privacy or physical safety should be identified and mitigated. Of course, not all threats will be identifiable. In such cases a failure to predict and mitigate a threat won’t be an ethical failure – although it may be a business risk. But people who own AI systems should take care not to be negligent in how they look for and map out potential harms arising from the system.

Autonomy. AI systems should not prevent people from taking their own decisions about their lives: they should not reduce the “agency” that people have. This isn’t to say that an AI system shouldn’t deny someone a bank loan, for example. But it should not be able to prevent people from going about their legitimate business or making their own choices about how to behave.

Safety. AI systems should always be designed so that they avoid causing physical or mental harm to users or people impacted by them. Any emotional harm should be avoided unless it is an inevitable result of a lawful decision made by the AI system (such as the refusal of a bank loan), in which case it should be minimised as far as possible.

Privacy. AI systems should be developed and operated in line with the principles surrounding the protection of personal data. Note that the harms AI can cause go far beyond privacy breaches, and that privacy, while important, should not be used as a proxy for the harms that AI systems can create.

If organisations that are engaging with AI consider each of these principles, then the potential for harms from AI will be reduced substantially. This should be true whether they are planning to develop new AI applications or planning to use those developed by third parties.

Originally published in Business Reporter
