
Keeping AI Ethical in Business

Oxford Saïd study offers a guide to maintaining ethical boundaries in the use of artificial intelligence


Like it or not, AI is playing an increasingly important role in business, informing strategy and enhancing operational performance. While much of this is positive, there are ethical pitfalls, and organizations should use these early days of AI adoption to guard against unprincipled misuse of the technology.

To assist in this effort, a team of researchers from Oxford University’s Saïd Business School has developed a clear and understandable managerial framework for ensuring that human concepts of right and wrong are applied in steering machine-based AI technologies towards ethical outcomes.

In a recent report, co-sponsored by the Oxford Future of Marketing Initiative and the International Chamber of Commerce, the Saïd researchers propose an action plan that identifies a hierarchical set of principles, assesses the risks of unethical misuse, and finally advises on practical steps to ensure the ethical application of AI.

Identifying Ethical Principles

This is tricky. Ethical principles tend to be intangible, grand ideas with rather fuzzy boundaries, which makes them easy to grasp in general but difficult to pin down. For example, should ethical AI provide ‘justice’ or ‘fairness’? Are they the same thing?

With ‘ethical’ at the highest level of the hierarchy, the researchers identify two overriding principles at the next level: responsibility, which refers to executing one’s duties faithfully in relation to AI-driven processes; and accountability, which reflects one’s ability and willingness to explain and justify actions and decisions with reference to the outcomes of AI-related activities.

Taken together, responsible means and accountable outcomes will ensure the proper functioning of the AI systems that organizations develop, set within applicable regulatory frameworks. They will also demonstrate the organization’s commitment to the ethical use of AI.

At the third layer in the hierarchy of principles, responsibility is underpinned by being human-centric, fair, and harmless. At the fourth layer, these three somewhat vague terms are supported by more easily measurable principles. For example, ‘human-centric’ is achieved via transparent, intelligible, and sustainable systems, and also includes the concept of beneficence. ‘Fair’ processes are those that can be classified as just, inclusive, and non-discriminatory. ‘Harmless’ systems are safe, robust, and private.

For accountability, the supporting principles are simpler: proactive leadership, reporting, contesting, correcting, and liability. Proactive leadership, as it relates to accountability, is not just about reacting when something goes wrong or being accountable to stakeholders; it also involves a forward-looking focus on leading the business in the new space of AI development.
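As an illustration only, this four-layer hierarchy can be written down as a simple nested data structure. The sketch below is a hypothetical rendering of the principles named above; it is not a schema taken from the report.

```python
# Hypothetical sketch: the hierarchy of ethical AI principles described in
# the article, expressed as a nested Python dictionary. The layer names
# follow the text; the structure itself is illustrative.

ETHICAL_AI_PRINCIPLES = {
    "ethical": {                              # top of the hierarchy
        "responsibility": {                   # responsible means
            "human-centric": ["transparent", "intelligible", "sustainable", "beneficence"],
            "fair": ["just", "inclusive", "non-discriminatory"],
            "harmless": ["safe", "robust", "private"],
        },
        "accountability": {                   # accountable outcomes
            "supporting": ["proactive leadership", "reporting", "contesting",
                           "correcting", "liability"],
        },
    }
}


def fourth_layer(principles: dict) -> list[str]:
    """Flatten the lowest, most measurable layer of the hierarchy."""
    leaves = []
    for branch in principles["ethical"].values():
        for values in branch.values():
            leaves.extend(values)
    return leaves


if __name__ == "__main__":
    print(fourth_layer(ETHICAL_AI_PRINCIPLES))
```

A structure like this makes it easy to check, for any AI initiative, which measurable fourth-layer principles have actually been addressed.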

Assessing Ethical Risks

The study defines three ‘risk buckets’:

1. Data. The ethical risk here is that the selection of data may be discriminatory or invade the privacy of individuals.

2. Algorithms. Wrongly conceived, the set of instructions at the heart of AI can be unduly influenced by the biases of those developing the algorithms.

3. Business use. This is perhaps the most blatant misuse of AI, where the technology is used to achieve unethical business or anti-social goals.

Applying AI Ethically

To commit firmly to the ethical deployment of AI, organizations should make a statement of intent defining their ethical AI values, policies, and practices. Allied to this should be an implementation plan that examines each application of AI to identify potential risks associated with data, algorithms, and business use; includes management and mitigation strategies for each risk; and documents all actions and decisions taken in managing and mitigating those risks.
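One way to make such a plan concrete is a per-application risk register that records risks in each of the three ‘risk buckets’, the chosen mitigation, and the decisions taken. The sketch below is a hypothetical illustration; the field names and example values are assumptions, not prescribed by the report.

```python
# Hypothetical sketch of a per-application ethical-AI risk register entry.
# The three risk categories mirror the 'risk buckets' above; field names
# and example content are illustrative assumptions only.

from dataclasses import dataclass, field


@dataclass
class Risk:
    category: str        # "data", "algorithms", or "business use"
    description: str     # the potential ethical harm identified
    mitigation: str      # management / mitigation strategy for this risk


@dataclass
class AIApplicationReview:
    application: str                                        # the AI use case under review
    risks: list[Risk] = field(default_factory=list)
    decision_log: list[str] = field(default_factory=list)   # actions and decisions taken

    def log(self, entry: str) -> None:
        """Record an action or decision so outcomes remain accountable."""
        self.decision_log.append(entry)


# Example usage with made-up content:
review = AIApplicationReview(application="customer churn prediction")
review.risks.append(Risk(
    category="data",
    description="training sample under-represents some customer groups",
    mitigation="re-balance the dataset and review privacy of included fields",
))
review.log("Data re-balancing approved by the ethics owner")
```

Keeping a log of this kind per application supports the accountability principles above, since actions and decisions can be reported, contested, and corrected later.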

As use of the technology increases, often in new ways, it will be necessary to monitor and regularly update this implementation plan, ensuring the organization can quickly identify problem areas as they arise and take preventive action.

With the ethics surrounding AI on the public’s and governments’ radar, it is important that businesses take action to stay ethical, both for altruistic reasons and in the hope of staving off over-regulation that could inhibit access to the many AI processes and resources that can greatly improve business performance.

………………………………………………………………………………………………………

Access the full report: ‘Ethics in AI in Business’, Felipe Thomaz, Natalia Efremova, Francesca Mazzi, Greg Clark, Ewan MacDonald, Rhonda Hadi, Jason Bell and Andrew Stephen, 2021. Saïd Business School, University of Oxford.


The Saïd Business School is Europe’s fastest growing business school. An integral part of the University of Oxford, it embodies the academic rigour and forward thinking that has made Oxford a world leader in education.




