Putting Ethical AI into Practice

Thursday, 15 February 2024
By Brian Ball


Artificial Intelligence (AI) is booming. After the pioneering (symbolic) work of the 20th century, a ‘second wave’ of (sub-symbolic) AI development has occurred in the 21st. Drawing on big data and machine learning techniques, the new AI systems – including generative models such as ChatGPT and Midjourney – are now ubiquitous. The opportunities afforded by these systems are real. But there are risks – and indeed, actual harms (see here for a recent report on the current state of AI).

This is why the Lord Mayor’s Ethical AI Initiative is so welcome. The Lord Mayor of London, Alderman Professor Michael Mainelli, has been working with delivery partners (CISI, the Chartered Institute for Securities & Investment, and BCS, the Chartered Institute for IT) to provide courses, leading (upon successful completion) to certificates, in ethical AI development and (financial sector) deployment – deployment certificates in other domains of application may follow. The aim is to have these courses act as standards that will support ethical AI practice within the City of London, and far beyond.

It is important to recognise, however, that such ethical practice requires an ecosystem of support. In my view, this ethical AI environment must involve three levels: legal regulation, corporate governance, and professional practice.

At the uppermost level, there must be a regulatory framework enshrined in law. Such frameworks are being developed across a number of jurisdictions, and they can differ from one to the next. For organisations operating out of London, for instance, both UK and EU regulations are likely to prove crucial: yet in the UK, this framework will be principles-based and context-sensitive, while the EU is taking a more rules-based approach, targeting AI as a technology and articulating risk levels associated with various types of application.

The Lord Mayor’s certificate courses are designed to support those working within the more fluid UK legal context – in part by serving to establish independent professional standards – though they will, without doubt, help developers and deployers of AI systems to navigate the difficult terrain of ethical AI practice in other jurisdictions as well. The key point for present purposes, however, is simply to note that appropriately designed regulatory constraints can facilitate the implementation of ethical AI (through the simple requirement of legal compliance – provided appropriate enforcement methods are in place).

At the intermediate level, the tools of corporate governance can be leveraged to support ethical AI practice. In particular, appropriate corporate structures, policies, and procedures are needed to enable individuals to follow best practice. For example, AI ethics committees might be appropriate structures in certain contexts (see here), helping with the production of organisational policies, or being consulted within a variety of procedures. These corporate governance tools will facilitate regulatory compliance (and thereby ethics, in appropriately oriented jurisdictions), but they will also typically go beyond this minimum threshold, allowing organisations using AI to put their ethical values and strategies into practice in concrete and consistent ways.

Finally, at the lowest, most personalised level, individual AI practitioners and their teams will need to follow best practice in ethical AI. This is easier said than done, however – and so, together with colleagues, I have been thinking about what good practice might look like in this space, and how it can be implemented. There is a growing recognition that, when it comes to AI ethics, we need to move ‘from what to how’, and a few points stand out from my perspective.

First, a variety of areas of expertise, deriving from different academic disciplines, are needed to implement ethical AI development. Computer science skills are of course a component, as is some philosophical understanding of ethics. Other areas of expertise may also be useful, such as legal, medical, or social scientific knowledge, depending on the application.

Second, these areas of expertise need to be integrated within a ‘value-sensitive’ design process, which affords opportunities for recognition of, and responsiveness to, relevant ethical considerations.

Third, that process is itself a loop, or cycle – further reinforcing the need for an entire ecosystem of support for ethical AI (see here and here for further elaboration of these points).

It is therefore great to see the opportunity for ethics training for computing professionals afforded by the Lord Mayor’s ethical AI developer certificate – as well as its potential for integration within the broader ecosystem comprising (amongst other things) the deployer certificate. The introduction of these certificates constitutes an important step towards putting ethical AI into practice.

About Dr Brian Ball

Dr Brian Ball is Associate Professor of Philosophy, and AI and Information Ethics Research Lead, at Northeastern University London. A Senior Fellow of the Higher Education Academy, he has been instrumental in developing both the MA Philosophy and Artificial Intelligence and (the predecessor of) the MSc AI and Ethics. He is currently conducting research on misinformation (in connection with the Royal Society-supported PolyGraphs project), which has been independently identified as posing the most significant short-term threat from AI.
