The debate around trust and ethics in Artificial Intelligence has been an ongoing theme for the past few years – and in all likelihood will continue to be so for many more to come. Attempts to address the issue range from international efforts to come up with de jure standards to de facto offerings from real-world practitioners. But the great thing about standards is, of course, that there are so many to choose from.
Last week, as part of London Tech Week, engineering giant Rolls-Royce unveiled what sounds like a highly promising initiative to, as it puts it, “help gain society’s trust” of AI and in the process “accelerate the next generation of industrialisation”.
The announcement is built around two main elements. The first is an AI ethics framework that organizations across multiple sectors can use to make ethical decisions about deploying AI in both critical and non-critical applications. The second is a five-layer checking system focused on the output of algorithms, intended to prevent biases from developing undetected and to build confidence that those outputs are trustworthy.
In a significant move, both the framework and the checking system will be published in full under a Creative Commons licence later this year on the Rolls-Royce.com website. Warren East, CEO of Rolls-Royce, says the firm believes its work has a wider applicability that will be beneficial across sectors:
I believe the impact of our work has the potential to go far beyond just our own field of expertise into broader industrial AI applications and well beyond industrial application…During our peer review process with experts from big tech companies, academia, automotive and pharmaceutical industries, we were genuinely surprised to learn that our work – which began as a way to address an internal challenge – stands at the leading edge of the practical and ethical application of AI in a critical industrial context. And as we went on, we came to recognise that its implications could extend way beyond that.