In fact, the World Economic Forum published an article recently about rethinking risk management and compliance in an age of AI. As algorithms run more business processes, it makes sense that we want to audit their performance and fairness. But what is really tricky about AI algorithms is that they learn from data: their outputs can change over time based on their interactions with the world, so the audits can’t be a one-time, static exercise; they need to be ongoing.
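To make the “ongoing” part concrete, here is a minimal sketch of the kind of recurring check an audit pipeline might run: compare the distribution of the model’s recent prediction scores against a baseline captured at deployment and flag drift. The data, the 0.2 threshold, and the use of a population stability index here are illustrative assumptions, not a standard.

```python
import numpy as np

def population_stability_index(baseline, recent, bins=10):
    """Compare two score distributions; larger values mean more drift."""
    edges = np.linspace(0.0, 1.0, bins + 1)  # assumes scores are probabilities in [0, 1]
    b_frac = np.histogram(baseline, bins=edges)[0] / len(baseline)
    r_frac = np.histogram(recent, bins=edges)[0] / len(recent)
    # Clip to avoid division by zero / log of zero on empty bins
    b_frac = np.clip(b_frac, 1e-6, None)
    r_frac = np.clip(r_frac, 1e-6, None)
    return float(np.sum((r_frac - b_frac) * np.log(r_frac / b_frac)))

# Hypothetical scores: what the model produced at deployment vs. this week
baseline_scores = np.random.default_rng(0).beta(2.0, 5.0, size=5_000)
recent_scores = np.random.default_rng(1).beta(3.5, 4.0, size=5_000)

psi = population_stability_index(baseline_scores, recent_scores)
if psi > 0.2:  # illustrative threshold, not a regulatory standard
    print(f"ALERT: prediction drift detected (PSI={psi:.3f}) -- re-audit the model")
else:
    print(f"OK: PSI={psi:.3f}")
```

A check like this would run on a schedule against live traffic, which is what separates an ongoing audit from a one-time sign-off.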
Harvard Business Review published an article a couple of years ago about “Why We Need To Audit AI Algorithms”, and I think this quote from the article sums up the problem nicely:
Indeed, we see many succumbing to what Microsoft’s Kate Crawford calls “data fundamentalism” — the notion that massive datasets are repositories that yield reliable and objective truths, if only we can extract them using machine learning tools. A more nuanced view is needed. It is by now abundantly clear that, left unchecked, AI algorithms embedded in digital and social technologies can encode societal biases, accelerate the spread of rumors and disinformation, amplify echo chambers of public opinion, hijack our attention, and even impair our mental wellbeing.
Wharton published a piece last year stating that Audits Are The Way Forward For AI Governance. I think this is the right path, and it is why we will see more companies like Superintelligent, focused on monitoring and auditing various types of machine learning and AI models. These tools will become integrated into AI workflows and help us make sure societal rules and values are encoded in the algorithms. We have rules we expect our employees to follow; as AI systems take over more and more of our work, we should expect them to be held to similar standards.
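As one small example of what “encoding societal rules” can look like in practice, a sketch like the one below checks whether positive decisions are handed out at similar rates across two groups. The loan-decision data, group labels, and 10% tolerance are made up for illustration; real fairness audits use richer metrics and legal context.

```python
import numpy as np

def demographic_parity_gap(decisions, groups):
    """Absolute difference in positive-decision rate between groups A and B (0 = parity)."""
    decisions = np.asarray(decisions, dtype=float)
    groups = np.asarray(groups)
    rate_a = decisions[groups == "A"].mean()
    rate_b = decisions[groups == "B"].mean()
    return abs(rate_a - rate_b)

# Hypothetical batch of loan decisions (1 = approved) with a group label per applicant
decisions = [1, 1, 1, 1, 0, 1, 0, 0, 0, 1]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_gap(decisions, groups)
if gap > 0.10:  # illustrative tolerance, not a legal threshold
    print(f"ALERT: approval-rate gap of {gap:.0%} between groups -- flag for review")
else:
    print(f"OK: approval-rate gap {gap:.0%}")
```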