Why is regulatory compliance an elephant in the room for most organisations?

Technology needs to be ethical, or in clearer terms, it needs to be used ethically. And ethics can mean many different things — being transparent about how technology is used to solve a customer challenge, being able to explain and interpret the results and decisions it produces, being fair and unbiased in those decisions, and being accountable for its use.

By and large, it depends on the context — the problem you're trying to solve, and how you're solving it.

Where the AI is going to be used (context, industry, problem) and how it is going to be used (solution, use case) determine what kind of regulatory compliance and scrutiny your product or service needs.

You should still follow good governance practices around transparency and data consent in all scenarios. However, the level of regulatory compliance is tiered according to the impact of the challenge you're trying to solve — hence the financial and healthcare industries tend to be tightly regulated, given the very nature of their customer challenges.

Regulatory bodies want to understand your:

data infrastructure & systems architecture — how robust are the internal systems at handling data hygiene and integration; where will the data be stored — on-premises or in the cloud; where will the AI models run — in-house or through third-party vendor API integrations; and how do legacy systems interact with each other?

data gathering & distribution practices — what are the sources of the data, how valid is the data set, which data points are being extracted or asked for, and how and where will they be used?

user privacy and consent — are consent checks in place, are users notified explicitly about the usage of their data, are those consent forms and terms and conditions digestible; in short, are you following privacy by design?
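In practice, a privacy-by-design consent check can be as simple as an explicit opt-in gate: data is processed for a purpose only if the user has consented to that exact purpose. The following is a minimal illustrative sketch; the `ConsentRecord` structure and purpose names are assumptions, not a real compliance API.

```python
# Hypothetical sketch of a privacy-by-design consent gate.
# Absence of consent means no processing (explicit opt-in, not opt-out).
from dataclasses import dataclass, field


@dataclass
class ConsentRecord:
    user_id: str
    purposes: set = field(default_factory=set)  # purposes the user opted into


def can_process(record: ConsentRecord, purpose: str) -> bool:
    """Allow processing only for an explicitly consented purpose."""
    return purpose in record.purposes


record = ConsentRecord("user-42", {"credit_scoring"})
print(can_process(record, "credit_scoring"))  # True: consent was given
print(can_process(record, "marketing"))       # False: never consented
```

The key design choice is that the default answer is "no" — a new purpose added to the system is blocked until consent is explicitly recorded for it.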

machine learning models & algorithms — how do we verify the validity and accuracy of the decisions made by these AI models? Are we able to accurately explain and interpret those decisions? Is there a human cross-checking the results? Are domain experts involved in curating and labelling the data and in designing the AI models? Are the algorithm and the data fed to the AI fair and unbiased? Is the data set expansive and inclusive? What modelling techniques were used?
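The fairness question above can be made concrete with a simple metric such as demographic parity: does the model produce positive decisions at similar rates across groups? The sketch below is one illustrative check, assuming made-up decision data and an arbitrarily chosen threshold; real audits use richer metrics and real populations.

```python
# Minimal sketch of a demographic parity check: compare the
# positive-decision rate across groups and report the largest gap.

def demographic_parity_gap(decisions, groups):
    """Largest difference in positive-decision rate between any two groups."""
    counts = {}  # group -> (positives, total)
    for decision, group in zip(decisions, groups):
        positives, total = counts.get(group, (0, 0))
        counts[group] = (positives + decision, total + 1)
    rates = [positives / total for positives, total in counts.values()]
    return max(rates) - min(rates)


decisions = [1, 0, 1, 1, 0, 1, 0, 0]                      # 1 = approved (illustrative)
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]         # applicant group labels
gap = demographic_parity_gap(decisions, groups)
print(f"demographic parity gap: {gap:.2f}")               # here: A approves 75%, B 25%
```

A large gap does not by itself prove bias, but it is exactly the kind of quantitative evidence a regulator would expect you to be monitoring and able to explain.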

risk management — how will risk be mitigated in the event of inaccurate or bad results? What are the ramifications of inaccurate data and results, and what is the corresponding plan of action?

Organisations have to clear intense regulatory hurdles (especially in the financial and healthcare industries) to create safe and reliable AI-based systems, so more often than not these checks and balances can seem like an impediment to innovation.

However, rather than looking at regulation as a barrier to business success, we need to look at it as a way to evolve our business practices to be progressively ethical and humane.
