As different countries mull different approaches to regulating AI, questions arise about whether there can be a global consensus on the technology.
In this week’s edition of The GRC Story, we explore how moves in the UK and EU to tackle AI might shape the world’s response.
As artificial intelligence and machine learning grow rapidly, countries are increasingly responding with legislation. We can expect many different regulatory forces to move in on AI, with each country at a different stage in the evolution of its approach to the technology.
But different compliance standards across countries will pose several challenges to the governance, risk, and compliance (GRC) sector, with the potential for loopholes and regulatory arbitrage.
The UK’s preferred approach is ‘technology neutral’, setting out the core characteristics of AI to inform the scope of its regulatory framework rather than pinning the rules to a fixed definition of the technology.
Indraneel Basu Majumdar, senior financial services solicitor at Harper James Solicitors, explains that this allows regulators to set out and evolve more detailed definitions of AI as required.
“This is in line with the government’s view to regulate the use of AI rather than the technology itself – and a detailed universally applicable definition is therefore not needed.”
The EU has taken a different path: a complex, multi-layered approach spanning the GDPR, the AI Act, the Digital Services Act, and the Digital Markets Act, among others.
“Assessing which rules apply will be critical for GRC professionals – and complying with the most stringent standards applicable may well be the default position,” Majumdar explains.
“Given regulators are currently not displaying an appetite for common standards for AI, risks of regulatory arbitrage and complex conflicts of laws cannot be ruled out. Extensive legal analysis, expensive compliance frameworks, and divergent rules will pose a challenge to GRC professionals.”
The Devil in the Detail
PJ Di Giammarino, founder and CEO of independent think-tank JWG Group, says the AI devil is in the detail of compliance policies, procedures, and controls.
“AI is among a raft of measures that create new problems for senior managers in the UK, and now Ireland, where accountability regimes require identifying, recording, and monitoring that duties have been fulfilled. This includes oversight of APIs, cloud, cyber, third-party risk management, and AI.”
These new rules force firms to spell out far more about what they are doing with AI. Di Giammarino says the EU AI Act classifies certain monitoring as ‘high risk’, which means managers must ensure it is well understood, documented, and declared to the AI authorities, down to describing the system design, code, and test data.
And Europe’s new Digital Operational Resilience Act (DORA) will require firms to manage their technology supply chains much more closely, with third parties held to much tighter contracts.
“This means that firms require global technology risk control frameworks that take account of the very specific obligations for the activity in each jurisdiction and map their internal policies back to a single version of the firm’s truth. What might be a good enough policy in the UK may not be in Europe or the US,” explains Di Giammarino.
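In practice, such a framework amounts to a mapping from each internal policy to the jurisdiction-specific obligations it must satisfy. A minimal sketch of that idea, with all policy and obligation names purely illustrative, might look like this:

```python
# A hypothetical sketch of the "single version of the firm's truth"
# Di Giammarino describes: one internal policy mapped to the specific
# obligations it must satisfy in each jurisdiction. All names are illustrative.

CONTROL_MAP: dict[str, dict[str, list[str]]] = {
    "internal-policy/ai-model-documentation": {
        "EU": ["AI Act technical documentation", "DORA third-party register"],
        "UK": ["SM&CR accountability statements"],
    },
}

def obligations_for(policy: str, jurisdiction: str) -> list[str]:
    """Return the obligations a given internal policy must meet in a jurisdiction."""
    return CONTROL_MAP.get(policy, {}).get(jurisdiction, [])

# A policy deemed good enough in the UK may map to extra duties in the EU:
print(obligations_for("internal-policy/ai-model-documentation", "EU"))
```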
The Dream of a Consensus
So how can the GRC sector navigate the complex labyrinth of compliance standards around AI?
Andrew Pattison, head of GRC Consultancy Europe at IT Governance Europe, says that when an organisation deals with data protection and operates in multiple locations, it must understand the various aspects and impacts of the different standards.
“The approach and handling of these standards are primarily determined by the regulatory environment, including the consequences of non-compliance,” he adds.
Majumdar says one strategy could be to start by assessing which rules apply, then adopt the most onerous set as the default.
More broadly, the GRC sector can address the challenge of divergent compliance standards around AI by developing comprehensive risk management strategies. Vicki Utting, managing executive at Vigilant Software, agrees, adding that the sector can also encourage international collaboration, implement robust governance structures, invest in education and training, and engage in dialogue with regulatory bodies.
She goes on to say: “There is no doubt that AI will, in due course, face regulatory forces; take Italy banning ChatGPT over privacy concerns as an example. However, until those forces come into play (as we know, this can often take years), organisations must be proactive in applying due diligence and establishing robust compliance systems to handle the adoption of AI and the increased risk it can pose. They should also explore the potential benefits that AI can offer, in a reasoned, risk management-based approach.”
Dr Clare Walsh, director of education at the Institute of Analytics, says firms could look to use the IoA’s Model Cards, which guide experts in data science through the process of communicating all the risks that their models entail.
“People immediately grasp the intended benefits [of AI] but can be less aware of the risks that they are taking, and great communication goes a long way. Model Cards are a good starting point and a way to stay ahead of the many, many changes in the law. They explain what you’ve done, when you did it, and who to contact as the responsible party, and they lay out the intended uses of the algorithm or the data set.”
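By way of illustration, the fields Dr Walsh describes could be captured as structured metadata along the following lines (a hypothetical sketch; the IoA’s actual Model Card template will differ, and all values here are invented):

```python
# A hypothetical sketch of the fields a Model Card might capture, based on
# Dr Walsh's description; the IoA's actual template will differ.
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    model_name: str
    what_was_done: str           # "what you've done"
    date_completed: str          # "when you did it"
    responsible_contact: str     # who to contact as the responsible party
    intended_uses: list[str]     # intended uses of the algorithm or data set
    known_risks: list[str] = field(default_factory=list)

card = ModelCard(
    model_name="credit-scoring-v2",
    what_was_done="Retrained gradient-boosted scoring model on 2023 loan data",
    date_completed="2024-01-15",
    responsible_contact="model-risk@example.com",
    intended_uses=["Pre-screening retail credit applications"],
    known_risks=["Possible drift against post-2023 applicant population"],
)
print(f"{card.model_name}: contact {card.responsible_contact} for accountability")
```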
Despite countries creating their own policies to regulate AI, there are signs of collaboration between some jurisdictions.
“If each government exercises its democratic and national rights to create its own policies, from a data science perspective this does not recognise the reality of what we’re working with,” says Dr Walsh. “But we are seeing coalition groups forming between Europe, the UK, and America, which will soon publish their agreed, negotiated guidelines on data transfers between countries. Some countries have decided that they will try to work together, and I think that’s a great solution.”