Insights

28 August 2023

Why the UK & EU have different plans to regulate med-tech AI

The UK government and the European Medicines Agency have both published draft roadmaps to regulate the use of AI in the medical technology industry.

In this edition of The GRC Story, we explore the key similarities and differences between the plans, and what each is hoping to achieve.


The use of artificial intelligence (AI) and machine learning (ML) in the medical technology industry has grown rapidly in recent years, but the pace at which these systems evolve has made regulating their use increasingly challenging.

The UK and EU have both this year published roadmaps setting out guidance ahead of upcoming legislation.

The UK’s Software and AI as a Medical Device Change Programme roadmap is separated into 11 work packages, with eight to reform the software as a medical device (SaMD) lifecycle and another three to consider the challenges AI as a medical device (AIaMD) can pose.

The EU’s draft proposal, a reflection paper on the use of Artificial Intelligence (AI) in the medicinal product lifecycle, examines the use of AI across seven stages of medicinal product development: drug discovery; non-clinical development; clinical trials; precision medicine; product information; manufacturing; and post-authorisation.

Following a consultation period, the European Medicines Agency (EMA) is set to finalise its reflection paper, which will provide additional guidance on risk management and update existing guidance to address AI/ML-specific issues. The timeline aligns with the European Parliament’s plans for MEPs to reach an agreement on the EU AI Act, the world’s first comprehensive AI law, by the end of 2023.

The key differences

While the overall aim of both approaches is to keep patients safe by regulating the use of AI, there are important differences between the UK and EU over what they are hoping to achieve, as well as how they are planning to regulate.

The UK government has stated that its ‘change programme’ will not only protect patients and the public, but also ensure the UK is globally recognised as a home of responsible innovation for medical device software. 

Alison Dennis and Nicholas Vollers at global law firm Taylor Wessing point out that this recognition will be particularly important in a “post-Brexit UK”. They add that the focus of the regulations will need to remain in line with existing international requirements, as too much divergence would make it difficult for manufacturers to enter the UK’s AI ecosystem.

The most notable disparity between the EU and UK regulations is that the EU places the responsibility for ensuring algorithms, models and datasets comply with the regulations on the marketing authorisation applicant or marketing authorisation holder.

In contrast, the UK regulations place this responsibility on the manufacturer. Its roadmap includes plans to introduce regulatory guidance on the definition of a manufacturer to address grey areas, such as when open-source code has been modified. 

The UK has made more progress than the EU on the path to introducing AI regulation. The UK’s change programme builds on the government’s already-closed consultation on the future regulation of medical devices, while the EU’s draft proposals will remain open to consultation for the rest of the year.

Where is the overlap?

Despite some differences between the UK and EU approaches, there are notable similarities between the roadmaps. Both emphasise a ‘human-centred’ approach, focusing on keeping human users in mind when designing systems and software, and prioritising human factors when assessing devices.

Both roadmaps also focus on reforming existing regulations, on the basis that AI-based technologies should primarily fall into the same category as existing medical devices. As such, both opt mainly to introduce new guidance for companies on issues such as what medical device requirements mean in the context of software and AI.

Both also highlight the importance of inclusive innovation in the sector, as the use of AI in other contexts has highlighted that software can have a racial bias if inappropriately designed. The UK roadmap includes plans to build on wider work that examines health inequalities in medical device regulation.

The EU proposals note that AI/ML models’ data-driven foundations make them vulnerable to the integration of human bias: “All efforts should be made to acquire a balanced training dataset, considering the potential need to over-sample rare populations, and taking all relevant bases of discrimination as specified in the EU principle of non-discrimination and the EU fundamental rights into account.”

The reaction

Commentators have been broadly positive about both sets of proposals.

Brett Lambe, senior associate in the technology team at law firm Ashfords LLP, praised the new EU proposals, suggesting they could help to accelerate the realisation of AI’s potential in the medtech sector and emphasising the importance of regulation in a field where the dramatic speed of progress by providers has caused anxiety for some groups.

The UK’s commitment to ensure the reforms are in line with international regulations was highlighted by Charlotte Harpin, Partner, Health and Social Care at law firm Browne Jacobson. She claimed complying with regulations across jurisdictions will be “a key concern for many developers.”
