Regulating artificial intelligence

Dr Emma Lawrence

Artificial intelligence (AI) is becoming embedded in our daily lives, and that includes the life science sector. Both the UK Government and the EU are grappling with how to regulate this innovative technology.

Dr Emma Lawrence, Senior Policy and Public Affairs Manager at BIA, outlines the approaches being taken by the UK and EU and how these might affect BIA members.


The UK’s pro-innovation approach

The UK Government is currently consulting on its approach to regulating AI. In contrast to the EU, which is developing a single piece of legislation, the UK Government proposes handing sector-level regulators a series of cross-sectoral principles. These can then be applied to specific sectors in a way that fits with the existing regulatory framework. The Department for Digital, Culture, Media & Sport (DCMS) is suggesting the following principles:

  1. Ensure AI is used safely
  2. Ensure AI is technically secure, and functions as designed
  3. Make sure that AI is appropriately transparent and explainable
  4. Embed considerations of fairness into AI
  5. Define legal persons’ responsibility for AI governance
  6. Clarify routes to redress or contestability

DCMS will also ask regulators to focus on applications of AI that result in real, identifiable risk, rather than imposing controls on all uses of AI, including those that pose only low or hypothetical risk. This is to avoid stifling innovation and to keep the framework “proportionate, light-touch and forward-looking”, in line with other efforts to regulate data use in a pro-innovation way. The UK’s context-based approach allows risk to be identified at the application level; the Government is therefore seeking to regulate the use of AI rather than the technology itself.

Regulators will be asked to consider light-touch options such as guidance, and all regulatory principles will be non-statutory at first.

The MHRA already regulates medical AI as ‘software as a medical device’ (SaMD). It has consulted on changes to the regulatory framework as part of its Software and AI as a Medical Device Change Programme and the new legislative framework for medical devices in the UK. In its response to the medical devices consultation, the Government decided not to include a specific definition of AI as a medical device in the new regulations. The MHRA has also decided not to set specific legal requirements for AI beyond those being considered for SaMD, as it does not want to be overly prescriptive; instead, it intends to publish relevant guidance. It appears that the same light-touch approach will be taken under the new cross-sectoral framework. AI used in the life science sector that is not currently regulated by the MHRA, for example in early drug discovery where it falls outside the definition of a medical device, is likely to remain unregulated.

The EU’s AI Act

The EU is aiming to create the first AI-specific regulation, which would have a wide impact in setting a global benchmark. A proposal was published in April 2021 and is currently being debated by the EU institutions. The EU’s AI Act would cover any AI system providing outputs in the EU, regardless of where the provider is located.

The EU will also take a risk-based approach to regulation, with restrictions, requirements and oversight varying by risk level. AI systems posing an unacceptable risk would be banned, while minimal or no-risk systems would face no restrictions. Those with ‘limited’ or ‘high’ risk would be subject to requirements before going on the market, such as risk assessments and human oversight. Conformity assessment may be handled by an existing EU regulator, but AI systems would also have to comply with the Act itself. This means that many AI systems in the life sciences would have to comply with several sets of rules, creating potential confusion for the industry. As in the UK, there will also be an emphasis on understanding how an AI system works, which is likely to pose a further challenge for developers.
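
To make the tiered structure easier to see at a glance, here is a minimal, purely illustrative sketch in Python. The tier names follow the draft Act, but the treatments are paraphrased from the description above rather than taken from the legal text; this is a simplification, not a statement of the requirements.

    # Illustrative only: the draft Act's four risk tiers and how each would
    # be treated, as described above. A simplification, not legal guidance.
    from enum import Enum

    class RiskTier(Enum):
        UNACCEPTABLE = "unacceptable"
        HIGH = "high"
        LIMITED = "limited"
        MINIMAL = "minimal or no risk"

    TREATMENT = {
        RiskTier.UNACCEPTABLE: "banned",
        RiskTier.HIGH: "pre-market requirements, e.g. risk assessment and human oversight",
        RiskTier.LIMITED: "pre-market requirements, e.g. risk assessment and human oversight",
        RiskTier.MINIMAL: "no restrictions",
    }

    for tier in RiskTier:
        print(f"{tier.value}: {TREATMENT[tier]}")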

The challenge of defining AI

Although a workable definition is pivotal to any regulatory framework, legislators can fall at the first hurdle when it comes to defining AI. As DCMS itself explains, “there is currently little consensus on a general definition of AI, either within the scientific community or across national or international organisations”.

This is because AI is a general-purpose technology that can be used differently in different circumstances. The UK Government has proposed using core characteristics of AI to inform the regulatory framework. These characteristics are:

  • Adaptiveness – the ability to ‘learn’ rather than to be purely programmed
  • Autonomy – the ability to make decisions, strategise or react

Regulators are then able to develop definitions of AI relevant to their sector. For example, the MHRA is proposing to use the following definition of ‘software’ in the UK medical devices regulation (which would include AI): “A set of instructions that processes input data and creates output data”.
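
To make the ‘adaptiveness’ characteristic concrete, here is a minimal, purely illustrative Python sketch contrasting a rule that is fixed in code with one that is fitted to data. Every name and number in it (the dosing functions, the training data) is hypothetical and comes from neither the UK proposals nor the draft AI Act.

    # Illustrative only: contrasts a purely programmed rule with a system
    # that 'learns' its behaviour from data (the 'adaptiveness'
    # characteristic above). All names and numbers are hypothetical.

    def programmed_dose(weight_kg: float) -> float:
        """Purely programmed: the rule is fixed by the developer."""
        return 0.5 * weight_kg  # hard-coded dosing rule

    def fit_dose_model(weights, doses):
        """'Learns' a dosing rule from example data by least-squares fitting.

        The returned function's behaviour depends on the training data
        rather than on an explicitly programmed rule.
        """
        n = len(weights)
        mean_w = sum(weights) / n
        mean_d = sum(doses) / n
        slope = (sum((w - mean_w) * (d - mean_d) for w, d in zip(weights, doses))
                 / sum((w - mean_w) ** 2 for w in weights))
        intercept = mean_d - slope * mean_w
        return lambda weight_kg: slope * weight_kg + intercept

    # Hypothetical training data: patient weight (kg) -> observed dose (mg)
    learned_dose = fit_dose_model([50, 60, 70, 80], [26, 31, 35, 41])

    print(programmed_dose(65))  # always 32.5 - the behaviour is fixed in code
    print(learned_dose(65))     # depends on the data the model was fitted to

Both functions would arguably meet the MHRA’s ‘software’ definition, since each processes input data and creates output data; only the second exhibits the adaptiveness that the UK framework singles out as characteristic of AI.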

In contrast, the EU has been grappling with the definition of AI for months. The current draft AI Act defines it as “software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with”. Critics argue that this will exclude some types of AI and may not keep pace with innovation, but discussions on the definition are ongoing.

What’s next

In the EU, it will be at least a year before the AI Act is agreed and at least two more before it comes into force.

In the UK, DCMS is currently consulting on its policy paper ‘Establishing a pro-innovation approach to regulating AI’. The deadline is 26 September; contact Dr Emma Lawrence if you are interested in responding.

Join us for the TechBio UK conference on 13 October in London, which includes a workshop on regulating AI.


New AI and Digital Regulations Service

The Care Quality Commission, Health Research Authority, Medicines and Healthcare products Regulatory Agency, and National Institute for Health and Care Excellence have launched a new AI and Digital Regulations Service. The service offers guidance and advice on the development and widespread adoption of safe, innovative, value-adding technologies in health and social care.
