AI in life sciences: future-proofing strategies


In the final piece of the 'AI in life sciences' series, Jenny Yu, Marsh's Life Sciences industry practice leader, and experts from law firm Kennedys, Paula Margolis (Corporate Affairs Lawyer) and Samantha Silver (Partner), set out the essential steps life sciences companies should take to develop and implement a successful AI business strategy. They examine how to manage the outcomes of AI technologies effectively and how to mitigate the potential risks.



As artificial intelligence (AI) continues to gain considerable momentum across the life sciences sector, companies must take steps to understand how AI technologies operate and the risks they present, assess the extent to which AI can transform and add value to their business, and implement a framework for its effective incorporation into the organisation.

In the final article in our series, 'Artificial intelligence in life sciences', we highlight some of the key steps that life sciences companies should consider in order to build and deploy a successful AI business strategy, effectively manage the outcomes generated by AI technologies, and safeguard against the potential risks that arise.

Key steps:

  1. Implementation of a robust and effective data strategy: As data is the fuel that powers AI processes and decision making, the quality, volume, and integrity of that data are fundamental to achieving unbiased and reliable outcomes. In the life sciences sector, bias in product design, testing, and clinical trials may result in some healthcare products being less effective for certain patient groups. A robust and effective data strategy is therefore critical to ensure that complete and accurate data sets are collated and maintained.
     
  2. Re-evaluation of privacy and cybersecurity risks: With the EU and UK focusing hard on evaluating and updating existing product safety laws and regulations to encompass a legislative framework for AI, life sciences companies should take steps to evaluate the safety of their AI-powered products, particularly from a cyber and privacy perspective. This includes assessing the safety of products both in isolation and when connected to other products, ensuring all parties in the supply chain are aware of, and trained on, the obligations that will be imposed on them, and adopting current European and/or national standards for assessing the cybersecurity of products.
     
  3. Development of a modern governance and risk management framework: Given the risk profile, the existing regulatory and legal framework, and the speed and depth at which AI-driven technologies are being utilised, life sciences organisations must adapt their existing governance and risk management frameworks to harness the power of AI. Historically, and notwithstanding the demands of evolving regulatory changes, organisations have typically depended upon relatively static risk management frameworks and systems, which relied upon key individuals within the organisation updating risk registers according to their responsibilities.

    The use of AI technologies provides an opportunity for a step change in risk management through connectivity between AI and key risk indicator information, such as complaints and adverse events data. Collaborative discussions and appropriate planning with risk managers, information technology specialists, engineers, and other key stakeholders are crucial to reducing such risks.
     
  4. Management of employee skill sets and adapting the company workforce: Employees are a key consideration in making AI an integral part of business operations. Life sciences businesses will need to invest in robust learning and development programmes to allow existing and future employees to acquire the skill sets necessary to develop and integrate AI-linked solutions. Businesses may also consider creating new roles within the company to manage the risks arising from the increasing adoption of AI technologies in business operations. For example, with the life sciences sector becoming increasingly vulnerable to privacy and cybersecurity risks, companies may wish to deploy personnel with expertise in these areas in order to safeguard their products.

Conclusion

Companies play a crucial role in ensuring that appropriate safeguards are implemented to prevent and respond to risks arising from the potentially negative consequences of AI technologies. While the role of risk professionals within the life sciences industry will undoubtedly change going forward, the combination of AI-enabled risk management with subject matter direction and oversight will create a future in which questions such as "how much risk should I take?" can be informed and updated in real time.

With changes to existing regulatory and liability regimes on the horizon, life sciences companies, and in particular manufacturers in the sector, should ensure that their products incorporating AI-based technologies have undergone rigorous testing and safety checks and comply with existing laws and regulations before release to market.


This article was originally posted by Marsh.
