Published 3 February 2025
Artificial Intelligence (AI) has been a hot topic for the last few years, and a recent increase in general access and processing power means that tools powered by Large Language Models (LLMs), such as ChatGPT and its ilk, open up a whole new world of possibilities. Developers can now train AI tools on an ever-growing repository of human knowledge. Integrating AI tools into medicine development and patient care has the potential to improve patient outcomes. This article explores some of the ways AI supports, or could support, drug development and healthcare.
Google DeepMind’s Med-PaLM 2 serves as one current example. The company fine-tuned this large language model on medical datasets to answer medical questions and provide clinical advice, training it on medical textbooks, research papers, doctor-patient dialogues and medical panel opinions. Med-PaLM 2 scored 85.4% on USMLE-style medical exam questions, demonstrating its ability to provide well-researched answers to medical questions.
AI-powered tools can be employed to automate the processing of regulatory documents, including extracting relevant information from large volumes of text. To name a few applications, this can support patient recruitment and screening for clinical trials, early automated signal detection through analysis of pharmacovigilance data, and analysis of market data to inform marketing strategies and identify new market opportunities. Any recommendation made by an automated system still needs to be verified by an experienced reviewer, but automation can speed up processes and highlight potential opportunities early.
LLMs can be supplemented with information available online and with spreadsheet data for analysis and pattern matching. They can also gather data on a particular condition or topic. A good example is using a tool to monitor and analyse regulatory changes and updates in real time. By continuously scanning regulatory databases or published research papers, AI systems can provide timely alerts and insights, helping regulatory affairs professionals stay informed and adapt to evolving situations. Although information can be missed, this method is well suited to a first sweep or to confirming that your information is current.
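As an illustration, the "first sweep" described above can be as simple as comparing the latest snapshot of a source against the previous one. The sketch below is a minimal, hypothetical example; a real system would poll an actual regulatory feed and parse its entries, and the item titles here are invented:

```python
# Minimal sketch of change detection for a regulatory monitoring sweep.
# The entry titles are hypothetical; a real tool would fetch them from a feed.

def detect_new_entries(previous_titles, current_titles):
    """Return items present in the latest sweep but absent from the previous one."""
    seen = set(previous_titles)
    return [title for title in current_titles if title not in seen]

previous = ["Guideline A (rev 1)", "Q&A on nitrosamines"]
current = ["Guideline A (rev 2)", "Q&A on nitrosamines", "New reflection paper"]

for title in detect_new_entries(previous, current):
    print(f"ALERT: new or updated item: {title}")
```

Anything flagged this way would still be reviewed by a person; the tool only narrows down what to look at.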
AI tools such as LLMs can summarise large or multiple documents. Medical writers save time when using AI to create a suggested first draft or outline of a summary document, allowing them to assess the conclusions for accuracy and develop the document further. Writers must include this human review step for all AI-generated text because AI can “hallucinate”*1, or guess at details. AIs often go off-topic, include irrelevant information, or lose concentration*2 when ‘reading’ the source document. They can fail to capture all required data, making their generated content unreliable at face value. In short, while AI can help create a rough first draft, it exhibits many human foibles during the authoring process.
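To illustrate the idea of a machine-generated first draft, here is a deliberately simple extractive summariser. It scores sentences by word frequency rather than using an LLM, but it shows the same principle: the output is only a starting point that a human must still review:

```python
import re
from collections import Counter

def extractive_summary(text, max_sentences=2):
    """Return the top-scoring sentences, in their original order.

    Sentences are scored by the average corpus frequency of their words,
    a crude stand-in for the relevance judgements an LLM would make.
    """
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z]+", text.lower()))
    scored = []
    for i, sentence in enumerate(sentences):
        words = re.findall(r"[a-z]+", sentence.lower())
        score = sum(freq[w] for w in words) / (len(words) or 1)
        scored.append((score, i, sentence))
    top = sorted(scored, reverse=True)[:max_sentences]
    return " ".join(s for _, _, s in sorted(top, key=lambda t: t[1]))
```

Like an LLM draft, the result can miss essential sentences entirely, which is exactly why the human review step is non-negotiable.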
Teams can also enhance their communication and coordination using AI-powered collaborative platforms. These platforms help teams share documents, control versions, and collaborate in real time, making workflows more efficient. Most sharing platforms include an AI element (e.g., Microsoft Copilot) that teams can use to track documentation, send reminders, and identify potential workflow issues. While developers currently limit these platforms to pre-defined tasks, future versions could suggest process improvements based on a project’s needs.
By analysing historical drug development data, AI tools can identify potential roadblocks in planned drug development. They can also assist in determining strategies to minimise risks and improve the likelihood of receiving Health Authority approvals. To do this, Natural Language Processing (NLP)*3 technologies can be employed for efficient analysis of regulatory texts. This includes extracting key information from regulatory documents, contracts, and guidelines, making it easier for regulatory affairs professionals to promptly interpret and act upon relevant data.
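As a toy illustration of this kind of extraction, the sketch below pulls a date and a procedure number out of a block of text using hand-written patterns. The field names and patterns are invented for the example; a real NLP pipeline would rely on trained models rather than regular expressions:

```python
import re

# Hypothetical patterns for illustration only; a production pipeline
# would use trained NLP models, not hand-written regular expressions.
PATTERNS = {
    "effective_date": re.compile(r"effective\s+from\s+(\d{1,2}\s+\w+\s+\d{4})", re.I),
    "procedure_no": re.compile(r"\b(EMEA/H/C/\d+)\b"),
}

def extract_key_fields(text):
    """Return the first match for each known field found in the text."""
    fields = {}
    for name, pattern in PATTERNS.items():
        match = pattern.search(text)
        if match:
            fields[name] = match.group(1)
    return fields
```

Even this crude version shows the payoff: a reviewer can jump straight to the dates and identifiers that matter instead of scanning the whole document.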
AI tools can play a role in monitoring compliance with regulatory standards. By implementing AI-driven compliance checks, regulatory affairs teams can ensure that documents and processes adhere to the latest regulatory requirements. Some Health Authorities already use such tools. For example, in March 2024 the European Medicines Agency introduced an AI-enabled knowledge-mining tool called Scientific Explorer for EU regulators. The tool enables easy, focused and precise searching of regulatory and scientific information from network sources, supporting decision-making and simplifying processes.
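A simple automated compliance check might, for instance, verify that a document contains every section a template requires. The section list below is purely hypothetical:

```python
# Hypothetical required-section list; real checks would come from the
# applicable template or guideline.
REQUIRED_SECTIONS = ["Introduction", "Quality", "Safety", "Efficacy"]

def missing_sections(document_text):
    """Return the required sections that do not appear in the document."""
    lowered = document_text.lower()
    return [s for s in REQUIRED_SECTIONS if s.lower() not in lowered]
```

In practice a flagged document would go back to its author, with the tool acting as an early tripwire rather than the final word on compliance.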
AI can review documents against each other to ensure a consistent message, and can identify simple text errors, incorrect documents, and miscalculations in data analysis. AI can also be applied to optimise regulatory operations by automating routine tasks such as data entry, document tracking, and reporting, allowing drug development professionals to focus on the more complex and strategic aspects of their work.
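Cross-checking numbers between two documents is one concrete example of this kind of consistency review. A minimal sketch:

```python
import re

def numbers_in(text):
    """Extract every numeric value mentioned in the text."""
    return [float(n) for n in re.findall(r"\d+(?:\.\d+)?", text)]

def inconsistent_numbers(doc_a, doc_b):
    """Flag values that appear in one document but not the other."""
    a, b = set(numbers_in(doc_a)), set(numbers_in(doc_b))
    return sorted(a ^ b)  # symmetric difference: mismatches on either side
```

A mismatch between, say, a patient count in a report and the same count in its summary is exactly the kind of error that tires human eyes but is trivial for a machine to catch.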
As mentioned above, AI tools have already been developed to assist healthcare professionals with questions and advice (e.g. Med-PaLM 2 by Google DeepMind). The NHS has been piloting an AI tool named Mia, which reviewed 10,000 women’s mammograms alongside two qualified radiologists. It successfully identified cancer in all symptomatic patients, plus an additional 11 patients beyond those identified by the overseeing doctors; these patients were confirmed to have early cancerous tumours despite having no other symptoms. Tools like this can enable earlier diagnosis and help prevent reviewer fatigue. While this sounds extremely promising, there are, as with everything, drawbacks. Such tools require significant investment to develop and extensive training on example cases, and that training data can be hard to obtain because not everyone is happy for their medical information to be used in AI trials.
However, provided you have a large pool of data, pattern-matching tasks are where AI tools can really shine. Given enough information, they can highlight treatment regimens that have been successful in similar previous cases. Ultimately, this can reduce the number of medications an individual needs to take, leading to fewer risks and better overall patient outcomes.
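A toy sketch of this kind of case matching: describe each past case with a few numeric features, find the most similar prior case, and report which regimen worked for it. The features, cases, and regimen names here are entirely invented for illustration:

```python
# Invented example data: each case is (age, biomarker flag) plus the
# regimen that worked. Real systems would use far richer features and
# proper validation before influencing any clinical decision.

def most_similar_case(cases, new_features):
    """Return the past case whose features are closest to the new case."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(cases, key=lambda c: distance(c["features"], new_features))

past_cases = [
    {"features": (54, 1), "regimen": "Regimen A"},
    {"features": (67, 0), "regimen": "Regimen B"},
]
match = most_similar_case(past_cases, (58, 1))
```

The nearest-neighbour idea is the simplest possible stand-in for the sophisticated pattern matching real clinical AI performs, but the principle of learning from similar prior cases is the same.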
As AI tools become more readily available to the casual user, caution is needed. Many publicly available AI tools collect the information they process, so consider this carefully before entering private or commercially sensitive data. The internet contains a lot of information, but not all of it is relevant or true. Commercial AI tools offer more protection but are often expensive and unavailable outside a commercial setting.
AI is often seen as an emotionless, entirely analytical program. However, any AI tool can suffer from the same preferences and prejudices as its creators and editors. All tools come with programming and guidelines which, in most cases, reflect the opinions and preferences of their developers, so any outputs still need to be assessed for bias.
AI has a lot to offer the medical and pharmaceutical industries and, if used carefully, has the potential to improve many aspects of medicine. At this point in time, all outputs from AI tools need to be reviewed by an experienced human reviewer. This may change in the future, but we must not lose the human experience that is key to keeping patients at the heart of drug development.
Integrating Artificial Intelligence into drug development and regulatory processes represents a significant leap forward in pharmaceutical innovation. While AI tools, particularly LLMs, offer impressive capabilities across research automation, document authoring, risk analysis, and even medical diagnosis, they currently serve best as powerful assistants rather than replacements for human expertise. The technology’s ability to process vast amounts of data, identify patterns, and streamline workflows can significantly reduce time-to-market for new drugs and improve regulatory compliance. However, the limitations – including potential hallucinations, data privacy concerns, and inherent biases – remind us that human oversight remains crucial.
Looking ahead, the key to successful AI implementation lies in striking the right balance: leveraging AI’s computational power and efficiency while maintaining the irreplaceable human elements of judgement, creativity, and ethical consideration. As these technologies evolve, organisations must carefully evaluate their needs, weighing the potential benefits against the resources required for implementation. The future of drug development and regulatory affairs will likely be shaped by a synergistic relationship in which AI and humans complement each other’s strengths, ultimately creating more efficient processes and better patient outcomes.
*1 Hallucination refers to the phenomenon where the model generates inaccurate, nonsensical, or entirely fabricated information, even though it sounds plausible or authoritative. Hallucinations occur because LLMs generate text based on patterns learned from vast amounts of data but do not properly understand facts or real-world knowledge. Instead, they predict the next word or phrase based on statistical likelihood, which can lead to erroneous or fictional outputs.
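This next-word prediction can be illustrated with a tiny bigram model: it picks the statistically most likely follower of a word, with no notion of whether the result is true. At vastly larger scale, the same mechanism can produce fluent but fabricated output:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count, for each word, which words follow it in the corpus."""
    counts = defaultdict(Counter)
    tokens = corpus.lower().split()
    for a, b in zip(tokens, tokens[1:]):
        counts[a][b] += 1
    return counts

def predict_next(counts, word):
    """Return the statistically most likely next word, or None if unseen."""
    followers = counts.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

# A toy corpus: the model will confidently predict "low" after "is",
# purely because "low" occurred more often, not because it is correct.
corpus = "the dose is low the dose is high the dose is low"
counts = train_bigrams(corpus)
```

The model's answer is always the most frequent continuation, never a checked fact, which is the heart of why LLM output must be verified.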
*2 Concentration loss refers to a model’s decreasing ability to maintain coherence, relevance, or focus over long text sequences. This happens due to limitations in the attention mechanism, context window size, and gradual build-up of errors. It manifests as off-topic, repetitive, or inconsistent outputs, especially in longer conversations or documents.
*3 Natural Language Processing (NLP) is a field of computer science that helps machines understand, interpret, and respond to human language, whether spoken or written. It’s the technology behind voice assistants (e.g., Siri or Alexa), automatic translations, and even spell-checkers. It allows computers to “read” or “listen” to language, figure out what it means, and then act based on that understanding.