- In January 2026, the US Food and Drug Administration (FDA) and the European Medicines Agency (EMA) announced a joint set of principles for “Good Machine Learning Practice” covering the use of AI in drug development.
- The primary goal is to foster innovation and shorten time-to-market while ensuring the highest level of patient safety.
- AI is defined as a system-level technology used to generate or analyze evidence throughout the drug lifecycle, from pre-clinical research and clinical trials to post-market surveillance and manufacturing.
- Both agencies emphasize that drug approval must still be based on proven quality, efficacy, and safety; AI must not undermine these criteria.
- The use of AI in drug development has surged in recent years, requiring strict management to ensure accurate and reliable outputs.
- The document outlines 10 core principles, including human-centric design, a risk-based approach, GxP compliance, data governance, risk-based performance assessment, and AI lifecycle management.
- AI is expected to reduce drug development time, enhance pharmacovigilance, and decrease reliance on animal testing through better prediction of human toxicity and efficacy.
- This initiative originated from a 2024 FDA-EU bilateral meeting, paving the way for international regulatory harmonization.
- Many major pharmaceutical companies are ramping up AI applications and signing agreements to access advanced technical capabilities.
📌 Conclusion: In January 2026, the US FDA and the European EMA announced a joint set of principles for “Good Machine Learning Practice” in drug development. For the first time, the US and Europe share a common framework for AI in pharmaceuticals, built on 10 pillars spanning ethics, data governance, and lifecycle management. The risk-based approach, anchored in GxP standards, balances innovation with patient safety. The move not only promises to accelerate drug development but also strengthens EU-US leadership in the race to apply generative AI to biomedicine, while reducing animal testing and post-market risks.
