A rapidly evolving regulatory landscape
The regulatory framework for AI in pharmacovigilance is not defined by a single guideline, but rather by a convergence of regulations, reflection papers, and consensus reports.
At the heart of this ecosystem is the EU AI Act, the first comprehensive legal framework for AI, which positions the EU as a pioneer and global standard-setter in AI regulation. It introduces a risk-based approach, classifying systems by their potential impact on health, safety, and fundamental rights. While pharmacovigilance systems are not explicitly listed, many AI applications used in PV are likely to be considered “high-risk,” particularly when they influence safety decision-making or regulatory reporting.
At the same time, pharmacovigilance regulations continue to evolve through updates to Good Pharmacovigilance Practices (GVP) modules and related guidance from authorities such as the European Medicines Agency. These updates increasingly emphasize higher data quality and standardization, stronger signal detection, and greater interoperability across regulatory systems, collectively driving a transformational shift in how safety is monitored and managed.
Where AI is already used
AI is already delivering value across key pharmacovigilance processes, not replacing professionals but augmenting their capacity. This augmentation, however, comes with a critical requirement: accountability must remain human-led.
AI applications are already visible in individual case safety report (ICSR) processing, literature and data monitoring, and signal detection and analysis. Emerging use cases, including large language models, are further expanding the scope of automation and insight generation within pharmacovigilance systems.
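To make the signal-detection piece concrete, here is a minimal sketch of one classic disproportionality method, the proportional reporting ratio (PRR). All counts below are hypothetical, and real systems combine several statistics; one commonly cited screening rule of thumb flags PRR ≥ 2 with at least 3 cases for human review.

```python
# Illustrative sketch only: proportional reporting ratio (PRR), a classic
# disproportionality statistic for screening spontaneous-report databases.
# All counts below are hypothetical.

def proportional_reporting_ratio(a: int, b: int, c: int, d: int) -> float:
    """2x2 contingency table of individual case safety reports:
    a: reports mentioning the drug of interest AND the event of interest
    b: reports mentioning the drug, but other events
    c: reports mentioning other drugs AND the event
    d: reports mentioning other drugs and other events
    """
    rate_drug = a / (a + b)      # event rate among reports for the drug
    rate_others = c / (c + d)    # event rate among all other reports
    return rate_drug / rate_others

# Hypothetical counts: 30 of 1,000 reports for the drug mention the event,
# versus 120 of 100,000 reports for all other drugs.
prr = proportional_reporting_ratio(a=30, b=970, c=120, d=99_880)
print(f"PRR = {prr:.1f}")
# PRR = 25.0 -> flagged for review by a safety professional, never auto-actioned
```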
No single AI–PV rulebook
Despite rapid progress, there is still no fully prescriptive regulatory guideline dedicated exclusively to AI in pharmacovigilance. One of the most influential references today is the CIOMS report on Artificial Intelligence in Pharmacovigilance. This document does not impose strict rules but instead defines a principle-based framework designed to remain relevant despite the fast pace of technological evolution. The CIOMS framework, together with guidance from regulators such as the FDA, EMA, and WHO, converges around a set of foundational principles. These include a risk-based approach, validity and robustness, transparency, data privacy, fairness and equity, and governance and accountability. Among these, human oversight emerges as one of the most critical and consistently emphasized elements.
Human oversight is non-negotiable
One of the most important discussions in AI governance concerns the level of human involvement. Two models are often described: Human in the Loop (HITL) and Human on the Loop (HOTL).
Human in the Loop refers to continuous human involvement, where AI supports decision-making but final decisions remain human-driven. In contrast, Human on the Loop describes systems where AI operates more autonomously, with humans supervising and intervening when necessary.
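A minimal sketch of the difference, using a hypothetical case-triage step (the class, scores, and the 0.90 threshold are illustrative assumptions, not drawn from any guideline):

```python
# Hypothetical case-triage step contrasting the two oversight models.
# Names, scores, and the 0.90 threshold are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class CaseAssessment:
    case_id: str
    seriousness_score: float  # model confidence that the case is serious

def human_in_the_loop(assessment: CaseAssessment, reviewer_says_serious: bool) -> bool:
    """HITL: the model only proposes; a human makes every final decision."""
    print(f"{assessment.case_id}: model proposes score={assessment.seriousness_score:.2f}")
    return reviewer_says_serious  # the human decision is always the output

def human_on_the_loop(assessment: CaseAssessment, threshold: float = 0.90) -> str:
    """HOTL: the system acts on high-confidence cases and escalates the rest;
    humans supervise via monitoring and audit trails and can intervene."""
    if assessment.seriousness_score >= threshold:
        return "auto-classified-serious"   # logged for supervisory review
    return "escalated-to-human"            # ambiguous: a person decides
```

The design point is that HOTL shifts human effort from per-case approval to supervision of the system as a whole, which is only defensible for lower-risk steps.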
In pharmacovigilance, regulators consistently emphasize a risk-based balance where human accountability remains paramount. High-risk applications require stronger human control, and even in highly automated systems, final responsibility always remains with qualified professionals.
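In configuration terms, that risk-based balance might look like the following hypothetical mapping; the tiers and assignments are illustrative, and each organization must justify its own through risk assessment and validation:

```python
# Hypothetical mapping of PV activities to oversight models by risk tier.
# The assignments are illustrative, not prescribed by any regulator.
OVERSIGHT_POLICY = {
    "duplicate_report_detection": "human_on_the_loop",   # lower risk: supervised automation
    "literature_screening_triage": "human_on_the_loop",  # with periodic human audit
    "icsr_seriousness_assessment": "human_in_the_loop",  # human approves each case
    "signal_validation": "human_in_the_loop",            # final call by qualified staff
    "regulatory_submission": "human_in_the_loop",        # accountability stays human
}
```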
A clear pattern emerges across global frameworks and guidance from the WHO, EMA, FDA, CIOMS, and others: human oversight is the most consistently emphasized principle. This is critical for several reasons.
First, AI systems are probabilistic rather than deterministic: they can produce errors, particularly when exposed to novel data patterns, biased training data, or rare edge cases.
Second, AI and patient safety are inextricably linked. Pharmacovigilance decisions influence the benefit–risk balance, public health outcomes, and the regulatory actions that follow signal detection.
Third, accountability must remain clear and traceable. Regulators expect a defined chain of responsibility, and an AI system cannot itself be held accountable. This reinforces the need to align AI outputs with established data integrity principles such as ALCOA++, and to be able to answer key questions: who validated the model, who approved the output, and who is accountable in case of failure?
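As one concrete illustration, the sketch below shows how such an accountability chain could be captured in an internal audit record, in the spirit of ALCOA++; the schema, field names, and roles are assumptions for the example, not a prescribed format:

```python
# Hypothetical audit-trail record tying an AI output to the humans who are
# accountable for it; field names and roles are illustrative only.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)     # immutable: the record cannot be silently altered
class AIOutputAuditRecord:
    output_id: str          # which AI output this record covers
    model_version: str      # exact model version used (original, accurate)
    validated_by: str       # who validated the model (attributable)
    approved_by: str        # who approved this specific output
    accountable_role: str   # who answers in case of failure, e.g. the QPPV
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)  # contemporaneous
    )

record = AIOutputAuditRecord(
    output_id="ICSR-2024-000123",
    model_version="triage-model 3.2 (validation report VR-17)",
    validated_by="j.doe, validation lead",
    approved_by="a.smith, safety physician",
    accountable_role="QPPV",
)
```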
The formula: Human-backed AI + Pharmacovigilance = Trust
Rather than replacing human expertise, a hybrid model is emerging. AI enables scale, speed, and advanced pattern recognition, while humans provide judgment, contextual understanding, and accountability.
This balance becomes even more important as systems evolve toward continuous learning models, real-time signal detection, and integration of diverse data sources such as real-world data and scientific literature. In this context, human oversight evolves from purely operational control to strategic supervision.
The current regulatory evolution, particularly with the EU AI Act and ongoing GVP developments, is driving a fundamental shift toward a central question: can these systems be trusted? The answer is yes, but only if trust is built through transparency, robust validation, ethical design, and, above all, visible and effective human control.
Conclusion: innovative and responsible pharmacovigilance
AI has the potential to transform pharmacovigilance, making it faster, more predictive, and more proactive. However, as consistently reflected in guidelines, reports, and regulatory expectations, this innovation must be grounded in trust and reinforced by professional human oversight.
The regulatory direction is clear: principles over prescriptive rules, a strong emphasis on risk-based approaches, and human oversight at the core of all AI-enabled pharmacovigilance systems.