AI governance in pharmacovigilance: Marie Flanagan calls for safety-specific frameworks to safeguard patient safety
The integration of artificial intelligence into pharmacovigilance is prompting a rethink of traditional governance frameworks, according to Marie Flanagan, a leader in AI-driven safety innovation. She argues that pharmacovigilance teams require safety-specific oversight structures to support compliance, trust and patient safety.
“Traditional AI governance frameworks do not account for dynamic or autonomous AI,” Flanagan said. “The opaque nature of generative AI models challenges us to consider explainability and risk analysis in new ways in the context of patient safety.”
Flanagan believes that as AI becomes more prevalent in safety operations, standard governance policies need to adapt. Applying generic frameworks without modification, she warned, could erode confidence and lead to setbacks in AI implementation.
“If you apply a general-purpose governance model to PV, you risk overlooking key operational safeguards, which erodes trust among users and leads to detrimental setbacks for your AI journey,” she said.
Flanagan said the regulatory landscape is currently undergoing critical changes that will shape the role of AI in pharmacovigilance. “We’re entering a pivotal period during which regulators are actively drafting the regulations that will influence how AI is developed, validated and deployed within this domain,” she explained.
The EU AI Act, as well as evolving guidance from the WHO, FDA, EMA and MHRA, alongside the CIOMS Working Group XIV report, are providing “early direction” to help organizations build frameworks that are both compliant and innovative, she added.
For AI systems to be trusted in regulated environments, Flanagan said, they must meet expectations around governance and reliability. “AI systems must align with the framework of principles and good practices for developing and using AI in PV,” she noted. These principles include “a risk-based approach with human oversight, validity and robustness, transparency, data privacy, governance and accountability.”
At IQVIA, Flanagan said, the team builds technologies according to these principles. “We develop, validate and deploy technology in accordance with these guiding principles,” she said.
She also stressed that making AI tools transparent and usable for non-technical pharmacovigilance professionals is essential. “We need to start by democratizing the language surrounding AI and drawing more practical parallels to recognizable PV use cases,” she said.
According to Flanagan, AI governance frameworks must be built with PV professionals in mind. “PV teams need to know testing, validation and documentation is robust and that the necessary controls are in place to govern AI,” she said. “In day-to-day routine use, they need to interact with the model and understand where the human-in-the-loop checkpoints are and what escalation pathways will serve them as they exercise judgment and control over the outputs.”
On the topic of oversight, Flanagan said human involvement remains critical, particularly with emerging generative models. “Human oversight is not only foundational to the success of any AI system, it is also mandated for high-risk use cases pertaining to patient safety,” she said.
She explained that oversight is evolving from traditional analyst-machine collaboration to more complex human-in-the-loop models. “Effective oversight defines fit-for-purpose levels of performance for the intended task and an ongoing quality assessment that triggers a fine-tuning of the AI model, or a fall back to manual process when confidence thresholds are not met,” she said.
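The threshold-based fallback Flanagan describes can be pictured as a simple routing rule: outputs that clear a validated confidence level go to an analyst for AI-assisted review, while anything below it falls back to the fully manual process. The sketch below is purely illustrative; the class, function, and threshold are hypothetical and not drawn from any specific PV system.

```python
from dataclasses import dataclass

# Hypothetical fit-for-purpose level, defined during validation of the model.
CONFIDENCE_THRESHOLD = 0.90


@dataclass
class ModelOutput:
    """Illustrative AI output for a single safety case."""
    case_id: str
    prediction: str
    confidence: float


def route(output: ModelOutput) -> str:
    """Human-in-the-loop checkpoint: route high-confidence output to
    AI-assisted analyst review; fall back to the manual process when
    the confidence threshold is not met."""
    if output.confidence >= CONFIDENCE_THRESHOLD:
        return "ai_assisted_review"  # analyst verifies the AI suggestion
    return "manual_process"          # escalation pathway: full manual handling
```

In this sketch the human remains in the loop either way; the threshold only decides whether the analyst starts from an AI suggestion or processes the case manually.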
Flanagan expects AI governance models to continue evolving as technologies mature. “I foresee changing roles and competencies on PV teams and changing regulation governing data privacy,” she said. “Although AI technology is advancing at breakneck speed, implementation of AI technology in PV will not.”
She anticipates “more calculated risk-taking and overall positive (but cautious) advances” in the coming years, especially in the use of AI for “unstructured ICSR data extraction, case intake, narrative and report generation, causality and medical assessment, reporting, [and] signal management.”
Reflecting on early learnings from the integration of generative AI into safety systems, she said the importance of built-in protections is already clear. “One of the biggest lessons is that guardrails, or intuitive AI controls, are non-negotiable, and open and transparent communication is critical,” she said.
She added that successful adoption hinges on cross-functional collaboration. “When safety scientists, technologists and business process owners work together from the start, adoption is smoother and outcomes are stronger,” she said.
Looking ahead, Flanagan sees strong potential for AI in streamlining routine safety processes. “Intake and triage are prime candidates for targeted innovation in AI,” she said. “These areas are often labor-intensive and rely on consistency and speed, and broadly speaking, are low-risk.”
She also identified signal detection as a promising focus area, particularly as more real-world data enters the picture. “Advanced AI systems could take PV beyond these boundaries with augmented decision-making, reasoning and assigning causality or in situations where context-aware or medical judgment is required,” she said.
Ultimately, Flanagan believes success will depend on how well the PV community balances innovation with oversight. “The future of PV will be defined by its ability to evolve alongside AI, and safety-specific AI governance will ensure we do so without losing sight of patient safety,” she said.




