EXCLUSIVE: How agentic AI is transforming medical affairs: A conversation with Sorcero CEO Dipanwita Das
Artificial intelligence (AI) is reshaping the pharmaceutical industry, with agentic AI frameworks leading the charge in enhancing medical affairs. In this exclusive Q&A, Dipanwita Das, CEO and co-founder of Sorcero, discusses how the company’s agentic AI platform addresses challenges like data hallucinations and information overload, ensuring reliable and actionable insights for life sciences professionals.
Let’s start with the problem – you mention that LLMs hallucinate in around 27% of outputs. Can you clarify what types of hallucinations you’re seeing in a life sciences context? Are these minor factual errors, or are they potentially dangerous misinterpretations?
Hallucinations in life sciences AI aren’t just minor factual hiccups; they’re often errors with the potential to directly and negatively impact patient care. Generic LLM systems, not purpose-built with deep medical domain expertise, struggle with the complexity and precision required for our industry.
For instance, we see them fabricate clinical study results that never happened, cite papers that don’t exist, or misinterpret critical safety signals within complex medical literature. We’ve also seen cases where generic AI completely misunderstood the significance of negative test results, which in many medical contexts counterintuitively represent positive patient outcomes.
When an MSL receives inaccurate information like this, it creates a cascade of problems like inefficient workflows, delayed strategic decisions, and ultimately, a risk that patients might not get the right treatments at the right time.
From the start, Sorcero has focused heavily on foundational R&D and patents around medical AI verification and validation. Addressing AI hallucinations in medical contexts isn’t just about better algorithms. It’s about creating systems that fundamentally understand nuanced medical language and can trace every insight back to verified, attributable sources.
This approach ensures that when medical professionals make decisions based on AI-generated information, they can trust it’s backed by real evidence, not dangerous fabrication.
Burned by the hype? Many in life sciences were early adopters of AI. What were some of the biggest mistakes companies made when adopting generic LLMs? What lessons has the sector learned?
The life sciences industry has indeed been a pioneer, adopting AI early in specialized areas like gene mapping and protein folding. Like many transformative technologies, AI experienced a cycle of overheated expectations, particularly with the consumerization via tools like ChatGPT, followed by some disappointment, and now we’re entering a phase of steady, realistic adoption.
One key lesson learned is the limitation of generic LLMs. These models simply aren’t trained for the nuances of life sciences. They lack the specialized knowledge foundation for medical context, don’t inherently understand the critical jobs-to-be-done within pharma, and often operate without the necessary compliance and safety guardrails. This leads directly to the hallucination issues we discussed.
Another path companies explored was building proprietary AI solutions in-house. While attractive initially, many underestimated the significant, ongoing costs and effort required for development, domain-specific tuning, and maintenance. A major hurdle is achieving and maintaining AI-ready data. Creating a prototype might seem straightforward, but deploying and sustaining a production-ready, compliant AI program that understands the industry and keeps pace with rapid technological evolution is immensely challenging and resource-intensive.
Moving ahead, the most effective approach is partnering with companies developing purpose-built, industry-specific AI and LLM solutions. This directly addresses the shortcomings of generic tools while mitigating the risks and burdens of in-house development. Given the speed of innovation, partnership ensures organizations can leverage the latest advancements effectively and stay competitive.
Define the shift—Can you clearly explain the difference between a generic LLM and an agentic AI framework? How does the latter solve issues like hallucination and context relevance?
Think of a generic LLM like a highly knowledgeable generalist attempting to perform complex surgery—impressive breadth, but lacking the specialized tools and workflow for the specific task.
An agentic AI framework, which Sorcero utilizes, is more like a coordinated surgical team or a specialized drug development process. It involves multiple specialized ‘agents’—distinct AI components—each expert at a specific part of a complex workflow.
Instead of relying on a single LLM’s limited “one-shot” attempt to find and synthesize information, an agentic framework divides the labor. For example, one agent might specialize in precisely extracting data from diverse sources (like clinical trial reports or publications), another excels at assembling relevant information, a third focuses on generating potential answers, yet another critically evaluates those answers for accuracy and relevance, while a dedicated agent handles rigorous fact-checking and source attribution.
This collaborative approach is ideal for generating complex insights for medical and commercial teams. Agentic AI directly addresses hallucinations and improves context relevance because each step is handled by a purpose-built specialist. Crucially, this framework inherently supports clear source attribution and auditability, ensuring the final intelligence delivered is reliable, credible, and transparent.
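The division of labor Das describes can be sketched as a minimal pipeline. This is an illustrative toy, not Sorcero's implementation: the agent functions and their simple string-matching logic are hypothetical placeholders for what would be LLM-backed components in a real system.

```python
from dataclasses import dataclass, field

@dataclass
class Insight:
    """A candidate answer with its supporting sources attached."""
    text: str
    sources: list = field(default_factory=list)
    verified: bool = False

def extract(documents):
    # Extraction agent: pull candidate facts from raw documents.
    return [d.strip() for d in documents if d.strip()]

def assemble(facts, query):
    # Assembly agent: keep only facts relevant to the query.
    return [f for f in facts if query.lower() in f.lower()]

def generate(relevant, query):
    # Generation agent: draft an answer from the assembled evidence,
    # keeping a pointer to every source it used.
    return Insight(text=f"{query}: " + "; ".join(relevant), sources=relevant)

def verify(insight, corpus):
    # Fact-checking agent: every cited source must exist in the corpus,
    # so the pipeline cannot emit a fabricated citation.
    insight.verified = all(s in corpus for s in insight.sources)
    return insight

def run_pipeline(documents, query):
    facts = extract(documents)
    insight = generate(assemble(facts, query), query)
    return verify(insight, facts)
```

The point of the structure, as in the interview, is that verification is a separate step with veto power: an answer whose sources cannot be traced back to the corpus is flagged rather than passed along.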
Real-world application—Could you walk us through a real-world use case where agentic AI delivered measurable impact for a medical affairs team?
Let me share an example from one of our partners, a top 30 biopharma’s medical affairs team focused on a challenging rare disease landscape. They were struggling to synthesize a complete picture, with critical medical intelligence fragmented across congress summaries, CRM field notes, medical inquiries, and publications. Assembling insights relied on laborious manual processes, resulting in quarterly reports that often reached leadership weeks or even months late, delaying strategic decisions vital for patient outcomes in this specific therapeutic area.
By implementing Sorcero’s platform, which leverages our agentic AI framework, the team unified these disparate data sources. Our system analyzed emerging trends in near real-time across their entire medical affairs ecosystem. Medically tuned AI, combined with comprehensive dashboards, allowed the team to rapidly surface previously hidden insights, identifying key developments from publications, tracking evolving KOL perspectives, and understanding the impact of their engagement tactics much more quickly.
The result? The medical affairs team could define more precise strategies, significantly enhance the quality and timeliness of their KOL engagement, and allocate resources far more effectively. In a rare disease space, where timely, comprehensive information is critical due to small patient populations and complex pathways, this shift from delayed reporting to near real-time, actionable intelligence was transformative. They moved from reactive reporting to proactive strategic planning.
Data overload vs. data insight – Life sciences teams often have access to huge amounts of data. Is the issue lack of access or lack of synthesis? How does agentic AI address the ‘signal vs. noise’ challenge?
This is a key challenge: the issue is rarely a lack of data, but often a lack of effective synthesis and insight generation. Life sciences organizations are swimming in data – clinical trials, publications, real-world evidence, CRM notes, medical inquiries, congress abstracts – generated across numerous siloed systems. The struggle lies in extracting meaningful, actionable intelligence from this vast, often disconnected, sea of information.
This is precisely where agentic AI excels at addressing the ‘signal vs. noise’ problem. Our platform connects to these diverse, enterprise-wide data sources. Then, leveraging specialized AI agents, it doesn’t just consolidate information, it actively analyzes and synthesizes it. These agents can uncover subtle connections and patterns across disparate datasets that would be incredibly time-consuming, if not impossible, for humans to spot manually.
This includes unlocking critical insights previously trapped in unstructured formats like text within images, complex tables in PDFs, presentation slides, or free-text field notes in CRM systems. By intelligently connecting, processing, and validating information from all these sources, agentic AI filters out the noise, identifies the crucial signals, and delivers timely, actionable intelligence tailored to the needs of life sciences teams. It transforms data overload into a strategic advantage.
Trust and transparency – AI systems in healthcare need to be explainable. How do you ensure that the insights produced by agentic frameworks are traceable and verifiable by human experts?
Trust isn’t optional in healthcare; it’s absolutely foundational. When any decision impacting patient care or strategic direction relies on AI-generated insights, professionals need absolute confidence in its accuracy and origin.
At Sorcero, building and maintaining that trust is paramount. Ensuring explainability and validating outputs requires a multi-layered approach combining adherence to regulatory standards like GxP, AI finely tuned for medical use cases, and robust technical safeguards. Techniques like Retrieval Augmented Generation (RAG) are fundamental for grounding AI outputs in verifiable external knowledge sources, improving precision.
However, our agentic framework goes further. Multiple specialized AI agents collaborate: they don’t just retrieve information; they actively validate sources, verify generated answers against the retrieved evidence, rank source credibility, and cross-reference findings. This significantly enhances accuracy compared to a monolithic LLM system.
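As a rough illustration of the grounding idea behind RAG, the sketch below retrieves the passages that best match a query and builds an answer only from those passages, attaching an index back to each source so a human reviewer can check the evidence. The term-overlap scoring is a hypothetical stand-in for the embedding-based retrieval a production system would use, and the function names are invented for this example.

```python
def retrieve(query, knowledge_base, k=2):
    # Retrieval step: score each passage by naive term overlap with the
    # query and keep the top k passages that share at least one term.
    query_terms = set(query.lower().split())
    scored = [
        (len(query_terms & set(doc.lower().split())), doc)
        for doc in knowledge_base
    ]
    scored = [(score, doc) for score, doc in scored if score > 0]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for _, doc in scored[:k]]

def answer_with_citations(query, knowledge_base):
    # Grounded generation: the answer may only restate retrieved passages,
    # and each one carries an index back to its source document.
    passages = retrieve(query, knowledge_base)
    if not passages:
        # Refusing to answer beats fabricating an unsupported claim.
        return {"answer": "No supporting evidence found.", "citations": []}
    return {
        "answer": " ".join(passages),
        "citations": [knowledge_base.index(p) for p in passages],
    }
```

The design choice worth noting is the empty-evidence branch: when retrieval finds nothing, the system declines rather than generates, which is the behavioral difference between a grounded pipeline and an ungrounded LLM.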
Critically, we design for human oversight. Human experts remain essential for final validation. The AI must not only be accurate but also fully transparent – every insight produced is traceable back to its source, allowing professionals to verify the evidence themselves. This is vital in a regulated, high-stakes environment like life sciences, where explainability isn’t a luxury – it’s a requirement.