Veeva’s Sebastian Wurst on building the data foundation for enterprise AI
The promise of AI in commercial biopharma is immense, with organizations anticipating transformative impacts on everything from sales uplift to operational efficiency. However, a significant gap has emerged between ambition and execution. While AI pilots deliver encouraging results, a widespread failure to scale these initiatives enterprise-wide is preventing companies from realizing their full potential.
To unpack this critical industry challenge, we sat down with Sebastian Wurst (SW), Director of Strategy, OpenData, at Veeva Europe, to discuss the findings from Veeva’s recent survey, The State of Data, Analytics, and AI in Commercial Biopharma, and to explore the path forward.
The industry has high hopes for AI, but your report reveals a significant execution gap. Can you briefly summarize the core challenge preventing companies from scaling their AI initiatives?
SW: You’re right, the ambition is massive. 86% of leaders expect a 5%+ sales uplift from AI, for example. Yet the reality is that most initiatives fail to scale. The core challenge isn’t the AI technology itself; it’s the data foundation it relies on. A staggering 96% of leaders believe their data isn’t structured or ready for scaling AI, and 67% have had to abandon an AI project entirely due to bad data. This failure stems from three root causes: poor data quality, data inconsistencies, and data fragmentation. This creates a paradox where promising technology is constantly being undermined by foundational data issues.
Given that most leaders agree that Generative AI isn’t a silver bullet, what are the two primary strategic paths organizations are taking to build a better data foundation?
SW: Leaders are now looking beyond pure technology and focusing on foundational strategy. Two distinct paths have emerged.
The first is the traditional, in-house governance approach: building a proprietary data standard from the bottom up, where the company owns and maintains its own data standard. This offers maximum control, but it comes with a significant “operational tax” of constant data cleansing and can lead to diminishing returns on investment.
The second, emerging path is data standardization: adopting a pre-harmonized, top-down data model. It’s cheaper and more efficient for shared, widely used data, but it offers less control and does not deliver the same synergies for purely proprietary data. In our survey, 76% of leaders see this as the #1 enabler of AI at scale.
So leaders are facing a strategic dilemma: invest heavily in a high-control, in-house approach or opt for a more efficient, but less customized, industry standard. How do you advise them to navigate that choice?
SW: That is the central question, and personally, I think it doesn’t have to be an either/or choice. The most effective strategy isn’t to pick one path over the other, but to combine them in a balanced approach. The framework is to prioritize standardizing the shared, common data used across the organization: foundational assets like customer master data, HCP and HCO profiles, affiliations, and specialties. This data should be standardized using an industry model, which solves the problem once for everyone, increases efficiency, and frees internal resources from low-value data cleansing tasks. So my advice would be not to reinvent the wheel, especially for foundational, shared data.
Once that foundational layer is standardized, where should a company focus its internal data governance efforts to create a true competitive advantage?
SW: Once you have a trusted, standardized foundation, you can redirect your most valuable resources toward what makes you unique. The focus of internal data governance should be on proprietary data that truly differentiates your commercial strategy. This includes, for example, internal datasets that reflect your specific go-to-market history and insights, which feed into company-specific analytical models and unique segmentation approaches. By governing this layer with precision, you create higher strategic impact. This is where your data teams move from data wrangling to generating the unique insights that drive performance and can’t be replicated by competitors.
Looking forward, what is the ultimate vision for an organization that successfully implements this balanced strategy? What does “success” look like in 3-5 years?
SW: Success means moving from a reactive to a proactive model. An organization with a solid, balanced data foundation will have eliminated the “operational tax” of manual data integration, resulting in a lower total cost of ownership and a simplified architecture. This speed and reliability will allow them to innovate continuously, deploying and scaling new AI tools and analytics across markets in weeks, not months or years. For field teams, it means having tools they actually trust, with recommendations that are genuinely helpful. Ultimately, success is when the organization’s data becomes a durable competitive advantage, enabling smarter, faster commercial execution that consistently outperforms the market.