From outputs to outcomes: Saama’s Derek Lawrence on why clinical data management must change
Clinical data management is under pressure to evolve as trials become more complex, data sources multiply and AI-driven tools promise faster, more adaptive decision-making. In this Q&A with Discover Pharma, Derek Lawrence, Strategy & Transformation Lead for Saama, speaks to Liza Laws about why change in CDM needs to be deliberate but assertive, where legacy processes are holding the industry back, and what a truly fit-for-purpose data management function should look like.
Liza: I was intrigued by how you describe yourself on LinkedIn as an aggressive advocate for change in the CDM space. Can you explain what you mean by that?
Derek: I think the aggressive part is really a necessity. Change is inherently difficult, and that’s been a constant throughout my career. What we’re seeing now, particularly with AI, is a more intense version of something that’s always existed. There’s a lot of discussion about how most AI projects fail to show ROI, and I don’t think that’s because the technology itself is weak. I think it’s because we underestimate how fragile and interconnected our processes are.
I started my career at a very flat contract research organization (CRO), where you were encouraged to try lots of things. I worked across SAS and open-source programming environments, electronic data capture (EDC) development, data management leadership, data analytics and visualisation, as well as some biostatistics. That exposure showed me how much of our industry operates like a house of cards. If you change one thing in data management, it can ripple into clinical operations, statistical programming and biostatistics very quickly.
In CROs especially, where utilisation and margins matter, any disruption is seen as lost money. If a change makes a process even 20 or 25% less efficient in the short term, it’s often dead on arrival. That environment teaches you to be very clear about the problem you’re solving and the value proposition. You have to prototype quickly, pressure-test ideas early and prove they work before anyone will support them. That’s where the aggressiveness comes in: quietly pushing ideas forward, testing them hard, and then being persuasive when it’s time to get buy-in.
Liza: Do you think that resistance to change is specific to pharma, or is it more of a human issue that a heavily regulated environment just amplifies?
Derek: I think it’s fundamentally human. Pharma just turns the volume up because of regulation. Most people take pride in their mastery of a role or a process. It feels good to know what you’re doing. When you change things, that sense of mastery disappears, sometimes overnight, and nobody likes feeling like a newcomer at something they’ve been good at for years.
People who enjoy that kind of instability are usually drawn to chaotic environments, and I’m probably one of them. But that also means you have to be disciplined. You can’t chase every interesting idea. You have to ask what you can realistically achieve in six months, in a year, in two years. If you don’t, you create chaos for everyone else and undermine the very change you’re trying to make.
Liza: You’ve spoken before about the value of hybrid, agile skilled teams. What does that look like in practice within clinical ops or data teams?
Derek: Historically, the industry has divided people into “technical” and “non-technical.” The problem is that technical teams often underestimate process complexity, while operations teams are rightly protective of workflows they know inside out. That’s where friction starts.
When you bring those perspectives together properly, it’s incredibly powerful. Ops teams understand the real-world constraints and failure points. Technical teams understand what might be possible. Change-resistant people are actually a gift, because they force you to slow down and explain exactly how something will work. If you can convince them, your idea is probably solid. If you can’t, that tells you something too.
Agile, for me, really means trying things early with wireframes or very rough prototypes. I’ve had plenty of pilots that failed before they even reached a full test phase. That’s fine. It’s far better to discover a showstopper during ideation than after a global rollout. You need to see something working, even in a rough, imperfect form, before you can judge whether it’s worth scaling.
Liza: How has your background across data, systems and programming shaped the way you think about clinical development today?
Derek: I’ve had the unique experience of being angry with different versions of myself, having worn multiple hats on the same studies. As a programmer, I’ve suffered because of bad EDC design decisions I made earlier in a project. As a team lead, I’ve been frustrated by over-engineered solutions I suggested that slowed everything down.
These experiences have taught me how critical process design really is. I also worked closely with a fantastic change management consultant who drilled home an important point: you never want to depend on the genius you get. If a process only works because one person has a rare mix of skills, it doesn’t scale and it concentrates risk.
I learned this the hard way quite a few years ago, when I created and led an initiative at a CRO to train a group of data managers in SAS programming to improve their day-to-day efficiency. The training itself went fine, and everyone was enthusiastic and motivated, but actual adoption was near zero in the wild. The blocker turned out to be that we’d given the DMs the introduction and the skills, but we hadn’t created any space in their standard daily tasks for them to use what they’d learned. The initiative failed because I was focused on the skills and not the process. That made me hyper-focused on process as the foundation for any meaningful change.
Liza: The role of data science and automation has expanded rapidly. How far do you think the industry has really come, and where does it still fall short?
Derek: We’re limited by access to data. Other industries can aggregate massive datasets. In pharma, data is fragmented and proprietary. CROs theoretically sit on huge volumes of data, but they can’t combine it across sponsors. That makes it hard to use higher-powered AI and machine learning approaches unless you’re a large pharma company with deep historical datasets.
On top of that, we have a serious knowledge fragmentation problem. We’re a siloed industry, and even within data management there are multiple sub-specialties that don’t always speak the same language. You can solve a problem for one group and accidentally create a bigger one downstream. Without shared understanding, it’s very hard to define what problem you’re actually solving.
Liza: Modern trials involve data from EHRs, wearables, sensors and eCOA. What’s the biggest challenge in harmonising and analysing all of this efficiently?
Derek: There’s a technical challenge around volume, variety and velocity, but the deeper issue is cultural. Our entire data review model assumes there’s always a source document you can go back to. With sensors and patient-reported data, that’s often not true. If data isn’t captured correctly the first time, it’s gone.
Despite that, we still rely heavily on reactive querying. We chase every missing value even when it has no impact on analysis. What we should be doing instead is defining thresholds for use. If a statistician only needs four non-consecutive days of data per week for an endpoint related to an eCOA measure, that should drive the cleaning strategy. If they don’t care about a specific missing value, neither should we.
This requires a shift from being reactive to being proactive, and from focusing on individual data points to focusing on whether the dataset is actually usable for analysis.
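The threshold-driven cleaning strategy Derek describes can be sketched in a few lines of Python. This is a hypothetical illustration, not Saama's implementation: the function name and data shape are invented, and the four-days-per-week rule is borrowed from the eCOA example above. The idea is to flag only the weeks that fail the analysis threshold, rather than querying every individual missing day.

```python
from datetime import date, timedelta
from collections import defaultdict

# Hypothetical threshold taken from the eCOA example in the text:
# the statistician needs at least 4 (non-consecutive) days of data per week.
MIN_DAYS_PER_WEEK = 4

def weeks_below_threshold(observed_dates, min_days=MIN_DAYS_PER_WEEK):
    """Group observed eCOA dates into ISO weeks and return the weeks
    whose day count falls below the analysis threshold."""
    days_per_week = defaultdict(set)
    for d in observed_dates:
        iso_year, iso_week, _ = d.isocalendar()
        days_per_week[(iso_year, iso_week)].add(d)
    return sorted(week for week, days in days_per_week.items()
                  if len(days) < min_days)

# Usage: a subject with 5 recorded days in one week and only 2 in the next.
start = date(2024, 1, 1)  # a Monday, so each ISO week runs Mon-Sun
obs = [start + timedelta(days=i) for i in [0, 1, 2, 4, 5, 7, 9]]
print(weeks_below_threshold(obs))  # only the second week is flagged
```

Under this model, the first week generates no query despite two missing days, because the dataset is still fit for the endpoint analysis; only the second week warrants follow-up.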
Liza: How can sponsors move from static data review to more dynamic, real-time decision-making?
Derek: Step one is getting all the data into one place as quickly as possible. Traditional models where data is reviewed every 30 days simply don’t work for short or fast-moving studies. By the time you detect a problem, it may be too late to fix it.
Speed is the biggest differentiator. If you can ingest, normalise and review data quickly, you can intervene while a study is still salvageable. Without that, you’re always reacting after the damage is done.
Liza: AI is everywhere at conferences right now. What are the realistic applications in CDM today, and what’s still hype?
Derek: There are lots of potential applications, but we’re probably ready to exploit only a small fraction of them. Legacy processes don’t support many of the efficiencies AI can offer. Tools like natural language query builders raise difficult questions about validation, ownership and responsibility.
Historically, a human programmer didn’t just write code; they sanity-checked both the details and the overall approach, and refined it through dialogue with whoever made the request. If end users generate their own outputs, who validates that they’re correct? Are we disrupting established SOPs without replacing them with something equally robust and governable?
AI is incredibly powerful, but without rethinking skills and processes, we risk creating confusion rather than efficiency.
Liza: Finally, if you could redesign the data management function from scratch today, what would it look like?
Derek: I’d centre it on one principle: delivering fit-for-purpose data for the endpoint analyses. Data doesn’t need to be perfect and error-free down to the individual data point, but it does need to be sufficiently robust to answer the research questions posed in the protocol and Statistical Analysis Plan. The real question is whether the conclusions of the statistical analyses would remain the same without further cleaning beyond a certain point.
Risk-based data management only works if teams understand what truly matters for analysis. That means closer alignment with biostatistics and a cultural shift away from querying everything by default. If we did that well, confidence should increase as we approach database lock, not decrease.
That, for me, is the future of CDM: focused on outcomes, not outputs, and confident that the data supports the scientific question at hand.