Yseop on FDA’s AI push: Why structured, explainable content is the future of regulatory submissions

As the FDA accelerates its use of generative AI in regulatory reviews, life sciences companies are under pressure to modernize how they prepare and structure submissions.

In this Q&A, Yseop’s Pierre-Louis Durel, VP of corporate partnerships, outlines how the company’s automation platform is designed to meet emerging regulatory expectations, enabling faster, more consistent, and audit-ready documentation.

What is your reaction to the FDA’s announcement about using generative AI in scientific reviews?

We view the FDA’s announcement as a significant validation of the trajectory we’ve been pursuing: a future where regulatory content is not only compliant and clear but also optimized for AI-assisted review. This development underscores the importance of creating structured, machine-readable documents that facilitate efficient and accurate analysis by both human reviewers and AI systems.

How significant is this milestone for the broader adoption of AI in life sciences and regulatory processes?

The FDA’s aggressive rollout of AI across all departments marks a pivotal shift in regulatory operations. By integrating generative AI into the review process, the FDA is setting a precedent that will likely accelerate the adoption of AI technologies across the life sciences industry. This move signals to pharmaceutical companies the necessity of adapting their documentation practices to align with AI-driven review processes, thereby enhancing efficiency and consistency in regulatory submissions.

How do Yseop’s AI/NLG solutions align with the type of automation the FDA is piloting?

The company has long championed the automation of regulatory writing through structured, scalable approaches. Our technology automates the creation of critical documents such as Clinical Study Reports (CSRs), summary documents, and narratives. Yseop Copilot, our flagship solution, ensures that generated content is consistent, traceable, and aligned with both human and machine readability standards. As regulatory bodies like the FDA embrace AI, the demand for such structured and compliant documentation becomes increasingly vital.

Do you see opportunities for the company’s technology to support or complement regulatory review processes, particularly around narrative generation or data interpretation?

Absolutely. With the FDA mandating the integration of generative AI across all centers by mid-2025, there is a pressing need for life science companies to adapt their processes accordingly. Yseop is well-positioned to assist organizations in generating AI-ready content that aligns with the new FDA review model, ensuring that submissions are both efficient and compliant.

Given your experience in automating clinical and regulatory documentation, how ready is the industry for large-scale AI integration like this?

While the industry has made strides in adopting AI technologies, readiness varies among organizations. Our experience indicates that companies that have invested in structured data and document automation are better prepared for this transition. However, widespread readiness will require concerted efforts in training, process reengineering, and technology adoption to fully leverage AI’s potential in regulatory processes.

What challenges do you foresee in implementing generative AI within regulatory frameworks—particularly around transparency, traceability, and auditability?

Implementing generative AI in regulatory contexts necessitates a focus on:

  • Transparency: Ensuring that AI-generated content is understandable and interpretable by human reviewers.

  • Traceability: Maintaining clear links between source data and generated narratives to facilitate verification.

  • Auditability: Providing comprehensive records of AI processes and outputs to support compliance audits.

The company addresses these challenges by designing solutions that produce structured, self-contained documents with clear data lineage, aligning with regulatory expectations.

How do you ensure that AI-generated content meets compliance and quality standards required by agencies like the FDA or EMA?

The company’s solutions are built with compliance at their core. We incorporate regulatory guidelines into our AI models, ensuring that generated content adheres to required standards. Our systems also include validation checks and quality controls to maintain the integrity and accuracy of the documentation, thereby meeting the stringent requirements of agencies like the FDA and EMA.

Has Yseop had any discussions or collaborations with regulatory bodies or agencies regarding the use of AI in document preparation or review?

While we cannot disclose specific collaborations, we actively engage with regulatory bodies and industry stakeholders to align our solutions with evolving regulatory requirements and to contribute to the development of best practices in AI-assisted regulatory documentation.

Do you anticipate an increase in demand for explainable and validated AI tools as agencies like the FDA move forward with adoption?

Yes, the FDA’s adoption of AI underscores the necessity for explainable and validated AI tools. As regulatory agencies integrate AI into their processes, the demand for transparent, reliable, and compliant AI solutions will grow. Yseop is committed to providing tools that meet these criteria, supporting the industry’s transition to AI-enhanced regulatory practices.

How might this shift by the FDA influence the uptake of AI-driven solutions across pharma and biotech clients?

The FDA’s initiative serves as a catalyst for the broader adoption of AI in the pharmaceutical and biotech sectors. Companies will recognize the need to modernize their documentation processes to align with AI-driven regulatory reviews. This shift will likely accelerate investment in AI technologies that enhance efficiency, compliance, and the overall quality of regulatory submissions.

How does your platform ensure data security and control when deployed in regulated environments?

The company prioritizes data security and control by implementing robust security measures, including encryption, access controls, and compliance with industry standards such as ISO 27001. Our platform is designed to operate within the stringent data protection requirements of regulated environments, ensuring that sensitive information is safeguarded throughout the document generation process.

What differentiates your approach to generative AI from more generalized tools like ChatGPT when it comes to scientific or regulatory use cases?

Our approach combines symbolic AI, machine learning, and large language models to deliver accurate and compliant narratives tailored for the life sciences industry. Unlike generalized tools like ChatGPT, which may produce probabilistic outputs with varying accuracy, Yseop’s solutions are designed for precision, traceability, and adherence to regulatory standards, making them suitable for high-stakes applications in scientific and regulatory contexts.
