Rethinking How Life Sciences Organizations Approach AI – Mathias Cousin of Deloitte
This interview analysis is sponsored by Deloitte and was written, edited, and published in alignment with our Emerj sponsored content guidelines. Learn more about our thought leadership and content creation services on our Emerj Media Services page.
Life sciences organizations stand at a paradoxical moment in AI adoption. The technology’s potential is visible in discovery, manufacturing, and commercial operations — yet operational value, as is the case across industries, is somewhat harder to discern.
Deloitte’s 2025 R&D ROI report, “Be brave, be bold: Measuring the return from pharmaceutical innovation,” frames the biotech growth challenge starkly: forecast cohort IRR rose to 5.9% in 2024, yet the average cost to take an asset from discovery to launch is $2.23B, while average forecast peak sales are $510M per pipeline asset ($370M excluding GLP-1s).
That gap is where generative AI and agentic AI can create leverage by compressing high-friction knowledge work (scientific synthesis, trial and regulatory drafting, insights from clinical/RWD), and coordinating multi-step workflows enterprise-wide — connecting R&D, clinical ops, regulatory/quality, manufacturing, and commercial — so decisions, documentation, and execution move faster with the right governance and human oversight.
So while algorithms can generate billions of molecular candidates or automate documentation, few life sciences enterprises have built systems to integrate AI capabilities into the highly regulated, data-fragmented environments they operate in.
This article, drawn from an Emerj ‘AI in Business’ podcast interview between Deloitte Managing Director Mathias Cousin and Emerj Editorial Director Matthew DeMello, reframes the gap between hype and deployment as an opportunity to redesign how life sciences organizations create value. Cousin’s perspective shifts the question from “Where can we apply AI?” to “How should we structure adoption to deliver measurable outcomes?”
The discussion distills two key topics for pharma leaders navigating the AI deployment challenge:
- Prioritizing a “string-of-pearls” over point solutions: Connecting discrete use cases into an end-to-end “string-of-pearls” program to reimagine the way work is done and deliver transformative value.
- Building for adoption, not expectation: Focus where data quality, business priorities, and time-to-impact align, and empower AI-native teams to drive change.
Guest: Mathias Cousin, Managing Director, Deloitte
Expertise: Hypergrowth Biotech and Medtech, Next Generation Therapies, Engineered Biology
Brief Recognition: Cousin leads Deloitte’s Life Sciences Hypergrowth Biotech and Medtech practice in New England. With over 12 years at the firm, he focuses on nurturing biotech ventures from inception through launch and scale.
Prioritizing “String-of-Pearls” Over Point Solutions
Cousin’s central recommendation is to move beyond narrow, isolated use cases and instead architect “string-of-pearls” programs: sequences of connected use cases that collectively reimagine a core process, thereby delivering significant value in terms of efficiency and productivity.
Instead of piloting a single model in a single task, leaders map a high-value process — such as parts of clinical development, safety, pharmacovigilance workflows, or internal service functions like HR contact centers — and identify a small number of tightly linked interventions that together change outcomes that matter.
Point solutions are “narrow in definition” and often technically feasible, but they limit impact. A string-of-pearls approach aims “to bring value forward in a much more compelling fashion,” shifting from scattered pilots to a coherent operating mechanism that alters cycle time, error rates, or throughput of the process as a whole.
Cousin advises that executives seeking to drive effectiveness in the deployment of AI should first ask:
- What is the process we are redesigning, not just the task? How do we drive not just efficiency, but also creative ideation (e.g., positive hallucinations)?
- Which few use cases, staged in a deliberate order, can compound value?
- Where do we still need humans to do the “heavy lifting,” versus where can AI safely automate or assist?
- What are the implications of what we will find, and will we be able to leverage the additional efficiency effectively? In other words, is the real world able to adopt what the virtual world can produce?
This approach demands a more mature discussion of appropriate use. In HR or contact-center contexts, for example, organizations should decide up front where self-service agents make sense and where human judgment is required. The objective is not replacing people wholesale but building a trust-preserving division of labor that improves service, speed, and consistency. As a result, the entire operating model of biotechs needs to be reviewed, with AI agents becoming new ‘team members’ and helping break down traditional silos.
Building for Adoption, Not Expectation
Cousin is direct about the slowdown in enterprise enthusiasm: implementation is hard, and value is not automatic. He advises leaders to concentrate their efforts where three factors overlap: data readiness, business priority, and time-to-impact. Once that plan is in place, he continues, they should empower AI-native talent and governance to carry out the change.
“Let’s face it, it’s actually difficult to implement those tools right. You need to have your data well-organized. You need to have, you know, access to the right models. You need to set up your infrastructure in order to do that. You need to have talent in order to make it happen for you.”
– Mathias Cousin, Managing Director at Deloitte
Focusing on where the conditions are right avoids the pitfalls of valueless use cases, Cousin argues. Avoid the “everywhere at once” instinct; instead, use a short evaluation to pick applications that are worth scaling:
- Value potential: Which outcome—cycle time, right-first-time, cost-to-serve, or revenue—will move in a way executives can recognize?
- Strategic differentiation: Does success change your competitive position, rather than just delivering a localized efficiency?
- Data readiness: Are inputs accurate, accessible, and contextualized?
- Scalability: Do you have the people who will adopt it, the infrastructure to run it, and a plan to expand it?
- Time to impact: Can you produce learning and value inside an acceptable window?
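The five criteria above lend themselves to a lightweight scorecard. As a purely illustrative sketch (the weights, ratings, and example use cases below are assumptions for demonstration, not Deloitte’s methodology), a team could rank candidate applications like this:

```python
# Hypothetical use-case scorecard illustrating the five evaluation criteria.
# Weights, ratings, and use-case names are invented for this sketch.
CRITERIA = ["value_potential", "strategic_differentiation",
            "data_readiness", "scalability", "time_to_impact"]

def score(use_case: dict, weights: dict) -> float:
    """Weighted average of 1-5 ratings across the five criteria."""
    total = sum(weights[c] * use_case[c] for c in CRITERIA)
    return total / sum(weights.values())

# Equal weights for the sketch; a real program would tune these per function.
weights = {c: 1.0 for c in CRITERIA}

candidates = {
    "pharmacovigilance case intake": dict(zip(CRITERIA, [5, 4, 4, 3, 4])),
    "HR contact-center self-service": dict(zip(CRITERIA, [3, 2, 5, 4, 5])),
}

# Rank candidates from highest to lowest composite score.
ranked = sorted(candidates, key=lambda n: score(candidates[n], weights),
                reverse=True)
print(ranked)
```

The point of such a scorecard is not precision but forcing an explicit, comparable conversation across the five dimensions before committing resources to scale.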
Cousin is explicit: the programs that matter most “require much more organizational and leadership commitment.” In other words, easy wins are fine, but they are not the point.
Cousin goes on to distinguish between the expertise of foundational AI researchers and AI-native operators. Most enterprises do not need a lab of world-renowned scientists to scale value, he argues. They need practitioners who have used these tools, understand what models can and cannot do, can prototype safely inside line functions, and have a firm grasp of those functions. Operationally, that means:
- Embedding AI-native product owners inside functions like R&D, manufacturing, and commercial.
- Pairing them with process owners and giving them clear outcome mandates.
- Encouraging hands-on experimentation with guardrails. “If you’ve got access to GPT-5,” he says, “go and code something… see what it looks like.”
- Supplementing with consultants or contractors where headcount is constrained, without losing proximity to the work.
Governance of this process isn’t simply a review board at the end of a deployment cycle, Cousin warns — it is the system by which the organization learns safely and decides where to expand automation. He goes on to describe practices for approaching governance, starting with keeping a human in the loop.
To that end, Cousin makes the case that employees must:
- Assess early results to decide which intents or steps can graduate to automation or broader rollout.
- Adopt high-quality instruments, enabling teams to monitor leading indicators that predict movement in outcomes.
- Tailor thresholds for success (or failure) by function to more effectively validate outcomes. For example, Research can tolerate exploration; GMP manufacturing cannot; Commercial expects quarterly impact.
Cousin’s final insight for pharma leaders is to be wary of a single, uniform adoption plan across the enterprise; one plan cannot serve such different cadences. Leaders, he argues, should tune goals, guardrails, and communications to each function’s incentives and risks in order to deliver AI use cases that are timely, effective, and impactful.
