How Servier Built a Semantic Layer That Makes Pharma Data Work for Scientists

In the race to develop new drugs, pharmaceutical R&D teams face a constant challenge: how to make sense of mountains of complex, siloed data. Whether it’s in vitro results, in vivo studies, or omics datasets, researchers often struggle to connect the dots due to missing context, inconsistent terminology, and outdated systems.

Servier, the leading independent French pharmaceutical company and the second-largest pharmaceutical company in France, governed by a non-profit organization, is flipping this script.

At the Pistoia Alliance Conference in London, Thibaut Snollaerts from Servier and Eelke van der Horst from The Hyve shared how Servier is building a semantic layer that adds powerful context and unifies data across their research pipeline into their TURING Knowledge Platform.

The Problem: Too Much Data, Not Enough Insight

Let's face it: pharma companies generate a staggering amount of data, and that data comes with problems:

  • It's scattered across legacy systems
  • Each domain uses different schemas and vocabularies
  • Researchers often don't know what data exists or how to interpret it
  • Valuable results can't be reused because the context stays with the data generator and isn't shared (e.g., who ran the assays, which compound, which cell line)

The result? Slower discovery cycles, wasted efforts, and limited potential for AI or advanced analytics.

The Solution: A Semantic Layer for Research Data

The answer is the TURING Knowledge Platform, short for Trusted, Useful & Reliable Information you Needed to Gather. Built as a centralized “knowledge marketplace,” it connects data and context through a powerful semantic layer.

Here's how it works:

  • Domain Models: Each research area (in vitro, in vivo, omics) gets a structured model that defines how data should be described and linked.
  • Ontologies: Standard vocabularies ensure everyone speaks the same language: no more guessing what "assay ID" means in one system versus another.
  • RDF-Based Knowledge Graph: All this information is turned into a graph where data and metadata are connected and searchable.

What Context Looks Like in Action

Picture this: you find a result like 5.0E-07. That number alone means nothing. But with the platform, you instantly see:

  • The assay name and assay target
  • The cell line and pH used
  • The compound batch, fitting model, and experimenter
  • The sample info and related data generated from it
  • The associated project, unit, and more

Suddenly, the result isn’t just a number; it’s a story you can trace, validate, and reuse.

Why This Changes Everything

This shift isn't just about organizing data. It's about unlocking real value:

  • Speed: Researchers find what they need faster and with confidence.
  • Collaboration: Shared terminology fosters better communication across teams.
  • AI Readiness: Clean, contextual data fuels predictive models, regulatory report generation, and more.
  • Transversality: Data from multiple domains can be explored together to drive cross-functional insights.

It’s also flexible, cloud-based, and built to scale, making it a future-proof foundation for research and innovation.

Who Made It Happen?

This initiative brought together a dream team spanning Servier and its partners:

  • Research teams (in vitro, in vivo, omics)
  • Data Science & Data Management
  • Digital transformation and governance leaders
  • Ontology and semantic modeling experts from The Hyve (Consultants for FAIR, Semantic modeling, AI, and Knowledge Graphs)

It’s a shining example of what’s possible when technical depth meets real-life research needs.

What’s Next?

The TURING Knowledge Platform is still in its early stages, but it is already starting to pay off. The next steps include:

  • Expanding to include external datasets and public resources
  • Scaling the platform across more data domains
  • Powering generative AI tools that turn raw data into insights (and even draft regulatory content)

In short: making data actually work for the people who need it most.