In many nonclinical workflows, SEND appears near the end of the process.

The study has been designed and executed. Data has been collected, reviewed, analysed, and reported. Then the data is converted to SEND, quality-controlled, prepared for submission, and included in the regulatory package.

After that, the SEND datasets are often archived.

For regulatory delivery, this is a rational workflow. The deliverable has been created, and the requirement has been met. But from a data management perspective, this is where value is often left unused.

SEND may be one of the most structured representations of nonclinical study data. But if it is stored as files outside the active research data environment, it becomes difficult to query, compare, and reuse.

SEND solves part of the structure problem

Nonclinical data are difficult to reuse across studies because they often differ in terminology, collection patterns, result structures, and the way subjects, samples, treatments, and observations are represented.

SEND addresses part of this by introducing a common structure.

It defines domains such as DM (demographics), LB (laboratory test results), and EX (exposure). It uses controlled terminology and provides a consistent way to represent subjects, treatments, findings, timing, and relationships across datasets.

That does not mean SEND solves every data challenge. But it does create a structured version of nonclinical study data that is easier to reuse than many of the source formats it was derived from.
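As a rough sketch of what that common structure looks like, consider two domains for a single subject. The variable names (USUBJID, LBTESTCD, and so on) follow SEND conventions; the study IDs and values are invented for illustration:

```python
import pandas as pd

# Demographics (DM): one row per subject. STUDYID, USUBJID, SEX, and
# ARMCD are standard SEND variables; the values here are invented.
dm = pd.DataFrame({
    "STUDYID": ["STUDY01"],
    "USUBJID": ["STUDY01-001"],
    "SEX": ["M"],
    "ARMCD": ["CTRL"],
})

# Laboratory results (LB): one row per measurement, keyed back to the
# subject via USUBJID and coded with controlled terminology (LBTESTCD).
lb = pd.DataFrame({
    "STUDYID": ["STUDY01", "STUDY01"],
    "USUBJID": ["STUDY01-001", "STUDY01-001"],
    "LBTESTCD": ["ALT", "AST"],
    "LBSTRESN": [42.0, 55.0],
    "LBSTRESU": ["U/L", "U/L"],
})

# The shared subject key is what lets domains be joined: every lab
# result can be linked back to the subject's demographics and treatment.
merged = lb.merge(dm, on=["STUDYID", "USUBJID"])
```

The shared keys, not the file format, are what carry the relationships between subjects, treatments, and findings.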

Standardised files are not the same as usable data

A dataset can be technically standardised and still be difficult to use.

A team may want to compare a lab parameter across several historical studies. In principle, SEND should help because the observations follow a common domain structure and terminology.

But if the datasets are stored as files in a GxP archive, the question quickly becomes operational:

  • Where are the datasets?

  • Which studies are included?

  • Can they be queried across studies?

  • Are subject, treatment, timing, exposure, and result relationships preserved in a usable way?

  • Or does each study need to be extracted, loaded, interpreted, and scripted separately?

This is the difference between having structured files and having usable data.

If SEND sits outside the research data environment, reuse often starts with manual extraction, custom scripts, and study-specific interpretation.
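When the datasets are accessible and structurally consistent, the cross-study comparison above reduces to a concatenation and a filter. A minimal sketch, assuming in-memory frames stand in for the per-study LB datasets (in practice each would be read from a SAS transport file, e.g. with pandas.read_sas; study IDs and values are invented):

```python
import pandas as pd

# Hypothetical helper: build one study's LB domain. In practice this
# would come from the study's lb.xpt file rather than constructed data.
def make_lb(studyid, alt_values):
    return pd.DataFrame({
        "STUDYID": studyid,
        "USUBJID": [f"{studyid}-{i:03d}" for i in range(len(alt_values))],
        "LBTESTCD": "ALT",
        "LBSTRESN": alt_values,
    })

studies = [make_lb("STUDY01", [40.0, 48.0]), make_lb("STUDY02", [52.0])]

# Because every study follows the same domain structure and terminology,
# one concatenation and one filter replace per-study extraction scripts.
all_lb = pd.concat(studies, ignore_index=True)
alt = all_lb[all_lb["LBTESTCD"] == "ALT"]
summary = alt.groupby("STUDYID")["LBSTRESN"].mean()
```

The same pattern without a shared structure means one bespoke loading and mapping step per study, which is where cross-study reuse usually stalls.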

The value increases across studies

The long-term value of SEND is not only in individual studies, but also in the datasets that accumulate over time. While one SEND dataset can support a submission, a collection of SEND datasets can become a research data asset.

Because SEND applies a consistent structure across studies, it makes it possible to do a number of things that are difficult when each study is treated as an isolated deliverable:

  • Comparing findings across related studies.

  • Exploring repeated patterns in historical control animals.

  • Exploring how exposure, dosing, lab findings, and outcomes relate across a broader set of studies.

  • Determining whether historical data could support study planning or approaches such as virtual control groups.

But to make SEND useful beyond submission, it has to be stored in a way that preserves its structure.

The storage layer should support queries across domains such as DM, LB, EX, and related datasets. It should preserve links between subjects, samples, dosing, observations, timing, and results. And it should make the data accessible beyond the group that created or submitted it.
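One way to picture such a storage layer is SEND domains loaded into relational tables, so that cross-domain, cross-study questions become single queries. A minimal sketch using SQLite (table and column names mirror the SEND domains; the data are invented):

```python
import sqlite3

# Load DM and LB into relational tables so domains can be joined.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE dm (STUDYID TEXT, USUBJID TEXT, ARMCD TEXT);
    CREATE TABLE lb (STUDYID TEXT, USUBJID TEXT, LBTESTCD TEXT, LBSTRESN REAL);
""")
con.executemany("INSERT INTO dm VALUES (?, ?, ?)", [
    ("STUDY01", "STUDY01-001", "CTRL"),
    ("STUDY02", "STUDY02-001", "CTRL"),
    ("STUDY02", "STUDY02-002", "HIGH"),
])
con.executemany("INSERT INTO lb VALUES (?, ?, ?, ?)", [
    ("STUDY01", "STUDY01-001", "ALT", 41.0),
    ("STUDY02", "STUDY02-001", "ALT", 47.0),
    ("STUDY02", "STUDY02-002", "ALT", 88.0),
])

# One query pulls a lab parameter for control animals across all
# studies, with the subject-to-treatment link preserved by the join.
rows = con.execute("""
    SELECT lb.STUDYID, lb.LBSTRESN
    FROM lb
    JOIN dm ON lb.STUDYID = dm.STUDYID AND lb.USUBJID = dm.USUBJID
    WHERE dm.ARMCD = 'CTRL' AND lb.LBTESTCD = 'ALT'
""").fetchall()
```

This is exactly the kind of question, historical control values for one parameter across studies, that is cheap against a queryable store and expensive against a folder of archived files.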

SEND should not be the end of the data journey

The practical shift is to treat SEND not only as a regulatory package, but as a reusable data structure for nonclinical research.

The question is not only whether an organisation can create valid SEND datasets. It is whether those datasets remain findable, accessible, queryable, and connected after submission.