Biotech does not suffer from a lack of data. It suffers from misreading it.
Every week brings new trial results, press releases, and conference updates. Numbers look promising. Graphs show upward trends. The claims sound confident. Then months later, the same programs fail or disappear.
The issue is not volume. The issue is judgment.
A large share of clinical programs fail despite early positive signals. Only about 10–15% of drugs that enter clinical trials reach approval. Many show encouraging data early, then collapse under larger, more rigorous testing. That gap reflects one core problem: noise is often mistaken for signal.
Dr. Leigh Beveridge, Australia, has spent years reviewing late-stage data where early optimism meets hard reality. His role requires separating what looks good from what holds up.
“I’ve seen datasets where the headline result looked strong,” he says. “Then we traced it back and found the effect came from a small subgroup that wasn’t even part of the original plan. That’s not proof. That’s a prompt to investigate.”
Data Is Easy to Generate. Signal Is Rare.
Modern trials generate layers of data: endpoints, biomarkers, subgroups, timepoints.
More data should improve clarity. It often creates a distraction.
Teams search for patterns. They slice results until something appears significant. A number crosses a threshold. A subgroup responds better. A narrative forms.
This is where errors begin.
“People don’t invent results,” he says. “They prioritize certain ones. That choice shapes the story.”
Signal stays stable across analysis. Noise shifts depending on how you look at it.
Signal Answers a Question. Narrative Fills a Gap.
Strong data resolves uncertainty. Weak data invites interpretation.
A real signal:
- Appears consistently across analyses
- Aligns with known biology
- Persists under scrutiny
- Connects to meaningful outcomes
Weak data behaves differently:
- Depends on selective cuts
- Disappears with small adjustments
- Requires explanation to seem relevant
In one program, a subgroup showed strong response while the overall population did not.
“We asked whether that subgroup was defined before the trial started,” he says. “It wasn’t. That changed the weight we gave it immediately.”
Predefined endpoints carry credibility. Retrospective findings require validation.
Noise Spreads Faster Than Truth
Positive interpretations travel further than cautious ones.
Companies want progress. Investors want upside. Teams want validation.
This creates pressure to highlight the best version of the data.
Research shows that over 30% of published studies emphasize secondary outcomes when primary results fall short. The data remains accurate. The emphasis shifts.
“Everyone wants the program to succeed,” he says. “The discipline is staying anchored to what the data can actually support.”
Good Data Is Quiet and Consistent
Strong data rarely surprises.
It shows steady, repeatable effects. It answers the same question in the same way across different analyses.
It withstands simple questions:
- Does it work?
- How much does it work?
- For whom does it work?
- What are the risks?
In one late-stage review, results showed moderate improvement across all patient groups.
“It wasn’t dramatic,” he says. “But it held up everywhere we looked. That’s what builds confidence.”
Consistency matters more than intensity.
Early Data Often Misleads
Small studies exaggerate effects.
Limited sample sizes create volatility: a few strong responders can dominate the estimate, and the underlying variability stays hidden. Because programs advance on their most promising early readouts, the estimates that move forward tend to be the inflated ones.
As trials scale, estimates stabilize and shrink toward the true effect. Many early signals weaken or disappear.
This is why late-stage failures are common.
“Early data shows what might be possible,” he says. “Late data shows what is actually reliable.”
Understanding that difference prevents overreaction.
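A toy simulation makes the gap concrete. The sketch below is illustrative only: it assumes a drug with a genuine but modest benefit of 0.2 standard deviations, then simulates thousands of two-arm trials at three sizes. Small trials routinely return estimates several times the true effect; large trials almost never do.

```python
# Hypothetical simulation: a real but modest benefit of 0.2 standard
# deviations, estimated across many simulated trials of varying size.
import numpy as np

rng = np.random.default_rng(0)
true_effect = 0.2
trials = 5_000

def effect_estimates(n_per_arm):
    """Return each simulated trial's estimated treatment effect."""
    treated = rng.normal(true_effect, 1, (trials, n_per_arm)).mean(axis=1)
    control = rng.normal(0.0, 1, (trials, n_per_arm)).mean(axis=1)
    return treated - control

for n in (20, 200, 2000):
    est = effect_estimates(n)
    inflated = (est > 0.5).mean()  # estimates at 2.5x the true effect
    print(f"n={n:4d} per arm: spread = {est.std():.2f}, "
          f"{100 * inflated:.1f}% of trials look 2.5x better than truth")
```

The dramatic early readouts are exactly the ones most likely to shrink on repetition.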
How to Recognize Strong Data
Clear evaluation reduces bias.
1. Start With the Primary Endpoint
If the main goal of the trial is not met, the result is weak. Secondary findings do not replace it.
2. Look for Repetition
A real signal appears across multiple analyses. One isolated result is not enough.
3. Treat Subgroups Carefully
Small groups produce unstable results. Larger populations provide stronger evidence.
4. Align With Biology
Results should match the known mechanism. If they do not, question them.
5. Separate Statistical Significance From Impact
A result can meet statistical thresholds and still offer minimal real-world benefit; the sketch after this list includes an example.
“Improving a number is not the same as improving a life,” he says.
6. Limit Overanalysis
The more times data is sliced, the more likely false positives appear, as the sketch after this list demonstrates.
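Points 5 and 6 are easy to demonstrate with a short, hypothetical simulation; none of the numbers below come from a real program.

```python
# Hypothetical simulation; illustrative only, not trial data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Point 6: take a drug with zero true effect, cut each trial into 20
# post-hoc subgroups, and count how often at least one cut looks
# "significant" at p < 0.05 by chance alone.
sims, hits = 500, 0
for _ in range(sims):
    treated = rng.normal(0, 1, (20, 100))  # 20 subgroups x 100 patients
    control = rng.normal(0, 1, (20, 100))  # identical distribution: no effect
    pvals = stats.ttest_ind(treated, control, axis=1).pvalue
    hits += bool((pvals < 0.05).any())
print(f"{100 * hits / sims:.0f}% of null trials yield a 'significant' subgroup")
# Expected: about 1 - 0.95**20, i.e. roughly 64%.

# Point 5: a clinically trivial 0.5-point gain on a 100-point symptom
# scale (sd = 20) becomes highly significant with 100,000 patients per arm.
t = rng.normal(50.5, 20, 100_000)
c = rng.normal(50.0, 20, 100_000)
print(f"p = {stats.ttest_ind(t, c).pvalue:.1e} for a ~0.5-point benefit")
```

The first demo is why unplanned subgroup wins deserve skepticism; the second is why a small p-value alone says nothing about whether patients would notice the difference.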
Presentation Shapes Interpretation
How data is shown influences how it is understood.
Charts can exaggerate effects. Scales can distort magnitude. Language can imply certainty where none exists.
In one internal review, a graph suggested a sharp improvement.
“We adjusted the scale,” he says. “The effect looked smaller but more accurate. That changed how the team evaluated it.”
Accurate presentation prevents false confidence.
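One common distortion is the truncated axis. The sketch below plots the same invented numbers twice: a 3-point difference in response rate dominates a chart whose axis starts at 41, and looks modest on a 0–100 scale.

```python
# Hypothetical numbers; the same 3-point gap plotted two ways.
import matplotlib.pyplot as plt

arms = ["Control", "Treatment"]
response = [42.0, 45.0]  # response rates in %

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))

ax1.bar(arms, response)
ax1.set_ylim(41, 46)   # truncated axis: the gap fills the frame
ax1.set_title("Truncated scale")

ax2.bar(arms, response)
ax2.set_ylim(0, 100)   # full axis: the same gap in context
ax2.set_title("Full scale")

for ax in (ax1, ax2):
    ax.set_ylabel("Response rate (%)")

plt.tight_layout()
plt.show()
```

Neither chart is false. Only one of them leaves the reader with an accurate sense of magnitude.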
Culture Determines Data Integrity
Teams interpret data through culture.
If the environment rewards positive results, bias increases. If it rewards accuracy, judgment improves.
Clear practices help:
- Define endpoints before analysis
- Limit retrospective interpretation
- Encourage challenge
- Value accuracy over optimism
“If people feel pressure to show success, they will find it,” he says. “If they feel safe showing reality, decisions improve.”
The Next Challenge Is Not More Data
Data volume will continue to grow.
New tools will generate more signals, more metrics, more analysis.
The advantage will not come from access to data. It will come from the ability to filter it.
Strong leaders will focus on fewer, better questions.
The Bottom Line
Data does not create clarity. Discipline does.
Good data:
- Resolves uncertainty
- Holds up across analysis
- Connects to real outcomes
Everything else creates noise.
Dr. Leigh Beveridge, Australia, puts it directly: “If the result only works after you adjust it five different ways, it doesn’t work.”
In a field built on evidence, judgment remains the most important tool.
Signal is stable. Noise is flexible.
The difference defines every decision that follows.
