Intelligence

The intelligence layer for bioprocess.

A bioprocess-native teammate that turns the hard questions about your process into traceable answers — in minutes, not weeks. Every run makes it sharper.

Why it matters

The answers are already in your data.

Every run your team has ever executed is full of answers — about your process, your scale-up risks, your next best experiment. Today, most of those answers never reach the team in time: weeks of aggregating, reformatting, and reconciling pass before a single question gets answered.

Invert closes the gap. Raw process data to a traceable answer — in minutes, not weeks.

The old way
Day 1: Pull exports from the historian
Day 2–3: Clean, reformat, reconcile units
Day 4–5: Align timestamps across systems
Day 6–7: Write the Python to run the analysis
Day 8–9: Build the charts, write it up
Day 10: Paste into the deck for Friday
Total: ~2 weeks
With Invert
4m 12s
You
Across our last mAb-04 campaign, how do pH and DO setpoints affect final titer?
Assist
Response surface fit across 14 runs. Peak titer at pH 7.0, DO ~35% — 28% above campaign mean. Interactive chart attached.
14 runs cited · traceable · editable as report
Asked at 9:04am. Acted on by 9:09am.
Interactive analyses

Ask anything. Every answer is interactive.

Assist is the way your team asks Invert anything. Plain language in, a real answer out — grounded in your runs, anchored to the data that proved it.

  • Ask in plain language. Assist finds the right runs, loads the data, chooses the right analysis.
  • Every chart, table, and summary is interactive — filter it, drill into it, export it, or hand it off as a report.
  • Build reusable Skills, customized to your workflows.
Invert Assist
You
Across our last mAb-04 campaign, how do pH and DO setpoints interact to affect final titer? Show a response surface.
Assist
Fit a response surface across 14 mAb-04 runs. pH is the dominant factor; DO has a secondary interaction effect above pH 7.0.
[Response surface chart: titer (g/L) over pH setpoint 6.6–7.2 × DO 20–60% · peak 3.1 g/L]
Response surface peaks at pH 7.0, DO ~35% — 28% above campaign mean. The interaction isn't linear; both factors are close to their minima at pH 6.6 / DO 20%.
Data: 14 Ambr250 runs · Vi-CELL · HPLC
Analysis: View steps ↗
Save as Skill ↗
Ask a follow-up…
Experiment summaries

Every experiment, already read and summarized.

Pick up any experiment — yours or a colleague's, this week's or last year's — and see the intent, the design, the outcome, and the outliers before you open a single protocol PDF. The context arrives ahead of you, so teammates pick up each other's work in minutes instead of days.

Run summaries live in Enablement
Experiment — Q1 Fed-batch feed strategy DoE
Q1 Fed-batch feed strategy DoE
12 runs · Ambr250 · Jan 8 – Mar 28, 2025
Intent
Compare three feed strategies (baseline, high-glucose day-4, staggered glucose+glutamine) against titer and viability at harvest. Target: identify feed strategy for 200L tech transfer.
What happened
Staggered strategy outperformed baseline by 18% titer; high-glucose showed higher peak VCD but lower viability (86%). Outlier: BR-107 — DO dropped below 20% during the feed window; titer came in 28% below the group.
Conclusion
Recommend staggered feed strategy for 200L transfer. DO control during feed addition flagged as a risk to monitor at scale.
Process modeling

Predict where a live run is headed. Test a change before you commit to it.

Train a hybrid mechanistic and data-driven model on your harmonized dataset. Use it mid-run to forecast harvest against target — and to simulate what a feed-rate, DO, or pH change would do to the outcome, before it happens. The inputs, the results, and the underlying logic all live in a report your team can share, re-run, or build on.

Invert — Process modeling
Mid-run prediction · Fed-batch titer
BR-112 · mAb-04 · Ambr250 · currently at day 6 of 12
[Chart: titer (g/L), day 0–12 · target 2.7 g/L · NOW marker at day 6]
Predicted harvest: 2.9 g/L (±0.3) · 7% above target
With recommended +10% feed rate at h152: 3.2 g/L (+10%)
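The forecasting idea above can be sketched outside the platform too. Below is a minimal, hypothetical illustration (not Invert's actual model or API): a mechanistic logistic growth curve fit to the titer observed through day 6, then extrapolated to harvest. A real hybrid model would add a data-driven correction trained on historical runs; the data here are invented.

```python
# Hypothetical mid-run forecast sketch: fit a mechanistic logistic titer
# curve to the days observed so far, then extrapolate to harvest day 12.
# Function names and values are illustrative only.
import numpy as np
from scipy.optimize import curve_fit

def logistic_titer(t, t_max, rate, midpoint):
    """Mechanistic backbone: logistic accumulation of product titer."""
    return t_max / (1.0 + np.exp(-rate * (t - midpoint)))

# Observed titer (g/L) through day 6 of a 12-day fed-batch run (made up)
days = np.array([0, 1, 2, 3, 4, 5, 6], dtype=float)
titer = np.array([0.0, 0.1, 0.3, 0.7, 1.2, 1.7, 2.1])

params, _ = curve_fit(logistic_titer, days, titer, p0=[3.0, 0.8, 5.0], maxfev=10000)
harvest_forecast = logistic_titer(12.0, *params)
print(f"Predicted harvest titer at day 12: {harvest_forecast:.1f} g/L")
```

From there, simulating a feed-rate change amounts to re-running the forward model with perturbed inputs and comparing the two forecasts.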
Data quality

The gaps in your data, surfaced — not hidden.

Missing samples, unit mismatches, metric drift — flagged continuously, not hidden under an average. The questions Assist can answer are only as sharp as the data it stands on, and Invert keeps that foundation honest.

The data foundation lives here
Invert — Data quality
Checks · 3
Last scan · just now
Missing Vi-CELL reads · 4 historical runs
BR-88 through BR-91 are missing day-6 viability samples. Enrich or annotate to unblock future cross-run analysis.
Unit mismatch · glucose set-point
2022 runs recorded glucose in g/L; post-2023 in mM. Auto-harmonized, flagged for review.
Coverage: 96.2% · across 214 runs
12 of 52 canonical metrics have enrichment opportunities. Fix once; every future analysis benefits.
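The glucose unit mismatch above is a good example of what harmonization looks like mechanically. This sketch is illustrative, not Invert's implementation: it converts legacy g/L readings to mM using the molar mass of glucose and flags converted values for review.

```python
# Illustrative unit-harmonization sketch: normalize glucose readings to mM.
# The record shape and flag values are hypothetical.
GLUCOSE_MW_G_PER_MOL = 180.16  # molar mass of glucose

def glucose_g_per_l_to_mm(value_g_per_l: float) -> float:
    """Convert a glucose concentration from g/L to mM (mmol/L)."""
    return value_g_per_l / GLUCOSE_MW_G_PER_MOL * 1000.0

def harmonize_glucose(reading: dict) -> dict:
    """Normalize a glucose reading to mM, flagging converted values for review."""
    if reading["unit"] == "g/L":
        return {"value": glucose_g_per_l_to_mm(reading["value"]),
                "unit": "mM", "flag": "auto-harmonized"}
    return {**reading, "flag": None}

legacy = {"value": 4.0, "unit": "g/L"}  # a 2022-style record
harmonized = harmonize_glucose(legacy)
print(harmonized)  # ~22.2 mM, flagged for review
```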
Process knowledge

The knowledge your org never wrote down, captured automatically.

Invert learns how your process actually runs — the vocabulary your team uses, the baselines your scale-ups target, the deviations you've seen before. Every run adds to a compounding process-knowledge layer that Assist draws on to answer the next question in your terms, not a textbook's.

The knowledge stops living in one senior scientist's head — and starts living in the system the whole team uses.

Invert — Process knowledge
Your process knowledge
learned from 214 runs · 3 sites
LIVE
CPPs tracked: 18 (pH · DO · feed rate · osmolality · +14)
Feed strategies in use: 3 (baseline · high-glucose day-4 · staggered)
Historical comparators: 214 runs (across Ambr250, Pilot 50L, GMP 2,000L)
Team vocabulary: 47 aliases ("aeration rate" = "airflow rate" · +46)
Baseline ranges learned: 52 metrics (per-scale, per-modality)
Proactive insights

Assist doesn't just answer questions. It raises them.

Invert monitors live runs against your historical baselines. It surfaces anomalies before anyone asks, investigates the likely cause against comparable runs, and drafts the investigation — ready for your team to review, refine, and file. The process knowledge the system compounds with every run makes every flag, and every draft, sharper than the last.

Invert Assist — Proactive insights
Surfaced · 3
Last updated · just now
Assist flagged · BR-04 · DO drift at h72
2.4σ below historical baseline. 3 correlated runs identified; DO during feed addition is the common factor.
Draft investigation ready · Review
Titer trend watch · mAb-04 campaign
Campaign trending 6% below the last three Q1 campaigns. Early signal — worth reviewing before the next run starts.
New comparator run learned · BR-114
Added to the staggered-feed cohort. Future prediction intervals tighten by ~4% as a result.
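The "2.4σ below baseline" flag above is, at its core, a baseline-deviation check. Here is a minimal sketch of that idea with invented numbers (not the platform's actual logic): compare a live DO reading at a given hour against the historical distribution at that hour and flag anything beyond 2σ.

```python
# Minimal baseline-deviation sketch: z-score a live reading against the
# same timepoint in comparable historical runs. All values are made up.
import statistics

historical_do_at_h72 = [38.0, 41.0, 39.5, 40.2, 38.8, 40.9, 39.1]  # % DO
live_do_at_h72 = 36.5

mean = statistics.mean(historical_do_at_h72)
stdev = statistics.stdev(historical_do_at_h72)
z = (live_do_at_h72 - mean) / stdev

if abs(z) > 2.0:
    print(f"Flag: DO at h72 is {abs(z):.1f} sigma from the historical baseline")
```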

Transparent by design

Every Assist output is inspectable. No black boxes.

In GMP environments, you can't act on answers you can't verify. Every step Assist takes — the data it loaded, the Python it ran, the statistics it applied, the model it fit — is readable, reproducible, and auditable.

analyze.py · under every answer
# Response surface: pH × DO → final titer
from invert import runs, fit_response_surface

df = runs.where(molecule="mAb-04", scale="Ambr250").load()

model = fit_response_surface(
    df,
    x=["pH_setpoint", "DO_setpoint"],
    y="final_titer",
)

peak = model.optimize()
# → pH 7.0 · DO 35% · titer 3.1 g/L
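For readers outside the platform, the same kind of analysis can be sketched with plain numpy. The sketch below is a generic stand-in, not Invert's implementation: fit a quadratic response surface of final titer over pH and DO setpoints by ordinary least squares, then locate the predicted peak on a grid. The 14 "runs" are simulated.

```python
# Generic response-surface sketch: quadratic OLS fit of titer over pH and
# DO, then a grid search for the predicted optimum. Data are simulated.
import numpy as np

rng = np.random.default_rng(0)
ph = rng.uniform(6.6, 7.2, 14)    # pH setpoints across 14 simulated runs
do = rng.uniform(20, 60, 14)      # DO setpoints (%)
titer = 3.1 - 8*(ph - 7.0)**2 - 0.0006*(do - 35)**2 + rng.normal(0, 0.05, 14)

# Design matrix: 1, pH, DO, pH^2, DO^2, pH*DO
A = np.column_stack([np.ones_like(ph), ph, do, ph**2, do**2, ph*do])
coef, *_ = np.linalg.lstsq(A, titer, rcond=None)

# Evaluate the fitted surface on a grid and report its peak
P, D = np.meshgrid(np.linspace(6.6, 7.2, 61), np.linspace(20, 60, 81))
G = np.column_stack([np.ones(P.size), P.ravel(), D.ravel(),
                     P.ravel()**2, D.ravel()**2, P.ravel()*D.ravel()])
pred = G @ coef
i = int(np.argmax(pred))
print(f"peak {pred[i]:.2f} g/L at pH {P.ravel()[i]:.2f}, DO {D.ravel()[i]:.0f}%")
```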

The system was picking up and reporting on conclusions we hadn’t specifically asked about — things that were actually impactful to the process. Work that could have taken days took five minutes.

Process Development Scientist, emerging biotech

Use Cases

From routine summaries to advanced modeling — it's all a prompt away.

Why did titer drop in BR-107?

Process troubleshooting

Assist traces the deviation, surfaces correlated parameters, and points to the most likely root cause — with citations to the specific runs and time windows that support the conclusion.

What are the most important factors from my last fed-batch DoE?

DOE analysis

Full design-of-experiments analysis, ranked parameter importance, and a response surface — ready for your next experimental design.

Flag any unusual patterns in my last ten runs.

Anomaly detection

Scans your recent data for deviations across any metric, any scale — and surfaces anything worth investigating, with the runs that support each flag.

Build a predictive model for final titer from my Ambr250 data.

Hybrid modeling

Trains a predictive model against your harmonized dataset, reports accuracy, and returns a model you can use to plan future experiments.

Start a control chart for Protein A yield across my last campaign.

Control charting

Interactive control chart embedded directly in a report. Update it as runs complete. Share the locked version with MSAT and QA.
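A control chart of that kind reduces to a center line with upper and lower limits. The sketch below is a simplified illustration with invented yields; it uses the overall standard deviation for the limits, where a formal individuals chart would typically derive sigma from the moving range.

```python
# Simplified control-chart sketch: center line and +/- 3-sigma limits
# over Protein A step yields. Yields are illustrative, not real data.
import statistics

yields = [92.1, 90.8, 93.0, 91.5, 89.9, 92.4, 91.1, 90.5]  # % yield per run

center = statistics.mean(yields)
sigma = statistics.stdev(yields)
ucl, lcl = center + 3 * sigma, center - 3 * sigma

violations = [y for y in yields if not lcl <= y <= ucl]
print(f"CL={center:.1f}%  UCL={ucl:.1f}%  LCL={lcl:.1f}%  violations={violations}")
```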

Run a PCA on my process parameters across all pilot runs.

PCA / multivariate

Clusters surfaced, outliers flagged, and the output annotated in plain language — ready to drop into a report.
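The multivariate view above can be sketched with numpy alone. This is a minimal, hypothetical example (the runs and values are invented): standardize per-run process parameters, project onto the top two principal components via SVD, and flag the run that separates from the cluster.

```python
# Minimal numpy-only PCA sketch over per-run process parameters.
# Rows = pilot runs, columns = [pH setpoint, DO %, feed rate, osmolality].
import numpy as np

X = np.array([
    [7.0, 40, 1.0, 300.0],
    [6.8, 35, 1.1, 310.0],
    [7.2, 45, 0.9, 295.0],
    [7.0, 38, 1.0, 305.0],
    [6.9, 60, 1.6, 360.0],   # deliberately anomalous run
])

Z = (X - X.mean(axis=0)) / X.std(axis=0)          # standardize each parameter
_, _, Vt = np.linalg.svd(Z, full_matrices=False)  # principal directions
scores = Z @ Vt[:2].T                             # scores on PC1 and PC2

# Flag the run farthest from the centroid in PC space
dist = np.linalg.norm(scores, axis=1)
print("outlier run index:", int(np.argmax(dist)))
```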

Case study · Digital twin
<1 mo to go live (vs. 6+ months for PI)
$2M+ digital twin investment unlocked
Real-time data feed to digital twin
Read the full case study

Bring us your hardest question.

Bring a question your team has been stuck on. We'll walk through how Assist turns it into a traceable answer on real bioprocess data — in minutes.

Book a demo · Explore Enablement