# Invert — full documentation corpus

This file concatenates every markdown source on invertbio.com into a single document. Sections are separated by `---` and each section is preceded by a YAML frontmatter block with `url`, `title`, and `kind`. For a curated index see `/llms.txt`. For granular access use the `.md` URLs on each section header, or query the MCP server at `/mcp`.

---

# Section: Docs

---
kind: doc
category: user-guides
title: "Schedule"
slug: schedule
url: https://invertbio.com/docs/user-guides/schedule
markdown_url: https://invertbio.com/docs/schedule.md
---

# Schedule

The [Schedule](https://app.invertbio.com/schedule) page allows you to manage experiments in Invert. From here, you can review past and ongoing experiments, presented in either list or calendar view. Create a new experiment as needed, or navigate to a given experiment's details page for a close-up view of a specific experiment.

## Creating a New Experiment

To create a new experiment, click the "New Experiment" button. This directs you to the 'New Experiment' view, where you can enter the experiment name and edit experiment properties, such as status, start date, end date, and more, as needed. Navigate to the Runs tab to create runs you wish to associate with the new experiment. Once you've filled in the necessary details, the newly created experiment will appear in the experiment list on the Schedule page.

## Experiment list

The experiment list on the Schedule page gives you an overview of all your experiments, whether past, ongoing, or future. Navigate through the list and apply filters to narrow down the view based on experiment status, start date, or end date. The calendar view lays the same experiments out in a calendar format; use the arrow buttons for quick navigation. Clicking an experiment in either the list or calendar view directs you to the respective experiment summary page.
---
kind: doc
category: user-guides
title: "Import"
slug: import
url: https://invertbio.com/docs/user-guides/import
markdown_url: https://invertbio.com/docs/import.md
---

# Import

The [Import](https://app.invertbio.com/import) page is your gateway for bringing data into the app. Whether you're importing timeseries data, run data, event notes, or file attachments, the tools provided here allow you to upload files, making them available for your analysis. Navigate to the 'View History' page for a detailed view of historical file uploads.

## Option 1: Data from File

This is the primary method for uploading bioprocess data from files. Pick a file and choose a mapping template from the list that supports the structure of your data. The mapping ensures that your data is imported correctly into the app. Press 'Start Import' to initiate file ingestion.

- **Step 1: File Picker** Select a file from your computer using the _File Picker._ Supported file extensions include .csv, .xls, and .xlsx. You can upload multiple files at once, as long as they share the same file structure. It is recommended to carefully review each file before ingestion to avoid importing faulty data. ![](/docs/user-guides/import/image-1.png)
- **Step 2: Mapping Picker** Select the mapping template that best fits your data structure. Several mappings are available to support the following data types: Run Data, Timeseries Data, or Run Events. Custom mappings can be added to the list upon request. Refer to the mapping example in the top right corner of the screen or consult the Mapping Guide below for more information on a particular mapping. ![](/docs/user-guides/import/image-2.png)
- **Step 3: Settings (Timeseries Data Only)** When importing timeseries data, you have the option to either 'merge' or 'replace' existing datasets. Choose 'merge' to update an existing dataset by either overwriting or appending data.
Alternatively, select 'replace' to discard the existing dataset for a given metric and replace it entirely with the imported data. ![](/docs/user-guides/import/image-3.png)
- **Step 4: Start Import** Click the 'Start Import' button to initiate file ingestion. This directs you to the 'Importing' page, where you can review 'Import details' for more information on ingestion status. Depending on the file size, file ingestion may take up to several minutes to complete. ![](/docs/user-guides/import/image-4.png)
- **Step 5: Ingestion Evaluation**
  - **Step 5.1: Successful File Uploads** Upon file ingestion, ensure the data imported into Invert meets your expectations. Click 'Runs' or 'Metrics' for a comprehensive overview of the imported data. Consider editing and re-uploading files to update or replace data inside the app as needed. ![](/docs/user-guides/import/image-5.png) ![](/docs/user-guides/import/image-6.png)
  - **Step 5.2: Failed Ingestion Attempts** Failed file ingestions may occur if the source file contains unexpected or incomplete data, or if the file structure is not supported by the selected mapping. Consult the error log on the 'Importing' page for more details and ensure the file meets the mapping criteria. Consider reaching out to Invert staff for additional support. ![](/docs/user-guides/import/image-7.png)

## Mapping Guide

- **Run Data**
  - Description: Metadata associated with a run
  - Example: LOT#, Reactor ID, Site, Operator
  - Mapping: 'Run Data'
    - Required Columns:
      - Run
      - Metric A (unit)
    - Recommended Columns:
      - Experiment
- **Timeseries Data**
  - Description: Time-based metrics
  - Example: Time (h) versus Temperature (°C)
  - Mapping:
    - Timeseries Data (absolute time)
      - Required Columns:
        - Run
        - Timestamp
        - Metric A (unit)
    - Timeseries Data (relative time)
      - Required Columns:
        - Run
        - Time (h) or Time (min)
        - Metric A (unit)
- **Run Events**
  - Description: Notes associated with a run.
  - Example: Reactor Foaming @ 24h EFT
  - Mapping:
    - Run Events (absolute time)
      - Required Columns:
        - Run
        - Timestamp
        - Metric A (unit)
    - Run Events (relative time)
      - Required Columns:
        - Run
        - Time (h) or Time (min)
        - Metric A (unit)
- **Invert Data**
  - Description: Import data from an Invert export file
  - Example: Time (h) versus Temperature (°C)
  - Mapping:
    - Invert
      - Required Columns:
        - Export file structure generated by Invert

## Option 2: Other Files

The second option allows you to upload files as run attachments. Use the file picker to select the file you wish to upload, and then choose the relevant run from the dropdown menu. Once uploaded, the file can be accessed from the Run summary page, providing easy access to additional documentation or resources related to your runs.

## Option 3: View History

Navigate to the 'Import History' page to get an overview of historical file ingestions. Select an upload from the list for more details on a specific file upload task. Review imported 'Runs' or 'Metrics' and download a copy of the file.

---
kind: doc
category: user-guides
title: "Runs"
slug: runs
url: https://invertbio.com/docs/user-guides/runs
markdown_url: https://invertbio.com/docs/runs.md
---

# Runs

The [Runs](http://app.invertbio.com/run) directory is your hub for managing all your bioprocess runs. Here, you can view, filter, group, and sort your runs in a table format. Whether it's a shake flask, bioreactor, or any other type of run, this page helps you keep track of your data and serves as a starting point for your analysis.

## Get Started

Begin by exploring your runs in the run directory. Use the filtering options to tailor your view to include the runs relevant to your analysis. Customize the layout by adding or removing columns. Select runs for bulk editing or timeseries analysis.
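Conceptually, the run directory behaves like a filterable, groupable table. The same filter, group, and sort flow can be reproduced offline on an exported run table. Below is a minimal, hypothetical sketch in plain Python (field names and values are illustrative, not Invert's data model):

```python
from statistics import mean

# Hypothetical run directory rows; field names are illustrative only.
runs = [
    {"run": "R-001", "status": "Completed",   "strain": "A", "final_od": 45.2},
    {"run": "R-002", "status": "Completed",   "strain": "B", "final_od": 51.0},
    {"run": "R-003", "status": "In-progress", "strain": "A", "final_od": 12.3},
    {"run": "R-004", "status": "Completed",   "strain": "B", "final_od": 48.7},
]

# Filter: keep completed runs (mirrors the 'Status' filter).
completed = [r for r in runs if r["status"] == "Completed"]

# Group + aggregate: mean Final OD per strain (mirrors 'Grouping').
by_strain: dict[str, list[float]] = {}
for r in completed:
    by_strain.setdefault(r["strain"], []).append(r["final_od"])
strain_means = {strain: mean(vals) for strain, vals in by_strain.items()}

# Sort: descending, like 'Sort Z to A' on a numeric column.
for strain, od in sorted(strain_means.items(), key=lambda kv: -kv[1]):
    print(strain, round(od, 2))
```

In the app, each of these steps is a click in the UI rather than code; the sketch only makes the underlying table operations explicit.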
## Key Features

- **Filtering:** Narrow down your view by applying filters based on different attributes like experiment, organism type, or any other relevant metadata associated with your runs.
- **Sort:** Sort your runs alphabetically in ascending or descending order depending on your needs. Click the arrow-down icon inside any of the column headers and select 'Sort A to Z' or 'Sort Z to A'.
- **Grouping:** Group your runs by specific criteria for a clearer overview. Choose from options like operator, site, strain, or any other run data from the list. Bulk select grouped runs by clicking the checkbox next to the group label.
- **Table Layout:** Tailor your run directory view by adding or removing columns. Change the column order by dragging and dropping. Take advantage of the 'Add Similarities'/'Add Differences' feature to quickly compare run data across highlighted runs.
- **Saved Views:** Save your run directory configuration for a more streamlined and reproducible data analysis experience. Apply filter settings and adjust the table layout, then press 'Save'. Select a profile from the dropdown menu to restore a previously built view. Run directory profiles are available to the entire organization.
- **Aggregations and Units:** Choose between a variety of aggregation settings for timeseries metrics (e.g. Mean, Last, Maximum, Minimum). Use the built-in unit conversion tool to quickly change between available unit options (e.g. mL/min to L/h).
- **Search:** Use the search feature to quickly identify relevant runs in the current run directory view. Search for run names or any other metric entry, like NH4OH or Process development.
- **Experiment planning:** Take advantage of the 'Status' property to organize runs by their status. Use the 'Status' filter and select 'Completed', 'In-progress', 'Scheduled', or any of the other options to further customize your view.
- **Quick access to Summary page:** Runs and Experiments are clickable entities, allowing you to quickly access the associated Summary page for detailed information related to a particular entry.
- **Editing:** Select a run and make edits to its associated metadata. Use the bulk editing feature to streamline editing across multiple runs simultaneously, and shift-click to quickly highlight multiple cells.
- **Run merging:** To merge two or more runs into a single run, select the relevant runs and click the 'merge' button accessible through the dropdown menu in the top right corner. Choose the run to keep and proceed with the run merge.
- **Transfer to Analysis:** Choose a specific run or a selection of runs to carry forward to the Analysis page. This allows you to create line and scatter charts based on the selected data for deeper insights and visualization. You have the option to save the analysis as a shareable report.

---
kind: doc
category: user-guides
title: "Library"
slug: library
url: https://invertbio.com/docs/user-guides/library
markdown_url: https://invertbio.com/docs/library.md
---

# Library

The [Library](https://app.invertbio.com/library) page allows you to explore and manage the key components of your bioprocess data analysis: _metrics_ and _properties_. You can easily add, remove, or modify library entries either through the user interface (UI) or by importing files. The page is organized into three tabs: **Metrics**, **Properties**, and **Unit Operations**, making it easy to differentiate between data types.

- **Metrics** are entities representing time-based data with a distinct x and y value pair for each data point. Examples: an online pH time course signal coming from a bioreactor, or an offline product titer concentration time course.
- **Properties** are single-value entities, typically considered metadata, that provide additional context to a run or experiment. Supported data types include numeric, text, and date, among others.
Examples: 'Host Organism' with a value of '_E. coli_', or 'Bioreactor size (L)' with a value of '200'.
- **Unit Operations** are process steps within a run or experiment that group together related metrics and properties for a specific stage of work.
- Both metrics and properties can be used as **formula inputs**. Depending on how the formula is configured, a formula may qualify as a metric or as a property. Examples: a time series aggregation function converts a time series metric into a property (e.g. Maximum(pH) = 7.9), whereas multiplying a metric with a property results in a metric (e.g. Metric[Feed delivery volume (L)] x Property[Feed concentration (g/L)] = Metric[Substrate delivery mass (g)]).
- **Parent metrics** organize and consolidate data streams into groups. This is recommended because bioprocess hardware and data stream tag names can vary widely. Additionally, this enables Invert-based formulas to be grouped with pre-calculated data streams from your hardware. For example, use a 'Temperature (°C)' parent metric to bundle 'TP001 (°C)', 'Temp (°C)', and 'T_PV (°C)'.

## Navigation

Move between Library sections using the selector in the top left:

![](/docs/user-guides/library/image-1.png)

## Metrics

The _Metrics_ tab provides an exhaustive list of all timeseries metrics currently in Invert. This list includes time series data uploaded by users or hardware agents, as well as formula-derived time series data. Each entry is a clickable link navigating to the individual metric details page with additional information and editing options.

![](/docs/user-guides/library/image-2.png)

## Properties

On the _Properties_ tab, you'll find a collection of single-value properties. Similar to the Metrics tab, this tab presents entries in a table layout, with each entry linking to the respective details and editing pages.

![](/docs/user-guides/library/image-3.png)

## Unit Operations

On the _Unit Operations_ tab, you'll find a collection of process steps.
Similar to the Properties tab, this tab presents entries in a table layout, with each entry linking to the respective details and editing pages.

![](/docs/user-guides/library/image-4.png)

## Key Features

- **Data Sources** - The Data Sources column provides insights into the origin of a metric or property. The displayed value is automatically generated and cannot be changed by the user. Example: for the time series metric 'Aeration' uploaded via file ingestion, the mapping name is used as the Data Source value. Similarly, metrics imported via hardware or ELN integration automatically derive their label from the respective ingestion agent. Quantities can have multiple Data Source labels. ![](/docs/user-guides/library/image-5.png)
- **Adding a new metric** - Create a new metric by clicking the 'Add' button on the 'Metric' tab. Enter a metric name and adjust metric properties as needed. The newly created metric will appear in the metric table view.
- **Archiving a sub or parent metric** - To archive a metric or parent metric from the library, click on the metric name to access the details page. Then, click the 'Archive' button in the top right corner to remove the metric.

## Bundling 'Sub metrics' into a 'Parent metric'

- Bundle one or multiple sub metrics into a parent metric to streamline metric management across bioreactor platforms. Select the relevant sub metrics and click 'Change Parent'. In the modal, select 'Add new parent' from the dropdown and confirm with 'Change Parent'. Specify a name and display unit; optionally, update the sub metric list as needed. Once saved, you may select runs and transfer them to the Analysis page, or wait until 'State' updates to 'Ready'. Sub metric/parent metric relationships are reversible: parent metrics can be archived and recreated at any point in time.
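The tag-name consolidation described above amounts to a many-to-one lookup from vendor-specific sub metric names to a parent metric. A minimal, hypothetical sketch (plain Python, not Invert's internal data model; tag names taken from the 'Temperature (°C)' example):

```python
# Hypothetical parent-metric definition: one parent bundles many raw tags.
PARENT_METRICS = {
    "Temperature (°C)": ["TP001 (°C)", "Temp (°C)", "T_PV (°C)"],
}

# Invert the one-to-many definition into a sub-metric -> parent lookup.
SUB_TO_PARENT = {
    sub: parent
    for parent, subs in PARENT_METRICS.items()
    for sub in subs
}

def resolve_parent(tag: str) -> str:
    """Return the parent metric for a raw tag, or the tag itself if unbundled."""
    return SUB_TO_PARENT.get(tag, tag)

print(resolve_parent("T_PV (°C)"))  # Temperature (°C)
print(resolve_parent("DO (%)"))     # DO (%)  (no parent defined)
```

Because the relationship is just a lookup, it is naturally reversible, which is why parent metrics can be archived and recreated without touching the underlying sub metric data.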
![](/docs/user-guides/library/image-6.mp4)

## Updating Units

- You can update the unit associated with a metric from the metric editing view, either by updating the Default Display Unit or the Default Ingestion Unit (see the Metric Property Guide). To do so, access the metric details page by clicking on the metric name in the metric table. Proceed to the metric editing view to modify the unit. Press 'Save' when done.

![](/docs/user-guides/library/image-7.mp4)

## Adding a Formula

- You can calculate derived quantities in a streamlined and automated fashion using Invert's formula feature. Formula use cases include KPIs (e.g. yield, productivity), mass balances (e.g. reactor volume over time), and signal noise reduction (e.g. moving average). Refer to the 'How to use Formulas?' info box for more information on supported mathematical operations. Formulas accept both properties and metrics as input variables. Depending on the formula configuration, the formula output can be either a time series metric or a single-value property.
  - Example: f(x) = centered_moving_average(DO) = timeseries metric
  - Example: f(x) = last(Product titer) = single-value property
- Once a formula is configured, Invert automatically calculates results for runs that meet the formula criteria. Formula calculation is triggered upon file ingestion or after changing the formula configuration via the formula editing page.

## Adding a Constant into an Existing Formula

- Enter a formula name and set your dependencies. Press 'Add constant' and pick a constant from the list or create a new one. Ensure the units of the constants are compatible with the mathematical operation. Proceed with formula creation.

## Adding Notes

- You can annotate metrics by adding a note. Notes are accessible from line charts via tooltip hover. Open the metric editing page, update the 'Notes' section, and press 'Save' when done.
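The two formula output types above (metric-valued versus property-valued) can be illustrated with a small sketch. This is plain Python, not Invert's formula syntax; the function names simply mirror the examples:

```python
# Hypothetical sketch of the two formula output types:
# a timeseries-to-timeseries formula (centered moving average) and a
# timeseries-to-property aggregation (last).

def centered_moving_average(values: list[float], window: int = 3) -> list[float]:
    """Smooth a timeseries; the output is itself a timeseries metric."""
    half = window // 2
    out = []
    for i in range(len(values)):
        win = values[max(0, i - half): i + half + 1]  # centered window
        out.append(sum(win) / len(win))
    return out

def last(values: list[float]) -> float:
    """Aggregate a timeseries into a single-value property."""
    return values[-1]

do_signal = [40.0, 42.0, 50.0, 38.0, 41.0]    # e.g. DO (%) over time
smoothed = centered_moving_average(do_signal)  # still a timeseries
final_titer = last([0.2, 1.1, 2.4, 3.0])       # single value: 3.0
```

The distinction matters because it determines where the result shows up: a metric-valued formula plots on line charts, while a property-valued formula behaves like run metadata.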
## Metric & Property Editing Page - User Guide

### Name
_Description:_ Name of the metric.
_Impact:_ Changing the value will update the metric name.
_Example:_ Oxygen Uptake Rate or Final OD.

### Type (Property only)
_Description:_ Indicates whether the metric data type is timeseries data or run data.
_Impact:_ Changing the data type has implications for the types of analysis the metric can be used for. For instance, only numeric metrics can be used in formulas.
_Example:_ Number, Text, Timeseries, Date, etc.

### Default Display Unit
_Description:_ Default unit in which the metric is displayed across the app.
_Impact:_ Changing the default display unit converts the metric value into a different unit, in accordance with the base unit, when displayed in Invert. The stored value is not altered.
_Example:_ mg/L or g/L

### Base Unit
_Description:_ The SI unit in which the metric is stored inside the app.
_Impact:_ Unit conversions and other unit-related features require metric units to be unambiguous and defined so that values can be stored in SI units. E.g. a yield in 'g product/g biomass' should be represented as 'g/g' (kg/kg in SI units).
_Example:_ K or kg/m^3

### Molar mass
_Description:_
_Impact:_
_Example:_

### Notes
_Description:_ Text field used for capturing notes.
_Impact:_ Text shows when hovering over a metric/formula name in line charts.
_Example:_ Primary Nitrogen Source | Measured via Thermo Gallery Analyzer

### Expresses Timeseries Data
_Description:_ Converts a metric with an 'Unknown' data type into timeseries data. Only applies to metrics that were not classified correctly upon ingestion.
_Impact:_ Once a metric is expressed as timeseries data, this action cannot be reversed.

### Uses Log Scale
_Description:_ Enables log scale for a specific metric.
_Impact:_ The metric shows on a logarithmic Y-axis when this feature is turned on.

### Disable Interpolation
_Description:_ Disables linear interpolation for a specific metric.
_Impact:_ Interpolation affects the way datasets are shown in line charts. When interpolation is disabled, data points are shown without connecting lines.

### Resampling Method
_Description:_ Determines how data is aggregated when condensing time series data into manageable intervals. You can select either 'Mean' to smooth data trends by averaging values, or 'Max' to capture the highest value within each interval.
_Impact:_ Choosing 'Mean' provides a clearer view of overall trends by reducing noise, while 'Max' emphasizes peak conditions, making it useful for identifying extreme events or anomalies in the data.
_Example:_ For temperature data, selecting 'Mean' will show the average temperature over each hour, whereas 'Max' will highlight the highest recorded temperature for that period.

### Default Ingestion Unit
_Description:_ The unit in which the metric is ingested into the app.
_Impact:_ Changing the default ingestion unit alters the metric value. E.g. changing the default ingestion unit to g/L for a metric originally ingested as mg/L will result in a 1000x multiplication of the base values: 1 mg/L will change to 1 g/L.
_Example:_ mg/L or g/L

**Screenshot 1:** Example metric editing page

![](/docs/user-guides/library/image-8.jpg)

**Screenshot 2:** Example property editing page

![](/docs/user-guides/library/image-9.jpg)

---
kind: doc
category: user-guides
title: "Run Summary"
slug: run-summary
url: https://invertbio.com/docs/user-guides/run-summary
markdown_url: https://invertbio.com/docs/run-summary.md
---

# Run Summary

The Run Summary page contains all the information associated with a given run. It covers every aspect of your bioprocess, including run data, file attachments, lineage, and event notes. Switch to the editing view for run editing and archiving, or plot your data by clicking the Analysis button.

## Navigation

To access the Run Summary page, double-click on a run in the run data table on the Runs page.
This directs you to the Run Details page, where you'll find the following tabs: Properties, Metrics, Lineage, Events, Alerts, Attachments, Import History, and Reports.

## Properties

The Properties tab presents a list of the run metadata, including Run Start, Run End, and any other custom property previously associated with the run through file ingestion or manual editing. Navigate to the 'Edit' page to switch to the editing view for run archiving and property editing. _Data Sources_ labels provide insights into the origin of a property. Labels are automatically generated upon ingestion.

![](/docs/user-guides/run-summary/image-1.png)

## Metrics

In the Metrics tab, users can access an overview of the time series data associated with a given run. This tab displays a list of time series metrics and formulas, along with useful aggregations and metric units for easy reference. Metrics are categorized into parent and sub-metrics, allowing users to quickly understand the structure and relationships of the data (see the '[Library](/docs/user-guides/library)' article for more details). From this view, users can navigate directly to the metric details page for more information, or archive a specific metric from the run without removing it from the overall metric library. This provides a centralized place for exploring and managing time series data efficiently. _Data Sources_ labels provide insights into the origin of a metric. Labels are automatically generated upon ingestion.

![](/docs/user-guides/run-summary/image-2.png)

## Lineage

In the Lineage tab, users can access the process flow diagram that tracks the relationships between individual runs. This functionality is particularly useful for understanding the lineage of a run, such as identifying which seed flask was used to inoculate a certain bioreactor, or tracing the bioreactor run used for downstream processing testing.
To use this feature, enter a valid run name into the 'Input Run' property and navigate to the Lineage tab. Click 'Add property' to provide additional context to the blocks.

![](/docs/user-guides/run-summary/image-3.png)

## Events

The Events tab facilitates the annotation of timestamped event notes. Users can document important process annotations such as Inoculation, Feed Start, or any other observations or milestones associated with the run. Event notes appear in line charts, enhancing your analysis by providing context for the bioprocess workflow.

![](/docs/user-guides/run-summary/image-4.png)

## Alerts

The Alerts tab allows users to manage alerts. Alerts are available to users with data streaming into the app in real time via hardware integration. Users can opt into the service and have Invert send automated notification emails and create events when a condition is met. Example: Dissolved Oxygen < 60%.

![](/docs/user-guides/run-summary/image-5.png)

## Import History

The Import History tab shows the file upload history for a given run, providing insights into the origin of the data at a single glance. Each entry in this table links to the relevant page on the Import History tab for additional insights on mappings used, metrics uploaded, etc.

![](/docs/user-guides/run-summary/image-6.png)

## Reports

The Reports tab offers a quick overview of, and easy access to, all reports associated with a specific run, helping you efficiently review and navigate related information.

![](/docs/user-guides/run-summary/image-7.png)

---
kind: doc
category: user-guides
title: "Analysis"
slug: analysis
url: https://invertbio.com/docs/user-guides/analysis
markdown_url: https://invertbio.com/docs/analysis.md
---

# Analysis

The Analysis page is your go-to destination for visualizing and analyzing your bioprocess data. Here, you can choose between line charts and scatter charts, providing you with the options you need to tell the story behind your data.
Use line charts for timeseries data, customize the view, and add process annotations using the event feature. For more quantitative analysis, use scatter charts to take advantage of aggregations and non-time-based analysis.

## Navigating to the Analysis Page

To access the Analysis page, start by navigating to the Runs page and selecting a set of runs you wish to analyze. Press the 'Analyze' button in the top right corner of this view to transition to the Analysis page. The line chart view is the default option for visualizing timeseries data. To switch to the scatter chart view, choose 'Scatter Chart' under Chart Type in the 'View' sidebar menu.

## Workflow

1. **Graph customization:** Select one or multiple metrics from the dropdown list. Open the 'View' sidebar and make changes to the Chart Type selection or viewing options (e.g. Split By > Metric).
2. **Run selection:** Update the run selection as needed by checking/unchecking run checkboxes in the table underneath the graph. Optionally, click 'Run selection' to alter the list of runs included in the analysis.
3. **Full Screen view:** Switch to Full Screen view for a close-up view of the graph.
4. **Export:** Export the graph as an image (.PNG) or export the displayed data as an Excel file.
5. **Save Analysis:** Save the analysis as a report by creating a new report or appending the analysis to an existing report.

## Line Charts

### Visualization of Timeseries Data

The line chart is a powerful tool for visualizing timeseries data, providing flexibility in layout to support various views. This includes a close-up view of individual runs or metrics, as well as side-by-side comparisons of multiple runs or metrics.

### Viewing Options

Customize your line chart with viewing options such as 'Split By' and 'Chart Layout'.

- Split By:
  - Separate: One graph per run and metric.
  - Split runs: One graph per selected run. Close-up view of an individual run.
  - Split metrics: One graph per metric. Compare a single metric across multiple runs.
  - All-in-one-graph: Summary view with all runs and all metrics in a single graph.
- Chart Layout:
  - 1 Graph: One graph per row.
  - 2 Graphs: Two graphs per row.
  - 3 Graphs: Three graphs per row.

### X-Axis Zoom

Zoom into specific areas of the graph for a closer look at the data, allowing for detailed analysis and insights. Click into the graph, highlight the area, and release to zoom in. The updated bounds are available in the top right corner under "ERT" (Elapsed Run Time).

### Y-Axis Custom Range

Override the default Y-axis range using manually entered, custom values for a more tailored viewing experience. Open the 'View' sidebar and navigate to the 'Y-Axis settings' section. Select the Y-metric in the dropdown menu and enter values for 'Start range', 'End range', and optionally 'Interval'. Press the 'Apply' button to proceed. Undo the zoom filter in the top right corner to revert to the default settings.

### Removing Zoom Configurations

Cancel the zoom setting by clicking the "x" on the zoom annotation in the top right corner of the plot to revert to the default settings. Please note that zoom configurations are stored on a per-axis basis. Enabling or disabling "Combined Y-Axis" will change the available set of displayed axes, but the zoom setting will still be stored.

### Formulas

Create derived, custom metrics using the built-in formula editor. Type the custom metric name into the metric dropdown field and click 'Add'. Enter the formula name, dependencies, and formula expression into the formula editor. Use the formula preview feature for troubleshooting as needed. For more information on formulas, see the [Library](/docs/user-guides/library) article. In the example below, air flow rate (L/h) and reactor volume (L) are used to calculate VVM, a scale-independent air flow metric that enables comparison of the aeration regime across different bioreactor sizes (0.25 L to 100,000 L).
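As a back-of-the-envelope check on that VVM example (this is plain Python, not Invert's formula syntax): VVM is the air flow per minute divided by the reactor working volume, which is what makes aeration comparable across scales.

```python
# Hypothetical sketch of the VVM calculation referenced above:
# VVM = (air flow in L/min) / (reactor volume in L).

def vvm(air_flow_l_per_h: float, reactor_volume_l: float) -> float:
    """Vessel volumes per minute, from an hourly air flow rate."""
    return (air_flow_l_per_h / 60.0) / reactor_volume_l

# A 0.25 L vessel at 15 L/h and a 100,000 L vessel at 6,000,000 L/h
# are both aerated at 1 VVM, despite a 400,000x difference in flow.
print(vvm(15.0, 0.25))              # 1.0
print(vvm(6_000_000.0, 100_000.0))  # 1.0
```

Inside Invert, the equivalent formula would take the air flow rate metric and the reactor volume as dependencies, with the unit handling done by the app.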
### Events & Phases

Utilize the event annotation feature to create time-stamped, interactive event notes, enhancing your analysis. Supported event types are: Inoculation, Induction, Transfection, Sample, Harvest, Drawdown, Feed Start, Foamout, and Observation. Optionally, upload an image to provide additional context to your data. Limit the number of events shown on the graph as needed using the Event filter functionality in the View sidebar.

Phase markers are useful visual indicators for differentiating between process segments (e.g. growth or production phase). When used in a formula context, a phase can be selected in the 'Applied Time Frame' dropdown menu in the Formula customization section, allowing for more control over formula input bounds. For instance, a Specific Growth Rate formula with the Applied Time Frame limited to 'Growth phase', or a Productivity formula with the Applied Time Frame limited to 'Production phase'.

### Time Filters

Use the **Data Start** and **Data End** dropdowns to narrow the X-axis view to specific segments of your process. These filters help you zoom in on relevant timeframes of your data. The **Normalization Basis** dropdown lets you define the reference point for time normalization on the X-axis. This controls how relative time is displayed. _Invert_ uses **Run start** as the default. Click the 'Reset' button to restore the default view. Common use cases are:

### Data Bounds

- _Run - Start/End: 'Run Start' & 'Run End'_
- _Pre-Run - Start/End: 'Record Start' & 'Run Start'_
- _Post-Run - Start/End: 'Run End' & 'Record End'_

### Time Normalization

- _Run: 'Run Start'_
- _Events: 'Feed Start'_
- _Phases: 'Production Start'_

### Grouping

Use the "Group by" functionality to aggregate and compare related runs based on specific attributes, such as "Experimental Condition", "Strain", or "Alias".
When runs are grouped, this enables the analysis of variability (shaded regions representing the 16th and 84th percentiles) and central tendencies (median) within those groups. Run IDs in run tables and chart legends are replaced by the attribute name, enabling users to assign custom run names.

### Metric/Formula Notes

Provide additional context to your analysis by adding notes to your metrics or formulas (Library > Editing page). On the Analysis page, hover over the relevant quantity name to quickly access the note via tooltip.

### Run Data Table Customization

Customize the run data table to provide additional context to the timeseries graph. This includes displaying relevant metadata such as strain ID, run ID, bioreactor size, and more, enhancing the interpretability of the visualized data.

## Scatter Chart

### Aggregation

Use scatter charts when you want to explore relationships between two variables in your data. They are particularly useful when the input variable for X is non-time-based (e.g., Strain ID, Run ID). You have the option to choose between a variety of aggregations for the Y input variable, such as mean, standard deviation, sum, count, minimum, maximum, last value, etc. For example, compare 'Product (Last)' versus 'Run ID', or 'OD (Maximum)' versus 'Strain ID'. Use this tool to identify trends, clusters, outliers, or other patterns in your data, facilitating data-driven decision-making and analysis.

### Statistics Calculation

The scatter chart automatically calculates statistics when using X variables with multiple entries, providing insights into the distribution of the dependent variable across different categories of the independent variable. The available statistics are mean, standard deviation, standard error, count, and lower/upper 95%.
For example, plot 'Biomass concentration (Last)' versus 'Strain ID' where Strain ID has three unique values (Strain ID#1, Strain ID#2, Strain ID#3) to get a deeper understanding of how the biomass concentration varies across each of the unique strain IDs.

![](/docs/user-guides/analysis/image-1.png)

---
kind: doc
category: user-guides
title: "Reports"
slug: reports
url: https://invertbio.com/docs/user-guides/reports
markdown_url: https://invertbio.com/docs/reports.md
---

# Reports

Compile your bioprocess data analysis into standardized, reproducible summary reports using our Reports functionality. A report lets you bring together charts, run tables, and text in modular units known as plot blocks, allowing you to showcase your data in a clear and structured manner. You can include as many plot blocks as you need to tell the story behind your data. Once complete, reports can be shared with collaborators for data review, or duplicated and used as a template for streamlined report editing.

## Plot Block

A Plot Block entity consists of a text entry field for title and description, a chart (line or scatter), and a run data table. Plot blocks can be added, edited, or duplicated within a report, allowing for flexible and comprehensive data presentation. For more information on chart editing and features related to data visualization, see the [Analysis](/docs/user-guides/analysis) article.

## Workflow

1. **Add New Report:** Navigate to the Reports page and click 'Add new report' to create a blank report. Give your report a descriptive title.
2. **Plot Block Editing:** Begin by entering a title and description for your first plot block. Click 'Select Runs' and choose the relevant runs from the Runs selection table. These runs will serve as the basis for your charts and tables.
3. **Chart Customization:** Choose a chart layout and metric(s) to create line or scatter charts.
Tailor the view settings to your specific needs (see the [Analysis page](/docs/user-guides/analysis) for more details on graph customization). Press 'Done' when you're ready to proceed. You can always return to this editing view by clicking the 'Edit Plot' button inside the plot block.
4. **Further Report Customization:** Customize further by adding additional plot blocks using the 'Add Plot Block' button or by duplicating an existing plot block. Duplicated plot blocks keep their run selection and chart view settings, streamlining the editing process.
5. **Save and Share:** Once your report is complete, save it by clicking the 'Save' button. Share the report with collaborators by selecting the 'Copy Share Link' option after pressing the 'Share' button. Sharing reports outside the organization requires Invert support staff to get involved.
6. **Report Archiving:** To remove a report from the list, press the 'Archive' button accessible in the top right corner of the report.
7. **Additional Actions:** Duplicate a report to create backups or use it as a starting point to streamline the analysis of related experiments. For bulk editing of run selection across all plot blocks within a report, press 'Run selection' in the top right corner of the screen and modify the run selection.

## Alternative Workflow

1. **Start Analysis from Runs Page:** Alternatively, you can begin your analysis from the [Runs](/docs/user-guides/runs) page. Select the runs you're interested in and transfer them to the Analysis page.
2. **Visualize Data:** Visualize your data using line or scatter charts. Customize the chart view settings and run data table as needed.
3. **Save to Report:** Click 'Save' and choose 'Add new report' or 'Add to a report' to incorporate your analysis into a new or existing report.
---
kind: doc
category: user-guides
title: "Assist"
slug: assist
url: https://invertbio.com/docs/user-guides/assist
markdown_url: https://invertbio.com/docs/assist.md
---

# Assist

Ask questions about your data through Invert Assist.

Invert Assist lets you query your data directly in plain language. Instead of manually investigating your data through plots and exports, you can ask questions conversationally and get results generated from your Invert data. Behind the scenes, Invert Assist generates and runs Python code, so every answer is reproducible and traceable.

## What you can do with Invert Assist

- **Outlier Detection:** Quickly process all your timeseries data to see if there were any excursions worth investigating further.
- **Root Cause Analysis:** Identify drivers behind unexpected trends or deviations in your process data.
- **Experiment Summarization:** Generate clear summaries of multi-run experiments, highlighting key similarities and differences.
- **Scale up:** Refine your question or follow up to dig deeper; the AI chat keeps the context.

## How to use Invert Assist

1. Select your runs. _We're actively optimizing the maximum data load; for best immediate performance we recommend selecting 15 runs or fewer for the time being._
2. Open the **Invert Assist** chat panel using the Assist button at the bottom of your screen: ![](/docs/user-guides/assist/image-1.png)
3. Type your question, for example:
   - _"What caused the excursion in Run75?"_
   - _"Is there any effect of pH on titer based on these runs?"_
   - _"What is the next experiment that might be interesting to explore?"_
4. Review the answer and trace the exploration and thought chain, alongside the code that was executed for the analysis.
5. Use follow-up questions to refine your results or switch context.

## History and context in Assist

- Your Assist queries and outputs are saved at the user level.
To view a repository of past chats, click the clock icon in the Assist modal: ![](/docs/user-guides/assist/image-2.png)
- In addition to Runs, **Notebooks** can also be provided as context to Assist. This allows users to leverage existing analysis templates as a reference for Assist to perform calculations. To do this, navigate to the relevant notebook page, open the Assist modal, and add the notebook as context: ![](/docs/user-guides/assist/image-3.png)
- Workspace admins can further add context at the organizational level, which will be used in all Assist queries for your team. Use this space to provide terminology, conventions, or guidelines that should inform the assistant's responses. Organization context can be added under Settings > Assist: ![](/docs/user-guides/assist/image-4.png)

## Tips for better results

- Be specific: include **timeframes or phases, run names, or metrics** in your question.
- Use follow-ups: instead of repeating full queries, build on the last response.

## FAQs

### Will Invert Assist chat change my data?

No. Invert Assist's chat tooling is read-only. It queries and analyzes data but never edits it.

### What data can Invert Assist chat access?

It has access to your structured bioprocess data stored in Invert, including runs, metrics, properties, events, lineage, and formulas. Your data is secured within the Amazon Bedrock infrastructure, so none of it is available for any model development or fine-tuning.

### How does Invert build in robustness?

We have written and manage a suite of benchmarking evals that are run regularly to help us characterize which capabilities are performing well and to alert our staff if there are any capability regressions due to model updates or infrastructure changes.

### What if it doesn’t understand my question or the result is completely wrong?

Try rephrasing with more details (e.g., specify a date range or metric name).
If you still don’t get what you need, please help us make our product more robust by submitting feedback through the thumbs-up and thumbs-down icons after the chat is completed. This will help us investigate further and build out additional evals for edge cases.

👉 Still need help? Reach out to our support team via email or Intercom.

---
kind: doc
category: user-guides
title: "Skills"
slug: skills
url: https://invertbio.com/docs/user-guides/skills
markdown_url: https://invertbio.com/docs/skills.md
---

# Skills

Your team has established ways of doing RCA, DoE review, regression analysis — methods refined through years of experience. Until now, that knowledge lived in people's heads or lay buried in SOPs. Now you can capture it directly in Invert as a **Skill**: a reusable set of instructions that tells Assist _how_ to conduct a specific type of analysis, so it works the way your team works.

## How it works

Create a Skill in the **Library** by giving it a name, description, and a body — written in plain text, code, or both. The body is your method: step-by-step procedures, statistical thresholds, decision criteria, preferred chart types, or Python snippets that define how an analysis should be done.

When you're ready to use it, open Assist, type `@` to bring up the mention menu, and select your Skill alongside any runs or reports you want to work with. Assist reads the Skill's instructions and follows your approach — same methodology, every time, regardless of who's running the analysis.

## What can you do with Skills?
- **Standardize root cause analysis:** Define the steps your team follows for RCA — which metrics to check first, what thresholds flag a deviation, how to structure the final summary — and let anyone on the team run the same investigation
- **Encode DoE review procedures:** Specify how to evaluate experimental results, which statistical tests to apply, and what constitutes a meaningful difference between conditions
- **Create reporting templates:** Describe the sections, plots, and key metrics that should appear in a campaign summary or tech transfer package, then let Assist generate it
- **Capture domain-specific calculations:** Include Python code for custom analyses — growth rate calculations, metabolite ratios, yield corrections — so Assist executes them consistently
- **Compose multiple Skills:** Reference more than one Skill in a single conversation. Combine a "Growth Phase Analysis" skill with a "Metabolite Profile" skill to build a complete picture

## Example Skills to get started

| Skill name | What it does |
| --- | --- |
| Fed-Batch RCA | Walks through a structured root cause analysis: check feeds, DO, pH, temperature, then correlate deviations with titer impact |
| Campaign Summary | Generates a standardized report with VCD/viability overlay, titer bar chart, and key observations per condition |
| Scale-Up Comparison | Compares matched parameters between bench and pilot scale, flags any that fall outside defined equivalence bands |
| Harvest Timing | Evaluates viability trend and titer plateau to recommend optimal harvest window |

_Available now for all Assist-enabled organizations._

---
kind: doc
category: api
title: "Authentication"
slug: authentication
url: https://invertbio.com/docs/api/authentication
markdown_url: https://invertbio.com/docs/authentication.md
---

# Authentication

Tokens for Invert's external API are issued through Auth0. The same authentication flow applies to both the [Core](/docs/api/core) and [DSP](/docs/api/dsp) views.
### Receive a token from Auth0

To get a valid token, use the following command:

```bash
curl --request POST \
  --url https://invert.eu.auth0.com/oauth/token \
  --header 'content-type: application/json' \
  --data '{
    "client_id": "",
    "client_secret": "",
    "audience": "https://api.invertbio.com/",
    "grant_type": "client_credentials"
  }'
```

`CLIENT_ID` and `CLIENT_SECRET` will be shared with you separately. The token returned by Auth0 is valid for **24 hours** and must be sent along with each SQL request. The response will look like this:

```json
{
  "access_token": "",
  "expires_in": 86400,
  "token_type": "Bearer"
}
```

### Using the token

Include the token in the `Authorization` header of every request:

```bash
curl -H "Authorization: Bearer " \
  -H "Content-Type: application/json" \
  -X POST \
  -d '{"statement": "SELECT * FROM v_bioprocesses LIMIT 1"}' \
  https://api.invertbio.com/external/v1/statements/
```

If the token is invalid or expired, the response will be:

```json
{
  "message": "Unauthorized"
}
```

Get a new Auth0 token and retry.

---
kind: doc
category: api
title: "Core views"
slug: core
url: https://invertbio.com/docs/api/core
markdown_url: https://invertbio.com/docs/core.md
---

# Core views

The Core external API provides SQL access to bioprocess data in Invert. See [Authentication](/docs/api/authentication) for how to obtain an access token.
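For scripting, the same flow can be sketched in Python. The helpers below only assemble the request payloads documented in the Authentication section (a minimal sketch; the actual HTTP call via a client such as `requests` is left to you, and the quoting caveats under Error States still apply):

```python
import json

AUTH_URL = "https://invert.eu.auth0.com/oauth/token"
API_URL = "https://api.invertbio.com/external/v1/statements/"

def token_request_body(client_id: str, client_secret: str) -> str:
    """JSON body for the Auth0 client-credentials token exchange."""
    return json.dumps({
        "client_id": client_id,
        "client_secret": client_secret,
        "audience": "https://api.invertbio.com/",
        "grant_type": "client_credentials",
    })

def statement_request(token: str, sql: str) -> tuple[dict, str]:
    """Headers and JSON body for a SQL statement request against the views."""
    headers = {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    }
    # json.dumps escapes quotes for us, avoiding the single- vs
    # double-quotation-mark pitfalls of hand-built payloads.
    return headers, json.dumps({"statement": sql})

headers, body = statement_request("<access_token>", "SELECT * FROM v_bioprocesses LIMIT 1")
```

Pass the result to any HTTP client, e.g. `requests.post(API_URL, headers=headers, data=body)`, and fetch a fresh token from `AUTH_URL` once the 24-hour expiry passes.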
### Available Views

#### `v_bioprocesses`

*Note: parents of bioprocesses (aka experiments) have been moved to a separate view, `v_experiments`.*

```bash
curl -H "Authorization: Bearer " \
  -H "Content-Type: application/json" \
  -X POST \
  -d '{"statement": "SELECT * FROM v_bioprocesses LIMIT 1"}' \
  https://api.invertbio.com/external/v1/statements/
```

| Field Name | Type | Description |
|------------|------|-------------|
| `id` | String | Unique identifier for the bioprocess |
| `external_id` | String | An external identifier for the bioprocess |
| `name` | String | Name of the bioprocess |
| `parent_id` | String or null | Identifier of the parent bioprocess, if this is a child process |
| `scheduled_start_timestamp` | String or null | Scheduled start time (ISO 8601) |
| `scheduled_end_timestamp` | String or null | Scheduled end time (ISO 8601) |
| `start_timestamp` | String | Record start time (ISO 8601 with timezone) |
| `run_start_timestamp` | String or null | Start time of the bioprocess run |
| `run_end_timestamp` | String or null | End time of the bioprocess run |
| `end_timestamp` | String | Record end time (ISO 8601 with timezone) |
| `duration_ms` | Number | Duration of the bioprocess in milliseconds |
| `status` | String | Draft / Requested / Scheduled / In-progress / Completed |
| `qc` | Object | Quality control information: `{status, failure_mode}` |
| `data` | Array | List of data objects associated with the bioprocess |
| `events` | Array | List of events associated with the bioprocess |
| `induction_event` | Object or null | Information about the induction event |
| `attachments` | Array | List of attachments |
| `lineage` | Object or null | Lineage information |
| `last_updated_at` | String | Timestamp of the last update (ISO 8601) |

##### Common Event Types

**`DbObservationEvent`**

| Field | Type | Description |
|-------|------|-------------|
| `note` | String | Textual observation or comment |

**`DbAdditionEvent`**

| Field | Type | Description |
|-------|------|-------------|
| `lot_number` | String or null | Lot number of the added material |
| `reagent_name` | String or null | Name of the reagent added |
| `addition_type` | String | Type of addition (e.g., "Reagent Bolus", "Feed Start", "Induction", "Inoculation") |
| `volume` | Object | `{unit: string, value: number}` |

**`DbRemovalEvent`**

| Field | Type | Description |
|-------|------|-------------|
| `volume` | Object | `{unit: string, value: number}` |
| `lot_number` | String or null | Lot number associated with the removal |
| `sample_name` | String or null | Name or identifier of the sample removed |
| `removal_type` | String | Type of removal (e.g., "Sample", "Harvest") |

**`DbBioprocessPhaseEvent`**

| Field | Type | Description |
|-------|------|-------------|
| `phase` | String | Name of the bioprocess phase (e.g., "growth", "production") |
| `time_point` | String | Indicator of the phase timing (e.g., "start", "end") |

#### `v_timeseries`

```bash
curl -H "Authorization: Bearer " \
  -H "Content-Type: application/json" \
  -X POST \
  -d '{"statement": "SELECT * FROM v_timeseries LIMIT 1"}' \
  https://api.invertbio.com/external/v1/statements/
```

| Field Name | Type | Description |
|------------|------|-------------|
| `id` | String | Unique identifier for the timeseries |
| `bioprocess_id` | String | Identifier of the associated bioprocess |
| `quantity_id` | String | Identifier of the associated quantity |
| `start_timestamp` | String | Start time (ISO 8601 with timezone) |
| `end_timestamp` | String | End time (ISO 8601 with timezone) |
| `duration_ms` | Number | Duration in milliseconds |
| `unit` | String | Unit of measurement |
| `statistics` | Object | Statistical summary (see below) |
| `last_updated_at` | String | Timestamp of last update (ISO 8601) |

**`statistics` object:**

| Field | Type | Description |
|-------|------|-------------|
| `max` | Number or null | Maximum value |
| `min` | Number or null | Minimum value |
| `sum` | Number | Sum of all values |
| `last` | Number or null | Last value |
| `count` | Number | Number of data points |
| `first` | Number or null | First value |
| `arithmetic_mean` | Number or null | Average of all values |
| `standard_deviation` | Number or null | Standard deviation |

#### `v_timeseries_data`

```bash
curl -H "Authorization: Bearer " \
  -H "Content-Type: application/json" \
  -X POST \
  -d '{"statement": "SELECT * FROM v_timeseries_data LIMIT 1"}' \
  https://api.invertbio.com/external/v1/statements/
```

| Field Name | Type | Description |
|------------|------|-------------|
| `id` | String | Identifier of the timeseries this data point belongs to (ref to `v_timeseries.id`) |
| `timestamp` | String | Timestamp of the data point (ISO 8601) |
| `value` | Number or null | The recorded value at this timestamp |
| `data_item_id` | String or null | Optional identifier for the specific data item |

#### `v_quantities`

```bash
curl -H "Authorization: Bearer " \
  -H "Content-Type: application/json" \
  -X POST \
  -d '{"statement": "SELECT * FROM v_quantities LIMIT 1"}' \
  https://api.invertbio.com/external/v1/statements/
```

| Field Name | Type | Description |
|------------|------|-------------|
| `id` | String | Unique identifier |
| `name` | String | Primary name of the quantity |
| `alternative_names` | Array of Strings | List of alternative names |
| `is_timeseries` | Boolean | Whether this quantity represents time series data |
| `data_type` | String | The data type |
| `default_unit` | String | The default display unit |
| `default_ingestion_unit` | String | The default unit used when ingesting data |
| `base_units` | Object | Base units object |
| `molar_mass` | Number or null | The molar mass, if applicable |
| `notes` | String or null | Additional notes |
| `last_updated_at` | String | Timestamp of last update (ISO 8601) |

#### `v_formulas`

```bash
curl -H "Authorization: Bearer " \
  -H "Content-Type: application/json" \
  -X POST \
  -d '{"statement": "SELECT * FROM v_formulas LIMIT 1"}' \
  https://api.invertbio.com/external/v1/statements/
```

| Field Name | Type | Description |
|------------|------|-------------|
| `id` | String | Unique identifier |
| `name` | String | Name of the formula |
| `formula` | String | The actual formula/calculation |
| `log_scale` | Boolean | Whether to apply on a logarithmic scale |
| `disable_interpolation` | Boolean | Whether interpolation is disabled |
| `default_unit` | String or null | Default unit for the result |
| `run_phase` | String or null | Phase during which this formula is applicable |
| `notes` | String or null | Additional notes |
| `state` | String | Current state (e.g., "ready") |
| `last_updated_at` | String | Timestamp of last update (ISO 8601) |

**Example formula types:**

1. Simple references: `A`, `B`
2. Basic arithmetic: `(A+B+C)/3`, `A+B`
3. More complex: `(A*3)*B/100`
4. Time-based: `time_derivative_hours(A)`, `time_integral_hours(A*2)`
5. Advanced statistical: `log_linear_regression(A)`

#### `v_formula_results`

| Field Name | Type | Description |
|------------|------|-------------|
| `id` | String | Unique identifier |
| `formula_id` | String | Identifier of the formula used |
| `bioprocess_id` | String | Identifier of the associated bioprocess |
| `unit` | String | Unit of measurement |
| `data` | Object or null | Single value result (if not a time series) |
| `timeseries` | Object or null | Time series data (if formula produces multiple values) |
| `timeseries_start_timestamp` | String | Start time of the time series |
| `timeseries_offset_ms` | Number | Time offset in milliseconds |
| `timeseries_statistics_min` | Number | Minimum value |
| `timeseries_statistics_max` | Number | Maximum value |
| `timeseries_statistics_arithmetic_mean` | Number | Arithmetic mean |
| `timeseries_statistics_standard_deviation` | Number | Standard deviation |
| `timeseries_statistics_sum` | Number | Sum of all values |
| `timeseries_statistics_first` | Number | First value |
| `timeseries_statistics_last` | Number | Last value |
| `timeseries_statistics_count` | Number | Number of data points |
| `last_updated_at` | String | Timestamp of last update |

When `timeseries` is not null, it has the shape `{data: Array, times_ms: Array}`.

#### `v_archived_records`

| Field Name | Type | Description |
|------------|------|-------------|
| `record_id` | String | Unique identifier for the archived record |
| `table_name` | String | Name of the source table |
| `archived_at` | String | Timestamp of archival (ISO 8601) |

#### `v_experiments`

| Field Name | Type | Description |
|------------|------|-------------|
| `id` | String | Unique identifier |
| `external_id` | String | An external identifier |
| `scheduled_start_timestamp` | String or null | Scheduled start time (ISO 8601) |
| `scheduled_end_timestamp` | String or null | Scheduled end time (ISO 8601) |
| `last_updated_at` | String | Timestamp of last update (ISO 8601) |

### Example Queries

#### Delta Loads

**Request data produced after a given date:**

```bash
curl -H "Authorization: Bearer " \
  -H "Content-Type: application/json" \
  -X POST \
  -d '{"statement": "SELECT * FROM v_bioprocesses WHERE last_updated_at > '"'"'2024-07-29'"'"' LIMIT 1"}' \
  https://api.invertbio.com/external/v1/statements/
```

**Get timeseries updated since a bioprocess was last updated:**

```bash
curl -H "Authorization: Bearer " \
  -H "Content-Type: application/json" \
  -X POST \
  -d '{"statement": "SELECT * FROM v_timeseries WHERE last_updated_at > (SELECT MAX(last_updated_at) FROM v_bioprocesses)"}' \
  https://api.invertbio.com/external/v1/statements/
```

#### View Joins

**Get the bioprocess associated with a timeseries:**

```bash
curl -H "Authorization: Bearer " \
  -H "Content-Type: application/json" \
  -X POST \
  -d '{"statement": "SELECT t.*, b.* FROM v_timeseries t LEFT JOIN v_bioprocesses b ON t.bioprocess_id = b.id LIMIT 1"}' \
  https://api.invertbio.com/external/v1/statements/
```

### Error States

**Timeouts**

Long-running requests (>30 seconds) will be terminated and return a `504`. Modify your query to reduce or chunk the amount of data.

**SQL execution errors**

No data returned:

```json
{
  "data": [],
  "status": {
    "state": "success",
    "message": "Statement executed successfully, but returned no results."
  }
}
```

Non-existent view:

```json
{
  "data": [],
  "status": {
    "state": "error",
    "message": "relation \"v_non_existent_view\" does not exist"
  }
}
```

Syntax errors produce a JSON decode error — be careful about the use of single vs. double quotation marks.

**Unexpected exceptions**

```json
{
  "data": [],
  "status": {
    "state": "error",
    "message": "Error executing statement"
  }
}
```

---
kind: doc
category: api
title: "DSP views"
slug: dsp
url: https://invertbio.com/docs/api/dsp
markdown_url: https://invertbio.com/docs/dsp.md
---

# DSP views

The DSP external API provides SQL access to material streams, unit operations, and bioprocess data. See [Authentication](/docs/api/authentication) for how to obtain an access token.

### Available Views (DSP)

The DSP API exposes all the same views as the upstream API (`v_bioprocesses`, `v_timeseries`, `v_timeseries_data`, `v_quantities`, `v_formulas`, `v_formula_results`, `v_experiments`), plus the following additional views:

#### `v_material_streams`

```bash
curl -H "Authorization: Bearer " \
  -H "Content-Type: application/json" \
  -X POST \
  -d '{"statement": "SELECT * FROM v_material_streams LIMIT 1"}' \
  https://api.invertbio.com/external/v1/statements/
```

| Field Name | Type | Description |
|------------|------|-------------|
| `id` | String | Unique identifier |
| `name` | String | Name of the material stream |
| `start_timestamp` | String or null | Start time (ISO 8601) |
| `end_timestamp` | String or null | End time (ISO 8601) |
| `unit_operation_id` | String or null | Identifier of the associated unit operation |
| `is_global_material_stream` | Boolean | `true` = data belonging to a unit operation; `false` = material stream data |
| `data` | Array | List of data objects |
| `last_updated_at` | String | Timestamp of last update (ISO 8601) |

#### `v_unit_operations`

```bash
curl -H "Authorization: Bearer " \
  -H "Content-Type: application/json" \
  -X POST \
  -d '{"statement": "SELECT * FROM v_unit_operations LIMIT 1"}' \
  https://api.invertbio.com/external/v1/statements/
```

| Field Name | Type | Description |
|------------|------|-------------|
| `id` | String | Unique identifier |
| `external_id` | String | External identifier |
| `name` | String | Name of the unit operation |
| `experiment_id` | String or null | Identifier of the associated experiment |
| `run_start_timestamp` | String or null | Start time of the run |
| `run_end_timestamp` | String or null | End time of the run |
| `status` | String | Draft / Requested / Scheduled / In-progress / Completed |
| `qc_status` | String | Quality control status |
| `qc_failure_mode` | String | Quality control failure mode |
| `unit_operation_type_id` | Object or null | Identifier of the associated unit operation type |
| `last_updated_at` | String | Timestamp of last update (ISO 8601) |

#### `v_unit_operation_types`

```bash
curl -H "Authorization: Bearer " \
  -H "Content-Type: application/json" \
  -X POST \
  -d '{"statement": "SELECT * FROM v_unit_operation_types LIMIT 1"}' \
  https://api.invertbio.com/external/v1/statements/
```

| Field Name | Type | Description |
|------------|------|-------------|
| `id` | String | Unique identifier |
| `name` | String | Name of the unit operation type |
| `icon` | String or null | Name of the icon |
| `last_updated_at` | String | Timestamp of last update (ISO 8601) |

### Example Queries and Error States

Queries, delta load patterns, view joins, and error state handling are identical between the Core and DSP views. See [Core views](/docs/api/core) for details.

---
kind: doc
category: faq
title: "What is a mapping?"
slug: what-is-a-mapping
url: https://invertbio.com/docs/faq/what-is-a-mapping
markdown_url: https://invertbio.com/docs/what-is-a-mapping.md
---

# What is a mapping?

A mapping is a feature designed to simplify the file upload process on our platform. Essentially, it's a predefined template that links specific data fields seamlessly.

![](/docs/faq/what-is-a-mapping/image-1.mp4)

To upload a file using a mapping, head to **Import**, choose the mapping that matches your file format, and proceed with the upload. If you are unsure about the mapping format, select the mapping and click **Download Template .csv** to use a sample format.

We offer generic mappings like Timeseries Data (absolute and relative time), Run Events, and Run Data (process metadata). If you require a custom mapping, please contact us through Help & Support.

---
kind: doc
category: faq
title: "Filtering the runs list"
slug: filtering-the-runs-list
url: https://invertbio.com/docs/faq/filtering-the-runs-list
markdown_url: https://invertbio.com/docs/filtering-the-runs-list.md
---

# Filtering the runs list

Finding a specific run or set of runs between hundreds is like finding a needle in a haystack 🌾 So we made it easy for you with **filters**.

![Example run list](/docs/faq/filtering-the-runs-list/image-1.png)

You can filter by any property by clicking **Add filter** and choosing the run property you would like to filter by. You can include one or more values in each filter.

![](/docs/faq/filtering-the-runs-list/image-2.png)

---
kind: doc
category: faq
title: "How do I save an analysis on Invert?"
slug: how-do-i-save-an-analysis
url: https://invertbio.com/docs/faq/how-do-i-save-an-analysis
markdown_url: https://invertbio.com/docs/how-do-i-save-an-analysis.md
---

# How do I save an analysis on Invert?

Analyses have been upgraded to Reports!
- You can now store several analyses within a single report
- Share context and conclusions through free-text annotations for each plot block

[Quick Video Walkthrough](https://www.loom.com/share/8b09923fb64046debd0d6ee8edfc3ba6?sid=0555d942-93cb-49df-b2be-65e300fa330f)

From the Run Table:

- Use filtering and grouping tools to select runs, then click '**Analyze**.'
- Select graphed metrics, zoom, metadata, and view settings.
- Click **Save Report** to store this analysis in a Report where you can share and annotate findings.

From Reports:

- Create a new report or open an existing report
- Add a new plot block
- Select your runs and run settings, then click **Save Report** for the new plot block to be added to your Report.

![](/docs/faq/how-do-i-save-an-analysis/image-1.mp4)

---
kind: doc
category: faq
title: "How does graphing work?"
slug: how-does-graphing-work
url: https://invertbio.com/docs/faq/how-does-graphing-work
markdown_url: https://invertbio.com/docs/how-does-graphing-work.md
---

# How does graphing work?

## Time-series Charts

### Time normalization

- All time-based charts are normalized to the **Run Start** time, which defines the **Elapsed Run Time (ERT)**.
- If there is data recorded prior to the Run Start time, click _View -> Show Pre-run start data_ to reveal the data before t=0.

### Zoom

- For Y-axis zooming, click on the two axis numbers to set the zoom bounds.
- For X-axis zooming, click and drag within the graph.

### Downsampling and Interpolation

- In order to maintain a snappy interface, we downsample the vast amount of available raw data for the line charts.
- Data is interpolated between time points to enable the calculations in formulas and grouped time-series statistics for runs that do not have identical data frequencies.
- Higher-resolution raw data will render as you increase the zoom and is also available through the data export.
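The interpolation idea can be illustrated with a simplified sketch (not Invert's actual pipeline): two runs sampled at different frequencies are linearly interpolated onto a shared time grid, after which per-time-point group percentiles are well-defined:

```python
import numpy as np

# Two runs recording the same metric at different sampling frequencies
run_a_t = np.array([0.0, 2.0, 4.0, 6.0])   # hours
run_a_v = np.array([1.0, 3.0, 5.0, 7.0])
run_b_t = np.array([0.0, 3.0, 6.0])
run_b_v = np.array([2.0, 5.0, 8.0])

# Shared hourly grid spanning both runs
grid = np.linspace(0.0, 6.0, 7)

# Linear interpolation onto the shared grid
a = np.interp(grid, run_a_t, run_a_v)
b = np.interp(grid, run_b_t, run_b_v)

# Group statistics per time point: median line with a 16th-84th percentile band
stacked = np.vstack([a, b])
median = np.percentile(stacked, 50, axis=0)
lower = np.percentile(stacked, 16, axis=0)
upper = np.percentile(stacked, 84, axis=0)
```

Once every run lives on the same grid, the median and percentile band can be computed column by column, which is what makes grouped line charts possible for runs with mismatched logging intervals.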
### Chart Splits - In the View options you can select to split the data: - All-in-one - This will show all metrics and runs on a singular graph. - Each metric has a unique texture and each run has a unique color. - Split metrics - This creates a separate chart for each metric. - Each run has a unique color which is consistent across each chart. - Split Runs - This creates a separate chart for each run or run group. - Each metric has a unique color which is consistent across each chart. - Separate - Each metric and run is separated onto their own chart. ### Grouping - Use the 'Group by' option to categorize runs by metadata and access Invert's inter-run statistical comparison tools. - Line chart groups aggregate the run's interpolated data and display a solid line for the 50th percentile (median) values with a shaded range of the 16th to 84th percentiles. This ensures a distribution-agnostic analysis. Unlike mean and standard deviation, these percentiles provide a summary of both central tendency and variability within data. If the data is normally distributed these percentiles will represent 1 standard deviation. ## Scatterplot Charts - Toggle to scatter charts under the View dropdown menu. ![](/docs/faq/how-does-graphing-work/image-1.png) - The x-axis is set to the run name by default. However, you can select any categorical or time-series aggregated metric available to the selected runs by clicking on the tile next to the **X**. - The y-axis metrics can be any numeric values. This includes time-series aggregations, single-point values (numeric metadata), and calculated metrics. - Time-series aggregation options include mean, minimum, maximum, standard deviation, first, last, and count. --- kind: doc category: faq title: "How do I organize runs using Experiments?" 
slug: how-do-i-organize-runs-using-experiments url: https://invertbio.com/docs/faq/how-do-i-organize-runs-using-experiments markdown_url: https://invertbio.com/docs/how-do-i-organize-runs-using-experiments.md --- # How do I organize runs using Experiments? ## What are Experiments? Experiments, previously "parent runs", are a grouping of runs, typically with overlapping operations timelines used to help manage organizational complexity. For example - a set of 12 or 24 runs that were performed in the same operational window on a Sartorius Ambr could be batched into a single Experiment. The seed flask and seed train runs can also be included in this experiment if that organizational grouping is helpful. _Note: Run data can still be compared across experiments by simply selecting the runs of interest._ ## Editing via Uploads The **Run Data** mapping allows you to edit the experiment for any run. This is recommended approach for bulk editing any run data, including Experiments. 1. Creating the file to ingest: - Create a csv or excel sheet with headers "Experiment" and "Run" - ![](/docs/faq/how-do-i-organize-runs-using-experiments/image-1.png) - **_Ensure the Run names match exactly with the runs that exist in your tenant_** _that you would like to edit, otherwise this process may result in creating new runs._ - _The Experiments do not need to exist prior to this upload, they will be created through this process. If your experiments already exist, please ensure you are using the exact name of the Experiment you intend to map the runs with._ 2. Uploading the file for ingestion - Navigate to the [Import](https://app.invertbio.com/import/ingest) page - Select your file to import - Select the **Run Data** mapping - Click **Start Import** _Note: Some custom mappings may already incorporate experiments if the schema has been provided. 
If you are interested in updating an existing mapping, please reach out through Help & Support._

## Editing via the Run Table

If you prefer seeing the edits in a more familiar spreadsheet format, you can edit from the main run table.

- While in the run table interface, select the runs you wish to edit
- Ensure the **Experiment** column has been added to the table
- Click **Edit**
- Type in the Experiment Name.
  - If the Experiment does not already exist, it can be created through this [link](https://app.invertbio.com/experiment/new?status=Scheduled&is_bioprocess_group=true)
- Click **Save**

![](/docs/faq/how-do-i-organize-runs-using-experiments/image-2.mp4)

## Editing via the Run Summary Page

Every run can be edited individually through its summary page.

- Click on the run name in the run table to navigate to the summary page
- Click **Edit**
- Select the Experiment name from the dropdown
  - If the Experiment does not already exist, it can be created through this [link](https://app.invertbio.com/experiment/new?status=Scheduled&is_bioprocess_group=true)
- Click **Save**

![](/docs/faq/how-do-i-organize-runs-using-experiments/image-3.mp4)

---
kind: doc
category: faq
title: "How can I annotate data using events?"
slug: how-can-i-annotate-data-using-events
url: https://invertbio.com/docs/faq/how-can-i-annotate-data-using-events
markdown_url: https://invertbio.com/docs/how-can-i-annotate-data-using-events.md
---

# How can I annotate data using events?

## What is an event?

An event is a specific occurrence or milestone within your bioprocess. Users can annotate this data by creating event notes, which are distinct from timeseries data and metadata. Each event note includes details such as event type, timestamp, and optional information like event description, operator, and even image uploads. Event notes are bundled into a single event and displayed on the graph, aligned by the event's relative timestamp.
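Event notes can also arrive through the Import page as a Run Events file. As a purely illustrative sketch (the column names below are hypothetical; the actual columns are defined by the Run Events mapping configured for your tenant), such a file could be assembled like this:

```python
# Hypothetical sketch of preparing a "Run Events" CSV for upload.
# Column names ("Run", "Event Type", ...) are illustrative only; the
# real schema comes from the Run Events mapping in your tenant.
import csv

events = [
    {"Run": "BR-2024-001", "Event Type": "Inoculation",
     "Timestamp": "2024-05-01 09:30",
     "Description": "Inoculated from seed flask", "Operator": "J. Doe"},
    {"Run": "BR-2024-001", "Event Type": "Sample",
     "Timestamp": "2024-05-02 10:00",
     "Description": "Daily offline sample", "Operator": "J. Doe"},
]

with open("run_events.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(events[0].keys()))
    writer.writeheader()
    writer.writerows(events)
```

The resulting file would then go through the Import page with a Run Events mapping, just like any other upload.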
![](/docs/faq/how-can-i-annotate-data-using-events/image-1.png)

## How to create an event on Invert?

You can create an event by navigating to the **Event side bar** on the Analysis page. Open the **Run Events** page and select a run ID from the dropdown menu. Press 'Edit' to enable event editing and click 'Add' in the desired event category. Enter event details such as time and event type, save the changes, and return to the Analysis page.

## Image upload

You can upload images to events on the **Run Events** page! Images appear in event tooltips and can be downloaded in the events side bar view.

![](/docs/faq/how-can-i-annotate-data-using-events/image-2.png)

## How can I hide certain events from view?

You can control event visibility through the event filter dropdown menu in the **event side bar**. Unselect 'All' to hide all events from view. Alternatively, you can choose to show specific events by checking/unchecking any of the event checkboxes in the dropdown menu, such as 'Sample' or 'Inoculation'.

## Event categories and Event types

### Phases
- Growth (Start/End), Production (Start/End)

### Critical Operations
- Inoculation, Induction, Feed Start, Transfection

### Additions / Removals
- Sample, Drawdown, Foamout, Harvest, Reagent bolus

### Observations
- Observations

---
kind: doc
category: faq
title: "How do I assign friendly run names?"
slug: how-do-i-assign-friendly-run-names
url: https://invertbio.com/docs/faq/how-do-i-assign-friendly-run-names
markdown_url: https://invertbio.com/docs/how-do-i-assign-friendly-run-names.md
---

# How do I assign friendly run names?

Run names tend to be long, uninformative, or overly detailed, which can impact the effectiveness of your data presentation. By assigning friendly run names, you can streamline the appearance of charts and legends, enabling you to emphasize run-specific details and present your analysis with greater clarity. To assign friendly run names using the "Group by" feature, follow these steps: 1.
**Create a new property:** Navigate to the Runs table or Library page to create a new property for your custom run names (e.g. 'Alias').
2. **Change Data Type to 'Text':** Edit the property and set its data type to 'Text'.
3. **Assign Friendly Run Names:** Enter your desired run names into the custom property field via the Runs table.
4. **Apply 'Group By' in charts:** On the analysis page, choose the new custom property from the 'Group By' dropdown menu. The custom text values will replace Run IDs in charts and chart legends, making it easier to identify and compare runs during analysis.

---
kind: doc
category: faq
title: "What does the Live indicator mean?"
slug: what-does-the-live-indicator-mean
url: https://invertbio.com/docs/faq/what-does-the-live-indicator-mean
markdown_url: https://invertbio.com/docs/what-does-the-live-indicator-mean.md
---

# What does the Live indicator mean?

When runs are receiving data right now, Invert will label them with the "Live" indicator. This means that Invert has received data within the last five minutes. You can use this to quickly find data from live runs and distinguish what's running right now from historical data. This live indicator is separate from the run status field (Draft, In-Progress, Completed, etc.), which remains the same.
The live status will automatically appear in the run table: ![](/docs/faq/what-does-the-live-indicator-mean/image-1.png) Live status can be applied as a filter, to show only live runs, as shown below: ![](/docs/faq/what-does-the-live-indicator-mean/image-2.png) This status is visible in the analysis view as well: ![](/docs/faq/what-does-the-live-indicator-mean/image-3.png) --- kind: doc category: faq title: "Configure SAML/SSO on Microsoft Entra ID (Azure AD)" slug: configure-saml-sso url: https://invertbio.com/docs/faq/configure-saml-sso markdown_url: https://invertbio.com/docs/configure-saml-sso.md --- # Configure SAML/SSO on Microsoft Entra ID (Azure AD) To set up Single Sign-On (SSO) with Microsoft Azure Active Directory (Azure AD), please follow these steps: ## Step 1: Register an Application in Azure AD 1. Log in to your Azure portal. 2. Go to Azure Active Directory → App registrations → New registration. 3. Enter a name for the application (e.g., _Invert SSO_). 4. Under Redirect URI, add the following value: [https://auth.invertbio.com/login/callback](https://auth.invertbio.com/login/callback) 5. Save the application. ## Step 2: Collect and Share Information with Invert Once the application is created, please provide the following details to your Invert representative so we can complete the setup on our side: - Application (Client) ID - Client Secret (you will need to generate one in the app’s Certificates & Secrets section) - Azure AD Domain Name (found in your Directory overview) - Tenant ID (optional but recommended for more reliable configuration) Once we receive this information, we will finalize the SSO integration. After setup, your users will be able to log in to Invert using their Microsoft Azure AD credentials. 
--- # Section: Changelog --- kind: changelog title: "Skills" slug: 2026-04-03-skills date: 2026-04-03 type: new tags: ["Assist","Library","Analysis"] url: https://invertbio.com/changelog/2026-04-03-skills markdown_url: https://invertbio.com/changelog/2026-04-03-skills.md --- # Skills Your team has established ways of doing data analysis — methods refined through years of experience. Now you can capture them directly in Invert as a **Skill**: a reusable set of instructions that tells Assist *how* to conduct a specific analysis, so it works the way your team works — every time, for everyone. Add or edit Skills in the **Library** — each has a name, description, and instructions (plain text, code, or both). In Assist, reference a Skill directly with `@Skill Name`, or just ask your question — Assist will find and apply the right Skill on its own when one is relevant. Same method, same rigor, every time. **Examples:** | Skill | Description | | --- | --- | | **Program X Batch Deviation Triage** | Start with our 6 critical process parameters in the order our team defined, compare each to our validated ranges from PPQ, then check the 3 CQAs flagged as most sensitive — always present the deviation narrative in the format QA expects | | **Site B Monthly Process Review** | Use the acceptance bands set for Building 4 (not the corporate defaults), overlay the monthly runs against our baseline cohort from 2024, and call out any metric drifting toward our internal alert limits | | **Chromatography Step Yield Tracker** | Calculate step yield the way our DSP team does it (pool UV cutoff, not A280), compare against the last 10 runs on the same resin lot, flag anything below the 92% floor set after column qualification | | **New Analyst Onboarding Report** | Walk through the 3 campaigns we picked as reference, annotate the charts the way we do in training — highlight the feed bolus timing, the VCD inflection, and the harvest decision point | | **Tech Transfer Comparability Package** | 
Use the equivalence bands from our transfer protocol (±12% for yield, ±0.3 for pH), pull matched runs from both sites, and format the comparison table the way regulatory expects it in the dossier | *Available now for all Assist-enabled organizations.* --- kind: changelog title: "Custom Events" slug: 2026-04-01-custom-events date: 2026-04-01 type: improvement tags: ["Assist","Analysis","Reports","Library","Import","Events","Export"] url: https://invertbio.com/changelog/2026-04-01-custom-events markdown_url: https://invertbio.com/changelog/2026-04-01-custom-events.md --- # Custom Events Event and phase types in Invert are no longer one-size-fits-all. Every organization has its own terminology — what one team calls "Growth Phase" might be "Cultivation Phase" or "Pre-Transfection Phase" at another. An AAV process needs "Co-Infection" and "Transfection" events that a CHO fed-batch process never uses. Now you can configure your event and phase types directly from the **Library**, so your process model matches your team's nomenclature. **What you can do** - **Rename** any event or phase type to match your internal terminology. Call it what you call it — the change shows up everywhere: event creation, chart annotations, timeline views, Assist conversations, reports, and exports. - **Create** new types within the existing categories (Additions, Removals, Observations, Phases, Transfers) for events specific to your process. "Co-Infection Start," "Transfection Event," "Methanol Feed Start" — whatever your workflow requires. - **Archive** types you don't use, so they stop cluttering dropdowns and creation flows. Your team only sees what's relevant to their work. **How it works** Open the **Library** and navigate to the new **Event Types** tab. You'll see all types grouped by category. From there, click into any type to edit its display name, or create a new one by selecting a category and providing a name. 
Changes propagate immediately — every surface in Invert picks up your terminology. Phases work the same way. A phase is a pair of events (start + end) that define a time range, and they're managed alongside point-in-time events in the same Library view. **Automatic type creation during ingestion** When data comes in that references an event type you haven't configured yet, Invert creates it automatically — no manual setup required. The new type appears in your Library where you can rename or organize it. Ingestion is never blocked by a missing type. **Works with Assist** Assist uses your custom display names in conversation. If you've renamed "Growth Phase" to "Expansion Phase," that's what Assist says — no fallback to generic names. *Available now for all organizations.* --- kind: changelog title: "Introducing the Data Quality Dashboard (beta)" slug: 2026-03-29-introducing-the-data-quality-dashboard-beta date: 2026-03-29 type: new tags: ["Assist","Data Quality"] url: https://invertbio.com/changelog/2026-03-29-introducing-the-data-quality-dashboard-beta markdown_url: https://invertbio.com/changelog/2026-03-29-introducing-the-data-quality-dashboard-beta.md --- # Introducing the Data Quality Dashboard (beta) Good data quality is the foundation of reliable science. The new **Data Quality dashboard** gives you a clear, continuous view of how your data holds up against FAIR principles and AI/ML readiness standards — so issues get caught early rather than discovered downstream. Each issue is ranked by severity and tagged to the FAIR dimensions it affects, helping you prioritize what matters most. Clicking into an issue gives you the context you need to make an informed decision, and many can be resolved in just a step or two. You stay in control throughout — the dashboard guides you, but the call is always yours. We're launching with an initial set of rules and will be expanding them continuously. Let us know what data quality issues you'd like us to tackle next! 
--- kind: changelog title: "Analysis Templates: Saved Chart Configurations" slug: 2026-03-24-analysis-templates-saved-chart-configurations date: 2026-03-24 type: improvement tags: ["Runs","Analysis","Reports","Events"] url: https://invertbio.com/changelog/2026-03-24-analysis-templates-saved-chart-configurations markdown_url: https://invertbio.com/changelog/2026-03-24-analysis-templates-saved-chart-configurations.md --- # Analysis Templates: Saved Chart Configurations Setting up the same chart configuration repeatedly — quantities, axes, chart type, splits, aggregations — adds up. **Analysis Templates** let you save a plot block's configuration once and reuse it across any runs or reports. When you apply a template, it configures the chart while preserving your current run selection — so you can define a standard view once and apply it to any set of runs instantly. Similar to using Saved Views on the runs page: a dropdown in the plot block editor lets you select, save, rename, or delete templates. Templates are shared across your team, making it easy for anyone to recreate consistent analyses for process monitoring and reporting. Some examples to get you started: - **Titer regression** — titer vs. 
VCD scatter with your preferred axis scaling and split-by configuration - **Metabolite overlay** — glucose, lactate, and ammonia time series on a shared x-axis with phase splits - **Growth kinetics** — VCD and viability profiles with event markers for feeds and samples - **Process parameter monitoring** — pH, DO, and temperature across runs with aggregation bands --- kind: changelog title: "Reports: Direct Python Analysis" slug: 2026-03-18-reports-direct-python-analysis date: 2026-03-18 type: new tags: ["Assist","Runs","Analysis","Reports","Export"] url: https://invertbio.com/changelog/2026-03-18-reports-direct-python-analysis markdown_url: https://invertbio.com/changelog/2026-03-18-reports-direct-python-analysis.md --- # Reports: Direct Python Analysis Reports now support full Python execution — replacing Notebooks with a more powerful, collaborative way to analyze and visualize your data without leaving Invert: **Code Blocks** ## How it works Your run data is automatically available as structured data frames. Add a **code block** to any report, write Python to process, analyze, and visualize it — or describe what you want to see and let Assist generate the code for you. ## What can you do with them? - Run Python against your run data directly in Reports — no exporting, no context switching - Let Assist write the analysis so anyone on your team can get answers, not just the people writing scripts - Upload and combine external data alongside your Invert data - Update your data frames as new runs come in and re-execute your analysis - Share your reports with your team. They see the results, the code, and the data behind it ## Why this matters **Code blocks** make it possible for anyone on your team to process and visualize data, not just the people comfortable writing Python. That means more people running analyses, faster answers, and fewer bottlenecks. 
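As a rough sketch of the kind of analysis a code block can hold, assuming a hypothetical data frame named `runs_df` with `run`, `time_h`, and `titer` columns (the actual frame names and schema come from the runs attached to your report):

```python
# Illustrative sketch of a report code block. The DataFrame below stands
# in for the run data a report would expose; names and values are made up.
import pandas as pd

runs_df = pd.DataFrame({
    "run": ["R1", "R1", "R2", "R2"],
    "time_h": [0, 24, 0, 24],
    "titer": [0.0, 1.2, 0.0, 0.9],
})

# Final titer per run, sorted best-first
final_titer = (
    runs_df.sort_values("time_h")
           .groupby("run")["titer"].last()
           .sort_values(ascending=False)
)
print(final_titer)
```

In a report you would not construct the frame by hand; the same groupby/aggregate pattern applies directly to the frames the report provides.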
*Available now for all users.*

---
kind: changelog
title: "Experiment Summary Dashboards"
slug: 2026-02-25-experiment-summary-dashboards
date: 2026-02-25
type: new
tags: ["Assist","Runs","Experiments"]
url: https://invertbio.com/changelog/2026-02-25-experiment-summary-dashboards
markdown_url: https://invertbio.com/changelog/2026-02-25-experiment-summary-dashboards.md
---

# Experiment Summary Dashboards

Open any experiment and immediately cut to the chase: see which conditions hit your targets, what data is missing, and how your runs connect. **Experiment Summary Dashboards** now provide a unified, experiment-level view that brings your runs, experimental context, and analyses together.

- **Surface key insights automatically:** Quickly identify primary findings, critical observations, and missing data so you have a starting point for understanding your process.
- **Assess performance against objectives:** View the study design and target conditions alongside the actual results so you can quickly assess whether the experiment achieved its objectives.
- **Understand process relationships:** Visually map how the runs are connected, trace material flows, and quickly identify dependencies across the experiment lineage.

Available now for all AI-enabled organizations.

---
kind: changelog
title: "Assist: Enhanced Run Discovery"
slug: 2026-01-16-assist-enhanced-run-discovery
date: 2026-01-16
type: improvement
tags: ["Assist","Runs","Analysis","Library","Experiments"]
url: https://invertbio.com/changelog/2026-01-16-assist-enhanced-run-discovery
markdown_url: https://invertbio.com/changelog/2026-01-16-assist-enhanced-run-discovery.md
---

# Assist: Enhanced Run Discovery

## No more hunting through the run directory. Just ask.

We've upgraded how Assist finds runs. You no longer need to remember run names or pre-select experiments; just describe what you're looking for, and Assist handles the search.
- **Search by metadata:** Find runs based on cell line, media formulation, scale, operator, or any parameter available for your data - **Traverse lineage:** Automatically connect upstream seed trains to downstream operations ## Example Queries: - **Cross-campaign comparisons:** "Compare VCD profiles for all CHO-K1 runs using Media Formulation B at 200L scale over the past 6 months" - **Root cause investigation:** "Show me the seed train conditions for any production runs that had viability drop below 80% before day 10" - **Process optimization:** "Which feeding strategy produced the highest titer across our DG44 campaigns?" - **Lineage-aware analysis:** "For our top 5 performing batches, trace back to their inoculum conditions and compare passage numbers" - **Metadata-driven filtering:** "Pull all runs from Building 2 bioreactors that used the updated pH control setpoints after March 2025" - **Anomaly detection:** "Find runs where lactate accumulated faster than 0.5 g/L/day in the first 72 hours" --- kind: changelog title: "Reports - Major Update" slug: 2026-01-09-reports-major-update date: 2026-01-09 type: improvement tags: ["Analysis","Reports","Library","Export"] url: https://invertbio.com/changelog/2026-01-09-reports-major-update markdown_url: https://invertbio.com/changelog/2026-01-09-reports-major-update.md --- # Reports - Major Update We've redesigned Reports and the Reports Directory to give you more control over how you organize, filter, and share your analyses. This update brings folders, tags, advanced filtering, PDF export, rich formatting, and flexible sharing options (including private reports). ## What's New - **Folders**: Organize reports into folders for quick filtering and discovery. - **Advanced Filters**: Search across runs, metrics, and content to find exactly what you need. - **PDF Export**: Generate polished PDFs for sharing outside Invert. - **Rich Formatting & Images**: Add headers, styled text, and embedded images directly in your reports. 
- **Flexible Sharing**: Choose from workspace-wide, guest-access, or private sharing modes.
- **Bulk Plot Export**: Download all relevant plots and data from a plot block with a single click.

### Filters

The new filter options let you narrow down reports by multiple criteria simultaneously. Access them by clicking on **Add Filter** in the reports directory.

---
kind: changelog
title: "Introducing Invert Assist"
slug: 2025-11-12-introducing-invert-assist
date: 2025-11-12
type: new
tags: ["Assist","Import"]
url: https://invertbio.com/changelog/2025-11-12-introducing-invert-assist
markdown_url: https://invertbio.com/changelog/2025-11-12-introducing-invert-assist.md
---

# Introducing Invert Assist

We’re excited to launch **Invert Assist**, your new AI assistant for bioprocessing. Built on Invert’s trusted data foundation, Invert Assist lets scientists and engineers explore their process data through a simple chat interface — rapidly explore process behavior, identify key drivers, and make data-backed decisions with confidence. Every answer is transparent, traceable, and grounded in your own harmonized data.

Watch the launch webinar for more details: [Introducing Invert Assist — Explainable AI for Bioprocess](https://invertbio.com/blogs/invert-assist-ai-bioprocessing-quality-control-data-integration#:~:text=See%20the%20Full%20Webinar)

**We’re rolling out early access now. Connect with your Invert team representative to get started.**

---
kind: changelog
title: "Run Summary Dashboards"
slug: 2025-11-12-run-summary-dashboards
date: 2025-11-12
type: new
tags: ["Assist","Runs","Analysis","Library","Events"]
url: https://invertbio.com/changelog/2025-11-12-run-summary-dashboards
markdown_url: https://invertbio.com/changelog/2025-11-12-run-summary-dashboards.md
---

# Run Summary Dashboards

**Run Summary Dashboards** are a streamlined view that helps you understand run performance at a glance.
Instead of sifting through raw data tables, you now land on an automatically generated dashboard that brings key metrics and context together in one place. Each dashboard is generated from your available run data, including metrics, properties, and events, to summarize what’s happening and how the process is performing.

- **Live KPI Tiles:** Biomass, titer, pH, temperature, and control metrics update automatically as the run progresses, always showing the latest recorded values and when they were last updated.
- **Preview Charts:** Compact visualizations for environmental parameters, biomass & product, and feed trends, with direct links to the full Analysis view for deeper exploration.
- **Objectives and Notes:** Automatically generated from run data and events to summarize goals and outcomes. Users can regenerate or edit as needed.
- **Event Summary Sidebar:** Key events displayed alongside process data to help correlate actions and results.

The autogenerated dashboards are powered by our AI-enabled tooling. **To opt in and enable these features, reach out to our team.**

---
kind: changelog
title: "Live Indicator"
slug: 2025-03-25-live-indicator
date: 2025-03-25
type: new
tags: ["Assist","Runs","Analysis"]
url: https://invertbio.com/changelog/2025-03-25-live-indicator
markdown_url: https://invertbio.com/changelog/2025-03-25-live-indicator.md
---

# Live Indicator

Invert now automatically highlights which runs are live. Receiving data within the last five minutes triggers the "live" status. This indicator is visible in the run table and analysis view. You can also filter by live status to easily find what's running right now. Run status (Draft, In-Progress, Completed, etc.) remains separate. Keep an eye out for the indicator, and try the filter next time you're wondering what's happening today!
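The rule itself is simple. A minimal sketch (function and field names are illustrative, not Invert's internal API):

```python
# Sketch of the "Live" rule as described: a run counts as live when its
# most recent data point arrived within the last five minutes. The
# function and argument names here are illustrative only.
from datetime import datetime, timedelta, timezone
from typing import Optional

LIVE_WINDOW = timedelta(minutes=5)

def is_live(last_data_at: datetime, now: Optional[datetime] = None) -> bool:
    """Return True when the most recent data point is within the window."""
    now = now or datetime.now(timezone.utc)
    return now - last_data_at <= LIVE_WINDOW

now = datetime(2025, 3, 25, 12, 0, tzinfo=timezone.utc)
print(is_live(now - timedelta(minutes=2), now))   # data 2 min ago  -> live
print(is_live(now - timedelta(minutes=30), now))  # data 30 min ago -> historical
```

Note that this is independent of the run status field: a Completed run would never show the indicator simply because no new data is arriving.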
---
kind: changelog
title: "🔀 Scatterchart Mode Toggle"
slug: 2025-03-21-scatterchart-mode-toggle
date: 2025-03-21
type: improvement
tags: ["Analysis"]
url: https://invertbio.com/changelog/2025-03-21-scatterchart-mode-toggle
markdown_url: https://invertbio.com/changelog/2025-03-21-scatterchart-mode-toggle.md
---

# 🔀 Scatterchart Mode Toggle

You can now switch the x-axis in scatter charts between **continuous** and **categorical** modes—perfect for exploring both trends and group comparisons. Try it out in your next analysis.

---
kind: changelog
title: "💾 Saved Views"
slug: 2025-02-20-saved-views
date: 2025-02-20
type: new
tags: ["Runs"]
url: https://invertbio.com/changelog/2025-02-20-saved-views
markdown_url: https://invertbio.com/changelog/2025-02-20-saved-views.md
---

# 💾 Saved Views

Are you coordinating experiments across multiple projects or programs? Quickly update your Run Table to pre-saved configurations of filters, columns, and groupings using **Saved Views**.

---
kind: changelog
title: "Analysis - Event Creation"
slug: 2024-12-06-analysis-event-creation
date: 2024-12-06
type: new
tags: ["Analysis","Events"]
url: https://invertbio.com/changelog/2024-12-06-analysis-event-creation
markdown_url: https://invertbio.com/changelog/2024-12-06-analysis-event-creation.md
---

# Analysis - Event Creation

Have you noticed an anomaly in your run while doing process analysis? Quickly document any events, including observations and sample events, directly from the event bar above your run in any analysis.

---
kind: changelog
title: "📁 Reports and Import History"
slug: 2024-11-27-reports-and-import-history
date: 2024-11-27
type: new
tags: ["Runs","Reports","Import"]
url: https://invertbio.com/changelog/2024-11-27-reports-and-import-history
markdown_url: https://invertbio.com/changelog/2024-11-27-reports-and-import-history.md
---

# 📁 Reports and Import History

We've added two new tabs to the Run Summary page — **Import History** and **Reports**.
You can now find all relevant ingestions and reports run directly from the run summary page. The Reports page now has a filter that can be accessed through the magnifying glass button on the top right of the page or through the `ctrl+f` keyboard shortcut. --- kind: changelog title: "🔎 Subsetting Formula Results" slug: 2024-11-21-subsetting-formula-results date: 2024-11-21 type: new tags: ["Library"] url: https://invertbio.com/changelog/2024-11-21-subsetting-formula-results markdown_url: https://invertbio.com/changelog/2024-11-21-subsetting-formula-results.md --- # 🔎 Subsetting Formula Results We’re excited to announce a new customization in **Formulas** that limits the output to distinct time points, inferred by the selected dependency. This option is perfect for offline data calculations such as yield and productivity where you have the highest confidence in the data at the time of the recorded measurements. To enable this option, head to your formula's settings page and select the desired dependency in the **Subset Results** dropdown within the **Formula Calculation Customization** options. --- kind: changelog title: "Process Models - Optimization" slug: 2024-11-18-process-models-optimization date: 2024-11-18 type: new tags: ["Analysis"] url: https://invertbio.com/changelog/2024-11-18-process-models-optimization markdown_url: https://invertbio.com/changelog/2024-11-18-process-models-optimization.md --- # Process Models - Optimization Run the optimizer on your models to determine which runs to explore next. This takes the guesswork out of experimental planning and maximizes the value of your existing data, helping you make faster, more effective decisions throughout your process development. **How It Works** - Build your model: Select the output you want to optimize—whether it’s a specific product titer, pH balance, or another metric. - Define Your Parameters: Set boundaries for your key variables, such as temperature, feed rates, or substrate concentrations. 
- Run the Optimizer: With a single click, the system calculates the most promising runs to explore next. - Explore Suggested Runs: Conduct the recommended experiments and feed the results back into your model to continuously refine it. **Need Assistance?** Please don't hesitate to reach out for a guided walkthrough. --- kind: changelog title: "Upgrades: Line Charts, Run Summary, Smoothing" slug: 2024-11-18-upgrades-line-charts-run-summary-smoothing date: 2024-11-18 type: new tags: ["Assist","Runs","Analysis","Library"] url: https://invertbio.com/changelog/2024-11-18-upgrades-line-charts-run-summary-smoothing markdown_url: https://invertbio.com/changelog/2024-11-18-upgrades-line-charts-run-summary-smoothing.md --- # Upgrades: Line Charts, Run Summary, Smoothing ## Improved Charts: Faster, Sharper, and Higher-Resolution Our charting library was slowing us down, so we made a major upgrade. Charts now render faster and display data at a higher resolution without downsampling, offering a more detailed and precise view of your timeseries data. We’re working hard to iron out any lingering bugs, but if you spot something, please let us know! Your feedback is invaluable in helping us refine this experience. ## Run Summary Details **Metrics:** Easily view all timeseries metrics for a given run. We’ve also added statistical summaries and a streamlined editing flow for metric archiving, giving you more control over your data. **Attachments**: Run attachments now have their own dedicated tab, making it simpler to upload, review, and download files. ## Timeseries Smoothing Functions Try out `simple_moving_average()` or `centered_moving_average()` in Formulas if your process signals are too noisy. Each allows you to smooth the specified metric over a customizable time interval. 
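For intuition on what these smoothing functions compute, here is an illustrative pandas sketch (not Invert's implementation; the metric, sampling rate, and window choices are made up):

```python
# Illustrative sketch of simple (trailing) vs. centered moving averages,
# the idea behind simple_moving_average() and centered_moving_average().
# This is not Invert's implementation; the DO signal below is synthetic.
import numpy as np
import pandas as pd

# Noisy dissolved-oxygen signal sampled once per minute for 3 hours
idx = pd.date_range("2024-01-01", periods=180, freq="min")
do = pd.Series(60 + np.random.default_rng(0).normal(0, 2, len(idx)), index=idx)

# Trailing 1-hour moving average: each point averages the preceding hour
simple = do.rolling("60min").mean()

# Centered 1-hour moving average: each point averages ±30 minutes around it
centered = do.rolling(61, center=True, min_periods=1).mean()
```

The centered variant avoids the lag a trailing window introduces, at the cost of being undefined in true real time (it needs future samples).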
*Example: Centered Moving Average over a 1-hour period of a Dissolved Oxygen (DO) signal.*

---
kind: changelog
title: "Introducing Parent Metrics"
slug: 2024-09-18-introducing-parent-metrics
date: 2024-09-18
type: new
tags: ["Library"]
url: https://invertbio.com/changelog/2024-09-18-introducing-parent-metrics
markdown_url: https://invertbio.com/changelog/2024-09-18-introducing-parent-metrics.md
---

# Introducing Parent Metrics

Data can be messy! It may have differing names or even differing units, especially when coming from different sources. We've introduced a way to simplify your metric library through **Parent Metrics**. Parent metrics allow you to combine data across existing metrics and even from formulas into a unified metric concept. Metrics can now be assigned to a parent metric in the library, and the parent metric can be used throughout Invert.

- Scroll a shorter, more meaningful list of metrics
- Combine formulas and metrics seamlessly for analysis
- Improve traceability back to the data source and original name

---
kind: changelog
title: "Chart Export Improvements"
slug: 2024-09-04-chart-export-improvements
date: 2024-09-04
type: improvement
tags: ["Analysis","Export"]
url: https://invertbio.com/changelog/2024-09-04-chart-export-improvements
markdown_url: https://invertbio.com/changelog/2024-09-04-chart-export-improvements.md
---

# Chart Export Improvements

We've improved the Chart Export functionality, providing more flexibility and control when exporting charts as PNG. You can now export charts with predefined, standard settings for consistent, high-quality results.

**Available Export Options:**

- **Standard Export**: Generates charts with fixed aspect ratios and larger font sizes, optimized for use in presentations or documents like PowerPoint and Word.
- **In-view Export**: Creates charts based on your current graph view and browser zoom settings, allowing for customized exports that match what you see on your screen.

**How It Works:**

1. Create your line chart.
2.
Save it as a PNG by clicking the three dots button.
3. Choose between 'Standard' or 'In-view' in the Export Preview.
4. Press 'Export' to save your chart.

---
kind: changelog
title: "Data Export into Excel Improvements"
slug: 2024-09-04-data-export-into-excel-improvements
date: 2024-09-04
type: improvement
tags: ["Export"]
url: https://invertbio.com/changelog/2024-09-04-data-export-into-excel-improvements
markdown_url: https://invertbio.com/changelog/2024-09-04-data-export-into-excel-improvements.md
---

# Data Export into Excel Improvements

Improved Configurability for Data Export into Excel! Our new exporting options give you greater control over how data is exported into Excel. These enhancements are particularly beneficial for handling sparse data sets and managing large volumes of data.

**Interpolation:**

1) 'None': Data export without interpolation.
2) 'Linear Interpolation': Fills in gaps in sparse data sets using linear interpolation.

**Resample Period:**

1) 'None': Data export without resampling.
2) Resampling at a specific interval ('1 Minute', '5 Minutes', '15 Minutes'): Reduces the number of rows by resampling data at the selected interval, using the average value within each period.

**How It Works:**

1. Create your line chart.
2. Click the 'Export' button to open the Export Configuration modal.
3. Choose your preferred options for Interpolation and Resample Period.
4. Press 'Export' to download your customized data export.

---
kind: changelog
title: "Metric & Formula Notes"
slug: 2024-07-19-metric-formula-notes
date: 2024-07-19
type: improvement
tags: ["Library"]
url: https://invertbio.com/changelog/2024-07-19-metric-formula-notes
markdown_url: https://invertbio.com/changelog/2024-07-19-metric-formula-notes.md
---

# Metric & Formula Notes

Curious to learn more about the metric selected in your report? Hover your cursor over the metric name in any analysis or report to see descriptive notes and the full equation of the given formula.

How to add notes: 1.
Select a metric or formula from the Library. 2. Click Edit. 3. Add your desired description in the 'Notes' section. 4. Save your changes. 5. Hover over the metric or formula name in an analysis or report to view the note. --- kind: changelog title: "Chart Customization - Y-Axis Bounds" slug: 2024-06-21-chart-customization-y-axis-bounds date: 2024-06-21 type: improvement tags: ["Analysis"] url: https://invertbio.com/changelog/2024-06-21-chart-customization-y-axis-bounds markdown_url: https://invertbio.com/changelog/2024-06-21-chart-customization-y-axis-bounds.md --- # Chart Customization - Y-Axis Bounds We are excited to extend the charts feature set with customizable y-axis bounds. You can now overwrite the auto-generated y-axis range with manually entered values. **How It Works:** 1. Open the 'View' sidebar or click on the y-axis. 2. Navigate to the 'Y-Axis Settings' section. 3. Select the y-metric you want to customize. 4. Define your desired Start range, End range, and Interval (optional). 5. Confirm by pressing the 'Apply' button. Try it out today for a more tailored data visualization experience! Check out the [Help Article](http://help.invertbio.com/en/articles/9258629-8-invert-user-guide-analysis#h_859b0c8ece) for more information. --- kind: changelog title: "Ambr 250 Integration - Sampling and Phase Events!" slug: 2024-05-22-ambr-250-integration-sampling-and-phase-events date: 2024-05-22 type: improvement tags: ["Import","Events"] url: https://invertbio.com/changelog/2024-05-22-ambr-250-integration-sampling-and-phase-events markdown_url: https://invertbio.com/changelog/2024-05-22-ambr-250-integration-sampling-and-phase-events.md --- # Ambr 250 Integration - Sampling and Phase Events! We've built new capabilities for the Ambr 250 Live integration.
You'll now see: - sample events for both automated and manual samples, complete with volume and destination - observation events that mark the start of each process phase If you're interested in learning more about the automated data ingestion options, please reach out! --- kind: changelog title: "🧮 Formulas Improvements - Previews" slug: 2024-05-09-formulas-improvements-previews date: 2024-05-09 type: improvement tags: ["Library"] url: https://invertbio.com/changelog/2024-05-09-formulas-improvements-previews markdown_url: https://invertbio.com/changelog/2024-05-09-formulas-improvements-previews.md --- # 🧮 Formulas Improvements - Previews We've made a few changes to simplify formula creation and troubleshooting: - Preview your formula with a recent run to validate the calculation - Keep track of parentheses with color-coded parenthesis pairs - Add a note to maintain context of any assumptions in the calculation - Add your own custom constants --- kind: changelog title: "Process Models - Regressions!" slug: 2024-04-03-process-models-regressions date: 2024-04-03 type: new tags: ["Analysis"] url: https://invertbio.com/changelog/2024-04-03-process-models-regressions markdown_url: https://invertbio.com/changelog/2024-04-03-process-models-regressions.md --- # Process Models - Regressions! - **Expanded Model Selection**: We've added two more models to our repertoire: Elastic Net Linear Regression: Combines Ridge and Lasso regularization for a balanced, interpretable model. Gaussian Process Regressor (GPR): Ideal for understanding uncertainty in predictions, taking the covariance between input variables into account. - **New Scaling Options**: MinMax and Standard scalers are provided for additional data preprocessing. - **Interactive Visualization**: Explore the Partial Dependence and Braid plots to better understand the outputs of the model.
- **Prediction for Experimental Design**: Dive into the relationships between process parameters and key performance indicators using your own custom ML prediction model and determine what design space to study next. [More details available at our help center](http://help.invertbio.com/en/articles/9018202-process-models-updates) --- kind: changelog title: "Ambr 250 Integration Updates" slug: 2024-03-21-ambr-250-integration-updates date: 2024-03-21 type: improvement tags: ["Import"] url: https://invertbio.com/changelog/2024-03-21-ambr-250-integration-updates markdown_url: https://invertbio.com/changelog/2024-03-21-ambr-250-integration-updates.md --- # Ambr 250 Integration Updates We've been rolling out improvements to the Ambr 250 Live integration. You'll notice: - Run Start, Run End, and Status are now automatically set by the integration. - Runs from this system are assigned a variable "Source" which is set to "Ambr 250" by default. - The integration can be configured to automatically ignore variables, by specific names or partial matches. If you're interested in learning more, please reach out! --- kind: changelog title: "Events Enhancements" slug: 2024-03-19-events-enhancements date: 2024-03-19 type: improvement tags: ["Runs","Events"] url: https://invertbio.com/changelog/2024-03-19-events-enhancements markdown_url: https://invertbio.com/changelog/2024-03-19-events-enhancements.md --- # Events Enhancements Adding context to your bioprocess has never been easier! Tell the story behind your data using our updated Event feature. 
- Add new events using the event side bar without navigating away from your analysis - Distinguish between events with new icons and event tracks - Upload images to any event - Quickly access event details through tooltips --- kind: changelog title: "Phase-Based Formulas 🌱" slug: 2024-03-06-phase-based-formulas date: 2024-03-06 type: new tags: ["Library","Events"] url: https://invertbio.com/changelog/2024-03-06-phase-based-formulas markdown_url: https://invertbio.com/changelog/2024-03-06-phase-based-formulas.md --- # Phase-Based Formulas 🌱 Introducing Phase-Based Formulas: Define growth or production phases and apply precise calculations directly to your phase of interest. No exporting or bulk data cropping required. 1. Define your phases through the Run Events editor 2. Specify a phase to be used in any formula --- kind: changelog title: "Bulk Run Selection" slug: 2024-02-09-bulk-run-selection date: 2024-02-09 type: improvement tags: ["Runs"] url: https://invertbio.com/changelog/2024-02-09-bulk-run-selection markdown_url: https://invertbio.com/changelog/2024-02-09-bulk-run-selection.md --- # Bulk Run Selection We now support selecting a range of runs using Click + Shift functionality! To use this feature, click on the run at the beginning of the range, hold down the Shift key, and then click on the last run in the range. This will select all the runs in between. This functionality works both in the Run Table and within Charts! --- kind: changelog title: "Molar Conversion" slug: 2024-01-25-molar-conversion date: 2024-01-25 type: new tags: ["Library"] url: https://invertbio.com/changelog/2024-01-25-molar-conversion markdown_url: https://invertbio.com/changelog/2024-01-25-molar-conversion.md --- # Molar Conversion We now support adding molar mass to your metrics, enabling easy conversion between g/L and Molarity with the Molar Conversion feature! To use this: Head to the Metrics library and select a metabolite or chemical product, then click edit. 
From there you can specify the molecular weight. This will enable conversions on the fly in the table and graphs! --- kind: changelog title: "Run Comparison" slug: 2023-11-22-run-comparison date: 2023-11-22 type: new tags: ["Runs"] url: https://invertbio.com/changelog/2023-11-22-run-comparison markdown_url: https://invertbio.com/changelog/2023-11-22-run-comparison.md --- # Run Comparison Not sure which parameters have changed across your runs? We've rolled out a **Run Comparison** tool, enabling you to quickly identify differences between runs with just a few clicks. To get started: Select your runs of interest, then click "Add differences" in the new Add Column dropdown menu. This will add any metadata columns to the table that have differences within your selected run set. You can use this in Reports and the Run Table. --- kind: changelog title: "Scatter Plot Statistics" slug: 2023-11-07-scatter-plot-statistics date: 2023-11-07 type: new tags: ["Analysis"] url: https://invertbio.com/changelog/2023-11-07-scatter-plot-statistics markdown_url: https://invertbio.com/changelog/2023-11-07-scatter-plot-statistics.md --- # Scatter Plot Statistics Introducing Scatter Plot Statistics! Access automatically calculated statistics directly in the table view under the scatterplot in the "Statistics" tab. Mean distributions, correlations and linear fits are now calculated and displayed with all scatterplots as appropriate. --- kind: changelog title: "Parent Run -> Experiment" slug: 2023-10-09-parent-run-experiment date: 2023-10-09 type: improvement tags: ["Runs","Experiments"] url: https://invertbio.com/changelog/2023-10-09-parent-run-experiment markdown_url: https://invertbio.com/changelog/2023-10-09-parent-run-experiment.md --- # Parent Run -> Experiment As of October 9, 2023 we have updated the nomenclature for the run property "Parent Run" to "Experiment". This update has been made to support ongoing work for experimental planning. **What is an experiment? 
How should it be used?** An experiment is a grouping of runs, typically with overlapping operational timelines, used to help manage organizational complexity. - For example, a set of 12 or 24 runs that were performed in the same operational window on a Sartorius Ambr could be batched into a single Experiment. - The seed flask and seed train runs can also be included in this experiment if that organizational grouping is helpful. - Run data can still be compared across experiments by simply selecting the runs of interest. **Impacted User Flows:** - Filtering - Grouping - Exports This change does not impact any existing automated ingestion flows. "Parent Run" can still be used for the column header in manual ingestions. Please reach out if you have any questions or concerns! --- kind: changelog title: "Compile your Analyses into a Report" slug: 2023-10-05-compile-your-analyses-into-a-report date: 2023-10-05 type: new tags: ["Reports"] url: https://invertbio.com/changelog/2023-10-05-compile-your-analyses-into-a-report markdown_url: https://invertbio.com/changelog/2023-10-05-compile-your-analyses-into-a-report.md --- # Compile your Analyses into a Report Share findings, observations, and conclusions with your colleagues through the upgraded Reports interface.
- You can now store several analyses within a single report - Share context and conclusions through free text annotations for each plot block Check out a quick walkthrough [here](https://www.loom.com/share/8b09923fb64046debd0d6ee8edfc3ba6?sid=0555d942-93cb-49df-b2be-65e300fa330f) --- kind: changelog title: "Nested Formulas and Advanced Functions" slug: 2023-09-22-nested-formulas-and-advanced-functions date: 2023-09-22 type: new tags: ["Library"] url: https://invertbio.com/changelog/2023-09-22-nested-formulas-and-advanced-functions markdown_url: https://invertbio.com/changelog/2023-09-22-nested-formulas-and-advanced-functions.md --- # Nested Formulas and Advanced Functions Formulas can now be referenced within other formulas! Simplify complex expressions by creating named reusable intermediates. Please reach out to our staff through Help and Support if you would like support in converting any existing formulas. New functions are available in formulas to support timeseries evaluations. **Timeseries Integration** Function: `time_integral()` Example: `Base Volume Totalizer = time_integral(base_volumetric_pump_rate)` **Timeseries Derivative** Function: `time_derivative_hours()`, `time_derivative_minutes()`, `time_derivative_seconds()` Example: `Acetate Accumulation Rate = time_derivative_hours(acetate_concentration)` **Linear Regression** Function: `linear_regression()` Evaluation: `Y = mX+b`, where X = Elapsed Run Time **Log-Linear Regression** Function: `log_linear_regression()` Evaluation: `ln(Y) = ln(b) + mX` or `Y = b*exp(mX)`, where X = Elapsed Run Time --- kind: changelog title: "Run Events" slug: 2023-06-01-run-events date: 2023-06-01 type: new tags: ["Runs","Events"] url: https://invertbio.com/changelog/2023-06-01-run-events markdown_url: https://invertbio.com/changelog/2023-06-01-run-events.md --- # Run Events Introducing **Run Events**: Enhance Bioprocess Visibility and Documentation!
- Seamlessly document events and observations directly on the graph, providing clarity and context at every stage. - Gain insights into sample events, start times, and more on the Run Events page and directly on your analysis, enhancing your understanding of the bioprocess. - Effortlessly upload observations through a user-friendly interface, streamlining data input and boosting efficiency. --- kind: changelog title: "Run Lineage" slug: 2023-06-01-run-lineage date: 2023-06-01 type: new tags: ["Runs"] url: https://invertbio.com/changelog/2023-06-01-run-lineage markdown_url: https://invertbio.com/changelog/2023-06-01-run-lineage.md --- # Run Lineage Trace Your Bioprocess from Upstream to Downstream with **Run Lineage**! - You can now effortlessly connect your fermentation runs to downstream processing through to stability results. - Visualize the complete journey of your bioprocess, identify dependencies, and gain comprehensive insights for informed decision-making. - Run Lineage is available through the Run Summary page. Reach out to learn more! --- kind: changelog title: "Scatter charts ✨" slug: 2023-04-04-scatter-charts date: 2023-04-04 type: new tags: ["Analysis"] url: https://invertbio.com/changelog/2023-04-04-scatter-charts markdown_url: https://invertbio.com/changelog/2023-04-04-scatter-charts.md --- # Scatter charts ✨ Quickly compare Key Process Metrics across runs using the **Scatter Chart View**. - X axis can be continuous process data or categorical metadata - Several data aggregation methods available including mean, min, max, initial, and final values --- kind: changelog title: "Clean Up Your Metadata 🛀" slug: 2023-03-16-clean-up-your-metadata date: 2023-03-16 type: new tags: ["Runs"] url: https://invertbio.com/changelog/2023-03-16-clean-up-your-metadata markdown_url: https://invertbio.com/changelog/2023-03-16-clean-up-your-metadata.md --- # Clean Up Your Metadata 🛀 - Run metadata is critical to differentiate runs and to properly interpret process data. 
*Process metadata includes information such as strain names, vessel IDs, media used and process control setpoints.* - Using the new **Bulk Edit** feature, quickly update or add new metadata so that the context for each run is appropriately documented. - The analysis graphing page also now includes the metadata table for an overview of the metadata when reviewing your graphs. --- kind: changelog title: "Semi-log Axis Scaling 📈" slug: 2023-03-16-semi-log-axis-scaling date: 2023-03-16 type: new tags: ["Analysis"] url: https://invertbio.com/changelog/2023-03-16-semi-log-axis-scaling markdown_url: https://invertbio.com/changelog/2023-03-16-semi-log-axis-scaling.md --- # Semi-log Axis Scaling 📈 Want to compare exponential data such as growth curves or decay rates? Modify the axis scaling in the metric settings to turn on semi-log scaling mode. --- kind: changelog title: "Explore Run Metadata 🗺️" slug: 2023-02-14-explore-run-metadata date: 2023-02-14 type: new tags: ["Runs"] url: https://invertbio.com/changelog/2023-02-14-explore-run-metadata markdown_url: https://invertbio.com/changelog/2023-02-14-explore-run-metadata.md --- # Explore Run Metadata 🗺️ Run metadata is now more accessible than ever. Check out the table view and add relevant metadata columns for reference during run exploration and selection. - Line textures now help distinguish between metrics in "All-in-one" charts. - Colors are consistent per run when charts are split by metric. 
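The semi-log scaling introduced above is useful because exponential data plots as a straight line, and the slope of that line is the growth (or decay) rate itself. As a minimal NumPy sketch of that idea, using synthetic data and hypothetical variable names (this is an offline illustration, not Invert's implementation):

```python
import numpy as np

# Synthetic exponential growth: OD600 = 0.1 * exp(0.4 * t).
# On a semi-log axis this curve renders as a straight line.
t = np.linspace(0.0, 10.0, 21)   # elapsed hours
od = 0.1 * np.exp(0.4 * t)

# The slope of ln(OD) vs. time is the specific growth rate mu (1/h);
# a least-squares line through the log-transformed data recovers it.
mu, ln_od0 = np.polyfit(t, np.log(od), 1)
print(f"mu = {mu:.3f} 1/h, doubling time = {np.log(2) / mu:.2f} h")
# → mu = 0.400 1/h, doubling time = 1.73 h
```

This is the same reason a straight segment on a semi-log growth curve identifies the exponential phase at a glance.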
--- kind: changelog title: "Converting units on the fly 🚀" slug: 2022-12-30-converting-units-on-the-fly date: 2022-12-30 type: new tags: ["Library","Analysis"] url: https://invertbio.com/changelog/2022-12-30-converting-units-on-the-fly markdown_url: https://invertbio.com/changelog/2022-12-30-converting-units-on-the-fly.md --- # Converting units on the fly 🚀 Invert now automatically handles unit conversions for you on the backend, so that a sensor measuring flow rates in `L/h` and another in `m^3/h` will be displayed uniformly in the metric's default unit. You can also change the unit on any metric yourself by choosing between the available display options, and we'll convert values to whatever you like – useful for comparing across metrics with different units. --- kind: changelog title: "Charts just got a make-over ✨" slug: 2022-11-14-charts-just-got-a-make-over date: 2022-11-14 type: improvement tags: ["Analysis"] url: https://invertbio.com/changelog/2022-11-14-charts-just-got-a-make-over markdown_url: https://invertbio.com/changelog/2022-11-14-charts-just-got-a-make-over.md --- # Charts just got a make-over ✨ Following the addition of formulas in the previous release, charts had some quirks and UX that needed improving. ## 💯 A table Underneath every chart we've introduced a table that replaced the awkward tooltip on hover. It displays the value for the current hovered position (or its nearest neighbour). When not hovered, you can select different statistics: mean, sum, standard deviation, min and max, to easily compare runs against each other. ## 🔥 Tooltips We did not kill tooltips, but we made them much more usable. Instead of showing everything (better viewed in the table), we now only show the value of the most recently hovered line. If you highlight one or more lines the tooltip will show values for all the highlighted lines. Highlight a line by clicking the corresponding colour in the table or directly on the line.
## 🔣 Units: same same, but different Often you want to plot a handful of metrics in the same chart. Before, each would get its own axis. Now you decide whether metrics that share units should also share axes. Toggle "Combine shared y-axes" depending on what's useful in your context. **Other fixes in this release** - Fix where sometimes you could not add/remove quantities - Fix for zero-values shown as `-` not `0` - Made it clearer that mappings can process multiple files - Better state management for uploading multiple files - Fix for run start dates not displaying as date-time - Removed denominator in groups until filter applied - Moved the "add prop" button on the run/edit page to be more visible - Filtered away unsuccessful ingestions from status page - Made search only search for one type of data at a time - Fix for hidden checked checkbox on metrics page - Fix for unneeded scrollbars for non-Mac users - Fix for Ctrl+Click "Open" button on parent run not opening in a new tab - Fix for duplicate y-axis ticks - Fix the chart/table height distribution for small screen sizes - Reset highlighting when chart filters/quantities change --- kind: changelog title: "Formulas - derived metrics" slug: 2022-11-10-formulas-derived-metrics date: 2022-11-10 type: new tags: ["Library"] url: https://invertbio.com/changelog/2022-11-10-formulas-derived-metrics markdown_url: https://invertbio.com/changelog/2022-11-10-formulas-derived-metrics.md --- # Formulas - derived metrics Today, we're introducing formulas to derive metrics based on your existing data. Formulas work *just* like other metrics on Invert, but are calculated on the fly using custom-defined formulas. **Use formulas** to plot derived metrics on charts, as bars or lines, or export them to Excel, JMP, etc. – just like every other metric. **Build new formulas** by referencing other metrics, timeseries, or single data points, and using basic arithmetic operators – or more complicated math and aggregations.
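To make concrete the kind of calculation a derived-metric formula can express, here is a minimal pandas sketch. The metric names and values are entirely hypothetical, and Invert evaluates formulas on the fly in the app rather than through scripts like this; the sketch only mirrors the arithmetic offline:

```python
import numpy as np
import pandas as pd

# Hypothetical run timeseries (synthetic values for illustration only)
ts = pd.DataFrame({
    "hours": [0.0, 2.0, 4.0, 6.0, 8.0],
    "titer_g_per_l": [0.0, 0.5, 1.4, 2.6, 4.0],
    "od600": [0.1, 0.8, 2.5, 5.0, 7.9],
    "base_pump_ml_per_h": [0.0, 1.0, 3.0, 3.5, 2.0],
})

# Basic arithmetic between two metrics: titer normalized by biomass
ts["specific_titer"] = ts["titer_g_per_l"] / ts["od600"]

# An aggregation over a timeseries: totalized base volume (mL),
# i.e. the trapezoidal integral of the pump rate over elapsed hours
rate = ts["base_pump_ml_per_h"].to_numpy()
ts["base_total_ml"] = np.concatenate(
    [[0.0], np.cumsum(np.diff(ts["hours"]) * (rate[:-1] + rate[1:]) / 2)]
)
print(ts[["hours", "specific_titer", "base_total_ml"]])
# the final row totals 17.0 mL of base
```

The later `time_integral()` formula function covers the totalizer case natively, so in practice no export step is needed.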
Let us know what you think by writing us at [support@invertbio.com](mailto:support@invertbio.com) --- # Section: Blog --- kind: blog title: "Bioraptor vs. Invert vs. Genedata: Best Bioprocess AI Platform for Scale-Up & Pharma Manufacturing" slug: bioraptor-vs-invert-vs-genedata-best-bioprocess-ai-platform-for-scale-up-pharma-manufacturing date: 2025-12-02 author: "Veronica French" category: Product summary: "Compare Bioraptor, Invert, and Genedata to see which bioprocess AI platform delivers the fastest scale-up, real-time insights, and AI-ready data for pharma and bioprocessing teams. Understand why experts choose Invert for USP, DSP, and manufacturing." url: https://invertbio.com/blog/bioraptor-vs-invert-vs-genedata-best-bioprocess-ai-platform-for-scale-up-pharma-manufacturing markdown_url: https://invertbio.com/blog/bioraptor-vs-invert-vs-genedata-best-bioprocess-ai-platform-for-scale-up-pharma-manufacturing.md --- # Bioraptor vs. Invert vs. Genedata: Best Bioprocess AI Platform for Scale-Up & Pharma Manufacturing Bioprocess scale-up is no longer constrained by biological understanding alone — it’s constrained by **fragmented data**, slow insight cycles, and tools that simply weren’t designed for the realities of upstream, downstream, and CDMO collaboration. As pharma, biologics manufacturers, and bioproduction startups adopt AI-driven process development, three platforms frequently enter the evaluation process: - **Bioraptor** — a general scientific data platform - **Genedata** — a legacy enterprise informatics suite - **Invert** — purpose-built Bioprocess AI Software Each claims to support bioprocess optimization, but their architectures — and their suitability for real-world scale-up — vary dramatically. 
This comparison focuses on the capabilities that matter most for teams trying to accelerate bioprocess development: **data unification, real-time visibility, reproducibility, automation, and AI-ready intelligence.** ## Executive Summary: What Actually Accelerates Scale-Up? From conversations with scientists, MSAT leaders, and manufacturing executives, four capabilities consistently determine scale-up performance: 1. **Unified, harmonized, contextualized bioprocess data** — across USP, DSP, and CDMOs 2. **Real-time visibility into live runs** — so deviations are caught early, not after the batch 3. **An intelligence layer built on trusted data** — analytics, visualization, and transparent AI 4. **Automation that eliminates manual data cleanup** These capabilities define whether scale-up is **predictable** or **painful.** And only one platform was built specifically for them. ## Platform Overview ## Bioraptor A general-purpose scientific data platform designed for flexible data modeling and ML workflows. Strong in R&D but not purpose-built for bioprocess time-series data or manufacturing-scale environments. ## Genedata A long-standing enterprise informatics system with broad scientific coverage. Mature but heavy, slow to deploy, and not inherently suited for real-time bioprocess data or modern AI-driven analytics. ## Invert (Purpose-Built for Bioprocess AI) Invert is the only bioprocess-first software platform designed specifically to unify, harmonize, and contextualize high-density time-series data in real time, with analytics and transparent AI built in. ## Comparison: Invert vs. Bioraptor vs. Genedata When bioprocess teams evaluate these three platforms, the biggest differences become clear almost immediately. **Invert** stands apart because it was purpose-built for bioprocessing. It natively ingests high-density time-series data across upstream, downstream, and CDMO environments, and immediately harmonizes and contextualizes it. 
Real-time visibility into runs, automated data cleanup, and a built-in intelligence layer — including visualization, analytics, and transparent AI — are central to the architecture. Deployment typically takes hours, and the software meets enterprise-grade compliance requirements such as 21 CFR Part 11 and GxP. Invert is engineered explicitly for scale-up, tech transfer, and process comparability. **Bioraptor**, in contrast, is a broad scientific data platform. While it offers flexible data ingestion and supports machine learning workflows, it is not designed around bioprocess-specific needs. It lacks native support for ingesting real-time bioreactor data and does not automatically harmonize upstream–downstream datasets. Teams often rely on external analytics tools or custom pipelines, which slows insights and creates fragility — especially in scale-up or manufacturing environments. Bioraptor excels in data science labs, but it is not engineered for bioprocess scale-up. **Genedata** brings mature enterprise capabilities, but its legacy architecture makes it rigid and slow to implement. Organizations typically require months of customization to adapt it to bioprocess workflows, and real-time ingestion of high-density time-series data requires additional systems. Its analytics modules are largely reporting-oriented rather than built for active process interrogation or AI-driven decision support. The total cost of ownership is high, and its architecture is not optimized for fast-moving scale-up teams. Ultimately, the platforms differ in focus: - **Invert** is designed for the complexity of bioprocess scale-up and manufacturing. - **Bioraptor** is built for general scientific data and ML experimentation. - **Genedata** is a broad, legacy informatics system requiring heavy customization. 
Across the dimensions that matter most — unified data, real-time visibility, harmonization, built-in intelligence, and deployment speed — **only Invert delivers all capabilities natively**, without bolt-on modules or custom engineering. ## Where Bioraptor Falls Short for Scale-Up Bioraptor is popular with data science teams, but it is **not optimized for bioprocess engineering or manufacturing**. - It lacks native models for bioreactor time-series data and DSP traces. - It does not harmonize USP/DSP/CDMO datasets automatically. - Real-time monitoring capabilities are limited, making mid-run interventions difficult. - Insights frequently depend on custom scripts or external tools, slowing decision-making. Bioraptor fits R&D environments well — but scale-up and tech transfer require **purpose-built data infrastructure**, not generic scientific tooling. ## Where Genedata Struggles in Modern AI-Driven Bioprocessing Genedata has long served large pharma organizations, but today’s AI-driven bioprocessing needs have outpaced its legacy architecture. - Implementations often span many months and require specialized administrators. - Native support for high-frequency bioprocess time-series data is limited. - Most analytics live outside the platform, leading to brittle integrations. - Heavy customization creates significant long-term IT overhead. For teams seeking agility, rapid iteration, and real-time visibility, Genedata often slows progress rather than enabling it. ## Why Bioprocess Experts Choose Invert Invert combines decades of bioprocess experience with world-class software engineering — and that dual expertise shows up in every part of the platform. ## 1\. Purpose-Built, Not Retrofitted Invert is engineered specifically for USP, DSP, and scale-up. 
Rather than retrofitting generic or legacy tools, Invert was designed from the ground up around bioprocess realities — high-density time-series data, batch variability, CDMO collaboration, and the need for instant comparability. ## 2\. A Trusted, AI-Ready Data Foundation Invert continuously unifies, harmonizes, and contextualizes fragmented data sources in real time, creating reliable, reproducible, and compliant datasets. This foundation makes bioprocess data immediately actionable and AI-ready. ## 3\. Intelligence Layer Built In Unlike platforms that stop at storage, Invert includes built-in visualization, analytics, and a transparent AI interface that helps teams interrogate their data directly — without exporting files or relying on brittle pipelines. ## 4\. Real-Time Visibility Across USP, DSP, and CDMOs Teams monitor experiments as they run, detect deviations early, and prevent wasted batches — accelerating development and improving scale-up reliability. ## 5\. Fast, Low-Risk Deployment With prebuilt integrations for bioreactors and DSP systems, Invert connects in hours and delivers immediate value without heavy IT lift. ## Which Platform Accelerates Scale-Up Fastest? Across the metrics that matter — **time to insight, reproducibility, AI-readiness, and scale-up predictability** — Invert consistently outperforms Bioraptor and Genedata for bioprocess applications. For: - **Pharma manufacturers** needing predictability and compliance - **Scientists** needing harmonized, real-time data - **Startups** needing enterprise-grade capabilities without enterprise overhead - **MS&T and digital leaders** needing validated data pipelines without brittle integrations **Invert delivers the fastest path to scale-up readiness — because it was purpose-built for it.** ## See Why Bioprocess Experts Choose Invert Invert is the **Bioprocess AI Software** built specifically to transform fragmented bioprocess data into faster insights and more confident decisions. 
With a unified data foundation, real-time visibility, and an intelligence layer built in, Invert helps teams accelerate development, reduce risk, and scale with confidence. **Purpose-built, not retrofitted. Engineered for scale-up. Proven across pharma and advanced bioproduction.** --- kind: blog title: "How to Integrate Bioprocess Data Across Sites, Systems, and CDMOs | Invert Bioprocess AI" slug: how-to-integrate-bioprocess-data-across-sites-systems-and-cdmos-invert-bioprocess-ai date: 2025-12-02 author: "Veronica French" category: Product summary: "Learn how to integrate bioprocess data across instruments, LIMS, sites, and CDMOs without manual work or IT burden. See how modern bioprocess teams accelerate scale-up with automation, harmonization, and built-in intelligence." url: https://invertbio.com/blog/how-to-integrate-bioprocess-data-across-sites-systems-and-cdmos-invert-bioprocess-ai markdown_url: https://invertbio.com/blog/how-to-integrate-bioprocess-data-across-sites-systems-and-cdmos-invert-bioprocess-ai.md --- # How to Integrate Bioprocess Data Across Sites, Systems, and CDMOs | Invert Bioprocess AI ## How to Integrate Bioprocess Data Across Sites, Systems, and CDMOs — Without the Headache Bioprocessing rarely happens in one place. It spans research labs, pilot plants, GMP facilities, and CDMOs — each with its own systems, file formats, and data conventions. Every organization wants unified, analysis-ready bioprocess data, yet most scientists still spend hours stitching together files from bioreactors, downstream systems, LIMS exports, and email attachments from external partners. What should be a continuous data pipeline often becomes a patchwork of spreadsheets, ad-hoc scripts, and fragile workflows that break whenever something changes. This fragmentation slows development, increases compliance risk, and forces teams to rely on stale insights when making critical decisions.
Integrating bioprocess data shouldn’t require months of custom engineering or manual cleanup from scientists. And with the right architecture, it doesn’t. ## The Real Integration Problem: Bioprocess Data Behaves Differently Most integration tools were built for transactional records or low-frequency scientific data — not the massive, high-density time-series streams generated by modern bioreactors and DSP systems. They don’t understand batch context, sampling workflows, or process lineage. They weren’t made to accommodate CDMO variability or the regulatory expectations of 21 CFR Part 11 and GxP. And they certainly weren’t built to harmonize data in real time. This is why generic integration frameworks, retrofitted LIMS systems, and internal IT builds often fail. They’re not aligned with the realities of USP, DSP, and scale-up, and they place the burden of cleanup on scientists instead of automating it at the source. ## The Modern Playbook for Seamless Bioprocess Data Integration ## 1\. Integrate With LIMS — Don’t Replace It LIMS is essential for sample tracking and compliance, but it’s not designed for bioreactor time-series data, DSP traces, or real-time contextualization. The right bioprocess platform augments LIMS rather than competing with it. It captures complexity upstream — harmonizing sensor data, events, sampling, and metadata — and delivers structured outputs back into the LIMS environment. This approach strengthens traceability, reduces manual correction, and lays the groundwork for AI-ready datasets that extend far beyond what the LIMS alone can support. ## 2\. Unify Data Across Sites and Instruments Automatically The single biggest step forward for most organizations is removing the manual effort involved in merging data. A modern integration layer ingests data from bioreactors, analyzers, and DSP equipment across all sites — including CDMOs — and harmonizes it automatically. 
Instead of reconciling naming conventions or aligning timestamps by hand, scientists receive consistent, structured, contextualized datasets ready for analysis. This level of unification is essential for reproducibility, comparability, and scale-up decisions. ## 3\. Build Integration Pipelines That Adapt, Not Break Bioprocess environments change constantly. Instruments get firmware updates, CDMOs adjust their formats, and different teams may use slightly different workflows. Hard-coded pipelines fail the moment any of these variables shift. Scalable integration requires flexible mappings and prebuilt connectors that absorb variability without disrupting data flow. This is precisely where homegrown systems struggle and why mature teams adopt purpose-built platforms that anticipate the realities of bioprocessing rather than forcing rigid structures on it. ## 4\. Make CDMO Collaboration Repeatable and Traceable Most organizations still exchange critical manufacturing data with CDMOs via emails, flat files, or shared folders — workflows that commonly lead to missing metadata, inconsistent structures, and delayed insights. A robust integration layer standardizes CDMO inputs automatically, preserving lineage and process context even when partners use different templates or tools. This gives internal teams real-time access to CDMO data and dramatically reduces risks during tech transfer and scale-up. ## 5\. Free Scientists and IT From Manual Data Work When scientists serve as data janitors and IT teams maintain brittle pipelines, progress slows. Automation must replace manual reconciliation, error handling, and formatting. A platform that performs ingestion, harmonization, mapping, and contextualization without human intervention shifts scientific time back to experimentation and engineering. This is central to Invert’s philosophy: **Automation That Frees Expertise** — empowering teams to advance discovery and scale-up instead of fighting with data. 
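The "adapt, not break" principle above can be sketched as a configuration-driven mapping layer, where a new or changed file format is absorbed by adding a mapping entry rather than rewriting pipeline code. The format names and fields here are invented for illustration:

```python
# Toy sketch of a configuration-driven ingestion layer. When a partner changes
# its export format, only the mapping table changes, not the pipeline itself.
# Format names and field names are hypothetical.

MAPPINGS = {
    "cdmo_v1": {"time": "Timestamp", "ph": "pH", "do": "DO%"},
    "cdmo_v2": {"time": "time_utc", "ph": "pH_value", "do": "dO2_percent"},
}

def ingest(record: dict, fmt: str) -> dict:
    """Translate a raw record into the internal schema using the mapping for fmt."""
    mapping = MAPPINGS[fmt]
    return {internal: record[source] for internal, source in mapping.items()}

# The same pipeline handles both format versions; only the mapping differs.
old = ingest({"Timestamp": "2025-01-01T00:00Z", "pH": 7.0, "DO%": 40.0}, "cdmo_v1")
new = ingest({"time_utc": "2025-01-01T00:00Z", "pH_value": 7.1, "dO2_percent": 41.0}, "cdmo_v2")
assert old.keys() == new.keys()
```

Hard-coded pipelines fail at exactly this point: the parsing logic and the format assumptions are fused, so every upstream change becomes a code change.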
## Where Invert Fits: Integration Without the Headache Invert is **Bioprocess AI Software** designed specifically to unify, harmonize, and contextualize bioprocess data across instruments, systems, sites, and CDMOs. Unlike generic tools or retrofitted LIMS/ELN extensions, Invert was engineered for the realities of USP, DSP, tech transfer, and scale-up. Invert integrates seamlessly with existing LIMS environments, enhancing them with richer, more reliable data rather than attempting to replace them. Its prebuilt connectors for bioreactors, analyzers, DSP equipment, and CDMOs allow teams to go live in hours, giving IT a reliable, validated pipeline instead of a long, risky project. Underneath, Invert continuously harmonizes and contextualizes all incoming data, creating a trusted foundation that is traceable, reproducible, and ready for AI-driven analysis. Because insights are only as good as the data beneath them, Invert ensures that scientists and engineers work with consistent, contextualized datasets — not a collection of inconsistent exports. Real-time visibility into USP and DSP runs becomes standard, and CDMO collaboration becomes predictable rather than a constant source of variability. Most importantly, scientists finally spend their time running experiments instead of fixing files. ## Why CMOs and CDMOs Choose Invert Manufacturers serving multiple clients must manage variability across programs, instrumentation, and regulatory expectations. Invert allows CDMOs to standardize their data delivery, improve traceability, reduce investigation cycles, and raise the quality of data they return to sponsors — without increasing operational or IT burden. In a competitive market, CDMOs that deliver clean, harmonized, ready-to-analyze datasets stand apart. Invert makes that possible. ## Conclusion: Integration Should Be a Capability, Not a Project Bioprocess data integration should not depend on manual cleanup, fragile pipelines, or months of configuration work. 
When handled correctly, it becomes an automated, scalable capability that supports every stage of development — and accelerates scale-up rather than slowing it down. With unified, contextualized, real-time data flowing across instruments, LIMS systems, internal sites, and CDMOs, organizations make faster, more confident decisions and reduce risk across the entire lifecycle of bioprocess development. ## Request an Integration Walkthrough See how Invert integrates your bioprocess data across systems, sites, and CDMOs — without manual cleanup or IT overhead. ‍ --- kind: blog title: "The Bioprocess Scale-Up Playbook: From Fragmented Data to Confident Decisions" slug: the-bioprocess-scale-up-playbook-from-fragmented-data-to-confident-decisions date: 2025-12-02 author: "Veronica French" category: Industry summary: "Learn how leading pharma, biologics, and vaccine teams close the scale-up gap with unified data, real-time visibility, and a built-in bioprocess intelligence layer. See how Invert transforms fragmented bioprocess data into faster, more confident decisions across development and manufacturing." url: https://invertbio.com/blog/the-bioprocess-scale-up-playbook-from-fragmented-data-to-confident-decisions markdown_url: https://invertbio.com/blog/the-bioprocess-scale-up-playbook-from-fragmented-data-to-confident-decisions.md --- # The Bioprocess Scale-Up Playbook: From Fragmented Data to Confident Decisions ### How the Intelligence Layer Built In transforms scale-up in pharma, vaccines, and advanced bioproduction Scale-up should be a moment of acceleration — not uncertainty. Yet for most bioprocessing teams, the transition from lab to pilot to manufacturing is slowed by a familiar force: **fragmented data**. Upstream bioreactors, downstream purification systems, and CDMO partners all generate massive time-series data streams, but they exist in different formats, different folders, different timelines, and different levels of quality. 
The result: - Weeks lost reconciling files - Missed signals that lead to failed or inconsistent runs - Delayed decisions at the exact moment clarity matters most This is the scale-up gap. And closing it requires something bioprocess teams have never truly had: **an integrated, AI-ready intelligence layer that unifies data and turns complexity into confidence.** ## Why Scale-Up Fails: The Real Impact of Fragmented Data Scaling a bioprocess is one of the most expensive and risk-heavy moments in biopharma and vaccine development. And yet, more than **95% of bioprocess data goes unused** — not because teams lack skill, but because they lack infrastructure purpose-built for USP, DSP, and CDMO collaboration. Common scale-up challenges include: ### 1\. Inconsistent visibility across sites and stages Benchtop, pilot, and CDMO runs rarely share a unified data model. Teams can’t reliably compare process performance or troubleshoot deviations until weeks later — when it’s too late. ### 2\. Massive time-series datasets that can’t be harmonized Generic or retrofitted tools — ELNs, LIMS, BI — can capture fragments of bioprocess data but cannot harmonize or contextualize high-density time-series data at scale. ### 3\. Late insights drive costly decisions When insight arrives days after a run completes, the damage is already done: lost batches, unpredictable scale-up, and delayed milestones. ### 4\. AI efforts stall before they start Unstructured, inconsistent data makes it nearly impossible to rely on AI for modeling, predictions, or decision support. The industry has tried to fix scale-up with more dashboards and more headcount. But the core problem is not visualization — **it is fragmentation.** ## The Scale-Up Playbook: How Leaders Turn Data Into Confident Decisions Below is the operating framework modern bioprocess teams use to close the scale-up gap. 
## Step 1: Unify and Harmonize Your Bioprocess Data **The foundation of confident scale-up is a harmonized, AI-ready data layer.** Pharma and biotech teams increasingly start by closing the gaps between: - Upstream and downstream data - Internal runs and CDMO runs - Benchtop, pilot, and production systems - Historical datasets and live data streams Unification and harmonization restore reproducibility — the prerequisite for meaningful comparison, modeling, and decision confidence. ## Step 2: Build Real-Time Visibility Into Every Run Scale-up risk grows with every hour teams wait for data. To prevent divergent trajectories, bioprocess teams now depend on: - Live time-series ingestion - Real-time visualization - Early issue alerts and deviations - Immediate run-to-run and site-to-site comparability This shifts teams from reactive troubleshooting to proactive control — catching deviations early and reducing wasted batches. ## Step 3: Layer On Analytics, Modeling, and AI — Built In, Not Bolted On The next generation of scale-up relies not just on seeing data, but interrogating it. High-performing teams require an intelligence layer that delivers: - Advanced time-series analytics - Statistical and model execution - Analysis templates for repeatability - A transparent AI chat interface grounded in harmonized data This elevates teams from static data review to dynamic decision support — powered by trustworthy, contextualized data. ## Step 4: Automate Everything That Slows Scientists Down Manual cleanup, file reconciliation, renaming columns, and stitching spreadsheets are not scientific work — they are drag. Automation across ingestion, mapping, and contextualization frees scientists and IT to focus on experimentation, optimization, and scale-up strategy. ## Step 5: Ensure Enterprise-Grade Stewardship and Compliance Scale-up data shapes regulatory submissions, tech transfer packages, and investment decisions. 
Platforms must support: - 21 CFR Part 11 - GxP - Full data lineage and traceability - Secure, scalable infrastructure Without trustworthy data governance, AI and analytics cannot be trusted — and decisions slow instead of accelerate. ## Where Invert Fits: The Intelligence Layer Built In Invert is the only **Bioprocess AI Software** purpose-built to unify time-series bioprocess data and deliver intelligence on top — specifically for USP, DSP, scale-up, and CDMO collaboration. Invert closes the scale-up gap through: ### A Trusted, AI-Ready Data Foundation Continuous ingestion and harmonization of upstream, downstream, and CDMO datasets — instantly analysis-ready. ### A Native Intelligence Layer Built-in visualization, analytics, and a transparent AI chat interface make complex datasets immediately actionable. ### Live End-to-End Visibility Teams monitor runs in real time, catch issues earlier, and reduce failed batches. ### Automation That Frees Expertise Scientists and IT stop fixing data and start driving science. ### Fast, Low-Risk Deployment Prebuilt bioreactor and DSP connectors integrate in hours, not weeks — minimizing IT lift. ### Purpose-Built for Bioprocessing Designed for upstream, downstream, scale-up, and CDMO realities — not retrofitted from other industries. ## Why Scale-Up Leaders Choose Invert **Pharma and biologics manufacturers** adopt Invert because it brings reliability to the most unpredictable stage of development. By unifying upstream, downstream, and CDMO data into a harmonized, AI-ready foundation, teams can finally compare runs across scales and sites with consistency. Real-time visibility tightens process control, reduces the likelihood of failed batches, and enables more predictable tech transfer. With trustworthy data powering analytics and AI, scale-up decisions become faster and more defensible. **Vaccine production startups** rely on Invert to accelerate development without sacrificing scientific rigor. 
These teams often operate under compressed timelines and limited resourcing, making delays from manual cleanup or fragmented systems especially costly. Invert provides an enterprise-grade infrastructure from day one — live data pipelines, automated harmonization, and built-in analytics — enabling reproducibility across platforms and shortening the path from discovery to scale-up. **Digital transformation, MS&T, and manufacturing science teams** turn to Invert because it eliminates the brittle integrations and manual pipelines that drain IT resources. Prebuilt connectors ingest data across bioreactors, DSP equipment, and CDMO partners in hours rather than weeks. Validated, compliant data flows give scientists and executives reliable visibility while preserving IT flexibility and avoiding vendor lock-in. With a harmonized dataset and intelligence layer built in, these teams deliver modern, AI-ready infrastructure without adding technical debt. ## What Are the Best Bioprocess Scale-Up Manufacturing Intelligence Platforms? Organizations evaluating **bioprocess scale-up intelligence platforms**, **pharma scale-up AI**, or **manufacturing optimization software for bioprocessing** should prioritize solutions that deliver: - Real-time ingestion across USP, DSP, and CDMOs - Automated harmonization of high-density time-series data - Built-in analytics and AI grounded in trusted data - Enterprise-grade compliance and lineage - Fast, validated integrations with bioreactors and DSP systems - Transparent, traceable AI-driven decision support This is precisely where Invert stands apart — with an intelligence layer built in, not bolted on. If a platform cannot unify data, contextualize it, and deliver intelligence natively, it cannot support predictable scale-up. ## Conclusion: Scale-Up Demands an Intelligence Layer — Not More Dashboards Fragmented data isn’t just a workflow inconvenience — it is the root cause of failed batches, unpredictable scale-up, and delayed milestones. 
The next generation of bioprocessing leaders are adopting platforms that deliver: - Unified, contextualized, AI-ready data - Real-time visibility - A native intelligence layer - Automation that frees scientific expertise This is the new standard. This is the scale-up playbook. And this is where Invert leads. ## Call to Action **Explore the Invert Intelligence Layer — built in, not bolted on.** See how Invert transforms scale-up from uncertain to predictable. ‍ --- kind: blog title: "Engineer Blog Series: From Bioprocess to Software with Anthony Quach" slug: engineer-blog-series-from-bioprocess-to-software-with-anthony-quach date: 2025-11-19 author: "Invert Team" category: Product summary: "Welcome to Invert’s Engineer Blog Series — a behind-the-scenes look at the product and how it’s built.In this post, software engineer Anthony Quach shares how his career in bioprocess development led him into software, and how that experience shapes the engineering decisions behind Invert." url: https://invertbio.com/blog/engineer-blog-series-from-bioprocess-to-software-with-anthony-quach markdown_url: https://invertbio.com/blog/engineer-blog-series-from-bioprocess-to-software-with-anthony-quach.md --- # Engineer Blog Series: From Bioprocess to Software with Anthony Quach _Welcome to Invert’s Engineer Blog Series — a behind-the-scenes look at the product and how it’s built._ _In this post, software engineer_ [**_Anthony Quach_**](https://www.linkedin.com/in/anthony-quach) _shares how his career in bioprocess development led him into software, and how that experience shapes the engineering decisions behind Invert._ ## Tell us about your background before joining Invert. My educational background is in chemical engineering and I spent nearly a decade working in bioprocess development and process engineering. Most of my work was in CDMOs in an R&D group, where I focused on next-generation manufacturing processes, fed-batch development, and perfusion bioreactor technologies. 
That work involved running a lot of experiments and handling an even larger amount of data. I was constantly juggling different process versions, clones, and customer programs — and almost all of that work required pulling data from multiple systems and stitching it together manually. Those years gave me a deep understanding of how difficult bioprocess data management really is. ## How did you transition from bioprocess into software engineering? It happened slowly. Even though I enjoyed the science, I kept running into the same issue: most of my bottlenecks were issues with data rather than with process development. I was spending more time fixing spreadsheets, dealing with missing metadata, or looking for historical data than actually analyzing anything. I got really interested in the software side and eventually pursued a second bachelor’s degree in computer science while working full-time. I was lucky to work under leaders who supported that growth and understood that the biotech industry could learn from the tech industry. After leaving biopharma, I wanted to stay close to the domain, but focus on solving the problems I understood firsthand. Invert was a perfect overlap — the intersection of bioprocess, data, and software engineering. ## How does your bioprocess experience influence the way you build software at Invert? It influences everything. I’ve sat in the seat of the scientist using these tools, so I think a lot about reducing manual lift and cognitive burden. When I build features, I ask myself: - Would this have saved me hours back then? - Is this eliminating duplicate work? - Does this make scientific communication smoother? - Is the experience intuitive? One example is **Saved Views**, which I helped build. When you’re supporting multiple programs, you spend a huge amount of time curating subsets of runs manually. Saved Views lets you pre-filter your data so you can instantly load exactly what’s relevant. 
It seems simple, but it’s extremely high-value when you’re switching contexts all day. Another example is **Reports**, which is one of my favorite features. In my old roles, communicating findings was painful because the context behind your data never traveled with the analysis. With Reports in Invert, the underlying data, quantities, and events automatically move with the document. Everyone sees the same context — something I really wish I’d had during my time in the lab. ## From your perspective, what makes bioprocess data uniquely challenging? A few things make it hard: - The sheer volume of data - The cost and value of generating it - The amount of metadata that’s missing, inconsistent, or hidden - The fact that most context only exists in someone’s head Scientists end up spending a lot of energy deciding what’s relevant and important to analyze. Without the right infrastructure, the mental load is huge. One thing I appreciate about Invert is that our data model is designed specifically around bioprocess. It reflects how data is actually created and used, which means we can support new equipment, processes, and workflows without forcing scientists into unnatural structures. ## Do you feel like you’re building tools you wish you had earlier in your career? Absolutely. Pretty much every feature hits a pain point I remember dealing with personally. Reports is a great example of that. Scientific communication is one of the hardest parts of process development. Without shared context, teams spend a lot of time debating data instead of discussing insights. Reports eliminates that gap — the context originates from a transparent, single source of truth. If I’d had a tool like that earlier, it would have saved me countless hours and would have improved the quality of communication. ## What are you most excited to work on next? I’m really excited about **Invert Assist**, our AI-powered analysis feature we just released. 
Bioprocess data is an ideal use case for LLMs because there’s so much depth, and a lot of the insights are buried in time series and metadata relationships. Assist can surface insights or hypotheses in minutes — work that normally takes hours or even days manually. Even when it’s not perfect, it can nudge scientists toward the right question to ask or highlight something unexpected. That’s a massive shift from the tools that are available to the industry today. There’s nothing else like it right now, and I’m excited to keep improving it. **Anthony Quach** is a Software Engineer at Invert and a former bioprocess development engineer. His domain experience and engineering background help drive the development of intuitive, high-impact tools built for scientists. --- kind: blog title: "AI in Bioprocess Quality Control: Moving from Compliance to Confidence" slug: ai-in-bioprocess-quality-control-moving-from-compliance-to-confidence date: 2025-11-17 author: "Veronica French" category: Industry summary: "Learn how AI powered bioprocess quality control software helps pharma and cell therapy manufacturers improve consistency, strengthen GxP compliance, and shift from reactive oversight to confident, real time decision making." url: https://invertbio.com/blog/ai-in-bioprocess-quality-control-moving-from-compliance-to-confidence markdown_url: https://invertbio.com/blog/ai-in-bioprocess-quality-control-moving-from-compliance-to-confidence.md --- # AI in Bioprocess Quality Control: Moving from Compliance to Confidence ## Quality Control Is the Backbone of Bioprocessing Quality control protects patients, products, and entire manufacturing programs. It ensures that every batch meets the standards required for safety, purity, and potency. Yet QC remains one of the most manual and resource intensive parts of bioprocessing. Data lives in spreadsheets, isolated instruments, paper notebooks, and disconnected LIMS systems. Investigations take days. 
Release testing slows production. Even minor misalignments in data or metadata can lead to repeat testing or batch rejection. The result is a system that focuses more on catching errors than preventing them. AI powered bioprocess quality control software is changing this dynamic. It provides a path from reactive compliance to proactive, data driven confidence. ## The Real Bottleneck: Fragmented QC Data QC teams rely on a wide range of data sources. These include: - In process analytics - Environmental monitoring - Chromatography and assays - Manual checks and sampling - Equipment logs - Electronic lab notebooks - Stability and release testing - CDMO data packages When these datasets are siloed, several problems occur: - Investigations take far longer than necessary - Root causes remain unclear or incomplete - Batch release slows due to manual review - Deviations are detected too late - Data integrity is harder to prove - Cross functional collaboration becomes strained Most QC teams know that the issue is not lack of data. The challenge is lack of **accessible, contextualized, real time data**. ## How AI Strengthens Quality Control AI powered QC platforms are not replacements for human expertise. They are tools that make quality teams faster, more reliable, and more confident by giving them comprehensive and continuous visibility into process performance. Key benefits include: ## 1\. Real time anomaly detection Machine learning models monitor process data continuously. When a pattern begins to drift, the system highlights it before it turns into a deviation. QC teams can act early, reducing risk and preventing rework. ## 2\. Automated data harmonization Instead of hunting through mismatched parameters and inconsistent metadata, the platform harmonizes data automatically. This ensures that QC teams review unified, clean data that can be trusted. ## 3\. Transparent AI insights High quality QC platforms provide explainable models. 
Users can see why the system flags a result or recommends a next step. This transparency is essential for maintaining regulatory trust. ## 4\. Faster investigations With harmonized data and real time visualization, deviation investigations move from days to hours. Teams can identify the source quickly because all data lives in one accessible environment. ## 5\. Stronger manufacturing partnership QC no longer operates at the end of the process. With live insight, QC teams can collaborate more closely with PD, MSAT, and manufacturing. ## Quality Control in Cell Therapy: A High Stakes Use Case Cell therapy manufacturing places even greater demands on QC. Patient specific materials, variable donor profiles, compressed timelines, and sensitive cell handling introduce significant risk. QC teams must maintain strict control over: - Incoming material variability - In process attributes - Cell viability - Media and supplement consistency - Environmental monitoring - Critical gene expression patterns - Release testing AI powered QC platforms give cell therapy teams the advantage of live data, faster deviation identification, and automated traceability. These tools help maintain consistency even when biological variability is unavoidable. ## The Importance of GxP Compliance and Data Integrity AI in QC raises an important question. How can teams use advanced analytics while remaining compliant with GxP and 21 CFR Part 11 requirements? Modern AI ready platforms protect data integrity through: - Traceability for every data transformation - Access controls and audit trails - Time stamped and version controlled datasets - Validation pathways that map model behavior - Clear documentation of AI logic and assumptions - Reproducible outputs that can stand up to regulatory review These safeguards ensure that AI strengthens rather than threatens compliance. Invert’s platform was built from the ground up with these principles in mind. 
The result is a system that offers advanced intelligence while protecting the trust and reliability needed for regulatory success. ## Building a Strong QC Foundation With Integrated Data Better QC begins with better data. When data is unified and contextualized, quality teams can review the entire process with a single pane of glass. An integrated QC data environment should include: - Continuous ingestion from instruments, historians, and CDMOs - Harmonized naming and metadata - Context enriched datasets - Real time dashboards and alerts - Searchable historical records - Configurable review and approval workflows This foundation turns QC into a connected, insight driven function rather than a siloed, reactive department. ## How AI Improves Decision Making for QC Teams With the right dataset and intelligence layer, AI supports QC teams by: - Predicting drift in critical quality attributes - Identifying correlations between parameters and outcomes - Highlighting non obvious sources of variability - Supporting trend analysis across large batches - Reducing the noise that slows investigations - Increasing confidence in final release decisions These capabilities free QC scientists to focus on interpretation and oversight rather than manual data work. ## QC Without Data Burden: The Invert Approach Invert delivers a unified, real time environment where QC teams can access harmonized, traceable data that supports faster analysis and clearer decision making. Key advantages include: - Transparent AI that supports rather than replaces expertise - Real time monitoring across upstream and downstream operations - Automated data harmonization for consistent reporting - Complete lineage and traceability - Fast deployment that minimizes IT effort - Support for regulated environments and validation workflows This combination helps QC teams shift from compliance driven back end review to confident, proactive oversight. 
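As a toy illustration of the real-time anomaly detection described above, a simple rolling-statistics drift check might look like this (real QC models are far richer; nothing here reflects Invert's implementation):

```python
# Toy drift check: flag any point that deviates from the trailing-window mean
# by more than k standard deviations. Window size and threshold are arbitrary.
from statistics import mean, stdev

def drift_alarms(values, window=5, k=3.0):
    """Return indices where a point deviates > k sigma from the trailing window."""
    alarms = []
    for i in range(window, len(values)):
        ref = values[i - window:i]
        mu, sigma = mean(ref), stdev(ref)
        if sigma > 0 and abs(values[i] - mu) > k * sigma:
            alarms.append(i)
    return alarms

# Steady pH signal with one sudden excursion at index 10
signal = [7.00, 7.01, 6.99, 7.00, 7.02, 7.01, 7.00, 6.99, 7.01, 7.00, 7.60]
assert drift_alarms(signal) == [10]
```

The value of a platform is not this arithmetic, which is trivial, but running checks like it continuously against harmonized, contextualized data so an alarm points to a specific run, phase, and instrument.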
## The Future of Quality Control Is Proactive, Not Reactive The bioprocess industry is moving toward continuous monitoring, real time optimization, and autonomous decision support. QC must evolve alongside it. AI enabled QC will allow teams to: - Detect deviations before they occur - Maintain consistency across sites and CDMOs - Reduce the cost and time of investigations - Increase confidence in batch release - Support digital twins and advanced analytics - Build a culture of continuous improvement The shift from compliance to confidence requires a strong data foundation and an intelligence layer that supports scientific reasoning. Invert brings these capabilities together in one platform designed for the future of bioprocessing. Learn how Invert’s bioprocess quality control software helps teams streamline QC, improve consistency, and maintain trust in every decision. --- kind: blog title: "Best Practices for Bioprocess Data Integration" slug: best-practices-for-bioprocess-data-integration date: 2025-11-17 author: "Veronica French" category: Industry summary: "Learn the key best practices for integrating bioprocess data across development, scale up, CDMOs, and manufacturing. Discover how harmonized, contextualized, AI ready data accelerates insight and reduces risk." url: https://invertbio.com/blog/best-practices-for-bioprocess-data-integration markdown_url: https://invertbio.com/blog/best-practices-for-bioprocess-data-integration.md --- # Best Practices for Bioprocess Data Integration ## Bioprocessing Runs on Data. The Challenge Is Making That Data Useful. Upstream development, downstream purification, analytics, automation systems, CDMOs, LIMS, batch records, sensors, and bioreactors all generate data every minute. In theory, this should make bioprocessing smarter and faster. In reality, most teams spend more time **pulling data together** than learning from it. 
Different naming conventions, inconsistent units, missing timestamps, disconnected control systems, and siloed CDMO data make it extremely difficult to see the complete picture. Fragmented data slows investigations. It complicates tech transfer. It limits the value of analytics and blocks AI adoption entirely. This is why leading organizations are investing in **bioprocess data integration** and building **harmonized, AI ready datasets**. When data is unified, contextualized, and accessible, teams move faster. They make better decisions. They scale with confidence. Below are the essential best practices used by high performing bioprocess and manufacturing teams. ## Best Practice 1: Start With Data Connectivity Across All Systems Data integration begins with reliable connectivity. Modern bioprocess environments require continuous data flow from: - Bioreactors and control systems - Sensors and PAT tools - OSIsoft PI and other historians - LIMS and ELNs - CDMOs and external partners - Downstream purification skids - QC and QA systems Without direct, automated ingestion, teams fall back on manual file exports that introduce delay and inconsistency. A strong integration strategy ensures that **data enters your ecosystem automatically and continuously**. Invert’s platform uses prebuilt connectors to establish this connectivity quickly, allowing teams to unify years of historical data and stream live data with little IT lift. ## Best Practice 2: Harmonize Variables, Units, and Metadata Early Once data is captured, the next step is harmonization. This means standardizing: - Parameter names - Units - Sample identifiers - Time alignment - Metadata such as lot, run, product, method, and analyst Harmonization is one of the most important steps. It transforms scattered data into a coherent dataset that can be compared across runs, bioreactors, sites, and CDMOs. Without harmonization, even simple comparisons become time consuming. 
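One harmonization step, time alignment, can be sketched as snapping irregular timestamps onto a shared grid so two runs become comparable point-for-point (the grid size and data shapes here are assumptions for illustration):

```python
# Toy time alignment: map irregular timestamps (in seconds) onto a shared grid,
# averaging any values that land in the same bucket. Grid size is arbitrary.

def align_to_grid(samples, grid_s=60):
    """Map {timestamp_s: value} onto grid buckets, averaging duplicates."""
    buckets = {}
    for t, v in samples.items():
        slot = round(t / grid_s) * grid_s
        buckets.setdefault(slot, []).append(v)
    return {slot: sum(vs) / len(vs) for slot, vs in sorted(buckets.items())}

run_a = {0: 7.00, 59: 7.01, 121: 7.03}   # logged roughly every minute
run_b = {2: 6.98, 61: 7.00, 118: 7.02}   # same cadence, offset clock

a, b = align_to_grid(run_a), align_to_grid(run_b)
# After alignment both runs share the same time base: 0, 60, 120 seconds
assert list(a) == list(b) == [0, 60, 120]
```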
With it, teams gain a consistent and reusable foundation that supports analytics, visualization, and AI modeling. ## Best Practice 3: Add Context to Every Data Point Bioprocess decisions require context. A temperature value means nothing unless you know which run it belongs to, which phase of the process it was captured in, and which operator or control strategy was applied. Contextualization links each data point to its full process story. Modern bioprocess data platforms automatically attach context such as: - Process phase - Feeding strategy - Equipment identifiers - Consumables - Date and time - Product and batch - Set point vs. measured value This creates a dataset that is not just accurate but meaningful. Invert’s unified data foundation ensures that every value is tied to consistent metadata. This produces a trustworthy dataset that supports both scientific investigation and regulatory review. ## Best Practice 4: Build an AI Ready Data Foundation AI only works when data is clean, structured, and context rich. Machine learning models depend entirely on well organized data. If the underlying data is inconsistent or incomplete, AI outputs become unreliable. An AI ready foundation includes: - Complete lineage and traceability - Time aligned variables - Harmonized naming - Standardized metadata - No gaps or missing context - Consistent data types Organizations that invest in this foundation unlock downstream benefits such as: - Real time monitoring - Predictive modeling - Automated root cause analysis - Digital twins - Parametric release tools Invert’s architecture is designed specifically for AI readiness. It ensures that every dataset meets the requirements needed for reliable analytics and transparent AI. ## Best Practice 5: Enable Real Time Visualization and Analytics Bioprocess runs move quickly. Waiting until the end of a run to interpret results limits the ability to optimize or intervene. 
With integrated, harmonized data feeding a live intelligence environment, teams can: - Monitor bioreactors in real time - Detect anomalies before they escalate - Compare runs instantly - Understand variability across scales or clones - Review full process histories without manual data prep This shift from delayed analysis to real time insight shortens learning cycles and accelerates scale up. Invert’s intelligence layer makes this possible without separate dashboards or custom pipelines. ## Best Practice 6: Support Tech Transfer With Structured, Reusable Data Tech transfer is one of the most data intensive activities in bioprocessing. Poor data integration creates unnecessary risk during handoff. Harmonized datasets allow teams to: - Send consistent, structured datasets to CDMOs - Maintain ongoing visibility across sites - Reuse models and templates without rework - Reduce miscommunication and manual reconciliation Data integration creates a repeatable, traceable process that accelerates success from development through GMP manufacturing. ## Best Practice 7: Prioritize Governance, Traceability, and Compliance Bioprocess data must meet strict regulatory expectations. Integrated environments need to support: - Audit trails - Access controls - Historical versioning - Validation and reproducibility - GxP and Part 11 readiness Teams should be able to trust every value, every timestamp, and every transformation. Invert’s platform provides continuous traceability from ingestion through analysis, helping quality and regulatory teams maintain confidence and governance across the full digital ecosystem. ## Best Practice 8: Use Integrated Data to Power Continuous Improvement Once data is unified and contextualized, organizations can unlock higher value use cases. 
### Examples include: - Batch comparison dashboards - Early deviation detection - Feeding strategy optimization - Media and supplement evaluation - Scale up modeling - Clone performance benchmarking - Multivariate analysis across years of runs These insights are only possible with strong data integration. ## Why Modern Bioprocess Teams Prioritize Integration Effective bioprocess data integration delivers real impact. - Faster time to insight - Reduced experimental duplication - Fewer deviations - More confident decision making - Shorter scale up cycles - More predictable yield and quality Teams with harmonized data spend less time cleaning spreadsheets and more time running experiments that move programs forward. ## The Foundation for Intelligent Biomanufacturing The future of bioprocessing will rely on continuous learning, advanced automation, and AI supported decision making. All of this depends on the quality, context, and accessibility of data. A strong data foundation is the first step toward predictive models, digital twins, and next generation biomanufacturing. Invert helps bioprocess teams unify fragmented data into a single, AI ready foundation. With real time visibility, transparent analytics, and reliable traceability, organizations scale faster and operate with greater confidence. Learn how Invert’s data integration platform helps bioprocess teams build the connected foundation needed for scientific and operational excellence. ‍ --- kind: blog title: "How to Scale Up Bioprocess Manufacturing Without the Data Headaches" slug: how-to-scale-up-bioprocess-manufacturing-without-the-data-headaches date: 2025-11-17 author: "Veronica French" category: Industry summary: "Learn how bioprocess teams are scaling manufacturing faster with AI-powered intelligence platforms that unify data, simplify tech transfer, and reduce risk in cell therapy and pharmaceutical production." 
url: https://invertbio.com/blog/how-to-scale-up-bioprocess-manufacturing-without-the-data-headaches markdown_url: https://invertbio.com/blog/how-to-scale-up-bioprocess-manufacturing-without-the-data-headaches.md --- # How to Scale Up Bioprocess Manufacturing Without the Data Headaches ## Scaling Up: The Moment Where Promise Meets Pressure In bioprocess development, scale-up is where science meets production reality. A process that performs beautifully in a 5-liter bioreactor can behave very differently at 500 or 5,000 liters. Oxygen transfer, shear stress, nutrient gradients, and subtle timing differences can all shift results. Getting this transition right is the difference between a successful therapy launch and costly delays. Yet the work of scaling a process remains burdened by the same obstacle that slows so many teams: **fragmented data**. From development to pilot to manufacturing, data is spread across systems, sites, and partners. Each step introduces new formats, naming conventions, and blind spots. The more you scale, the more complexity compounds. That’s why modern biomanufacturers are turning to **AI-powered bioprocess scale-up software**—platforms built to unify live data, automate insight generation, and simplify the path from development to production. ## Why Scale-Up Is So Data-Intensive Scaling up isn’t just about bigger tanks. It’s about understanding how thousands of variables interact as volumes grow. Each batch generates gigabytes of time-series data from sensors, probes, and control systems. Add in analytics from chromatography, metabolite assays, and CDMO data, and the result is an overwhelming amount of information. Traditional spreadsheets and static dashboards cannot keep up with this level of complexity. Scientists spend days cleaning and stitching together data just to perform a single comparison. By the time they spot an anomaly, the run is over and the opportunity to correct it is gone. 
The bottleneck isn’t the science—it’s the data workflow. ## The Role of AI in Modern Scale-Up Artificial intelligence helps teams move faster by transforming raw data into usable insight in real time. Instead of waiting for end-of-run reports, scientists can visualize performance as it happens, identify deviations early, and predict which parameters will affect yield or quality. AI models learn from historical and live process data to recommend adjustments that improve scalability and consistency. They also flag risks that might compromise cell viability or product potency, long before they turn into failed batches. When built on a **trusted, harmonized data foundation**, these models deliver insights that are not only fast but reliable and compliant. ## How AI-Driven Scale-Up Platforms Work A modern **bioprocess scale-up platform** combines several key functions into one connected environment: 1. **Unified Data Layer** The software automatically ingests and harmonizes data from instruments, control systems, and CDMOs. Parameters, units, and timestamps are standardized so every data point is comparable across experiments and facilities. 2. **Real-Time Visualization** Scientists and engineers can monitor critical process parameters live, catching issues such as oxygen limitation or pH drift immediately rather than hours later. 3. **AI-Powered Analytics** Machine learning models analyze patterns across historical runs to reveal relationships between scale, feed strategy, and performance. The insights are presented transparently so users can understand why the model recommends a particular change. 4. **Collaboration and Traceability** Centralized, contextualized data ensures that everyone—from process development to manufacturing and quality—works from the same source of truth. This traceability is essential for regulatory readiness and successful tech transfer. 
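As a rough illustration of the "unified data layer" idea from step 1, the core harmonization move is mapping vendor-specific parameter names and units onto one canonical vocabulary so readings from different instruments become comparable. The names, mappings, and record format below are hypothetical examples, not Invert's actual schema:

```python
# Minimal sketch of parameter-name and unit harmonization.
# All vendor names and conversion rules here are made-up examples.

# Hypothetical vendor-specific names mapped to one canonical vocabulary.
NAME_MAP = {
    "Temp_C": "temperature",
    "temperature_degC": "temperature",
    "pH_value": "pH",
    "DO_pct": "dissolved_oxygen",
}

# Conversions into canonical units (Celsius, percent).
UNIT_CONVERSIONS = {
    ("temperature", "F"): lambda v: (v - 32) * 5 / 9,   # Fahrenheit -> Celsius
    ("dissolved_oxygen", "fraction"): lambda v: v * 100,  # fraction -> percent
}

def harmonize(record):
    """Map one raw reading onto canonical names and units."""
    name = NAME_MAP.get(record["name"], record["name"])
    value = record["value"]
    convert = UNIT_CONVERSIONS.get((name, record.get("unit")))
    if convert:
        value = convert(value)
    return {"name": name, "value": value, "timestamp": record["timestamp"]}

raw = [
    {"name": "Temp_C", "value": 37.0, "unit": "C",
     "timestamp": "2025-01-01T08:00:00Z"},
    {"name": "temperature_degC", "value": 98.6, "unit": "F",
     "timestamp": "2025-01-01T08:00:00Z"},
]
clean = [harmonize(r) for r in raw]
# Both readings now share the canonical name "temperature" in Celsius.
```

In a real platform this mapping would be configuration-driven and would also cover timestamps, identifiers, and batch metadata, but the principle is the same: one canonical vocabulary for every source.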
[Invert’s Bioprocess AI Software](https://invertbio.com) integrates all of these capabilities in a single system. Designed by experts who have lived both bioprocess and technology, Invert gives teams the live visibility they need to scale faster with confidence. ## Case in Point: Cell Therapy Manufacturing Cell therapy exemplifies the complexity of modern bioprocess scale-up. Each therapy involves living cells that respond dynamically to subtle environmental changes. Traditional batch-based monitoring cannot capture this nuance. With an AI-driven intelligence layer, teams can track culture conditions in real time, compare donor-to-donor variability, and adjust feeding or oxygenation strategies before quality drifts occur. Harmonized data also makes tech transfer smoother between development labs, CMOs, and manufacturing suites. For startups entering clinical or commercial production, this capability can mean the difference between a successful tech transfer and months of troubleshooting. ## Benefits of AI-Powered Scale-Up Software - **Shorter time to milestone:** Faster analysis means quicker learning cycles and reduced experimental repetition. - **Improved yield and consistency:** Predictive analytics identify the process settings that deliver optimal performance at each scale. - **Lower cost and risk:** Early detection of process deviations prevents wasted batches and rework. - **Simplified compliance:** Harmonized, traceable data makes it easier to demonstrate control and validation. - **Faster collaboration:** Scientists, engineers, and quality teams can access live, shared data through one platform instead of relying on email and spreadsheets. Organizations that have adopted AI-driven scale-up intelligence consistently report fewer failed runs and faster transfer from pilot to GMP production. 
## The Future of Scale-Up: Connected and Continuous The next evolution of bioprocess manufacturing will be defined by **connected data pipelines** and **continuous learning systems**. As AI models mature, they will not only predict performance but automatically adjust parameters within validated boundaries. This future depends on building a strong data foundation today. Companies that invest in harmonized, AI-ready data will be positioned to take advantage of advanced capabilities such as digital twins, real-time optimization, and autonomous control. Platforms like [Invert](https://invertbio.com) are enabling this transformation by turning complex process data into clear, actionable intelligence that bridges development and manufacturing. ## Ready to Scale Without the Headaches? Scaling bioprocesses no longer has to mean scaling your data problems. With an AI-powered foundation and live intelligence layer, your teams can move faster, make better decisions, and focus on science rather than spreadsheets. Learn how [Invert’s Bioprocess AI Software](https://invertbio.com) helps biotech and pharma organizations unify their data, streamline scale-up, and achieve milestones sooner. **Book a demo to see scale-up in action.** --- kind: blog title: "Connecting Shake Flask to Final Product with Lineage Views in Invert" slug: connecting-shake-flask-to-final-product-with-lineage-views-in-invert date: 2025-11-11 author: "Invert Team" category: Product summary: "Invert’s lineage view connects products across every unit operation and material transfer throughout the entire process. It acts as a family tree for your product, tracing its origins back through purification, fermentation, and inoculation. Instead of manually tracking down the source of each data point, lineages automatically show material streams as they pass through each step." 
url: https://invertbio.com/blog/connecting-shake-flask-to-final-product-with-lineage-views-in-invert markdown_url: https://invertbio.com/blog/connecting-shake-flask-to-final-product-with-lineage-views-in-invert.md --- # Connecting Shake Flask to Final Product with Lineage Views in Invert Every molecule of a product matters in bioprocess. Questions such as “where are we losing over 15% of our product?” or “what is the cumulative yield between harvest and purification?” should be both simple and immediate to answer—but the reality is often far messier. While products move through successive steps, the data produced at each step does not necessarily follow along. Data from every unit operation is often siloed and disconnected, especially between upstream and downstream process development. Connecting data between fragmented systems is manual and time-consuming. Analytical results might be stored on an ELN or LIMS, while time series data might be siloed in system-specific software. Exporting, aggregating, cleaning, and combining this data to make sense of it could take days or even weeks—and there would be no guarantee that relevant insights would be accessible to different teams. Without insight into previous experimental results and conditions, many points of potential process optimization are left untouched, and anomalies could escape detection until CQAs are significantly impacted. ## Link process parameters to outcomes Invert’s lineage view connects products across every unit operation and material transfer throughout the entire process. It acts as a family tree for your product, tracing its origins back through purification, fermentation, and inoculation. Instead of manually tracking down the source of each data point, lineages automatically show material streams as they pass through each step. 
![__wf_reserved_inherit](https://cdn.prod.website-files.com/6765896ee196438329424471/69129b9a726e43c8a5c38f01_CleanShot%202025-11-10%20at%2015.29.20%402x.png) Lineage view showing material streams through centrifugation, filtration, and chromatography Beyond tracing the genealogy of a given product, lineage view also automatically calculates and displays: - **Product yields and recovery rates** at each process step - **Material losses** and where they occur - **Mass balance closures** across unit operations ![__wf_reserved_inherit](https://cdn.prod.website-files.com/6765896ee196438329424471/6912acd8b26adcc6877715d3_CleanShot%202025-11-10%20at%2019.25.26.png) Run summary view showing lineage, KPIs, and Invert-Assist generated observations and notes about expected mass balances. ## Flagging anomalies and potential waste Mass closures allow you to reconcile the amounts of starting material with the material recovered. These values are crucial for process consistency and understanding, as well as for regulatory compliance. However, manual calculations are tedious and error-prone. They might involve multiple unit conversions or tracking separate material streams, such as pooling supernatant into a single column, or splitting harvests into multiple purification runs. With Invert’s lineage view, all these calculations are done automatically, ensuring that anomalies are easily flagged. For example, a mass balance closure of only 85% after a purification step would highlight unaccounted loss or documented waste and could be addressed immediately, instead of only after batch review weeks later. ## Finding hidden losses One of the most powerful applications of lineage tracking is loss analysis. When overall process yields are lower than expected, pinpointing where and why the problem is occurring can be a challenge. Scouring the data from different unit operations for irregularities might involve weeks of checking calculations, normalizing data, and troubleshooting results.
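Conceptually, the arithmetic behind this kind of loss analysis is simple once lineage data is connected: compute recovery at each step and flag the outlier. A minimal sketch, using made-up masses, step names, and a hypothetical 90% closure threshold:

```python
# Hypothetical per-step yield check for loss analysis.
# Step names, masses, and the 90% threshold are illustrative only.

steps = [
    # (step name, product mass in (g), product mass out (g))
    ("harvest",        100.0, 97.0),
    ("centrifugation",  97.0, 95.0),
    ("chromatography",  95.0, 78.0),  # unusually large loss here
    ("filtration",      78.0, 76.5),
]

# Recovery (mass balance closure) at each unit operation.
recoveries = {name: mass_out / mass_in for name, mass_in, mass_out in steps}

# Cumulative yield from the first input to the final output.
cumulative_yield = steps[-1][2] / steps[0][1]

# Flag any step whose closure falls below the threshold.
flagged = [name for name, r in recoveries.items() if r < 0.90]
```

In this toy example only the chromatography step falls below the threshold (about 82% recovery), pointing the investigation straight at one unit operation instead of the whole train.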
With Invert’s lineage view, you can simply view the entire cascade of events to identify the areas of unusual product loss. Being able to narrow down the potential source of loss to chromatography, for example, could lead you to check if the column was overloaded, or if sample pH needs to be adjusted—a much more directed course of action compared to troubleshooting every unit operation step by step. ## Strengthening regulatory efficiency Beyond operational efficiency, lineage tracking strengthens regulatory compliance: - **Complete traceability**: Demonstrate chain of custody from raw materials to final product - **Data integrity**: Automated calculations eliminate transcription errors - **Deviation investigation**: Assess impact when something goes wrong - **Process validation**: Build robust mass balance data across multiple batches - **Tech transfer**: Clear process understanding facilitates site-to-site transfer ## Get a full picture of your process Invert’s lineage view connects processes from shake flask to final product, calculating yields, losses, and mass closures across upstream and downstream process. It gives scientists the insights to troubleshoot efficiently, optimize for the highest impacts, and trace deviations back to their source. By supporting process understanding, operational rigor, and regulatory compliance, Invert’s lineage gives you the full picture—the experimental context, visibility, and traceability necessary to improve processes with confidence. ‍ --- kind: blog title: "Engineer Blog Series: Invert Assist with Simon Sotak Gregor" slug: engineer-blog-series-invert-assist-with-simon-sotak-gregor date: 2025-11-11 author: "Invert Team" category: Product summary: "Invert recently launched Invert Assist, our AI interface for bioprocess data analysis. 
We speak to senior software engineer Simon Sotak Gregor about Invert Assist to learn more about how it was built, what problems it solves, and how he hopes it’ll change the way bioprocess is done." url: https://invertbio.com/blog/engineer-blog-series-invert-assist-with-simon-sotak-gregor markdown_url: https://invertbio.com/blog/engineer-blog-series-invert-assist-with-simon-sotak-gregor.md --- # Engineer Blog Series: Invert Assist with Simon Sotak Gregor ## Engineer Blog Series: Invert Assist with Simon Sotak Gregor _Welcome to Invert's Engineer Blog Series!  This series is a behind-the-scenes look into the product and how it's built. Invert recently launched Invert Assist, our AI interface for bioprocess data analysis. We speak to senior software engineer Simon Sotak Gregor about Invert Assist to learn more about how it was built, what problems it solves, and how he hopes it’ll change the way bioprocess is done._ ‍ ## What problem does Invert Assist solve? Bioprocess experimental data is one of biopharma and biotech's most valuable assets. We think there's a huge gap in being able to use this data and a huge opportunity to use more of it much better. We want to arm every single bioprocess engineer with their own on-demand data scientist that can help them analyze and model data they already have. There are so many bioprocessing teams sitting on large amounts of data, but analyzing it all would be too time-consuming. Now, you have the capability to have sophisticated models and answers ready in a couple of minutes. Invert Assist can generate insights and design experiments faster, and ultimately improve time to milestone, such as how long it takes for a product to get to clinical trials. ## What makes applying AI to bioprocess data different from other kinds of data? Even a small bioreactor generates large amounts of data from its online sensors — a single run can generate tens to hundreds of megabytes of data. 
You cannot just take all of this data and copy and paste it into a naïve general-purpose model like ChatGPT. In bioprocess, companies usually have a large number of historical runs. Some of our customers that have all their data consolidated in Invert easily have tens of thousands of them. We need to be smart about how we structure the AI's workflow, as well as what tools we give the AI to pick and choose the right pieces of the data it needs to answer your questions. ## How did the team make sure Invert Assist is suitable for enterprise biotech and biopharma use? Biotech and especially biopharma are heavily regulated industries. One of the foundational requirements for any AI system is to ensure that it’s compliant with industry standards and that it doesn't drive decisions that might endanger anyone or lose a company millions of dollars. To develop and deploy Invert Assist responsibly, we've adopted several compliance frameworks—the [EU AI Act](https://artificialintelligenceact.eu/) and the [NIST Risk Management Framework for AI](https://www.nist.gov/itl/ai-risk-management-framework), which are both regulatory frameworks that outline best practices throughout the entire lifecycle of an AI model. We are also SOC 2 Type 2 and ISO 27001-certified, which means we have to consider all risks when we develop AI features for our product. Given the value of bioprocess data, security is of utmost importance. We adhere to rigorous security standards to ensure no customer data leaks, and that it isn’t stored or used to train future AI models. Another concern that customers may have is about the traceability of Invert Assist’s answers, or how to maintain audit-ready documentation of AI results. Most large language models like ChatGPT are “black box” models, which means you don’t have any insight into how they arrived at their results. In contrast, Invert Assist provides answers that are fully traceable.
It analyzes data by writing and executing Python code, and then reasons about it to give you conclusions and recommendations. Both the code and the reasoning are fully transparent and accessible to the user. This means that Invert Assist’s answers are reproducible, and users have the ability to review and verify answers for correctness. ## How did the team assess the reliability and accuracy of Invert Assist’s answers? We use an industry-standard approach in AI called evaluations. They’re a set of tasks to test the performance of an AI model. For example, each new version of ChatGPT tells you how good it is at answering mathematical questions. At Invert, we built our own bioprocess evaluations. We assembled datasets and sets of bioprocess-specific questions and tasks. They ranged from simple and objective, such as “Which run had the highest titer?”, to complex and subjective, such as, “Given data from these runs, how should I design my next experiments?”. We run these evaluations daily, as well as after every change we make to Invert Assist. We assess its performance by testing how fast the system is, how well it performs, how sound its reasoning is, and how much it hallucinates. We also evolve our evaluations as we get new insights about what kind of problems our customers are using Invert Assist for. We see new failure modes, new applications, and new capabilities every day. These let us add to our evaluations, which gives us insight into how the system is performing overall. I’d love it if Invert’s work in this space could become the industry standard for AI in bioprocess one day. ## What were some difficult technical hurdles and how did the team overcome them? The hardest hurdle was how the AI should make decisions regarding the appropriate data to select. We overcame this in what I think was a pretty clever way. There's a term in AI called “human-in-the-loop”.
The basic principle is that you shouldn’t try to solve the hardest problem, especially if your user can solve it really easily. The bioprocess engineer ultimately knows their runs very well, so instead of having the AI hunt for the right data, you simply select the runs that you’re interested in. That way, you solve the hardest problem and get the most value out of the AI. The intersection gives us the best of both worlds. ## Which AI capabilities are you most excited to build in the future? I'm excited about more proactive AI. If you have an experiment running, the AI would keep looking at the data and keep analyzing it. If anything unexpected or surprising showed up, or perhaps a piece of data confirmed or disproved your hypothesis, it would be able to proactively point it out to you. It’d be able to tell you, “This data is worth looking at because it’s unexpected or interesting,”— that’s something I’d be excited to make a reality. --- kind: blog title: "Engineer Blog Series: Integrations with Julia Miller" slug: integrations-with-julia-miller date: 2025-11-11 author: "Invert Team" category: Product summary: "Senior software engineer Julia Miller speaks to us about Invert's integrations — how they're implemented, what makes them special, and what goes into making them work for bioprocess." url: https://invertbio.com/blog/integrations-with-julia-miller markdown_url: https://invertbio.com/blog/integrations-with-julia-miller.md --- # Engineer Blog Series: Integrations with Julia Miller _Welcome to Invert's Engineer Blog Series! This series is a behind-the-scenes look into the product and how it's built. For our first post, senior software engineer Julia Miller speaks to us about Invert's integrations — how they're implemented, what makes them special, and what goes into making them work for bioprocess._ ## What can a customer expect during a brand new Invert implementation? First, we learn about the data that the customer already has.
Every new customer comes with historical data in different formats, a different set of equipment, and software that might have data on it already. We learn about how we need to transform this data to fit into Invert so all the data can be compared and used together. Once we know how data is going to fit, we import historical data via file ingestion and connect integrations to the customer’s equipment so that we can get live data from it. ## Some of Invert’s integrations are pre-built. What does that mean? It means that we already know what the data from that equipment, for example, a DASGIP bioreactor, is going to look like. After that, we just need to work out how to configure the integration—where the equipment is running, how it’s connected, what the IP address is, and most importantly, what concrete information they want out of this equipment. Once we know what information they care about, then it’s basically just configuring what we’ve already built for that customer. ## What makes Invert’s integrations unique? We have a unique data model that works for all the processes in biotech and biopharma. One thing that’s enabled us to fit all that data into our data model is the development of Invert’s data streams. Previously, we were only able to ingest “batch aware” data—which meant that whenever there was incoming data, we needed to know which batch or process it belonged to. With data streams, we can save all the data that we don’t have a batch or experiment or process for yet. Later, the customer can say, “This data from yesterday from 2PM to 4PM belonged to this experiment,” and they can retrospectively assign that data to that experiment. I don't think there are many companies that can do that. ![__wf_reserved_inherit](https://cdn.prod.website-files.com/6765896ee196438329424471/68f2871e58e2393912f9e049_CleanShot%202025-10-17%20at%2011.12.24%402x.png) Julia at Invert's 2025 team offsite at Zion National Park ## What’s challenging about building integrations?
Historically, the most challenging thing was file ingestion for customer historical data. For example, we might get a batch record that’s an Excel file, but the information is often not in a format you can easily read and put in your database. There might be information all over the file, with the name of a run in one corner of the first page and the pH in a different tab, so you have to build code to map that information to Invert’s database. You have to account for data that isn’t there or is in a different format as well. Also, as with every Excel file, people change stuff. This breaks the map and leads to an error if someone tries to upload an altered file. However, this process is getting a lot easier with large language models (LLMs). Usually the process of building file ingestion maps would take much longer, but now our CX team can use LLMs to build new mappings for each customer, which has made it much faster. ## Tell us about a memorable real-life problem you had to solve. Once, we had to integrate with a specific bioreactor with a data structure that was very different from anything we’d ever seen before. We also didn’t have a lot of documentation, and there was no UI to explore that would allow us to see what the data looked like. To get around that, we built a little program that we installed on the machine where our agent was running. Its job was to walk the whole data tree, scan it, and give it to us in a text file format. That allowed us to open the bioreactor up and make decisions based on the data we wanted to have, all without having a UI or ever seeing any of the real data. _Julia Miller is a Senior Full-Stack Software Engineer at Invert, former business analyst, and product owner. She has an M.Sc. in Financial Services and Risk Management.
At Invert, Julia works on integrating Invert with all kinds of other systems in bioprocess._ --- kind: blog title: "Invert Launches Alerts: Turning Real Time Data Into Immediate Action" slug: invert-launches-alerts-turning-real-time-data-into-immediate-action date: 2025-11-11 author: "Masaki Yamada" category: Product summary: "Invert Alerts transform real time bioprocess data into immediate, actionable notifications so scientists can act in the moment, protect their runs, and focus on advancing processes instead of manually monitoring them" url: https://invertbio.com/blog/invert-launches-alerts-turning-real-time-data-into-immediate-action markdown_url: https://invertbio.com/blog/invert-launches-alerts-turning-real-time-data-into-immediate-action.md --- # Invert Launches Alerts: Turning Real Time Data Into Immediate Action Bioprocessing teams generate mountains of valuable live data. But too often, that data is only useful in hindsight, analyzed after a costly run has already failed. With the launch of **Invert Alerts**, we are changing that. Invert Alerts transform real time bioprocess data into immediate, actionable notifications so scientists can act in the moment, protect their runs, and focus on advancing processes instead of manually monitoring them. ‍ ## From Watching to Acting: What Invert Alerts Does Until now, Invert customers have been able to stream live data directly from bioreactors and other lab equipment into a unified platform. That visibility is powerful, but it still required constant manual oversight to catch excursions before they became catastrophic. ‍ Invert Alerts closes that gap by continuously scanning live data against defined thresholds and immediately notifying the right people when conditions drift outside of range. Instead of relying on scientists to spot issues in dashboards, the system transforms real time data into real time action. 
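At its core, an alert rule like this is a threshold check applied to each incoming reading. The sketch below is a conceptual illustration under assumed names; the rule format, field names, and notification text are hypothetical, not Invert's actual alert API:

```python
# Conceptual sketch of a threshold-based alert rule.
# The AlertRule shape and reading format are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class AlertRule:
    parameter: str
    low: float
    high: float

def check(rule, reading):
    """Return an alert message if the reading violates the rule, else None."""
    if reading["parameter"] != rule.parameter:
        return None
    value = reading["value"]
    if value < rule.low or value > rule.high:
        return (f"ALERT: {rule.parameter}={value} outside "
                f"[{rule.low}, {rule.high}] at {reading['timestamp']}")
    return None

# Hypothetical pH rule and a short stream of readings.
ph_rule = AlertRule(parameter="pH", low=6.8, high=7.4)
readings = [
    {"parameter": "pH", "value": 7.1, "timestamp": "10:00"},
    {"parameter": "pH", "value": 6.4, "timestamp": "10:05"},  # excursion
]
alerts = [msg for r in readings if (msg := check(ph_rule, r))]
```

A production system layers routing (email, text), deduplication, and resolution tracking on top of this check, but the fundamental operation is comparing each live value against a configured range the moment it arrives.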
Alerts are logged in the Invert interface with full visibility into status, timing, and resolution, ensuring not only quick interventions but also a permanent, auditable record of what happened. ‍ ![__wf_reserved_inherit](https://cdn.prod.website-files.com/6765896ee196438329424471/68d1eafd06f4024710a17630_alerts4.gif) How to set up an alert in Invert ## Why We Built Alerts: Customers Asked, and the Stakes Are High The need for Alerts came directly from the lab. Customers and prospects repeatedly shared the same frustration: equipment alarms were difficult to configure, siloed in individual systems, and easy to miss. Teams were losing valuable material overnight because no one knew when excursions occurred, and the consequences were devastating. A single missed excursion can mean hundreds of thousands of dollars in wasted material, weeks of delay while experiments are restarted, and in clinical settings, lost doses that never make it to patients. For scientists already stretched thin, the burden of constant vigilance added mental stress on top of the financial and operational risks. Alerts were built to solve this head on by putting real time control back in the hands of bioprocess teams and ensuring no run is lost to silence. ‍ ## The Impact: Protecting Runs, Protecting Progress With Alerts, the difference between failure and success comes down to minutes, not hours. Consider a lab scale experiment where a nutrient feed tube clogs and pH levels rapidly dip. Without intervention, the run is doomed, but with an instant alert, the team has a critical 10 to 40 minute window to step in and correct the issue. At pilot scale, a sudden spike in dissolved oxygen can compromise cell viability. A timely email or text notification allows a scientist to act before the damage is irreversible. 
And in clinical production, where every dose counts, Alerts ensure that any temperature excursion is flagged immediately, protecting both timelines and patients who depend on those therapies. ‍ In each of these scenarios, Alerts are not just about convenience, they are a safeguard for innovation. They prevent costly failures, protect valuable material, and reduce the mental load of manual monitoring, giving scientists confidence that their processes are always under control. ‍ ## How to Use It: Simple, Configurable, Scalable Getting started with Invert Alerts is straightforward: 1. Open the Alerts tab under the Runs section in the Invert UI. 2. Configure custom rules around your critical process parameters. 3. Apply reusable alerts across multiple runs and experiments. 4. Monitor and manage alerts with full traceability, including resolution history. Whether you need to track a single parameter like pH or build more complex conditions, Invert Alerts gives you the flexibility to start simple and the power to scale as your processes evolve. ‍ ![__wf_reserved_inherit](https://cdn.prod.website-files.com/6765896ee196438329424471/68d1ec54fca5f9ee4d08324b_CleanShot%202025-09-22%20at%2017.39.04%402x.png) Invert Report showing Alerts for pH excursions on time series data ## See Invert Alerts in Action Real time action from your data means fewer failures, less wasted material, and more confidence in your processes. ‍**Request a demo today** to see how Invert Alerts can protect your runs and free your team to focus on advancing science. --- kind: blog title: "Meet Emily Nault, Invert’s New SVP of Commercial" slug: meet-emily-nault-inverts-new-svp-of-commercial date: 2025-11-11 category: Team summary: "Invert is excited to announce that Emily Nault has joined the company as our Senior Vice President of Commercial." 
url: https://invertbio.com/blog/meet-emily-nault-inverts-new-svp-of-commercial markdown_url: https://invertbio.com/blog/meet-emily-nault-inverts-new-svp-of-commercial.md --- # Meet Emily Nault, Invert’s New SVP of Commercial Invert is excited to announce that Emily Nault has joined the company as our Senior Vice President of Commercial. ‍ Emily brings over two decades of valuable experience in strategic sales for the life sciences. With a background in bioprocess and supply chain management, she has a deep understanding of the technical solutions needed to address the industry’s most persistent challenges. ‍ Before joining Invert, Emily served as Senior Vice President of Platform Sales and Commerce at life sciences procurement platform LabViva, where she led commercial strategy. She held director-level positions at Aldevron, Cytiva, and Sartorius, overseeing global sales strategy, e-commerce, and management of key accounts in bioprocess. ‍ Her collective experience informs a grounded knowledge of biomanufacturing market needs, with her earlier roles at Sartorius and Genzyme giving her insight into bioprocess technical sales as well as marketing and purchasing trends. ‍ "I’ve spent my career helping biopharma teams overcome operational and data challenges—and I’ve also been a customer in biotech, so I know the pain points firsthand,” says Emily. “What drew me to Invert was the clarity of its mission and the strength of its platform. There’s a real opportunity to change how process development is done, and I’m excited to help more companies realize that potential.” ‍ Emily’s experience lies within the intersection of bioprocess and digital transformation—exactly where Invert focuses on advancing data infrastructure and decision-making. Her combination of technical knowledge, deep industry connections, and business acumen will help Invert expand the value it delivers to customers. 
‍ Invert believes that Emily is an instrumental addition to leadership, and is confident that her expertise will be an invaluable resource to expand the company’s presence in the biopharmaceutical industry. --- kind: blog title: "On-Demand Webinar: The Future of Data in Bioprocessing" slug: on-demand-webinar-the-future-of-data-in-bioprocessing date: 2025-11-11 author: "Veronica French" category: Interviews summary: "Discover how bioprocessing data is evolving from spreadsheets to AI-driven insights. Watch the on-demand webinar to future-proof your digital strategy." url: https://invertbio.com/blog/on-demand-webinar-the-future-of-data-in-bioprocessing markdown_url: https://invertbio.com/blog/on-demand-webinar-the-future-of-data-in-bioprocessing.md --- # On-Demand Webinar: The Future of Data in Bioprocessing From spreadsheets to AI, data management in bioprocessing has undergone a dramatic shift. This webinar explores how legacy tools hold companies back, how modern architectures enable global integration, and how AI/ML can unlock predictive control, smarter optimization, and regulatory-ready insights. Watch on your own schedule to see what’s next for data in bioprocessing—and how to future-proof your digital strategy. Click the image below to watch the on-demand webinar:
--- kind: blog title: "Top AI Tools Powering the Future of Biomanufacturing (2025 Edition)" slug: top-ai-tools-powering-the-future-of-biomanufacturing-2025-edition date: 2025-11-11 author: "Veronica French" category: Industry summary: "Explore the leading AI platforms transforming biomanufacturing in 2025. Learn what sets the best bioprocess AI software apart and how Invert is redefining real-time manufacturing intelligence." url: https://invertbio.com/blog/top-ai-tools-powering-the-future-of-biomanufacturing-2025-edition markdown_url: https://invertbio.com/blog/top-ai-tools-powering-the-future-of-biomanufacturing-2025-edition.md --- # Top AI Tools Powering the Future of Biomanufacturing (2025 Edition) ## The Acceleration of AI in Biomanufacturing Biomanufacturing has reached an inflection point. Rapid advances in process automation, analytics, and artificial intelligence are reshaping how therapies and sustainable products move from discovery to commercial production. AI tools are no longer optional. They have become the core infrastructure that powers **scale-up, process optimization, and manufacturing intelligence**. Companies that once relied on spreadsheets and manual monitoring are now building digital ecosystems capable of learning, predicting, and adapting in real time. But as the market expands, choosing the right platform becomes critical. Not all AI tools are built for the complexities of bioprocessing. Some focus on lab data capture, while others specialize in analytics or compliance. The best platforms are those that unify these functions to deliver **trusted, live insights** that directly impact product yield, quality, and time to milestone. ## What Defines the Best Bioprocess AI Software The term “AI in biomanufacturing” covers a wide range of solutions. To separate marketing buzz from true capability, leading organizations evaluate software based on five essential dimensions. 1. 
**Purpose-built architecture** The platform should be designed specifically for bioprocessing. Generic analytics or LIMS tools can visualize data, but they rarely understand the complex time-series relationships that define process behavior. Purpose-built AI platforms capture, harmonize, and contextualize that data in real time. 2. **Trusted, AI-ready data foundation** The quality of insights depends on the quality of data. Top systems continuously ingest and harmonize information from instruments, sites, and CDMOs to build an accurate, contextualized foundation for analytics and machine learning. 3. **Native intelligence layer** The most advanced platforms include real-time visualization, analytics, and transparent AI models within the product itself. This makes insights accessible without relying on separate BI dashboards or data scientists. 4. **Fast, low-risk deployment** Biopharma cannot afford long IT projects. The best tools integrate through prebuilt connectors, often delivering value within days rather than months. 5. **Compliance and traceability** AI in life sciences must operate under strict governance. Systems must maintain full traceability, data lineage, and audit trails that align with [GxP](https://ispe.org/) and [21 CFR Part 11](https://www.fda.gov/media/75414/download) requirements. ## The Leaders Shaping Bioprocess AI in 2025 Below is an overview of several platforms most frequently discussed by process development and manufacturing teams heading into 2025. Each brings a different approach to the same challenge: transforming complex bioprocess data into actionable insight. ## Genedata Genedata is widely used for bioinformatics and R&D data management. Its Bioprocess platform provides structured data capture and reporting, particularly suited for upstream process development. However, its AI capabilities remain limited to statistical analysis rather than predictive modeling or live decisioning. 
## Bioraptor Bioraptor offers data integration and workflow management tools for biotechs. Its strength lies in R&D collaboration and experiment tracking. For manufacturing-scale applications, it still relies on external analytics tools to achieve full process intelligence. ## Scispot Scispot focuses on modular lab data infrastructure, with strong capabilities in LIMS-like data capture and workflow automation. While flexible for startups, its architecture was not built for continuous bioreactor data streams or GMP environments. ## Sigma Aldrich (Bio4C ProcessPad) Bio4C ProcessPad from MilliporeSigma enables batch reporting and dashboard visualization for production environments. It supports real-time monitoring but depends heavily on manual configuration for new process types or instruments. ## Invert Invert’s Bioprocess AI Software represents a newer category: **AI-driven manufacturing intelligence built for bioprocessing from the ground up**. The platform unifies fragmented datasets across instruments, systems, and CDMOs into a single, AI-ready foundation. On top of this trusted layer, Invert delivers a **native intelligence environment** with real-time visualization, analytics, and transparent AI. This design allows scientists and engineers to monitor processes live, detect deviations early, and make confident decisions that shorten scale-up cycles and reduce wasted runs. Because deployment is fast and IT-light, organizations can start realizing value within days. Invert stands apart by combining data unification, live intelligence, and transparent AI in one purpose-built platform. ## The Direction of AI in Biomanufacturing The next generation of biomanufacturing intelligence will not rely on siloed dashboards or delayed batch reports. It will operate in real time, with AI models learning continuously from connected process data. 
As companies build digital twins, automate tech transfer, and push toward lights-out biomanufacturing, AI will serve as the decision engine that connects science with scale. To make this possible, bioprocess teams need a foundation that is unified, contextualized, and compliant. Tools that simply visualize data will give way to platforms that actively interpret it and recommend action. Invert’s intelligence layer embodies that future, transforming live bioprocess data into clear, explainable insight that accelerates discovery and manufacturing outcomes. ## What to Look for When Evaluating AI Platforms When choosing an AI tool for bioprocessing, look beyond surface-level analytics or automation claims. Ask these questions: - Can the platform integrate with bioreactors, historians, and CDMOs without custom coding? - Does it harmonize and contextualize data automatically? - Are AI models transparent and explainable, or are they black boxes? - How quickly can it deliver measurable value after implementation? - Does it ensure compliance with regulatory frameworks? The answers will reveal which tools are future-proof and which may become limitations over time. ## Building the Future of Bioprocess Intelligence The evolution of biomanufacturing will depend on data that is both trusted and alive. AI is the engine, but unified data is the fuel. The companies leading this transformation are not simply adding algorithms to legacy systems. They are rebuilding their digital infrastructure around intelligent, connected platforms that empower scientists, engineers, and quality teams to collaborate seamlessly. The most successful organizations in 2025 will be those that bridge development and manufacturing through a single, AI-ready foundation. That is the vision of [Invert](https://invertbio.com): to close the gap between data and decision, helping biopharma move therapies to market faster and more confidently than ever before. --- kind: blog title: "What Is Bioprocess AI? 
How Artificial Intelligence Is Transforming Manufacturing Intelligence" slug: what-is-bioprocess-ai-how-artificial-intelligence-is-transforming-manufacturing-intelligence date: 2025-11-11 author: "Veronica French" category: Industry summary: "Learn how Bioprocess AI software unifies data, accelerates decision-making, and drives faster scale-up in biomanufacturing. Discover why the next wave of manufacturing intelligence starts with live, trusted data." url: https://invertbio.com/blog/what-is-bioprocess-ai-how-artificial-intelligence-is-transforming-manufacturing-intelligence markdown_url: https://invertbio.com/blog/what-is-bioprocess-ai-how-artificial-intelligence-is-transforming-manufacturing-intelligence.md --- # What Is Bioprocess AI? How Artificial Intelligence Is Transforming Manufacturing Intelligence ## The Rise of Bioprocess AI Bioprocessing has always been the beating heart of biomanufacturing. From upstream cell culture to downstream purification and formulation, every phase generates an enormous volume of data. Yet most of that data sits unused, scattered across instruments, spreadsheets, and sites. **Artificial intelligence (AI) in bioprocessing**, or _Bioprocess AI_, has emerged to solve that. It’s not just about analytics or automation. It’s about creating an intelligence layer that turns raw, fragmented data into trusted, real-time insight. The outcome is faster, more confident decisions that directly accelerate time to milestone and product launch. Platforms like [Invert](https://invertbio.com) are pioneering this category by combining deep bioprocess domain expertise with world-class technology. Instead of forcing teams to adapt generic tools, Bioprocess AI software is designed from the ground up for the complexities of USP, DSP, and scale-up manufacturing. ## Why Bioprocessing Needs AI Now Biopharma’s data challenge isn’t about volume, it’s about usability. 
Every bioreactor, sensor, and chromatography skid generates continuous time-series data, but those data streams often live in silos. Scientists spend countless hours cleaning and merging data manually, delaying analysis until long after a run is complete. That delay means missed insights, wasted batches, and slower tech transfer. Meanwhile, manufacturing teams lack the real-time visibility needed to adjust parameters mid-process or predict deviations before they occur. AI changes that equation by enabling **live, contextualized intelligence**. When built on a unified data foundation, AI models can surface anomalies, predict yield trends, and recommend corrective actions as events unfold, not days later. This evolution mirrors what’s already happened in other advanced manufacturing sectors. Automotive, aerospace, and energy have all transitioned from static data review to dynamic, AI-supported process optimization. Bioprocessing is following the same path, but with higher stakes and stricter regulatory requirements. ## How Bioprocess AI Software Works At its core, Bioprocess AI integrates three essential capabilities: ## 1\. Unified, AI-Ready Data Foundation The foundation begins with data harmonization. A modern platform continuously ingests data from instruments, historians, LIMS, and CDMOs, then harmonizes and contextualizes it in real time. That process transforms fragmented data into a structured, reproducible, and compliant dataset ready for analytics and AI models. This foundation also supports full traceability, ensuring every decision can be audited and every result can be reproduced, a must for GMP and [21 CFR Part 11](https://www.fda.gov/media/75414/download) compliance. ## 2\. Built-In Intelligence Layer Once data is unified, the intelligence layer activates. Real-time visualization, advanced analytics, and transparent AI models transform complex datasets into interpretable insights. 
Scientists and process engineers can monitor bioreactors live, compare runs instantly, and detect subtle process shifts before they become deviations. Platforms like Invert’s intelligence layer integrate these capabilities natively, eliminating the need for separate BI dashboards or fragile custom pipelines. ## 3\. Closed-Loop Decisioning The final evolution is closed-loop decision support, where insights feed directly into process control, optimization, and digital twin models. This continuous learning cycle connects development and manufacturing, helping organizations move from reactive troubleshooting to proactive process control. ## The Impact: From Development to Commercial Scale When applied correctly, Bioprocess AI delivers measurable benefits across the entire lifecycle: - **Faster scale-up:** Real-time analytics shorten the feedback loop between R&D and manufacturing, cutting time to milestone. - **Improved yield and consistency:** Predictive modeling identifies parameter interactions that drive variability, improving process robustness. - **Reduced cost and risk:** Early anomaly detection prevents wasted runs and reduces the likelihood of batch failure. - **Regulatory confidence:** AI-ready data structures and audit trails simplify documentation and validation for quality and compliance teams. - **Empowered teams:** Scientists spend less time reconciling spreadsheets and more time running experiments that matter. A recent [ISPE Biopharmaceutical Manufacturing Trends report](https://ispe.org/) highlighted digital transformation and AI adoption as top priorities for process development teams over the next five years. The companies that succeed will be those who build a trusted data foundation early. ## Real-World Example: Connecting Development and Scale-Up Consider a mid-size biopharma scaling a monoclonal antibody process from pilot to GMP production. 
Traditionally, the tech transfer involves multiple handoffs, each introducing risk, data loss, and manual reconciliation. With a Bioprocess AI platform, all development and pilot data remain unified, contextualized, and accessible through a shared intelligence layer. During scale-up, the manufacturing team can instantly compare new runs against historical models, visualize performance trends, and detect drifts in cell viability or metabolite consumption in real time. That visibility enables faster decision-making and a smoother path from development to commercial readiness. In short, Bioprocess AI bridges the long-standing gap between data and decisive action. ## Choosing the Right Bioprocess AI Platform Not all AI software is created equal. Many generic BI or LIMS tools offer visualization but lack the contextualization and automation bioprocessing demands. When evaluating solutions, organizations should look for: - **Purpose-built design:** Software architected specifically for bioprocessing, not retrofitted from other industries. - **Dual expertise:** Teams who combine real bioprocess experience with enterprise-grade technology know-how. - **Native intelligence layer:** Real-time analytics, visualization, and transparent AI built directly into the platform. - **Fast, low-risk deployment:** Prebuilt connectors for common instruments and CDMOs, minimizing IT burden. - **Trusted data foundation:** Continuous ingestion and harmonization to ensure accuracy, traceability, and compliance. Invert’s Bioprocess AI Software checks all of these boxes. Designed by experts who have lived both sides of bioprocess and technology, it transforms fragmented data into reliable, actionable insights, delivered instantly, without heavy IT lift. ## The Future of Bioprocess Intelligence The next frontier of manufacturing intelligence is live, connected, and explainable. The future bioprocess facility won’t just collect data; it will learn from it in real time. 
AI models will continuously optimize yield, resource efficiency, and sustainability. But success won’t come from AI alone, it will come from trusted data foundations, transparent intelligence, and human expertise guided by insight. Bioprocess AI isn’t replacing scientists or process engineers. It’s amplifying their impact by freeing them from data wrangling and enabling them to make faster, more confident decisions. That’s the promise of [Invert](https://invertbio.com): turning complexity into clarity, accelerating progress from development through scale-up, and helping bring life-changing therapies to market faster, because waiting is no longer an option. ‍ --- kind: blog title: "Why Bioprocess Data Fragmentation Is Slowing Down the Industry" slug: why-bioprocess-data-fragmentation-is-slowing-down-the-industry date: 2025-11-11 author: "Veronica French" category: Industry summary: "Learn how fragmented bioprocess data creates risk, waste, and delays, and how an AI-ready data foundation helps unify insights, accelerate scale-up, and improve manufacturing outcomes." url: https://invertbio.com/blog/why-bioprocess-data-fragmentation-is-slowing-down-the-industry markdown_url: https://invertbio.com/blog/why-bioprocess-data-fragmentation-is-slowing-down-the-industry.md --- # Why Bioprocess Data Fragmentation Is Slowing Down the Industry ## The Hidden Cost of Fragmented Bioprocess Data Every bioprocessing organization understands that data is valuable. The challenge is that most of it never delivers its full potential. From bioreactor outputs and chromatography logs to lab notebooks and CDMO reports, bioprocess data lives in isolated systems that rarely connect or align. The result is fragmentation. It is a quiet but expensive problem that slows every stage of development and manufacturing. Scientists spend hours merging spreadsheets and chasing missing context. Process engineers cannot easily compare runs across sites. 
Executives rely on dashboards that are already outdated. In an industry where timing directly affects patient access and market opportunity, this inefficiency becomes a real obstacle to progress. The next generation of biomanufacturing is moving toward a single goal: transforming fragmented bioprocess data into a unified, AI-ready foundation that supports faster and more confident decisions. ## What Data Fragmentation Looks Like in Practice In upstream development, key process parameters and performance indicators are logged by bioreactor systems and stored separately in data historians. Downstream teams often use completely different data structures, sometimes maintained in spreadsheets or paper-based formats. As processes move toward scale-up or tech transfer, these inconsistencies start to show. - **Lost lineage:** Parameter names and definitions differ across systems. - **Delayed insights:** It can take days or even weeks to clean and merge data after a run. - **Human error:** Manual aggregation introduces inconsistencies that affect results. - **Limited traceability:** Without harmonized records, audits become difficult and risk increases. A 2024 [BioProcess International](https://bioprocessintl.com/) survey found that more than sixty percent of biomanufacturing teams cite data silos as their biggest barrier to adopting advanced analytics and AI. Fragmentation is not only slowing science; it is preventing the industry from realizing its full digital potential. ## Why Traditional Systems Fall Short Many organizations try to fix data fragmentation with tools such as electronic lab notebooks (ELNs), laboratory information management systems (LIMS), or business intelligence (BI) dashboards. These tools help with structure and reporting, but they were never designed for the continuous, high-volume nature of bioprocess data. LIMS systems manage samples and workflows but do not support real-time analysis. BI tools provide visualization but lack context and lineage. 
Custom-built integrations can temporarily fill the gap, yet they often break when new sensors or data sources are added. This patchwork approach only treats the symptoms. It does not solve the underlying issue: the absence of a unified, harmonized, and AI-ready data layer. ## The Case for an AI-Ready Data Foundation To move beyond fragmented systems, leading biopharma organizations are investing in modern **bioprocess data platforms** that unify, harmonize, and contextualize information in real time. An AI-ready data foundation offers several key advantages: 1. **Continuous data ingestion** from instruments, sensors, and CDMOs. 2. **Automated harmonization** that standardizes names, units, and metadata across systems. 3. **Contextualization** that connects process parameters with quality and yield outcomes. 4. **Reproducibility and traceability** to meet regulatory expectations. 5. **Scalability** to power analytics, AI, and digital twins across the lifecycle. This foundation supports modern capabilities such as **real-time monitoring**, **AI-driven optimization**, and **digital twin simulation**. Each of these requires data that is reliable, structured, and accessible. Invert’s trusted data foundation provides this capability by continuously unifying time-series data across instruments, systems, and partners. The result is a single source of truth that is always current and ready for analysis. ## From Fragmented to Unified: What Changes When data becomes unified and AI-ready, every team experiences the difference. - **Scientists** can view live experiment data instead of waiting for cleanup. - **Process engineers** can compare runs and identify optimization opportunities in real time. - **IT teams** can retire manual scripts and focus on enabling digital transformation. - **Executives** can trust performance metrics based on accurate, traceable data. 
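To make the "automated harmonization" advantage described above concrete, here is a minimal illustrative sketch of standardizing parameter names and units on raw readings. The alias table, unit conversions, and canonical vocabulary below are invented for the example and are not Invert's actual ontology:

```python
# Hypothetical harmonization tables: map vendor-specific parameter
# names and units onto a shared vocabulary.
ALIASES = {"do": "dissolved_oxygen", "d.o.": "dissolved_oxygen",
           "temp": "temperature", "ph": "ph"}
UNIT_CONVERSIONS = {("temperature", "F"): lambda v: (v - 32) * 5 / 9}
CANONICAL_UNITS = {"temperature": "C", "dissolved_oxygen": "%", "ph": ""}

def harmonize(record: dict) -> dict:
    """Normalize one raw reading to its canonical name and unit."""
    name = ALIASES.get(record["name"].strip().lower(), record["name"].lower())
    value, unit = record["value"], record.get("unit", "")
    convert = UNIT_CONVERSIONS.get((name, unit))
    if convert:
        value, unit = convert(value), CANONICAL_UNITS[name]
    return {"name": name, "value": value,
            "unit": unit or CANONICAL_UNITS.get(name, "")}

raw = [{"name": "Temp", "value": 98.6, "unit": "F"},
       {"name": "D.O.", "value": 40.0, "unit": "%"}]
clean = [harmonize(r) for r in raw]
```

A production pipeline would of course also carry metadata, lineage, and run context with each reading; the sketch shows only the name-and-unit normalization step.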
Organizations that move toward unification see faster scale-up, reduced process variability, and stronger collaboration across functions and geographies. ## The Path to an Integrated Bioprocess Data Ecosystem Building an integrated ecosystem does not require starting from scratch. It requires a structured approach that focuses on connection, context, and compliance. 1. **Connect systems first.** Establish a pipeline that links instruments, data historians, and CDMOs through secure connectors. 2. **Add context next.** Use metadata mapping and standard ontologies to ensure every variable and result is clearly defined. 3. **Introduce intelligence.** Apply analytics and AI tools directly to live data for immediate insight. 4. **Ensure governance.** Implement validation and traceability aligned with GxP and [21 CFR Part 11](https://www.fda.gov/media/75414/download). With these elements in place, teams can evolve from managing data manually to operating within a living, connected data ecosystem that powers every decision. ## How AI Accelerates Value Once Data Is Unified Artificial intelligence delivers its highest value when it is built on clean, contextualized data. Machine learning models can then predict growth rates, detect anomalies, and identify the process parameters that have the greatest influence on yield and quality. Without harmonized data, AI models struggle to learn. Their accuracy and reliability depend entirely on data consistency. Invert’s unified architecture solves this by continuously harmonizing massive time-series datasets across all process stages and locations. The intelligence layer built on top provides **real-time visualization**, **transparent analytics**, and **interpretable AI insights** that scientists and engineers can act on with confidence. Learn how Invert’s Bioprocess AI Software enables organizations to make faster, data-driven decisions across development and manufacturing. 
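As a concrete illustration of anomaly detection on harmonized time-series data, the sketch below flags points that deviate sharply from a trailing window using a rolling z-score. This is a generic textbook technique shown for intuition, not a description of Invert's models:

```python
from collections import deque
from statistics import mean, stdev

def rolling_zscore_anomalies(values, window=10, threshold=3.0):
    """Flag indices whose value deviates more than `threshold`
    standard deviations from the trailing window of readings."""
    history = deque(maxlen=window)
    anomalies = []
    for i, v in enumerate(values):
        if len(history) >= 3:  # need a few points before judging
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(v - mu) / sigma > threshold:
                anomalies.append(i)
        history.append(v)
    return anomalies

# A stable pH-like signal with one sudden excursion at index 12
signal = [7.0, 7.01, 6.99, 7.02, 7.0, 6.98, 7.01,
          7.0, 6.99, 7.02, 7.0, 7.01, 5.5]
print(rolling_zscore_anomalies(signal))  # [12]
```

The point of the example is the prerequisite, not the math: a detector like this only works when the incoming values share consistent names, units, and sampling context, which is exactly what a unified data foundation provides.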
## The ROI of Data Harmonization Companies that adopt unified, AI-ready data platforms consistently report measurable improvements: - 30 to 50 percent reduction in time-to-insight by eliminating manual data cleanup. - 20 to 30 percent faster scale-up through real-time visibility across sites. - Significant reductions in rework and batch failure risk. - Simplified audit readiness through traceable data lineage. Beyond operational efficiency, unified data becomes a strategic asset. It provides the foundation for automation, predictive modeling, and digital twin initiatives that will define the next era of manufacturing excellence. ## A Unified Data Foundation is the New Baseline for Innovation The era of isolated spreadsheets and disconnected systems is ending. The future of bioprocessing belongs to organizations that can access and act on their data in real time. By unifying fragmented information into a trusted, AI-ready foundation, companies can accelerate progress, reduce uncertainty, and move therapies to market faster. This is the philosophy behind [Invert](https://invertbio.com). The platform transforms complex, fragmented bioprocess data into a reliable source of truth that drives clarity and confidence in every decision. When teams stop wrestling with data, they can focus on innovation, discovery, and impact. In the world of bioprocessing, time matters. And waiting is no longer an option. ‍ --- kind: blog title: "Engineer Blog Series: Security & Compliance with Tiffany Huang" slug: engineer-blog-series-security-compliance-with-tiffany-huang date: 2025-11-07 author: "Invert Team" category: Product summary: "Welcome to Invert's Engineering Blog Series, a behind-the-scenes look into the product and how it's built. For our third post, senior software engineer Tiffany Huang speaks about how trust and security is a foundational principle at Invert, and how we ensure that data is kept secure, private, and compliant with industry regulations." 
url: https://invertbio.com/blog/engineer-blog-series-security-compliance-with-tiffany-huang markdown_url: https://invertbio.com/blog/engineer-blog-series-security-compliance-with-tiffany-huang.md --- # Engineer Blog Series: Security & Compliance with Tiffany Huang _Welcome to Invert's Engineering Blog Series, a behind-the-scenes look into the product and how it's built. For our third post, software engineering manager Tiffany Huang speaks about how trust and security is a foundational principle at Invert, and how we ensure that data is kept secure, private, and compliant with industry regulations._ ‍ ## How does Invert build security into every stage of product development? At Invert, security is part of our foundation. It's built right into our development lifecycle, not an afterthought. We start with risk assessments and base development on those risk-based approaches. We do peer reviews and have automated checks before any code is merged. We also follow strict security and compliance policies, such as data management and encryption standards, to ensure protection continues even after release. ‍ ## In biopharma and biomanufacturing, compliance is critical. How does Invert ensure adherence to key regulations like FDA 21 CFR Part 11 and EU Annex 11? We're very meticulous about compliance. For regulated data, every action, every change, every user interaction is fully traceable. It's timestamped and verifiable. We enforce role-based access controls and maintain detailed audit trails so electronic records are trustworthy. Beyond traceability, we also ensure that the data is reliable, tamper-proof, and compliant with FDA regulations. We'll continue to validate that those controls are in place with both internal and external audits as well. ‍ ## How does Invert maintain data integrity and auditability to meet those compliance standards? We maintain what's essentially a tamper-proof logbook.
Every record is timestamped, it has a user attributed to it, and it's stored immutably so nobody can go back in and change it. We have continuous monitoring so that data integrity holds up for audits and auditors are able to reconstruct every step of an event, confident that no regulated data was changed or lost. We also perform daily backups for our databases, along with restoration testing. Altogether, these measures ensure data is safe, resilient, and easily recoverable. ## How does Invert strengthen its overall security posture through ongoing monitoring, staff training, and incident readiness? We continuously train employees at Invert, with regular training that keeps everyone sharp, from new hires to leadership. Our incident response tabletop exercises prepare us for anything that might happen, and we make sure we can act fast, minimize impact, and learn from any event that we do have. ## How do you ensure Invert’s AI features align with emerging regulations like the EU AI Act or Annex 22? We navigate AI governance by choosing the [National Institute of Standards and Technology AI Risk Management Framework](https://www.nist.gov/itl/ai-risk-management-framework) (NIST AI RMF) as our North Star, and align with the EU AI Act. Every AI feature goes through a structured risk assessment. We document everything thoroughly with human oversight before every release and keep that process transparent for customers. We also review practices regularly to make sure that we're always moving in the right direction as regulations evolve. In addition, we never use customer data for model training without explicit agreement. Every feature is opt-in by default and transparently labeled when content is AI-generated. ## How does Invert communicate and ensure transparency to build trust around data usage? Our guiding principle is that our customers' data is their data.
As I mentioned, we never train AI models on customer data, unless there's a separate agreement that everyone's aware of. All AI features are opt-in by default, and everything is transparent and fully documented—this is our trust contract with our customers. They know exactly when and how their data is being used. We also work with third-party vendors that provide contracted services and help maintain performance. They must adhere to our same rigorous standards, and we review them every year. We expect the same security requirements and controls that we use, if not better. ## What were some of the biggest challenges you faced in achieving security and compliance, and how did you overcome them? One of the biggest challenges was balancing the speed of development with the rigor of compliance and security. We were building the plane and keeping it safe at the same time—the key was to embed security directly into our workflow using risk-based approaches, some automated tools, and standardized templates that could be used across all our AI features. This way, we assessed the risk of each change upfront and had built-in security checks along the way. By doing that, we didn't have to sacrifice being secure to be fast. _Tiffany Huang is an engineering leader at Invert, where she drives AI governance, security, and compliance strategy—shaping how the company builds responsible, transparent, and trustworthy AI features. She helps teams innovate with confidence while keeping safety and integrity at the core of every product release._ --- kind: blog title: "Analyzing Real-Time Time Series Data in Bioprocess with Invert" slug: analyzing-real-time-time-series-data-in-bioprocess-with-invert date: 2025-11-06 author: "Invert Team" category: Product summary: "In modern biomanufacturing, success hinges on the ability to make informed decisions fast.
The ability to analyze data directly impacts productivity, product quality, and ultimately, time to market, whether you're optimizing a fed-batch fermentation, troubleshooting a chromatography run, or validating a filtration process. However, the massive volume, high dimensionality, and low latency of time series data in bioprocess mean that most software is not built to effectively capture, let alone analyze, it." url: https://invertbio.com/blog/analyzing-real-time-time-series-data-in-bioprocess-with-invert markdown_url: https://invertbio.com/blog/analyzing-real-time-time-series-data-in-bioprocess-with-invert.md --- # Analyzing Real-Time Time Series Data in Bioprocess with Invert In modern biomanufacturing, success hinges on being able to make informed decisions fast. The ability to analyze data directly impacts productivity, product quality, and ultimately, time to market, whether you're optimizing a fed-batch fermentation, troubleshooting a chromatography run, or validating a filtration process. ## The Challenge of Real-Time Time Series Data in Bioprocess The massive volume, high dimensionality, and low latency of time series data in bioprocess make it uniquely challenging to capture, manage, and analyze. Bioprocess equipment such as bioreactors, chromatography systems, and filtration units generate complex time series datasets, with facilities capturing millions of individual data points every day. Each sensor streams pH fluctuations, dissolved oxygen levels, pressure differentials, and flow rates, among hundreds of other parameters, in real time. Traditional approaches to data analysis often involve: - Exporting data to spreadsheets after a run completes - Manual calculations and plotting - Waiting hours or days for trend analysis - Retrieving siloed data across different systems and departments This reactive approach means critical process deviations may go unnoticed until it's too late, forcing costly batch rejections or suboptimal yields.
## What sets Invert’s ability to analyze time series data apart? Invert was built not only to capture real-time time series data from ongoing runs, but also to make bioprocess data easy to interpret and analyze instantly across all scales of production. Features that set it apart include: ### Unified view of bioprocess data Invert centralizes data across upstream and downstream unit operations, from fermentation to purification. Offline and at-line data from analytical instruments is also integrated and automatically connected to relevant timepoints, allowing users to have full experimental context when interpreting results. This surfaces correlations that would otherwise be invisible when analyzing systems in isolation. ### Real-time statistical process control Instead of waiting for post-run analysis, users can monitor control charts and statistical trends as batches progress in Invert. If any parameter drifts out of spec, they can intervene early to keep processes within validated ranges. When combined with [Invert Alerts](https://invertbio.com/blogs/invert-launches-alerts-turning-real-time-data-into-immediate-action), users can receive instant notifications when that occurs, so they can act quickly without constant manual oversight. ### Compliant with industry regulations Invert maintains full audit trails and 21 CFR Part 11 compliance, as required for GxP manufacturing in regulated environments such as biopharma and biotech.
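To make the control-chart idea concrete, here is a minimal sketch of out-of-spec detection. This is not Invert's implementation; the data, limits, and function names are hypothetical, assuming a simple Shewhart-style mean ± 3σ rule computed from a validated baseline run:

```python
import statistics

def control_limits(baseline, k=3.0):
    """Shewhart-style control limits (mean ± k·sigma) from baseline readings."""
    mean = statistics.fmean(baseline)
    sigma = statistics.stdev(baseline)
    return mean - k * sigma, mean + k * sigma

def out_of_spec(samples, lo, hi):
    """Return (index, value) pairs for points drifting outside the limits."""
    return [(i, v) for i, v in enumerate(samples) if not lo <= v <= hi]

# Hypothetical pH readings from a validated baseline run
baseline_ph = [7.00, 7.02, 6.98, 7.01, 6.99, 7.00, 7.03, 6.97]
lo, hi = control_limits(baseline_ph)  # ≈ (6.94, 7.06)

# Live readings: the last point has drifted low and would trigger an alert
alerts = out_of_spec([7.01, 6.99, 7.00, 6.80], lo, hi)
print(alerts)  # → [(3, 6.8)]
```

In a live setting the same check would run on each incoming sample rather than on a completed list, which is the difference between post-run analysis and real-time intervention.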
## Practical Applications **Bioreactors**: - Track cell growth kinetics, substrate consumption, and metabolite production throughout the process - Identify the optimal feeding strategy or media composition by analyzing correlations between different feed or growth rates and productivity markers in real time ![__wf_reserved_inherit](https://cdn.prod.website-files.com/6765896ee196438329424471/690c12d6f41551ec9be37663_CleanShot%202025-11-05%20at%2019.15.20.png) Visualize product titers, viable cell density, and feed profiles in Invert with a few clicks to compare performance between different clones **Chromatography**: - Overlay current runs against historical golden batches - Detect column fouling early by analyzing pressure trends - Optimize pooling decisions by monitoring UV absorbance patterns ![__wf_reserved_inherit](https://cdn.prod.website-files.com/6765896ee196438329424471/690c10380fa2b6b285f81ecb_CleanShot%202025-11-05%20at%2019.02.34.png) Invert automatically generates visualizations of scale-normalized column performance for chromatography data **Filtration**: - Predict membrane lifetime by monitoring flux decline rates and calculating area-under-curve metrics automatically ![__wf_reserved_inherit](https://cdn.prod.website-files.com/6765896ee196438329424471/690c0f78b9cad73cfab2aed8_CleanShot%202025-11-05%20at%2018.58.52.png) Invert automatically generates scale-normalized visualizations of filter performance across various scaling factors such as filter area ## Fast time series analysis enables proactive, decisive action The ability to analyze time series data quickly is a major determinant of success in bioprocess. When a single batch can be worth millions of dollars and patient lives depend on consistent product quality and availability, efficiency is paramount. Invert transforms users from observers into decision-makers, empowering them to take control of process outcomes and ensure that life-saving therapeutics are available to those who need them.
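The area-under-curve metric mentioned for filtration can be sketched with the trapezoidal rule. The flux readings below are hypothetical and the function names are illustrative, not Invert's API:

```python
def trapezoid_auc(times, values):
    """Area under a sampled curve via the trapezoidal rule."""
    pairs = zip(zip(times, values), zip(times[1:], values[1:]))
    return sum((t1 - t0) * (v0 + v1) / 2.0 for (t0, v0), (t1, v1) in pairs)

# Hypothetical flux measurements (L/m^2/h) declining over a 4-hour filtration run
hours = [0.0, 1.0, 2.0, 3.0, 4.0]
flux = [100.0, 90.0, 78.0, 64.0, 48.0]

# Total volume filtered per unit membrane area (L/m^2)
throughput = trapezoid_auc(hours, flux)
# Average flux decline per hour, a rough indicator of fouling rate
decline_rate = (flux[0] - flux[-1]) / (hours[-1] - hours[0])

print(throughput)    # → 306.0
print(decline_rate)  # → 13.0
```

Tracking these two numbers across runs is one simple way to compare membrane lifetime between filters of different areas, which is the kind of scale-normalized comparison the text describes.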
‍ --- kind: blog title: "AI That Bioprocess Teams Can Trust: Highlights from BioTalk Berlin" slug: ai-that-bioprocess-teams-can-trust-highlights-from-biotalk-berlin date: 2025-11-05 author: "Michael McCutchen" category: Industry summary: "Recorded live at BioTalk Berlin on September 18, 2025 — featuring Michael McCutchen, Senior Product Manager at Invert. Scroll to the end to watch the full talk." url: https://invertbio.com/blog/ai-that-bioprocess-teams-can-trust-highlights-from-biotalk-berlin markdown_url: https://invertbio.com/blog/ai-that-bioprocess-teams-can-trust-highlights-from-biotalk-berlin.md --- # AI That Bioprocess Teams Can Trust: Highlights from BioTalk Berlin ## The problem no one has time for (but everyone feels) Bioprocess data isn’t just fragmented — it’s fractured across ELNs, LIMS, historians, bioreactors, ÄKTAs, HPLCs, batch records, and CDMO file drops, each speaking a slightly different dialect of “pH” and “temperature.” The result: duplicate experiments, slow root-cause analysis, and painful tech transfer across scales and sites. Those costs are real: scientists burn cycles collating data instead of doing science; data quality and scope issues creep in; scale-up decisions get made with partial context. Meanwhile, the high-value opportunities (predictive models, ML-driven DoE, digital twins) stay theoretical because the data foundation isn’t ready. ## What “AI for bioprocess” actually requires At Invert, we start with a simple conviction: all bioprocess data should be accessible in a single, **batch-centric** system. That means ingesting from upstream and downstream equipment and systems, then making that data **unit-aware, metric-managed, and batch-aware**, with calculated features that reflect how bioprocess scientists really analyze runs (e.g., totals vs. instantaneous rates). This isn’t another generic BI layer. 
It’s a **trusted, AI-ready data foundation** with a native intelligence layer on top — live visualization, analytics built for USP/DSP, and a transparent AI interface — so teams can explore, compare, and decide without hand-stitching time series in Excel. Strong POV: Data alone is not enough. **Intelligence built on trusted data** is what drives faster, better decisions. ## Why “just slap an LLM on the database” fails Yes, modern language models are powerful. But when you point a stock LLM at raw bioprocess data, it stumbles on the things that matter most: **sequential, high-frequency time series** and **process optimization** questions. In our internal evals, a naïve approach produces sporadic “okay” answers on general reasoning — and near-zero capability on online data analysis. In other words: not production-ready. Even as LLMs improve generation to generation, that gap doesn’t magically close. You see uplift in general reasoning, but **no reliable trend on time-series comprehension** or optimization unless you add domain-specific scaffolding. ## The Invert approach: prompts, context, tools — and proof To make AI bioprocess-ready, we engineer around the model: - **Prompt engineering** to calibrate scientific reasoning (speculate where appropriate, avoid flights of fancy). - **Context engineering** to feed the right, batch-centric data at the right time. - **Tools/agents** that perform the domain work (e.g., time-series stats, chromatography overlays, growth-rate calculations) instead of hoping the base model “figures it out.” Then we do what scientists expect: **measure it**. ### Evals: assays for AI We use standardized prompts and auto-grading rubrics (0–1 scale) across four categories that map to real bioprocess work: 1. **General Reasoning** – find and interact with data 2. **Investigation** – pattern recognition & causality for root cause 3. **Online Data Analysis** – calculations and conclusions from time-series data 4. 
**Process Optimization** – prediction and next-best-action recommendations With Invert’s domain prompts, context, and tools, performance increases **immediately across all four** — including the historically tough **online data analysis** — and in several tasks our answers **saturate the scale (hit 1.0)**, forcing us to expand dynamic range with harder questions. That’s the standard you should demand before letting AI inform real decisions. ## From question to answer — without the swivel-chair During my live presentation, I showed prototypes of a chat interface that lets scientists ask natural-language questions (“What likely caused the titer drop in these runs?” “Recommend a scale-up DOE given these constraints.”) and receive answers backed by the right plots, stats, and context — not just text. The key: fast retrieval across **all** relevant runs and unit operations, with the guardrails to avoid apples-to-oranges comparisons. Because the platform is **batch-centric** and **unit-aware**, the AI can compare like with like, compute totals vs. rates, and pull DSP outcomes against upstream conditions — the pairwise links that matter for root cause and tech transfer — without hours of manual data wrangling. ## What this means for CMC leaders - **Accelerate answers.** Reduce deviation RCA from days to hours by traversing USP↔DSP data with context intact. - **Cut wasted runs.** Know what’s been tried, what worked, and what to change next; stop re-doing experiments due to missing context. - **De-risk scale-up and tech transfer.** Compare conditions and outputs across sites and scales with normalized, harmonized metrics. - **Make AI auditable.** Treat AI like a complex system you already know how to control: instrument it with evals, monitor drift, and hold it to measurable standards. Strong POV: **Delayed insights are wasted insights.** Live visibility and AI on a trusted foundation are now a competitive necessity. 
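The eval methodology described above (standardized prompts, auto-graded on a 0–1 rubric across four categories) can be approximated in a few lines. The scores below are invented for illustration and do not reflect Invert's actual eval results:

```python
from collections import defaultdict

def summarize_evals(graded):
    """Aggregate (category, score) pairs into a mean rubric score per category."""
    buckets = defaultdict(list)
    for category, score in graded:
        if not 0.0 <= score <= 1.0:
            raise ValueError(f"rubric scores must lie in [0, 1], got {score}")
        buckets[category].append(score)
    return {cat: sum(scores) / len(scores) for cat, scores in buckets.items()}

# Hypothetical auto-graded answers across the four categories from the talk
graded = [
    ("general_reasoning", 0.9), ("general_reasoning", 1.0),
    ("investigation", 0.7), ("investigation", 0.8),
    ("online_data_analysis", 0.5), ("online_data_analysis", 0.6),
    ("process_optimization", 0.75), ("process_optimization", 0.85),
]
means = summarize_evals(graded)
print(means)
```

A category whose mean saturates at 1.0 signals that the question set has run out of dynamic range and needs harder questions, exactly the situation described above.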
## The takeaway - **Fragmented, unclean, and siloed data is holding back your AI-readiness.** - **A batch-centric, harmonized, unit-aware foundation** is the prerequisite. - **LLMs need domain scaffolding** (prompts, context, tools) to deliver. - **Evals are non-negotiable** — the assay that makes AI trustworthy in bioprocess.
### About Invert Invert is **Bioprocess AI Software** built by dual experts in bioprocess and technology. We unify, harmonize, and contextualize time-series data across instruments, sites, and CDMOs, then layer in real-time visualization, analytics, and a transparent AI interface — so teams cut wasted runs, lower cost and risk, and move therapies and sustainable products to market faster. Because waiting is no longer an option. **Interested in a deeper dive or a live demo?** Contact **Michael McCutchen** (Sr. Product Manager) at _michael@invertbio.com_ or **Hélène Panier** (Director of Strategic Partnerships, Europe) at _helene@invertbio.com_. _Speaker: Michael McCutchen, Senior Product Manager, Invert. Delivered at BioTalk Berlin (September 18, 2025)._ --- kind: blog title: "Introducing Invert Assist — Explainable AI for Bioprocess Quality Control, Monitoring, and Optimization" slug: invert-assist-ai-bioprocessing-quality-control-data-integration date: 2025-11-05 author: "Veronica French" category: Product summary: "Biopharma teams don’t fail at AI because models are weak. They stall because data is fragmented. In our new webinar, we introduced Invert Assist—the AI layer purpose-built for bioprocessing—and showed how pairing explainable AI with a trusted, harmonized data foundation accelerates scale-up, improves bioprocess quality control, and cuts wasted runs." url: https://invertbio.com/blog/invert-assist-ai-bioprocessing-quality-control-data-integration markdown_url: https://invertbio.com/blog/invert-assist-ai-bioprocessing-quality-control-data-integration.md --- # Introducing Invert Assist — Explainable AI for Bioprocess Quality Control, Monitoring, and Optimization In biopharma, teams don’t fail at AI because the models are weak. They fail because the data isn’t ready. 
That was the starting point of our latest webinar, where **Emily Nault**, Senior Vice President of Commercial, and **Masaki**, Head of Product at Invert, introduced **Invert Assist**, the new explainable AI assistant purpose-built for bioprocessing. Together, they demonstrated how Invert Assist, built on top of Invert’s trusted bioprocess data foundation, transforms fragmented bioprocess information into clear, auditable insights that accelerate development and strengthen bioprocess quality control. ## The Data Problem Behind Most Enterprise AI Failures As Emily explained, most enterprise AI projects — especially in pharmaceutical manufacturing — stall not because algorithms underperform, but because the underlying data is fragmented, inconsistent, and incomplete. Bioreactors, DSP systems, and CDMO data live in disconnected silos. Scientists can spend an undesirably large fraction of their time cleaning, merging, and reconciling information before analysis even begins. By the time data is aligned, the insight window has passed, milestones slip, and opportunities are lost. Invert was built to eliminate that barrier. Acting as a **bioprocess data integration platform**, it connects instruments, systems, and sites, continuously unifying and contextualizing time-series data in real time. Every measurement and every parameter is harmonized, versioned, and traceable — creating a single source of truth that’s **AI-ready** and built for regulatory-grade data stewardship. As Masaki put it, “It’s not a data lake. It’s not a LIMS. It’s a data foundation, designed specifically for bioprocessing.” ## Introducing Invert Assist: AI That Shows Its Work Once data is unified and reliable, intelligence can finally be layered on top. **Invert Assist** represents the next evolution — a **chat-based AI assistant for bioprocessing** that helps scientists, engineers, and operations teams move from raw data to real decisions.
With Assist, users can ask natural-language questions like _“What caused the drop in yield last week?”_, _“Which parameters most affect titer?”_, or _“What experiments should we run next?”_ Unlike generic AI tools, **Invert Assist is fully explainable**. It doesn’t just provide answers — it shows how it got there. Each analysis generates transparent, reproducible code that users can inspect, edit, rerun, and export for audit. Every interaction is logged and version-controlled, ensuring traceability for teams working under GxP and 21 CFR Part 11 requirements. “It’s not a black box,” Masaki explained. “We’re generating code that runs on your own data, so you can verify every step.” ## Seeing Explainable AI in Action During the live demo, Emily and Masaki showed how Invert Assist turns days of manual analysis into minutes of insight. They explored a series of fermentation runs, quickly identifying a pH deviation that impacted titer. What would have previously required manual data stitching across multiple systems was solved in a few queries, with Assist pulling in the harmonized data, highlighting correlations, and explaining its reasoning step-by-step. They then moved to a design of experiments dataset, asking Assist which factors most impacted yield and what experiments should be run next. In seconds, the assistant identified pH as the dominant driver and proposed optimization runs, a clear example of how **AI-driven bioprocess optimization platforms** like Invert can help pharmaceutical manufacturing teams move from reactive troubleshooting to proactive process design. ## Data Security and Compliance Built In AI is only as valuable as the trust behind it, and that includes data security. Invert Assist operates within each customer’s isolated, secure environment. No data is ever shared across clients, and no customer data is used to train or fine-tune models. 
Invert runs its AI through **Amazon Bedrock** for added security, ensuring that intellectual property and sensitive process data remain fully protected. Every query, every analysis, and every generated code block is logged and auditable, giving QA, IT, and regulatory teams full visibility into how results were produced. This governance-first design makes **Invert Assist not just an analytics layer, but a cornerstone of bioprocess quality control and compliance**. ## Continuous Benchmarking and Evolution Invert Assist isn’t static software; it evolves with science. As Masaki shared, the product team continuously benchmarks the assistant against real-world bioprocess questions: correlating parameters with yield drift, comparing scale-up runs to development batches, and refining the assistant’s accuracy and reproducibility over time. Each release is tested against structured evaluations, ensuring continuous improvement aligned with how scientists actually work. Looking ahead, Invert is extending Assist’s capabilities from analysis to prediction. The next generation will integrate process modeling, including mass-transfer, regression-based, and hybrid models, to help scientists predict outcomes like titer, viability, and metabolite levels before running experiments. In short, Invert Assist will help teams design smarter experiments, reduce wasted runs, and optimize performance based on predictive reasoning, all within the same explainable, auditable framework. ## Why It Matters The combination of Invert’s **real-time bioprocess monitoring foundation** and **explainable AI layer** sets a new standard for how pharmaceutical manufacturing teams can approach process intelligence. For scientists, it means fewer hours wasted on data wrangling and more time focused on innovation. For executives, it means faster, data-backed decisions with reduced risk.
And for IT and compliance leaders, it means an enterprise-grade architecture that aligns with governance, privacy, and auditability requirements from day one. As Emily concluded, “We’re moving from reactive to proactive — from what happened to what matters.” ## See the Full Webinar To watch the complete demonstration, including live examples of explainable AI analyzing real bioprocess data, check out the full recording below:
## The Bottom Line Invert Assist delivers what the industry has been missing: a **bioprocess AI software** that’s transparent, secure, and built for the realities of manufacturing scale-up. By combining real-time **bioprocess data integration**, **AI-driven optimization**, and **enterprise-grade compliance**, Invert is redefining what’s possible in bioprocessing intelligence. Because waiting and guessing are no longer options. ‍ --- kind: blog title: "The Best Real-Time Bioprocess Monitoring Platform for Biomanufacturing" slug: the-best-real-time-bioprocess-monitoring-platform-for-biomanufacturing date: 2025-11-05 author: "Veronica French" category: Industry summary: "See why Invert is the best real-time bioprocess monitoring platform for pharmaceutical manufacturing and scale-up. Live data, AI insights, compliance-ready." url: https://invertbio.com/blog/the-best-real-time-bioprocess-monitoring-platform-for-biomanufacturing markdown_url: https://invertbio.com/blog/the-best-real-time-bioprocess-monitoring-platform-for-biomanufacturing.md --- # The Best Real-Time Bioprocess Monitoring Platform for Biomanufacturing In modern biomanufacturing, time is everything. The difference between a successful run and a missed milestone often comes down to visibility: how fast you can see what is happening inside your process and how confidently you can act on that information. That is why leading pharmaceutical and biotech organizations are adopting **real-time bioprocess monitoring platforms**, and why **Invert** is setting a new standard for performance and intelligence in this space. ## Why Real-Time Monitoring Matters Traditional bioprocess data workflows are slow and fragmented. Data from reactors, sensors, offline assays, and CDMOs often live in separate systems. Scientists can spend hours or even days cleaning, merging, and plotting datasets before analysis can begin. By that time, the process window has closed and critical insights have been lost. 
**Real-time bioprocess monitoring** solves this problem by giving teams immediate access to every key parameter as it is generated. With live visibility, scientists can detect deviations as they occur, prevent wasted batches, and make faster, evidence-based adjustments. For biomanufacturers, this means stronger control over process consistency, better tech transfer, and shorter development cycles. ## Introducing Invert: Real-Time Monitoring Reimagined **Invert** is a **purpose-built Bioprocess AI Software** that unifies, harmonizes, and contextualizes data in real time. It gives scientists, engineers, and operations leaders a complete, live view of every process without relying on static dashboards or manual exports. Invert acts as a **bioprocess data foundation** designed for the unique realities of USP, DSP, and manufacturing scale-up. Invert connects directly to bioreactors, analytical instruments, and enterprise systems across sites and CDMOs. It continuously ingests and harmonizes time-series data, applies units and lineage context, and keeps everything synchronized in a traceable, structured format. The result is a single, trusted source of truth that updates continuously and is ready for AI analysis. This foundation makes Invert one of the most powerful **bioprocess data integration platforms** available for biomanufacturing today. ## From Monitoring to Intelligence Real-time visibility is just the beginning. What makes Invert stand apart from other **bioprocess monitoring platforms** is its **native intelligence layer**. Built directly into the platform, this layer adds real-time analytics and explainable AI that transforms live data into immediate, actionable insight. With **Invert Assist**, users can ask natural-language questions directly against their live process data. Questions like _“Why is pH drifting in this reactor?”_ or _“Which parameters are impacting yield right now?”_ receive clear, step-by-step responses. 
Assist does not just provide answers; it shows the reasoning behind them through transparent, reproducible code. This combination of clarity and speed makes Invert the first **AI-driven bioprocess optimization platform** that balances rapid decision-making with scientific rigor. ## Real-Time Bioprocess Monitoring for Pharmaceutical Manufacturing For pharmaceutical manufacturing teams, live data visibility is not just a convenience. It is a requirement for regulatory compliance and operational excellence. Invert is designed with **GxP and 21 CFR Part 11** in mind, providing version control, audit trails, and secure data management. Each customer operates in an isolated, encrypted environment, ensuring data integrity and IP protection. By connecting both upstream and downstream data—from reactor control systems to purification analytics—Invert provides **end-to-end process visibility**. Teams in quality, process development, and manufacturing science can monitor parameters in real time, correlate live data with historical performance, and intervene before deviations affect quality or yield. This governance-first approach makes Invert the most reliable **real-time bioprocess monitoring system for pharmaceutical manufacturing**. ## AI-Driven Optimization for Scale-Up and Beyond Beyond monitoring, Invert enables continuous improvement and predictive optimization. Its intelligence layer learns from process behavior and historical trends to help teams design smarter experiments and scale with confidence. Whether predicting yield drift, simulating process changes, or optimizing DOE conditions, Invert provides explainable, data-driven predictions that improve both productivity and reproducibility. For biomanufacturers scaling to GMP production or expanding across CDMOs, these capabilities shorten process characterization timelines, reduce wasted runs, and accelerate tech transfer. Invert is not just a monitoring tool. 
It is a full-scale digital transformation platform for bioprocess optimization. ## Why Invert Leads the Category Invert brings together the core elements of next-generation bioprocess monitoring and AI optimization in one unified solution. It delivers live data visualization, trusted data harmonization, transparent analytics, and rapid deployment with minimal IT overhead. It scales seamlessly from R&D to GMP environments, giving organizations confidence that their data is both reliable and actionable. Invert turns fragmented bioprocess data into continuous intelligence that empowers teams to act faster and scale smarter. ## The Future of Real-Time Bioprocess Monitoring As biomanufacturing evolves toward fully digital and AI-assisted operations, real-time process intelligence will define who leads the industry. Invert is at the forefront of that shift. It brings together explainable AI, real-time data harmonization, and enterprise-grade compliance to deliver unmatched clarity and control. For organizations seeking the **best real-time bioprocess monitoring platform for biomanufacturing**, Invert provides a proven, secure, and scalable solution built by experts who understand both the science and the software. In bioprocessing, every second and every data point counts. Invert makes sure none of them are wasted. --- kind: blog title: "7 Best AI Bioprocess Optimization Platforms for Pharmaceutical Manufacturing in 2025" slug: 7-best-ai-bioprocess-optimization-platforms-for-pharmaceutical-manufacturing-in-2025 date: 2025-11-04 author: "Veronica French" category: Industry summary: "The bioprocess optimization market is growing from $24.3B in 2024 to $39.6B by 2029, fueled by the shift to AI-driven digital biomanufacturing. These AI bioprocess platforms are leading the charge." 
url: https://invertbio.com/blog/7-best-ai-bioprocess-optimization-platforms-for-pharmaceutical-manufacturing-in-2025 markdown_url: https://invertbio.com/blog/7-best-ai-bioprocess-optimization-platforms-for-pharmaceutical-manufacturing-in-2025.md --- # 7 Best AI Bioprocess Optimization Platforms for Pharmaceutical Manufacturing in 2025 ### TL;DR The bioprocess optimization market is growing from **$24.3B in 2024 to $39.6B by 2029**, fueled by the shift to **AI-driven digital biomanufacturing**. Platforms like **Invert**, Aizon, Quartic.AI, and Algocell are transforming manufacturing by integrating **digital twins**, **machine learning**, and **real-time process analytics** — delivering up to **20% higher yields** and **40% faster development cycles**. ### 1\. Invert — The Purpose-Built Bioprocess AI Software **Invert** leads the new generation of **AI-driven bioprocess optimization platforms**, purpose-built for the realities of USP, DSP, and scale-up. Unlike retrofitted ELNs, LIMS, or BI tools, Invert unifies and contextualizes complex, fragmented bioprocess data in real time — transforming it into **trusted, AI-ready insights** that accelerate time to milestone. ### Core Differentiators: - **Built by Bioprocess + Technology Experts** – Decades of hands-on bioprocess experience combined with world-class software engineering. - **Trusted, AI-Ready Data Foundation** – Harmonizes and contextualizes massive time-series data across instruments, sites, and CDMOs. - **Native Intelligence Layer** – Real-time visualization, analytics, and transparent AI interface built in — not bolted on. - **Automation That Frees Expertise** – Eliminates manual cleanup and brittle pipelines, so scientists and IT focus on innovation. - **Fast, Low-Risk Deployment** – Integrates with existing bioreactors and CDMO systems in hours, not weeks. **Impact:** Organizations using Invert reduce wasted runs, improve reproducibility, and cut development timelines by 30–40%. 
By turning data into confident, real-time decisions, Invert helps manufacturers move therapies and sustainable products to market faster — because **waiting is no longer an option**. ### 2\. Aizon — GxP-Compliant Bioreactor Intelligence Aizon delivers AI-powered bioprocess optimization for regulated environments, combining predictive analytics and deep knowledge management. The platform enables **real-time deviation detection** and **root cause analysis**, improving yield by up to 20%. ### 3\. Quartic.AI — Manufacturing Operations Optimization Quartic.AI connects legacy operational tech with intelligent analytics for **real-time context across manufacturing systems**. Pharma users report up to 35% cycle time reduction and major improvements in process reliability. ### 4\. Algocell — Hybrid Digital Twin and AI Modeling Algocell applies hybrid modeling to integrate **mechanistic and machine-learning insights**, allowing accurate process predictions even with limited data. Results include 25–30% yield improvements and 60–70% fewer experiments. ### 5\. WisdomEngine — Bioprocess Intelligence and Reasoning AI WisdomEngine merges first-principles modeling with reasoning AI to provide interpretable, actionable insights. It transforms batch data into insights within minutes, accelerating development and reducing risk. ### 6\. Insilico Medicine — Integrated Drug Discovery and Bioprocess Optimization Insilico Medicine’s **Pharma.AI** suite integrates discovery through process optimization, enabling seamless **end-to-end acceleration** for biologics development. ### 7\. BioReact — Data Visualization and DoE Optimization BioReact focuses on AI-powered data visualization and **media optimization**, simulating up to 10,000 virtual experiments to minimize physical runs. ### Market Overview & Growth Drivers The pharmaceutical bioprocess optimization landscape has fundamentally transformed over the past five years. 
The global bioprocess optimization and digital biomanufacturing market expanded from $22.4 billion in 2023 to $24.3 billion in 2024, with projections to reach $39.6 billion by 2029, representing a compound annual growth rate of 10.2%. This explosive growth reflects industry recognition that AI-driven solutions directly enhance manufacturing efficiency, reduce development costs, and improve product quality across upstream and downstream operations. Key market drivers include unprecedented demand for biopharmaceuticals, driven by aging populations and chronic disease prevalence; the emergence of complex therapeutic modalities requiring sophisticated manufacturing control; and regulatory pressure for robust process understanding through quality by design frameworks. Contract manufacturing organizations and established pharmaceutical companies are racing to deploy advanced bioprocess optimization to maintain competitive advantage, with North America currently leading adoption while Asia Pacific shows the fastest growth rates. Monoclonal antibody production, vaccines, and advanced therapy medicinal products represent the primary application segments, each with distinct optimization challenges and substantial profit opportunities. Organizations implementing comprehensive AI-driven bioprocess optimization are achieving documented improvements including 10-20% yield increases, 30-50% reduction in batch variability, and 30-40% acceleration of process development timelines. ### Core Technologies Powering the Shift Process Analytical Technology (PAT) Integration: Modern platforms embed PAT frameworks enabling real-time measurements of critical parameters, with FDA guidance emphasizing the design of systems that measure quality attributes during processing rather than after batch completion.
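In its simplest form, the PAT idea of catching quality excursions during processing rather than after batch completion reduces to checking live readings against control limits derived from a reference run. A minimal stdlib-Python sketch of that statistical-process-control pattern (all sensor names and numbers are illustrative, not any vendor's implementation):

```python
from statistics import mean, stdev

def control_limits(reference_run, k=3.0):
    """Derive mean +/- k*sigma control limits from a reference batch."""
    mu, sigma = mean(reference_run), stdev(reference_run)
    return mu - k * sigma, mu + k * sigma

def monitor(stream, limits):
    """Flag out-of-limit readings as they arrive, not after the batch ends."""
    lo, hi = limits
    return [(t, x) for t, x in stream if not lo <= x <= hi]

# Hypothetical dissolved-oxygen (%) readings from a reference batch
reference = [39.8, 40.1, 40.0, 39.9, 40.2, 40.1, 39.7, 40.0]
limits = control_limits(reference)

# Live (timestamp, value) stream from the current batch
live = [(0, 40.0), (1, 40.1), (2, 43.5), (3, 39.9)]
print(monitor(live, limits))  # the excursion at t=2 is flagged immediately
```

Production PAT systems layer multivariate models and soft sensors on top of this idea, but the shift in timing is the same: the check runs while the batch is still recoverable.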
Digital Twin Simulation: Sophisticated bioprocess digital twins simulate entire process chains in real-time, enabling virtual experiments and predictive optimization before physical implementation. Hybrid Machine Learning Models: Advanced platforms combine mechanistic process understanding with machine learning pattern recognition, enabling accurate predictions with substantially reduced experimental requirements compared to purely data-driven approaches. Real-Time Sensor Networks: Integration of diverse sensor technologies including dissolved oxygen, pH, temperature, optical density, and advanced spectroscopy enables comprehensive process visibility with minimal contamination risk. ### Real-World Impact and ROI Organizations implementing leading AI bioprocess optimization platforms demonstrate substantial financial returns. Direct cost avoidance through automation, revenue gains from faster market entry, and reduced product recalls drive measurable ROI. One documented case achieved 75% reduction in design of experiments time through machine learning approaches. Another reported 40% reduction in wet-lab experiments, enabling process development with dramatically reduced resource consumption. Process optimization improvements including 10-20% yield increases directly reduce manufacturing costs, with each doubling of production volume reducing per-unit costs by approximately 30%. Faster development cycles enable earlier market entry and capture of valuable patent-protected revenue periods. Equipment reliability improvements through predictive maintenance prevent unexpected downtime, while accelerated troubleshooting through automated root cause analysis reduces investigation time by 40% or more. Strategic consulting analyses project that pharmaceutical companies fully embedding AI in operations could add $254 billion in annual operating profit globally by 2030. ## Frequently Asked Questions ### Q: How quickly can we implement AI bioprocess optimization? 
A: Implementation timelines vary based on current data infrastructure and complexity. Most organizations begin with focused pilot projects on specific manufacturing challenges, typically requiring 3-6 months for initial deployment with full ROI realization extending 12-24 months. ### Q: What data quality standards are required for AI platform success? A: Comprehensive data governance frameworks defining data ownership, quality standards, and documentation requirements are essential. Organizations should establish baseline metrics before implementation and implement data management platforms enabling contextualization and harmonization of disparate manufacturing datasets. ### Q: How do these platforms support regulatory compliance? A: Leading platforms integrate with quality by design frameworks and maintain comprehensive audit trails supporting FDA inspection readiness. GxP-compliant platforms maintain validated system documentation, user access controls, and electronic records meeting 21 CFR Part 11 requirements. ### Q: Can AI platforms work with existing bioreactor systems? A: Modern platforms integrate with legacy bioreactor equipment through standardized data connectors and integration APIs. Many solutions offer cloud-based analytics enabling retrofit of existing manufacturing facilities without requiring new capital equipment investment. ### Q: What skills are required to operate these platforms? A: While platforms are increasingly designed for accessibility by non-data-scientists, effective implementation benefits from interdisciplinary teams combining bioprocess engineering expertise, basic data interpretation skills, and quality system knowledge. Vendor-provided training programs support team capability development. ## Conclusion & Next Steps AI-driven bioprocess optimization has transitioned from emerging innovation to competitive necessity in pharmaceutical manufacturing. 
The convergence of sophisticated sensors, cloud computing, machine learning algorithms, and digital twin technologies enables pharmaceutical manufacturers to achieve unprecedented levels of manufacturing efficiency, product consistency, and development speed. Organizations implementing comprehensive AI bioprocess optimization are achieving documented improvements including 10-20% yield increases, 30-40% faster development timelines, and substantial reductions in manufacturing variability. The bioprocess optimization market's expansion to $39.6 billion by 2029 reflects broad industry recognition that these technologies deliver measurable business value. The question for pharmaceutical manufacturers is no longer whether to adopt AI-driven bioprocess optimization, but how quickly to implement and scale these capabilities across operations. Organizations that effectively leverage digital twins, real-time analytics, and machine learning models position themselves for long-term competitive advantage in increasingly complex biopharmaceutical manufacturing environments. Ready to optimize your bioprocess operations? Evaluate these leading platforms through pilot projects focused on specific manufacturing challenges, establish comprehensive data governance frameworks, and develop internal capabilities through strategic workforce development. The 10-20% yield improvements, faster time-to-market, and reduced manufacturing costs available through AI-driven optimization represent substantial value waiting to be captured in your organization. ### The Bottom Line AI-driven bioprocess optimization has moved from pilot to **strategic imperative**. Among the leaders, **Invert** stands apart as the **only platform purpose-built for bioprocessing** — designed by experts who have lived the complexity of manufacturing and engineered the technology to simplify it. 
For biopharma leaders aiming to accelerate time to milestone and reduce cost and risk across development and scale-up, **Invert delivers clarity, speed, and confidence — turning bioprocess data into decisive action.** ### References [Coherent Solutions – Artificial Intelligence in Pharmaceuticals and Biotechnology](https://www.coherentsolutions.com/insights/artificial-intelligence-in-pharmaceuticals-and-biotechnology-current-trends-and-innovations) [Körber Pharma – What is a Bioprocess Digital Twin](https://www.koerber-pharma.com/en/blog/what-is-a-bioprocess-digital-twin) [FDA – Process Analytical Technology Guidance](https://www.fda.gov/media/71012/download) [Bioprocessing Summit – Digital Transformation](https://www.bioprocessingsummit.com/digital-transformation) [PSC Software – Digital Twin Technology in Pharma & Biopharma](https://pscsoftware.com/digital-twin-technology-pharma-biopharma/) [BioProcess International – PAT in Powder Media Production](https://www.bioprocessintl.com/information-technology/process-analytical-technology-pat-in-powder-media-production) [BCC Research – Bioprocess Optimization & Digital Bio-Manufacturing Market](https://www.bccresearch.com/market-research/biotechnology/bioprocess-optimization-and-digital-bio-manufacturing-global-markets.html) [Frontiers in Bioengineering and Biotechnology – AI in Bioprocessing](https://www.frontiersin.org/journals/bioengineering-and-biotechnology/articles/10.3389/fbioe.2023.1112349/full) [Wiley Analytical Science – Digital Twin Applications in Biotech](https://analyticalsciencejournals.onlinelibrary.wiley.com/doi/10.1002/biot.70149) [GlobeNewswire – Bioprocess Optimization Market Forecast 2029](https://www.globenewswire.com/news-release/2025/04/04/3056163/0/en/Bioprocess-Optimization-and-Digital-Bio-manufacturing-Market-Forecast-2029.html) [PubMed – AI in Biopharma Studies](https://pubmed.ncbi.nlm.nih.gov/40884591/) [BioProcessing Journal – Machine Learning for Biologics 
Manufacturing](https://bioprocessingjournal.com/characterization-and-optimization-of-biologics-manufacturing-using-space-filling-designs-and-machine-learning/) [PubMed Central – AI-Driven Bioprocess Development](https://pmc.ncbi.nlm.nih.gov/articles/PMC12114689/) [BioProcess International – Continuous Process Control in Biomanufacturing](https://www.bioprocessintl.com/continuous-bioprocessing/controlling-integrated-continuous-processes-real-time-monitoring-with-feed-back-and-feed-forward-controls-enables-synchronization-and-enhances-robustness) [McKinsey – Generative AI in Pharma](https://www.mckinsey.com/industries/life-sciences/our-insights/generative-ai-in-the-pharmaceutical-industry-moving-from-hype-to-reality) [ETERNAL Project – AI and Big Data in Pharma Development](https://www.eternalproject.eu/downloads/publications/ai_and_big_data_in_pharma_dev.pdf) [Securecell – Advanced Real-Time Monitoring in Continuous Bioprocesses](https://www.securecell.ch/insights/advanced-real-time-monitoring-in-a-continuous-bioprocess) [BioProcess International – Data Overabundance in Biomanufacturing](https://www.bioprocessintl.com/information-technology/the-paradox-of-data-overabundance-in-biomanufacturing-data-literacy-is-key-to-unlocking-value) [FDA – AI/ML Software as a Medical Device](https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-software-medical-device) [Pharm Outsourcing – Bioprocessing 4.0 and Smart Manufacturing](https://www.pharmoutsourcing.com/Featured-Articles/568001-Bioprocessing-4-0-Where-Are-We-with-Smart-Manufacturing-in-2020/) [Advancing RNA – Process Monitoring and Data Management](https://www.advancingrna.com/doc/process-monitoring-and-data-management-approaches-for-today-s-bioprocess-challenges-0001) [FDA CDER – Artificial Intelligence in Drug Development](https://www.fda.gov/about-fda/center-drug-evaluation-and-research-cder/artificial-intelligence-drug-development) [PubMed Central – AI in 
Continuous Manufacturing](https://pmc.ncbi.nlm.nih.gov/articles/PMC12112940/) [F7i.ai – Predictive Maintenance Use Cases in Pharma](https://f7i.ai/blog/beyond-the-buzz-7-real-world-ai-predictive-maintenance-use-cases-in-pharma-for-2025) [Cell & Gene – Automation in Advanced Therapies Manufacturing](https://www.cellandgene.com/doc/manufacturing-automation-for-cell-advanced-therapies-0001) [GEA – Digital Twin Bioreactors](https://www.gea.com/en/news/trade-press/2023/digital-twin-bioreactors/) [Nanoprecise – Predictive Maintenance in Pharma](https://nanoprecise.io/predictive-maintenance-in-pharmaceutical-industry/) [BioProcess International – Skill Needs in ATMP Manufacturing](https://www.bioprocessintl.com/cell-therapies/skill-needs-for-advanced-therapy-medicinal-product-manufacturing-a-survey-report-and-proposed-skills-heatmap) [M-Star CFD – Pfizer Digital Twin Case Study](https://mstarcfd.com/resources/case-study/how-pfizer-leveraged-digital-twins-to-create-a-process-scale-up-roadmap/) [Algocell.ai – About](https://algocell.ai/about/) [WuXi Biologics – White Paper on COGS](https://www.wuxibiologics.com/wp-content/uploads/WuXi-Bio_White-Paper_COGS-031125.pdf) [Evotec – Benefits of Continuous Manufacturing](https://www.evotec.com/sciencepool/bologics-bottlenecks-pt-2-benefits-of-continuous-manufacturing) [Algocell.ai](https://algocell.ai) [BCG – Biopharma Manufacturing Cost Reduction](https://www.bcg.com/publications/2023/biopharma-manufacturing-cost-reduction) [PubMed Central – Hybrid Models in Bioprocesses](https://pmc.ncbi.nlm.nih.gov/articles/PMC8043180/) [Spectroscopy Online – Real-Time Monitoring via Raman Spectroscopy](https://www.spectroscopyonline.com/view/new-raman-spectroscopy-method-enhances-real-time-monitoring-across-fermentation-processes) [PubMed – Bioprocess Monitoring Studies](https://pubmed.ncbi.nlm.nih.gov/34147574/) [Bruehlmann Consulting – Digital Hybrid Modeling in Bioprocess 
Development](https://bruehlmann-consulting.com/data-analytics/the-case-for-digital-hybrid-modeling-in-bioprocess-development/) [Wiley – Hybrid Modeling for Bioprocess Optimization](https://analyticalsciencejournals.onlinelibrary.wiley.com/doi/10.1002/jrs.6841) [Körber Pharma – Design of Experiments in Bioprocessing](https://www.koerber-pharma.com/en/blog/bioprocess-design-of-experiments-doe) [DataHow – Impact of Hybrid Models on Bioprocess Development](https://datahow.ch/the-impact-of-hybrid-models-on-bioprocess-development/) [Deloitte – AI Investment ROI](https://www.deloitte.com/us/en/insights/topics/digital-transformation/ai-tech-investment-roi.html) [ThinkAes – Data Management Solutions](https://thinkaes.com/data-management/) [PubMed Central – AI in Pharma Process Development](https://pmc.ncbi.nlm.nih.gov/articles/PMC10781225/) [Bain & Company – AI ROI in Pharma](https://www.bain.com/insights/ai-roi-in-pharma-the-power-of-strategy-snap-chart/) [HighRes Bio – Lab Automation Software](https://highresbio.com/blog/lab-automation-software/les-lims-mes) [DiVA Portal – Digital Transformation in Bioprocessing (Thesis)](http://www.diva-portal.org/smash/get/diva2:1974609/FULLTEXT01.pdf) [PubMed Central – AI-Enabled Bioprocess Analytics](https://pmc.ncbi.nlm.nih.gov/articles/PMC10298952/) [Thermo Fisher – Process Intensification Strategies](https://www.thermofisher.com/blog/life-in-the-lab/bioprocessing-mabs-advanced-process-intensification-strategies/) [QbD Group – AI/ML Compliance in Pharma](https://www.qbdgroup.com/en/services/software-solutions-services/ai-ml-compliance) [Good Food Institute – Fermentation & Upstream Bioprocess Design](https://gfi.org/science/the-science-of-fermentation/deep-dive-fermentation-upstream-bioprocess-design/) [Sigma Aldrich – Seed Train 
Intensification](https://www.sigmaaldrich.com/US/en/technical-documents/technical-article/cell-culture-and-cell-culture-analysis/cell-culture-for-manufacturing/seed-train-intensification-high-cell-density-cryopreservation) [QbD Group – Regulatory Strategy in Pharma](https://www.qbdgroup.com/en/services/regulatory-affairs/regulatory-strategy-pharma) [Pamir LLC – Asia-Pacific Biotech Innovation Hub](https://pamirllc.com/blog/asia-pacific-is-becoming-a-global-hub-for-biotech-and-pharma-innovation-and-drug-discovery) [BioPharm International – Single-Use Bioreactors](https://www.biopharminternational.com/view/single-use-bioreactors-scale-or-scale-out) [PubMed Central – Bioprocess Digitalization Studies](https://pmc.ncbi.nlm.nih.gov/articles/PMC9605695/) [Bain & Company – APAC Biotech Report 2025](https://www.bain.com/about/media-center/press-releases/sea/apac-biotech-report-2025/) [Pharmaceutical Technology – Pros and Cons of Single-Use Bioreactors](https://www.pharmtech.com/view/pros-and-cons-single-use-bioreactors) [BioProcess Online – AI in Downstream Process Optimization](https://www.bioprocessonline.com/doc/incorporating-ai-tools-into-downstream-process-optimization-0001) --- kind: blog title: "Best Bioprocess Data Integration Platforms for Pharmaceutical Manufacturing Teams in 2025" slug: best-bioprocess-data-integration-platforms-for-pharmaceutical-manufacturing-teams-in-2025 date: 2025-11-04 author: "Veronica French" category: Industry summary: "Pharmaceutical manufacturers face critical data integration challenges as bioprocess operations generate unprecedented volumes of fragmented information. Learn more about which leading platforms are also offering AI." 
url: https://invertbio.com/blog/best-bioprocess-data-integration-platforms-for-pharmaceutical-manufacturing-teams-in-2025 markdown_url: https://invertbio.com/blog/best-bioprocess-data-integration-platforms-for-pharmaceutical-manufacturing-teams-in-2025.md --- # Best Bioprocess Data Integration Platforms for Pharmaceutical Manufacturing Teams in 2025 ## TL;DR: Pharmaceutical manufacturers face critical data integration challenges as bioprocess operations generate unprecedented volumes of fragmented information. Leading platforms like Invert, Qubicon, and Körber Pharma's PAS-X Savvy now offer AI-powered data unification, real-time monitoring, and regulatory compliance capabilities that reduce manual work by 90% while improving batch success rates by 25%. ## Bioprocess Data Integration Platform Comparison

| Platform | AI/ML Capabilities | Real-Time Monitoring | Data Reduction | Best For |
| --- | --- | --- | --- | --- |
| Invert | Advanced predictive analytics | Yes, AI-powered | 90% time reduction | CDMO tech transfer |
| Qubicon | Soft sensors & KPIs | Real-time comparison | ~75% manual effort | Process optimization |
| Vimachem | AI/IIoT analytics | Yes, MES integrated | Paperless operations | Manufacturing execution |
| Sartorius (SIMCA) | Multivariate analysis | Statistical SPC | Pattern discovery | Data science teams |
| Ark Biotech | In silico simulation | Virtual bioreactor | Eliminates test runs | Scale-up decisions |

## Why Bioprocess Data Integration is Critical in 2025 Pharmaceutical manufacturers confront a data paradox: while bioprocess operations generate massive volumes of data globally each day, much of it remains siloed, fragmented, and underutilized. The industry faces persistent challenges with data quality, with a significant share of FDA warning letters in recent years citing data accuracy issues. Organizations without modern data integration face regulatory risk, operational inefficiency, and delayed decision-making that impacts time-to-market for critical therapeutics. The business case for modern bioprocess data integration platforms has never been stronger.
Digital maturity has improved meaningfully over the past several years, yet many biopharmaceutical organizations still operate hybrid systems that combine digital and paper-based records. This transitional state creates operational complexity without delivering expected efficiency gains—particularly in technology transfer scenarios where CDMO partnerships require seamless data exchange. ## 1\. Invert - Real-Time Bioprocess AI Software Invert delivers purpose-built bioprocess AI software that transforms fragmented upstream and downstream data into real-time insights and AI-driven decisions. The platform unifies and harmonizes time-series data across instruments, manufacturing sites, and external CDMOs, establishing an AI-ready data foundation from day one. Invert’s intelligence layer provides transparent AI chat capabilities enabling scientists to ask complex questions about bioprocess data and receive instant, verifiable answers without writing code. Manufacturing teams implementing Invert have achieved measurable operational benefits: dramatic reductions in manual data cleanup time, cost savings through avoided wasted runs, and fewer batch failures through early detection and live visibility. The platform executes bioprocess analysis that typically requires expert teams hours to complete manually—condensing months of traditional work into seconds through AI assistance. ## 2\. Qubicon - Advanced Bioprocess Data Platform Qubicon centralizes bioprocess data from online, at-line, and offline equipment into a unified database with real-time comparison capabilities and intelligent alerting. The platform compares live quality data against reference runs, calculates key performance indicators in real-time, and applies custom soft sensor models for advanced process monitoring. Client-server architecture with broad access supports collaboration across development and manufacturing teams. ## 3\. 
Vimachem - AI-Driven Pharma MES Platform Vimachem provides a modular, composable Pharma 4.0 MES accelerating digital transformation through integrated machine connectivity, manufacturing analytics, and electronic batch records. The bioprocess monitoring layer uses AI and IIoT to track OEE and machine performance while ensuring compliance with electronic record standards and enabling paperless operations. ## 4\. Sartorius Data Analytics Suite - DOE and Real-Time Monitoring Sartorius combines Design of Experiments, multivariate data analysis, and real-time monitoring via MODDE and SIMCA. These tools support Quality by Design approaches, accelerate process development with efficient experimentation, and provide SPC/MPC methods for continuous manufacturing and batch optimization. ## 5\. LabKey Server - Scientific Data Management System LabKey Server is a customizable scientific data management system covering sample/LIMS workflows, ELN, and specialized biologics tools. It supports complex bioprocess and clinical data with audit trails and role-based security, helping large teams manage multi-study, multi-site programs. ## 6\. Ark Biotech - Virtual Bioreactor Simulation Software Ark Biotech offers high-fidelity in silico simulation to design, optimize, and scale cell culture processes through advanced multiphysics modeling. A no-code interface visualizes numerous time-series metrics and soft sensors, enabling rapid exploration of process variants and reducing the need for physical experimentation. ## 7\. ModelFlow (PolyModels Hub) - Digital Backbone for Pharma Process Development ModelFlow integrates models, scientific data, and insights into a cohesive platform for process development across modalities. Teams gain tailored modeling recommendations and streamlined workflows that reduce decision time and build reusable knowledge for future products. ## 8\. 
Körber Pharma PAS-X Savvy - Integrated Bioprocess Analytics PAS-X Savvy unites data management and analytics for development, scale-up, validation, and manufacturing excellence. It tackles data accessibility and structure challenges with comprehensive visualization, statistical evaluation, soft sensor support, and tools for scale correlation and Quality-by-Design. ## Critical Data Integration Features for Manufacturing Success When evaluating platforms, prioritize FAIR-aligned data foundations (findable, accessible, interoperable, reusable), robust integration with CPP/CQA monitoring, and automated audit trails supporting 21 CFR Part 11. Real-time harmonization across vendors and sites enables unified decision-making and breaks legacy silos. Establish clear data governance (ownership, quality standards, access controls), invest in workforce upskilling, and secure executive sponsorship to overcome adoption barriers and realize substantial timeline reductions. ## Frequently Asked Questions **What is the primary difference between data integration platforms and traditional LIMS systems?** Data integration platforms consolidate information from bioreactors, analytical instruments, MES, and EBR systems into unified environments with real-time analytics and AI. Traditional LIMS primarily manage sample tracking and testing workflows without deep bioreactor integration or advanced analytics. **How do these platforms address 21 CFR Part 11 compliance requirements?** Leading platforms include audit trails, access controls, e-signatures, and data integrity protections; they maintain activity logs and support role-based permissions to safeguard sensitive manufacturing data. **Can these platforms integrate with existing CDMO partnerships?** Yes. Modern platforms support secure data sharing via portals, standardized formats, and granular access controls—enabling contextualized data exchange, streamlined tech transfer, and full traceability. 
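One common way audit trails of the kind described above are made tamper-evident is a hash chain: each entry commits to the hash of the previous one, so any retroactive edit or deletion breaks verification. A toy stdlib-Python sketch of the idea (illustrative only; the record fields are hypothetical and this does not describe any specific platform's implementation):

```python
import hashlib, json

def append_entry(log, record):
    """Append a record whose hash commits to the previous entry (hash chain)."""
    prev = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps(record, sort_keys=True)
    entry_hash = hashlib.sha256((prev + payload).encode()).hexdigest()
    log.append({"record": record, "prev": prev, "hash": entry_hash})

def verify(log):
    """Recompute the chain; any edited or removed entry invalidates it."""
    prev = "0" * 64
    for e in log:
        payload = json.dumps(e["record"], sort_keys=True)
        if e["prev"] != prev or e["hash"] != hashlib.sha256((prev + payload).encode()).hexdigest():
            return False
        prev = e["hash"]
    return True

log = []
append_entry(log, {"user": "analyst1", "action": "approve_batch", "batch": "B-042"})
append_entry(log, {"user": "qa_lead", "action": "release", "batch": "B-042"})
assert verify(log)

log[0]["record"]["user"] = "someone_else"  # a retroactive edit...
assert not verify(log)                     # ...is detected on verification
```

Real Part 11 implementations add access controls, e-signatures, and validated storage on top, but the append-only, verifiable structure is the core of "full traceability."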
**What implementation timeline should we expect?** Timelines vary by complexity and scope. Focused implementations can land core functionality in months; enterprise rollouts generally progress in phases—pilot first, then scale. **How significant are the cost savings?** Organizations report large reductions in manual data prep, fewer wasted runs, faster release, and lower batch failure rates—driving strong ROI within typical enterprise investment horizons. ## Conclusion & Next Steps Biopharma manufacturers must accelerate timelines, enhance efficiency, and uphold compliance amid rising complexity. Modern data integration platforms turn fragmented data into decisions—automating manual work while improving batch outcomes. Evaluate options based on monitoring, analytics sophistication, compliance features, and integration flexibility. The advantage goes to teams who move now. ### References [Invert – Invert Assist Launch (Press Release)](https://www.biospace.com/press-releases/invert-launches-invert-assist-an-ai-powered-analysis-interface-built-for-bioprocess) [Ark Biotech – Technology Overview](https://www.ark-biotech.com/technology) [PolyModels Hub – Seed Round Announcement](https://www.polymodelshub.com/blog/seed-round-announcement-team) [Invert – Company Website](https://www.invertbio.com) [Ark Biotech – Company Website](https://www.ark-biotech.com) [PolyModels Hub – Platform](https://www.polymodelshub.com) [BioProcess International – FDA Prelicense Inspections](https://www.bioprocessintl.com/regulatory-affairs/key-considerations-in-fda-prelicense-inspections-ensuring-compliance-and-quality-in-biomanufacturing) BioPharm International – Digital Transformation in Biopharma [Scientific Bio – Quality by Design](https://www.scientificbio.com/blog/quality-by-design) [FDA – AI to Support Regulatory 
Decision-Making](https://www.fda.gov/regulatory-information/search-fda-guidance-documents/considerations-use-artificial-intelligence-support-regulatory-decision-making-drug-and-biological) [Deloitte – 2025 Life Sciences Executive Outlook](https://www.deloitte.com/us/en/insights/industry/health-care/life-sciences-and-health-care-industry-outlooks/2025-life-sciences-executive-outlook.html) [BioProcess International – Essentials in QbD](https://www.bioprocessintl.com/cell-line-development/essentials-in-quality-by-design) [Sartorius – SIMCA (MVDA Software)](https://www.sartorius.com/en/products/process-analytical-technology/data-analytics-software/mvda-software/simca) [Bruehlmann Consulting – Future of Bioprocessing](https://bruehlmann-consulting.com/bioprocessing/the-future-of-bioprocessing-industry-4-0-digital-twins-and-continuous-manufacturing-strategies/) [Sigma-Aldrich – Process Analytical Technology](https://www.sigmaaldrich.com/US/en/integrated-offerings/biopharma-4-0/process-analytical-technology) [Sartorius – MVDA Software Overview](https://www.sartorius.com/en/products/process-analytical-technology/data-analytics-software/mvda-software) [Bioprocessing Summit – Digital Transformation](https://www.bioprocessingsummit.com/digital-transformation) [Thermo Fisher – Process Analytical Technology](https://www.thermofisher.com/us/en/home/industrial/pharma-biopharma/manufacturing-control-pharma-biopharma/process-analytical-technology.html) [LabKey – LabKey Server](https://www.labkey.com/products-services/labkey-server/) [Körber Pharma – What Is a Soft Sensor?](https://www.koerber-pharma.com/en/blog/what-is-a-soft-sensor-or-software-sensor) [RAPS – FDA Finds Data Integrity Problems](https://www.raps.org/news-and-articles/news-articles/2025/3/fda-finds-data-integrity-problems-in-recent-warnin) [LabKey – LIMS Data Management](https://www.labkey.com/products-services/lims-software/lims-data-management/) [Körber Pharma – PAS-X 
Savvy](https://www.koerber-pharma.com/en/solutions/software/werum-pas-x-savvy) [FDA – Warning Letters](https://www.fda.gov/inspections-compliance-enforcement-and-criminal-investigations/compliance-actions-and-activities/warning-letters) [Nature – FAIR Data Principles](https://www.nature.com/articles/sdata201618) [Virto – Pharma Digital Transformation](https://virtocommerce.com/blog/pharma-digital-transformation) [BioProcess Online – Real-Time Cell Density via Soft Sensors](https://www.bioprocessonline.com/doc/how-to-measure-cell-density-in-real-time-with-soft-sensors-0001) [GO FAIR – FAIR Principles](https://www.go-fair.org/fair-principles/) [BioProcess International – Reimagining Digitalization (Article 1)](https://www.bioprocessintl.com/information-technology/reimagining-the-future-of-biopharmaceutical-digitalization) [BioProcess International – Soft Sensors for Bioprocess Monitoring](https://www.bioprocessintl.com/pat/soft-sensors-for-bioprocess-monitoring) [Qubicon – Platform](https://qubicon.io) [SILA Standard – FAQ](https://sila-standard.com/faq/) [BioProcess Online – Continuous Manufacturing: Why Few Have It](https://www.bioprocessonline.com/doc/continuous-manufacturing-many-want-it-but-here-s-why-few-have-it-0001) [PubMed Central – Article: PMC11718427](https://pmc.ncbi.nlm.nih.gov/articles/PMC11718427/) [BioProcess International – Reimagining Digitalization (Article 2)](https://www.bioprocessintl.com/information-technology/reimagining-the-future-of-biopharmaceutical-digitalization) [PubMed Central – Article: PMC11994081](https://pmc.ncbi.nlm.nih.gov/articles/PMC11994081/) [Vimachem – Bioprocess Monitoring Software](https://www.vimachem.com/pharma-mes-platform/bioprocess-monitoring-software/) [BioProcess International – Process Excellence](https://www.bioprocessintl.com/regulatory-affairs/process-excellence) [Körber – Digital Maturity Assessment](https://www.koerber.com/en/insights-and-events/pharma-and-life-sciences-insights/digital-maturity-assessment) 
[Vimachem – Electronic Batch Records](https://www.vimachem.com/pharma-mes-platform/electronic-batch-records-ebr-for-pharma-and-biopharma/) [BioProcess International – Hitchhiker’s Guide to Bioprocess Design](https://www.bioprocessintl.com/business/-hitchhiker-s-guide-to-bioprocess-design) [Deloitte – Digital Maturity Index](https://www.deloitte.com/de/de/issues/growth-competition/digital-maturity-index.html) [BioProcess International – Data Overabundance in Biomanufacturing](https://www.bioprocessintl.com/information-technology/the-paradox-of-data-overabundance-in-biomanufacturing-data-literacy-is-key-to-unlocking-value) [FDA – Part 11: Electronic Records/Electronic Signatures](https://www.fda.gov/regulatory-information/search-fda-guidance-documents/part-11-electronic-records-electronic-signatures-scope-and-application) [Tecnic – Trends in Bioprocessing for 2025](https://www.tecnic.eu/trends-in-bioprocessing-for-2025/) [BioPharm International – PDA 2025: Data Governance & AI](https://www.biopharminternational.com/view/pda-2025-data-governance-and-ai-s-impact-on-drug-manufacturing) [BioProcess International – 21 CFR Part 11 Revisited](https://www.bioprocessintl.com/regulatory-affairs/fda-21-cfr-part-11-revisited) [Bioprocessing Summit – Digital Transformation (Alt Link)](https://www.bioprocessingsummit.com/digital-transformation) [INFORS HT – 6 Bioprocess Software Must-Haves](https://infors-ht.com/en/blog/the-6-bioprocess-software-must-haves) [PubMed – Article 40481350](https://pubmed.ncbi.nlm.nih.gov/40481350/) [Invert – Bioprocess Tech Transfer: The Data Dilemma](https://invertbio.com/blogs/bioprocess-tech-transfer-navigating-the-data-dilemma) [BioProcess International – Digital Platform for Data Science](https://www.bioprocessintl.com/information-technology/establishing-a-digital-platform-for-data-science-applications-in-biopharmaceutical-manufacturing) [BioProcess International – June 2025 
Issue](https://www.bioprocessintl.com/publications/bioprocess-international/june-2025) [Körber Pharma – Data Management for Successful Tech Transfer](https://www.koerber-pharma.com/en/blog/how-data-management-and-analytics-ensure-a-successful-tech-transfer) [IDBS – Importance of AI in Process Development](https://www.idbs.com/knowledge-base/importance-of-ai-in-process-development-in-bioprocessing/) [BioProcess International – Beyond Compliance for CGT](https://www.bioprocessintl.com/cell-therapies/beyond-compliance-for-cell-and-gene-therapies-technology-s-role-in-protecting-patient-data) [BioPhorum – Technology Strategy: Delivering ROI](https://www.biophorum.com/news/biophorum-technology-strategy-delivering-roi-for-manufacturers/) [BioProcess International – AI in Quality Management Systems](https://www.bioprocessintl.com/information-technology/a-vision-for-artificial-intelligence-in-biopharmaceutical-quality-management-systems) [HHS – HIPAA & Cloud Computing](https://www.hhs.gov/hipaa/for-professionals/special-topics/health-information-technology/cloud-computing/index.html) [BCG – Biopharma Trends 2025](https://www.bcg.com/publications/2025/biopharma-trends) ‍ --- kind: blog title: "Invert Launches Invert Assist" slug: invert-launches-invert-assist date: 2025-10-31 author: "Invert Team" category: Product summary: "Today, Invert launches Invert Assist, the first AI-powered analysis interface built for bioprocess. Using a simple chat interface, Invert Assist enables users to perform complex analysis that typically takes an expert team hours to code manually. With the freedom and flexibility of natural language, bioprocess scientists can turn days or even weeks of troubleshooting and optimization into a 5-minute conversation with their data." 
url: https://invertbio.com/blog/invert-launches-invert-assist markdown_url: https://invertbio.com/blog/invert-launches-invert-assist.md --- # Invert Launches Invert Assist ## Invert Assist: AI that speaks bioprocess—just like you do Today, Invert launches Invert Assist, the first AI-powered analysis interface built for bioprocess. Using a simple chat interface, Invert Assist enables users to perform complex analysis that typically takes an expert team hours to code manually. With the freedom and flexibility of natural language, bioprocess scientists can turn days or even weeks of troubleshooting and optimization into a 5-minute conversation with their data. ## Why now? We produce roughly 180 zettabytes of data per year. To put that number in perspective, that exceeds the number of stars in the observable universe. This massive amount of data means that most of it is under-utilized, and bioprocess data is no exception. The sheer volume of data produced means the vast majority of it is siloed and fragmented across upstream and downstream processes, or between scales or sites of production. New modalities in cell and gene therapy and biologic manufacturing also present distinct challenges — tracking interdependent CQAs during viral vector production for gene therapies, or managing batch-to-batch variability for autologous cell therapies. It’s clear that the biggest bottleneck in data isn’t producing it, but being able to use it to close the gap between analysis and action. As articulated by Amgen CTO David Reese, _“…_**_those who figure out how to organize, harness, and analyze that data will separate the winners from the rest.”_** Invert is launching Invert Assist to create those winners. ## What can Invert Assist do? One of the main reasons why enterprise AI initiatives fail (source: [MIT](https://fortune.com/2025/08/18/mit-report-95-percent-generative-ai-pilots-at-companies-failing-cfo/)) is their inability to integrate deeply with domain-specific workflows.
Invert Assist was designed, first and foremost, to fit seamlessly into bioprocess workflows. ![__wf_reserved_inherit](https://cdn.prod.website-files.com/6765896ee196438329424471/6903ca71a0a94f31d2347297_Invert%20Assist%20Screenshot4.png) Invert Assist is capable of performing both routine and more sophisticated analysis and modeling for bioprocess. It can detect trends and correlations between runs, conduct root cause analysis for deviations, construct predictive models and simulations of processes, and design future experiments based on current data. If you’re interested in learning more about how Invert Assist could support your processes, request a demo [here](https://invertbio.com/assist). Our team would be happy to put together a demo tailored to your specific use cases and needs. ## Why Invert Assist? Invert Assist only sources high-quality, fully contextualized data from Invert’s core software. With a foundation of AI-ready data, users can be confident that results are grounded in their own bioprocess data from the bench to the manufacturing floor. We’ve also developed in-house evaluations tailored to bioprocess specifically to assess Invert Assist’s performance, ensuring that its answers are consistently accurate and reliable. ![__wf_reserved_inherit](https://cdn.prod.website-files.com/6765896ee196438329424471/69003e1dd965fefe0a4eb2eb_251027_Invert-Assist-PR-Screenshots.png) Unlike general-purpose large language models (LLMs) like ChatGPT, Invert Assist was also built to be compliant with biomanufacturing industry regulations and adhere to international best practices for AI use and development. Results from Invert Assist are fully traceable and audit-ready, which means users can not only access the Python code used to execute an analysis, but also reproduce and verify its results.
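As an illustration of the kind of reproducible, auditable analysis script described above, the sketch below correlates a process parameter with final titer across runs and flags outliers for root-cause review. This is a hypothetical example, not code produced by Invert Assist; all column names and data are invented.

```python
# Hypothetical sketch of a traceable trend/root-cause analysis over run data.
# Invented example data: batch-level summaries for five runs.
import pandas as pd

runs = pd.DataFrame({
    "run_id": ["R1", "R2", "R3", "R4", "R5"],
    "mean_temp_c": [36.8, 37.0, 37.4, 36.9, 37.6],
    "final_titer_g_l": [4.1, 4.0, 3.2, 4.2, 2.9],
})

# Simple trend check: how strongly does mean temperature track final titer?
corr = runs["mean_temp_c"].corr(runs["final_titer_g_l"])
print(f"Pearson correlation: {corr:.2f}")

# Flag runs whose titer falls more than one standard deviation below the
# mean, as candidates for root-cause review.
cutoff = runs["final_titer_g_l"].mean() - runs["final_titer_g_l"].std()
low_runs = runs.loc[runs["final_titer_g_l"] < cutoff, "run_id"].tolist()
print("Runs flagged for review:", low_runs)
```

Because the script is plain Python over a plain table, anyone reviewing the result can rerun it end to end, which is the point of audit-ready analysis.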
Just like Invert’s core software, Invert Assist is built with enterprise-grade infrastructure and adheres to strict industry standards for data privacy and security. Neither Invert nor our vendors ever train AI models on customer data. ## Experience Invert Assist today Seeing Invert Assist at work with your own process data is quick and easy. We’re introducing Invert Insight Sprint, a new POC offering that lets you get an immediate look into the power of AI-driven insights. All we need is a few files of your historical data, like historical runs, deviations, or yield summaries. After signing an NDA, we’ll upload that data onto Invert, where it’ll automatically be made AI-ready. Then, you’ll be all set to start talking to your data: - Ask real questions (“What parameters led to low yield last quarter?”) - Uncover hidden trends (“Which sites show the most variability?”) - Quantify what could have been prevented (“If we’d had Invert, we could have saved 10 runs.”) In a few minutes, you’ll find out what you could’ve learned from your data if understanding it was as easy as a casual conversation with a colleague. Find out more about Insight Sprint by [reaching out on our website](https://www.invertbio.com/assist). --- kind: blog title: "How Invert uses batch context to make your data valuable—instantly" slug: how-invert-uses-batch-context date: 2025-05-02 author: "Michael McCutchen" category: Product summary: "Bioprocess teams can spend dozens of hours every week exporting data and manually cutting it up to assign it to the batches they care about. We’ve built a way for users to easily add batch context, whether it’s just labeling data after a run, or pre-programming an integration to inject that context on the fly."
url: https://invertbio.com/blog/how-invert-uses-batch-context markdown_url: https://invertbio.com/blog/how-invert-uses-batch-context.md --- # How Invert uses batch context to make your data valuable—instantly Bioprocess data can be challenging to capture, and perhaps even harder to interpret. Data collection must be set up, whether it’s file uploads from a BioFlo® 320 or streaming data from an Ambr® 250 OPC UA interface. Streams of data from multiple systems must be mapped to scientific domains like pH and temperature. At Invert, we work with many types and configurations of process equipment—and we’re familiar with the challenge of understanding all the data that they generate. With thousands of data points arriving every minute, routine troubleshooting and monitoring of processes is complex. The potential answers to process challenges like cell growth delays or product aggregation are concealed within featureless traces of scrolling timeseries data. Bioprocess engineers have to export data and surgically reformat it in Excel, column by column, into something useful—every single time they’re trying to resolve production issues. ## Batch manufacturing adds complexity Biopharmaceutical manufacturing is commonly done in batches, unlike other chemical industries that operate continuous processes¹. Batch manufacturing appears to simplify things by breaking up processing into discrete chunks. In reality, it adds complexity—timeseries data only makes sense in relation to other data. The data generated in batch manufacturing exists in two main forms. The first form is batch-level data that describes an output or input for the entire activity, such as the inoculation time, final titers or media lot numbers. The second form is timeseries data, measuring process parameters such as temperature or pH. Troubleshooting and optimizing processes requires connecting these two forms of data. For example, did fluctuations in temperature affect the final titer of a batch?
It depends on batch context: where and when in the batch were these fluctuations occurring? Without batch context, timeseries data can show how certain parameters changed over time, but it cannot tell us how these changes will impact the process. Batch context allows us to derive meaning from data: a higher pH might be acceptable at inoculation, but unacceptable at transfection. This context exists at multiple levels—first, is the data part of a batch at all? Some systems collect data even when not in use. If the data is part of a batch, it needs operation and phase context—where exactly did it occur? Lastly, the data needs to be normalized against a start time, so it can be analyzed alongside a golden batch or other reference data. ## Many systems are not “batch aware” Bioprocess development involves an array of data-generating equipment, from bioreactors, to purification skids, to scales and benchtop pH probes. All of these systems measure and report physical parameters. However, they usually only know what material they’re measuring if an embedded software interface captures that data from the user. While some systems have this, not all do. And the ones that do are subject to human error—batch IDs may be entered incorrectly, at the wrong time, or forgotten entirely. Even if this all works perfectly, users must still write queries to pull out data tagged with the right batch ID and normalize it from absolute to relative time. Many process engineers find this workflow brittle enough that they avoid the functionality even when it’s available, opting instead to organize data post hoc. Custom automation might help, but requires custom configuration and takes time to update. Bioprocess teams can spend dozens of hours every week exporting data and manually cutting it up to assign it to the batches they care about—taking part of a timeseries and copying it into one spreadsheet, and another part and copying it into another spreadsheet.
Often this happens in the slivers of time between other, more urgent activities, resulting in data only being reviewed once or twice a week. ## Invert applies context on the fly Invert is built on the fact that batch-contextualized data is the essential input for bioprocess analysis. We’ve built a way for users to easily add batch context, whether it’s labeling data after a run or pre-programming an integration to inject that context on the fly. For users, this means no new tags need to be configured. You don’t need to designate triggers to catch batch starts and ends either (though we do that too, for parts of our integration portfolio). Instead, simply select the relevant runs and data and assign the relevant time window—regardless of whether that window was in the past, the future, or happening right now. ![__wf_reserved_inherit](https://cdn.prod.website-files.com/6765896ee196438329424471/68111964cdeec91976cf4ab9_Frame%204611.png) Data within a certain time window is assigned to a new batch, even for runs in progress. If that batch is in the past, Invert assigns the relevant data to the batch immediately. If the batch is in the future, Invert creates a trigger that will route incoming data into the desired batch when it starts, and continue until the batch completes. You can even assign data to a batch while it’s in-flight—Invert both assigns the already-ingested data and directs incoming data to the batch. ![__wf_reserved_inherit](https://cdn.prod.website-files.com/6765896ee196438329424471/6807baad6b2aa1f51f7cb8ee_batch.png) Add batch context to runs as data is actively being ingested. With large volumes of data, mistakes are inevitable. If the wrong batch ID is entered or the wrong time window selected, the consequence is hours of untangling fragmented, mislabeled data in Excel. With Invert, it’s a simple fix. Just archive the old data, and re-assign the data with a click. Once assigned, Invert always stores data within its batch context.
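The window-based assignment and relative-time normalization described above can be sketched in a few lines. This is a simplified illustration, not Invert's actual implementation; the `assign_batch` function, column names, and example readings are all invented.

```python
# Minimal sketch: slice a timeseries to a batch window, tag it with a batch
# ID, and normalize timestamps against the batch start so the data can be
# overlaid on a golden batch or other reference runs.
from datetime import datetime, timedelta
import pandas as pd

def assign_batch(timeseries: pd.DataFrame, batch_id: str,
                 start: datetime, end: datetime) -> pd.DataFrame:
    """Tag rows inside [start, end] with a batch ID and add relative time."""
    window = timeseries[(timeseries["timestamp"] >= start)
                        & (timeseries["timestamp"] <= end)].copy()
    window["batch_id"] = batch_id
    # Relative time in hours since batch start, for cross-batch overlays.
    window["rel_time_h"] = (
        (window["timestamp"] - start).dt.total_seconds() / 3600
    )
    return window

# Invented example: five hourly pH readings, three of which fall in the batch.
t0 = datetime(2025, 5, 1, 8, 0)
ts = pd.DataFrame({
    "timestamp": [t0 + timedelta(hours=h) for h in range(5)],
    "ph": [7.1, 7.0, 6.9, 6.9, 7.0],
})
batch = assign_batch(ts, "B-001", t0 + timedelta(hours=1), t0 + timedelta(hours=3))
print(batch[["rel_time_h", "ph", "batch_id"]])
```

The same slice-tag-normalize step is what engineers otherwise perform by hand in Excel, once per batch, per export.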
If you export your data or access it through the Invert API, this association is preserved. For data already in a historian, this works similarly—Invert layers batch context on top, just like when it's streaming directly from equipment. ## Batch context makes data valuable Invert focuses not just on capturing data, but making it instantly useful. Batch context is part of that puzzle—it converts raw sensor readings into something you can interpret, right now. Old data benefits as well. When you return to it, it’s contextualized and ready to use. Currently, only a fraction of biomanufacturing data is truly used, even though it is expensive to generate, because contextualization is challenging to do at scale. Terabytes of data remain trapped in spreadsheets and databases. By automatically positioning all biomanufacturing data in its appropriate context, we unlock the knowledge contained within. 1. Khanal, Ohnmar, and Abraham M. Lenhoff. “Developments and Opportunities in Continuous Biopharmaceutical Manufacturing.” _mAbs_, vol. 13, no. 1, 2021, e1903664. [https://doi.org/10.1080/19420862.2021.1903664](https://doi.org/10.1080/19420862.2021.1903664) --- kind: blog title: "Why ELNs and LIMS are not enough for PD teams" slug: why-elns-and-lims-are-not-enough-for-pd-teams date: 2025-03-19 author: "Brian Fan" category: Product summary: "One of the questions we’re asked frequently is whether Invert is an ELN or LIMS—our answer is that Invert captures both systems’ strengths and fixes their weaknesses. We’ll discuss the trade-offs and shortcomings of ELN and LIMS when it comes to bioprocess data to explain how." url: https://invertbio.com/blog/why-elns-and-lims-are-not-enough-for-pd-teams markdown_url: https://invertbio.com/blog/why-elns-and-lims-are-not-enough-for-pd-teams.md --- # Why ELNs and LIMS are not enough for PD teams ## Why isn’t an ELN or LIMS sufficient for process engineers?
Process engineers are often asked to fit their workflows into tools that were originally developed for different purposes. Two common product categories that encapsulate many of these tools are Electronic Lab Notebooks (ELNs) and Laboratory Information Management Systems (LIMS). But what happens when you try to fit a process to a tool that isn’t designed for it? One of the questions we’re asked frequently is whether Invert is an ELN or LIMS—our answer is that Invert captures both systems’ strengths and fixes their weaknesses. We’ll discuss the trade-offs and shortcomings of ELN and LIMS when it comes to bioprocess data to explain how. ## ELN vs. LIMS: what are the trade-offs? An ELN and a LIMS deal with data differently: an ELN makes it easy to record and store data, while a LIMS provides a structure to organize it. What are the trade-offs of each approach, and how do these trade-offs impact process engineers and scientists? ![__wf_reserved_inherit](https://cdn.prod.website-files.com/6765896ee196438329424471/67dae38b60cc06f659aeef6e_ELN%20vs%20LIMS%202.png) Recording data in an ELN is like storing information in a loose stack of paper: it's easy to add new information, but it's not organized. Using a LIMS is like storing it in a filing system: organized, but rigid, and it takes effort to design and maintain. ### An ELN is flexible, but unstructured ELNs are great at allowing scientists to input freeform data: text, numbers, images, uploaded Excel sheets. There aren't any limitations on how or what kind of data goes in. They’re designed to track experimental data in a compliant manner, eventually supporting IND filings with the FDA. There's a trade-off to this flexibility: ELNs are often not connected to an underlying database. You wouldn't be able to aggregate results against a molecule or compound, or search across experiments by metadata tags. Setting up a structured database can require extensive configuration or customization.
In practice, this often means a subset of key data is more formally documented—compound IDs, cell lines, important assay results—and added to the database. ### LIMS are organized, but rigid On the other hand, Laboratory Information Management Systems (LIMS) are great at aggregating results across experiments and defining sample and data hand-offs between teams. They’re designed around sample-centric workflows to manage results and inventory in a compliant manner. In contrast to an ELN, most LIMS have a rigid data model. This makes them great for Analytical Tech Ops, when the goal is to characterize timepoint samples pulled from bioreactors. However, they're not well suited to exploratory, one-off experiments commonly performed in process development. Their inflexible structure means that process engineers have to spend extra time reformatting their data to fit into pre-defined fields. With all that extra effort, most of them revert to using Excel and their visualization tool of choice. ## Bioprocess generates data an ELN or LIMS can’t fully capture There are unique forms of data that bioprocess generates – and a lot of contextual information needed for bioprocess data to make sense. ### An ELN can’t capture time-series data For all their flexibility, ELNs aren’t set up to capture one key form of bioprocess data: large, time-series datasets from bioreactors. They also aren’t set up to perform multi-variate analysis and generate the kinds of visualizations needed to interpret and learn from a large number of process development runs. ### LIMS can’t contextualize offline data Bioreactor data also only truly makes sense in context. LIMS can organize offline data coherently, but struggle to contextualize it within the experimental conditions of a run. For most LIMS, overlaying a view of key online process data onto offline data remains challenging.
Most process engineers still find themselves using an additional visualization tool such as Spotfire or JMP to perform exploratory analysis or modeling. ![__wf_reserved_inherit](https://cdn.prod.website-files.com/6765896ee196438329424471/67dad9d55a88f2d817b3e902_ELN_LIMS%20Comparison_final.png) A comparison of features between an ELN, a LIMS, and Invert ## Why process development scientists use Invert alongside their LIMS Invert is a cloud-native application designed specifically for process engineers working on process development. The system collects and standardizes data from multiple sources and transforms it into analysis-ready datasets, giving you real-time online data in the context of the full run conditions. With Invert's native understanding of time series data and built-in unit awareness, scientists can easily write complex formulas to standardize calculations across teams. Essentially, Invert handles all the messy data processing behind the scenes, so you can focus on analyzing data. If you have an existing ELN or LIMS (or both), you don’t need to replace all your software. Invert can work with them—pre-built integrations mean that your cell line development and analytical teams can keep using the systems they’re already using. Relevant data is simply synced into Invert. With one centralized platform, your bioprocess engineers get a unified view of both online and offline data, alongside key metadata like experimental conditions. Invert also helps you deal with historical data. It cleans and adds it to the system so that past results become comparable and searchable. Train models on all your process data, using it to power predictive modeling that suggests optimal process parameters—specifically for your processes. Invert is designed to be easy to use and quick to set up. No months-long (or years-long) implementations—we help customers get live in as little as 2-4 weeks. Get in touch to discuss how to get started.
‍ --- kind: blog title: "How Invert uses meta-learning to leverage old bioprocess data" slug: how-invert-uses-meta-learning-to-leverage-old-data-and-make-better-biomanufacturing-predictions date: 2025-03-18 author: "Karthik Sekar" category: Product summary: "With meta-learning techniques, Invert’s ML team was able to reduce new data needs by as much as 90% by leveraging old data from different, but loosely related, bioprocess programs." url: https://invertbio.com/blog/how-invert-uses-meta-learning-to-leverage-old-data-and-make-better-biomanufacturing-predictions markdown_url: https://invertbio.com/blog/how-invert-uses-meta-learning-to-leverage-old-data-and-make-better-biomanufacturing-predictions.md --- # How Invert uses meta-learning to leverage old bioprocess data Imagine launching a new drug program and already having 90% of your process development figured out. With meta learning, this is a real possibility – and this is a major focus for us at Invert. Our initial results in this product area are very promising. They suggest that for new projects, biomanufacturers can leverage older data (from different, but loosely related projects) to cut the need for new data by as much as 90%. ## The high cost of generating new bioprocess data In biomanufacturing, generating new data is very expensive – so we are very excited about this early breakthrough. Relatively small batches (1-20L) can cost on the order of $10,000 to $100,000 per batch, depending on complexity. Pilot-scale batches (50-500L) can reach $50,000 to $500,000 per batch, and in some cases, even higher. In addition to these hard costs, the value of speed (or, framed in the negative, opportunity cost of moving slowly) is immense. Biomanufactured products require a great deal of specialized labor, and involve opportunity costs associated with the use of expensive facilities and time-to-market. 
## An example: leveraging data from previous drug programs Consider a major pharmaceutical company that specializes in monoclonal antibody-based therapies. They may have dozens of completed drug programs with valuable data stored in various filesystems and databases. The cost of generating data for a single program is typically in the range of $50-200M in hard costs – so for a dozen such programs, the total cost including variable costs, capital costs, and labor is staggeringly high. When this pharmaceutical company develops a new drug program, process development scientists lean on intuitions (developed from previous work) about how best to design future experiments. But old data is rarely used to inform the next experiments, especially with machine learning. Our early product work at Invert suggests that this old data – combined with meta-learning techniques and hybrid models – can reduce the new data needs in new drug programs by as much as 90%. This translates to savings of tens of millions of dollars in hard costs, and dramatically faster time-to-market. ## Our early work in meta-learning at Invert We work with a range of customers at Invert, including Contract Development and Manufacturing Organizations (CDMOs) who service the bioprocessing needs of many clients. Often these CDMOs will specialize in a host organism or in a product area. Invert caters to CDMOs, helping them manage their bioprocess data across all their different projects and clients. For predicting bioprocesses, we found that hybrid models work exceptionally well. They are founded upon some physical model, for example, ordinary differential equations that describe the dynamics of cell growth or product formation. The physical model is wedded to a black-box approach such as a neural network, which provides adaptability to a variety of input data. Hybrid models are also amenable to meta-learning, where we first train a meta model that can be adapted for different projects.
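To make the hybrid-model idea concrete, here is a minimal structural sketch: a logistic growth ODE serves as the physical backbone, and the growth rate is modulated by a tiny, hand-initialized network standing in for a trained neural correction. This is not Invert's model; every function name, parameter, and value below is illustrative.

```python
# Structural sketch of a hybrid model: physical ODE + neural correction.
# The "network" here is a one-hidden-unit stand-in with fixed, arbitrary
# weights; in a real hybrid model its weights would be learned from data.
import numpy as np

def nn_correction(temp_c: float, w: np.ndarray, b: np.ndarray) -> float:
    """Tiny network mapping temperature to a growth-rate factor in (0, 1)."""
    h = np.tanh(w[0] * temp_c + b[0])                       # hidden unit
    return float(1.0 / (1.0 + np.exp(-(w[1] * h + b[1]))))  # sigmoid output

def simulate(x0, mu_max, capacity, temps, dt, w, b):
    """Euler-integrate the hybrid ODE dx/dt = mu(T) * x * (1 - x / K)."""
    x = [x0]
    for temp in temps:
        mu = mu_max * nn_correction(temp, w, b)  # hybrid growth rate
        x.append(x[-1] + dt * mu * x[-1] * (1 - x[-1] / capacity))
    return np.array(x)

# Illustrative run: 48 hourly steps at 37 °C with arbitrary weights.
w, b = np.array([0.1, 1.0]), np.array([-3.5, 0.0])
traj = simulate(x0=0.1, mu_max=0.3, capacity=10.0,
                temps=[37.0] * 48, dt=1.0, w=w, b=b)
print(f"final biomass: {traj[-1]:.2f}")
```

The division of labor is the point: the ODE encodes what we already know about growth dynamics, while the learned correction absorbs whatever the physics leaves out.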
![__wf_reserved_inherit](https://cdn.prod.website-files.com/6765896ee196438329424471/678fc3378d1fe0e247857af5_678fbe13262e3551ce7143fa_image%2520(13).png) Model error decreases significantly with some meta-training, which enhances the effectiveness of regular training over fine-tuning epochs. So far we’ve seen that with meta-learning approaches on a single CDMO customer, we can reduce the amount of data needed by as much as 90% while maintaining 90% of performance as they embark on new projects. Our clients are using models to design experiments and to help accelerate development on new projects. In the long run, we want to use the models to help with process monitoring and, eventually, to self-drive their processes to maximum performance. ## Why aren’t more biomanufacturers leveraging old data? Isn’t it obvious to leverage previous data to guide new experiments in a more explicit, ML-oriented way? It is easier said than done, and there are a few reasons for the current state of affairs: 1. Meta-learning is advancing rapidly, with major breakthroughs in recent years. Industry adoption is still a fairly recent phenomenon. 2. Data is inaccessible. For this data to be used in AI/ML applications, machine learning scientists must be able to access it. Today, this data is typically distributed across various file systems, databases, and hardware. 3. Data is not standardized. As the saying goes, when it comes to AI/ML, it’s garbage in, garbage out. It’s common for this data to be in a bad state, with inconsistent labels, poor context, outliers, errors, and other issues. As an aside: we operate with an unfair advantage at Invert (and so do our clients), because our product automatically cleans, standardizes, and centralizes bioprocess data. As such, our internal AI/ML team has the benefit of working with clean data that is ready for modeling and meta-approaches.
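Full meta-learning involves more machinery (for example, learning an initialization across many tasks, as in MAML), but a simple warm-start sketch captures the core intuition above: a model pretrained on old-project data needs far fewer new data points to adapt. Everything below is synthetic and illustrative, not a real Invert model or dataset.

```python
# Warm-start stand-in for the meta-learning idea: pretrain on abundant old
# data, then adapt to a related new project from only a handful of points.
import numpy as np

rng = np.random.default_rng(0)

def fit_slope(x, y, w0=0.0, steps=20, lr=0.01):
    """A few gradient-descent steps on least squares, from initialization w0."""
    w = w0
    for _ in range(steps):
        w -= lr * 2 * np.mean((w * x - y) * x)
    return w

# Old project: plenty of data with slope ~2.0.
x_old = rng.uniform(0, 1, 200)
y_old = 2.0 * x_old + rng.normal(0, 0.05, 200)
w_meta = fit_slope(x_old, y_old, steps=500)  # "meta" initialization

# New, related project: slope ~2.2, but only 5 data points.
x_new = rng.uniform(0, 1, 5)
y_new = 2.2 * x_new + rng.normal(0, 0.05, 5)

w_cold = fit_slope(x_new, y_new, w0=0.0)     # from scratch: barely moves
w_warm = fit_slope(x_new, y_new, w0=w_meta)  # warm start: nearly there
print(f"cold start: {w_cold:.2f}, warm start: {w_warm:.2f} (target ~2.2)")
```

With the same five points and the same twenty fine-tuning steps, the warm-started model lands far closer to the new project's behavior, which is the sense in which old data substitutes for new data.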
We expect that pharmaceutical companies and synthetic biology companies will prioritize data preparation and accessibility in the coming years, either using products like Invert or through internal initiatives. With AI-ready data in hand, we expect more biomanufacturing companies to unlock the value of their historical data in the future and to develop bioprocesses faster and cheaper. --- kind: blog title: "Pioneers in Bioprocessing: Q&A with Biopharma Executive, Steven Lang, Ph.D." slug: pioneers-in-bioprocessing-q-a-with-biopharma-executive-steven-lang-ph-d date: 2025-01-21 author: "Alex Felt" category: Interviews summary: "Biotech and synthetic biology companies have much to do. Handling everything in-house—from strain design to process development—can stretch funding and resources thin." url: https://invertbio.com/blog/pioneers-in-bioprocessing-q-a-with-biopharma-executive-steven-lang-ph-d markdown_url: https://invertbio.com/blog/pioneers-in-bioprocessing-q-a-with-biopharma-executive-steven-lang-ph-d.md --- # Pioneers in Bioprocessing: Q&A with Biopharma Executive, Steven Lang, Ph.D. _Invert is back with another installment of our Q&A series,_ **_Pioneers In Bioprocessing_**_, where we chat with experts in biomanufacturing and biotechnology to discuss their work._ _This time, Invert spoke with Steven Lang, Ph.D., MBA, a pharmaceutical and food tech executive with over 20 years of leadership in large biopharma corporations, small contract research organizations, and startups, including time spent working on biologic drugs at_ [_Johnson & Johnson_](https://www.jnj.com/) _and_ [_Genentech_](https://www.gene.com/)_. Most recently, Steven served as VP of Bioprocess Development at_ [_Upside Foods_](https://upsidefoods.com/)_, a food tech company that became_ [_the first-ever cell-cultivated meat company to sell its product in America_](https://upsidefoods.com/blog/breaking-new-ground-upside-foods-makes-history-with-first-cultivated-meat-serving-in-the-us)_. 
Steven’s expertise spans strategic planning, bioprocess improvement, and CMC activities from cell line development to regulatory filing. He has led a number of efforts to transition biologics from discovery through development, paying special attention to increasing throughput and sourcing efficiencies through workflow digitization and bioprocess data analytics._ _Note: this interview was edited for length and clarity._ ## To start us off, tell us how you got into bioprocessing. My first job was as a postdoc within J&J. That gave me great exposure to small molecule discovery and a really great introduction to industrial science. From there, I landed a job within J&J as a cell line developer at what was the flagship organization for biologics within J&J, called Centocor. In those early days of biologics, we were establishing a lot of critical technology as well as exploring countless new avenues. It was a really exciting time, even though we didn’t realize how big of an impact biologics were going to have. ## What do you see as your greatest success so far? One of the things that I think I am most proud of is the collaboration we had with Genmab back at J&J when we in-licensed Daratumumab as well as the Duobody technology. Because we had built out a great due diligence and developability package within J&J, we were able to uncover liabilities early on and then mitigate them very rapidly in both upstream and downstream processing. Ultimately, this helped get these bispecifics into the clinic as quickly as possible. ## From your experience, what would you say are the main obstacles to transitioning biologics from discovery through development? I would say balancing risk and speed is really the biggest challenge, and it has been a consistent theme over the last ten to fifteen years. As biologics become more competitive, getting to the market first is paramount. Especially for newer modalities like gene and cell therapies, being a first mover is really important.
I learned very early in my career that it’s a challenge to move science at the pace that business needs it to. Having the right data in place is paramount to being able to balance business demands with the scientific challenges of moving at a very fast pace, while still delivering a safe and efficacious product at the end. ## What can companies do to increase the likelihood of success during cell line and bioprocess development? The one that I always lean back on is to delay major decisions. While you need to keep progress moving rapidly, kick the can down the road as long as you can before locking your molecule sequence or process. But, doing that very effectively requires parallel processing. With the automation and data systems of today, it’s not that challenging or expensive to do parallel processing and run multiple molecules, or a panel of molecules, that may have different mechanisms of action and attributes in their design. Keep those in play as long as possible before you start generating your GLP tox data or your GMP material. That allows you to see if one will outperform others in later stage _in vitro_ or _in vivo_ studies and provide a better competitive edge. ## How does bioprocess data management play into commercial efforts? Why is it important? We always have to remember that the data and the information that our scientists and engineers generate are the most valuable things that we can produce. I’ve been in both new and mature businesses where bioprocess data has been an afterthought and not valued as it should. Instead, we’ve got to think about this data as a precursor to the knowledge that’s going to really make us successful. The hodgepodge that we’ve stitched together to just get to our bioprocess data and make it accessible is a huge disservice to the science and innovation that we’re trying to execute. But, I think we’re now getting to the right end of it. 
We’re seeing the value that biomanufacturing can bring, and by capturing all of that bioprocessing data in an appropriate place that’s accessible to the people who are actually developing the processes, we can make huge advances. That’s where digital transformation and technologies are just so important. I would say so much information is being lost between the cracks just because we don’t have good integrated systems to capture everything. If we provide that data to bioengineers, they can build better processes. ## Are you surprised that the application of digital technologies in bioprocessing and biomanufacturing at biopharma companies hasn’t received as much attention as the discovery side? I am a little bit surprised because we generate so much data. Once you get down to a commercial process, you’re running the same thing over and over again. That is just fertile ground for machine learning and artificial intelligence. But, you have to be able to grab all that data and get it to a place where AI and ML can actually work on it. I think the problem is that we’ve really kind of thought of biomanufacturing as just an operation rather than a source of new information. As we develop these bioprocesses and get them into different phases and different scales, we can learn so much, but there’s still data and information that we’re not using effectively or completely missing. > _I think the problem is that we’ve really kind of thought of biomanufacturing as just an operation rather than a source of new information…We need to be able to capture all that info one way or the other. And, it’s either going to be captured in a bioengineer’s head or as institutional knowledge for the business._ We need to be able to capture all that info one way or the other. And, it’s either going to be captured in a bioengineer’s head or as institutional knowledge for the business. Naturally, it’s much more valuable as institutional knowledge that can be shared and reused for future projects. 
Whereas, an individual’s knowledge is not very useful unless it is shared and codified in institutional knowledge. Today’s digital technologies can facilitate both individual and institutional learning. ## What roles do you think digital technologies can play in successful bioprocess development efforts in the future? If you’re launching a commercial process, so much effort is focused on the verification runs for both food and biopharmaceuticals. But imagine a world where you could easily aggregate all of the data you’ve collected from the early stages of product and process development when you are trying to understand your biologic’s expression, potential liabilities, post-translational modifications, and how the process affects them. You could then wrap all of that data up neatly into your package. In addition to your verification runs, you’d be able to show that there’s a huge depth of data demonstrating that the process is robust and controllable throughout various scales. As another example, I think using digital tools to institutionalize high-content data, like genomics or mass spec, is really critical for biomanufacturing. There’s so much data there you can revisit. But, how do you parse it? How do you make it useful? That type of automation is really going to be valuable for us going forward. ## Reflecting on your work on bispecifics and cultured meat, how do bioprocess development and CMC activities differ for newer modalities and product types compared to more established ones? It requires a lot more data generation and analytics. As you’re thinking about a new pharma product, you want to be able to understand both the known liabilities as well as the unknown liabilities. The same is true in the food space. As you go into a new food, you’ll have to have different compositions, different allergy concerns, _et cetera_. 
## What insight did you gain from working at a contract research company that changed your perspectives when it comes to hiring and working with CROs, CDMOs, and CMOs? Working on the other side of the desk, if you will, in a service organization and providing service gave me a great perspective on how better to do it in the future. When you’re thinking of externalizing or outsourcing R&D work, you really need to think of it as an external workbench. So, make sure that you have a great fit between the project’s technical needs and the capabilities of the CROs. Perhaps most importantly, you must build productive relationships with your contract partners to not only capitalize on their expertise but also understand their work and how it gets done in their labs. That helps you not only deliver on the technology but also communicate and really interrogate the data as much as possible. That kind of collaboration can lead to valuable serendipitous findings along the way. ## After working for years in biopharma, what brought you to work on cultured meat at Upside Foods? I think it’s a fascinating field, to be really honest. Everyone knows it is extremely expensive to produce a commodity product such as food using biotechnology, and cultured meat still requires some major innovations. But having watched huge advances in biotechnology for the last 20 years, the trajectory is very promising. It was really that trajectory of success that I think drew me to use biotechnology for food. To my knowledge, reaching a 50,000 to 100,000-liter bioreactor scale has not been done successfully for animal cell culture, but that’s part of the aim of cultured meat. The need to reduce the cost of goods sold (COGS) down to pennies on the pound is mind-blowing. But being a part of the innovation required to get to that stage is a lot of fun. In addition to the thrill of working on solving that technical challenge, I was also motivated to work with great colleagues and peers. 
I had previously worked with a number of folks at Genentech who are now in the food industry. Knowing that respected experts have made that jump already kind of opened my eyes to the potential of food. ## How are bioprocess development and biomanufacturing different between biopharmaceutical and food products? The processes that we are using for generating food products are largely the same as what we’re using for biopharmaceuticals. It’s the inputs and the outputs that are different. In biopharmaceuticals, we’ve been using the same cell lines for numerous years. We have a wealth of knowledge about how those cell lines behave, and we can apply that knowledge to the next project coming along. In the food space, we’re going to have a lot of different inputs from different cell substrates, species, and cell types. Each one of those cell types may require bespoke media and process development, which will take time. On the outputs side, the quality systems and regulatory requirements are different. And then, ultimately, because of the COGS demands and distribution mechanisms for a food product, your supply chain is going to be quite distinct as well. ## In your opinion, what are the most exciting things happening in biomanufacturing, biotech, and biopharma right now? Renewed focus on bioprocess data. Getting data into the hands of people developing the processes has been a blind spot for the industry for many years. So, I’m really glad that we’re seeing some attention brought to maturing data systems, and companies like Invert are helping us pull all this data together. > _Getting data into the hands of people developing the processes has been a blind spot for the industry for many years. So, I’m really glad that we’re seeing some attention brought to maturing data systems, and companies like Invert are helping us pull all this data together._ I also find the focus on using biotech for environmental causes is really invigorating. 
I just love to see people at conferences talking about sustainability and using biotechnology in other fields that maybe would’ve been prohibitive previously. The food industry is one of them. ## What are your thoughts on the application of AI and ML in bioprocess development and commercial biomanufacturing? AI certainly can change a lot of what we do if we can get the data in the right place. There are great opportunities for AI and ML to look at all of the data that we generate from R&D all the way out to commercial, and help us notice things that we’ve not paid attention to before. Could it be bioreactor diameter versus sparging? There may be an important connection there that we’ve just never explored that AI/ML can uncover. Ultimately, these types of automated data systems will take some of the routine activities out of biopharmaceuticals. Analyzing images, studying clonality, and looking for particles in vials, are all things that we can train AI to do. ## How do you think we can best grow the bioeconomy as a community? What improvements and solutions do bioprocessing and biomanufacturing still need? I think what we need to do is, of course, get more people into this field. I’m really looking towards government support of institutions to specialize in biomanufacturing. On a more tactical level, I really want companies to recognize the value of the data and build those systems early so that we’re not constantly fighting with IT and bolting on solutions that are just going to cost us money and slow us down. Taking that holistic data approach from the very beginning is really critical for our ongoing success. ## What are the most valuable lessons you’ve learned in your career? The one that I always think about comes from early on in my career as a postdoc. I realized that most investigators are not good managers or leaders because they’re trained as scientists, where they’ve had to really put their heads down and solely focus on a technical field. 
Working on my MBA was an eye-opener for me because it helped me better understand leadership, mentoring, and organizational dynamics, all of which we require in scientific endeavors even though they are not usually taught to scientific leaders. > _I firmly believe that prioritizing the well-being and professional development of our scientists and engineers leads to a more innovative and successful organization. When we invest in our people, we empower them to deliver exceptional work. The products will come later if you focus on the people first._ I firmly believe that prioritizing the well-being and professional development of our scientists and engineers leads to a more innovative and successful organization. When we invest in our people, we empower them to deliver exceptional work. The products will come later if you focus on the people first. ## What advice do you have for others aiming to make bioproducts? When I mentor others, I talk to them early on about the value of having technical depth. We are in a very technical, scientific-focused arena, and that is table stakes. So, having some depth of knowledge in a very specific field or specialization is critical. But from there, you need to branch out. I encourage people to join cross-functional teams and get their voices heard. Always think of everyone around you as somebody to learn from. ## When you’re not working on bioprocesses, what do you like to do with your free time? I’ve taken up cycling, so I try to do that every day. And then the other thing I’ve been doing, much to my wife’s chagrin, is renovating our house. There’s very little you cannot do with the right tool and YouTube! _Huge thank you to Steven for chatting with us and sharing his thoughts.
Thanks for reading, and stay tuned for the next edition!_ --- kind: blog title: "A Guide to Build vs Buy: Biotech Software Edition" slug: a-guide-to-build-vs-buy-biotech-software-edition date: 2025-01-12 author: "Alex Felt" category: Industry summary: "For biotech and synthetic biology companies, there’s always A LOT of work to do. As these companies quickly learn, trying to handle everything in-house–from strain design and high-throughput screening to process development and beyond–can spread funding and resources thinly." url: https://invertbio.com/blog/a-guide-to-build-vs-buy-biotech-software-edition markdown_url: https://invertbio.com/blog/a-guide-to-build-vs-buy-biotech-software-edition.md --- # A Guide to Build vs Buy: Biotech Software Edition For biotech and synthetic biology companies, there’s always A LOT of work to do. As these companies quickly learn, trying to handle everything in-house–from strain design and high-throughput screening to process development and beyond–can spread funding and resources thinly. This is especially true in difficult funding environments. So, biotech companies must carefully decide what they will take on internally and what they will purchase or outsource. Software is one of the best examples of this in the biotech sector, mainly because these companies tend to develop core IP and competencies in biology or chemistry, not software engineering. However, biotech software can be essential to the success of these companies, especially during research, development, and biomanufacturing activities. In these scenarios, biotech companies must decide whether to build or buy software. Given Invert’s focus on developing software for bioprocessing, our team has a lot of experience with this critical decision. 
Having come from the lab with significant experience deliberating on both options, our product team collected some food for thought and useful questions to help stimulate productive decision-making for biotech companies weighing the choice to build vs buy software. ## Getting Started: The Big Picture ## Vision & Ambition **First, you have to know the vision and ambition of your company.** Though that might sound like a “squishy” line of thought, it’s an essential factor in the calculation. If you don’t see software as a differentiator for your company, that creates philosophical pressure toward buying because it indicates that software is really a means to an end. If it is part of your differentiation, then there may be more pressure towards building, depending on what resources are at your disposal. Even if you do have a team for the job, remember that if you elect to build one thing, you’re also electing not to build another. Lastly, don’t forget about your timeline! Buying will probably get you there faster, but implementing purchased software and training users takes time, too. ### Focus Questions - Do you currently see software as one of your differentiators within the bio space? - Do you have developers, engineering managers, and product managers on staff (or plan to soon) to build software? - Do you trust your software team to pull it off? - Do they have the expertise to build it in a scalable way? - What will you be pulling them away from to work on this? - How soon do you _need_ a software solution? ## Innovation Doesn’t Always Mean Building In software development, there’s a general sentiment that you “build for advantage and buy for parity.” Though a nice platitude, the decision is usually more complicated than that, especially in the unique circumstances of biotech. According to Michael McCutchen, Product Manager at Invert, “Differentiation comes from good scientific and business processes irrespective of anything digital. 
The goal is to reflect those in your software.” To highlight this, let’s consider [Moderna](https://www.modernatx.com/en-US). Moderna is regarded as one of the earliest digitally differentiated biopharma companies, describing itself as a “[digital-first](https://investors.modernatx.com/news/news-details/2023/Moderna-Highlights-its-Digital-and-AI-Strategy-and-Progress-at-Second-Digital-Investor-Event/default.aspx)” company since Day 1. Over the years, [Moderna has talked openly about its digital strategy](https://www.modernatx.com/en-US/media-center/all-media/blogs/building-the-first-digital-biotech). Importantly, Moderna did not put out a lengthy manifesto about why they built everything from scratch. Instead, [they described](https://assets.ctfassets.net/87hacmv3x18u/1OUQZKc6OICHogQ5uxfVSC/335cda7362a13fa1689b953338603bfd/Moderna_Digital_WhitePaper_Digital_biotech.pdf) focusing on buying the best software (where available) and integrating them in a way that best enables the business alongside custom-built components. **Put simply, how software is implemented and integrated often matters a lot more than the sum of its parts.** ## Evaluate the Market Do your homework and search around to understand what commercial off-the-shelf (COTS) or software-as-a-service (SaaS) solutions exist, if any. Some companies provide biotech software solutions for strain development, process optimization, and scale-up. As one more unique example, [Emerald Cloud Lab](https://www.emeraldcloudlab.com/) provides remotely operated research offerings through a medley of digitally accessible tools. Here at Invert, we offer software for bioprocess data management that enables superior data traceability, live data access, analysis, machine learning-based modeling, and decision-making. 
For help determining what’s out there, check out this [Life Sciences Software Landscape](https://www.figma.com/file/VCl5eiJqgngwBgz3xi4WOi/Life-Sci-Software-Landscape-\(Public\)?type=whiteboard), which lists existing biotech software providers and their tools. If the solutions already exist, it’s probably better to buy. **It is difficult to gain a competitive advantage by building something your competition can purchase.** By buying established software, you can focus on your core competencies, creating greater efficiency and accelerating product development timelines. ### Focus Questions - Does a software solution already exist in the market? - How close are existing software tools to meeting your specific needs? - Does it matter to your internal team if the software is cloud-based? ## Implementation and Ongoing Use ## The Costs and Complexity of R&D and Biologic Data Heavier investment in R&D is a hallmark of biotech compared to many other industries. This difference in resource allocation creates different circumstances. - First, biotech companies that must spend a lot of funds on R&D will have less to spend on tools outside of R&D. - Second, the high R&D spending means that efficiencies found in R&D have an outsized impact. - Third, biology has more complex data sources that factor into decision-making, and you often need to synthesize **all** of your sources to extract value. In other industries, 80% synthesis might be good enough. In biotech, even one missing data source can skew vital decisions and create manual bottlenecks, diminishing benefits. As a result, you generally want “enabling” software, especially on the R&D side. “Enabling” software provides deeper understanding, insights, and efficiency regarding your data, processes, and applications. Think of enabling software as a tool to more reliably find missing knowledge you are already generating.
## Integration Since research and bioprocess development require many more data sources to make decisions, you must surface data from more groups, connect more variables, and synthesize information from more collaborations and teams. That’s **a lot** of integration. You need to have a clear view of your existing hardware, software, and data sources from across teams and understand how new software fits into the picture. As part of this, look into the interdependence of your current hardware and software. Getting custom and off-the-shelf software to “play nice” isn’t trivial. If you currently have a lot of custom applications that need to be closely integrated, a built solution might make sense. But, if you’re using other off-the-shelf options, there’s a good chance the provider is aware of common integrations and has already implemented them. Finally, don’t forget about the human side of integration: training! Given that your team is probably much more versed in biological sciences, they might not have much additional time and energy to decipher and learn convoluted software. Seek out software with an intuitive user interface that requires minimal training. ### Focus Questions - How much existing software do you use? - What existing custom-built software/applications do you have (if any)? - What is the interdependence of your current hardware and software? - What percent of your existing relevant data sources can be fully integrated with the software? - How intuitive is the available software to use? - How much training will it require to onboard the team? ## Additional Considerations for Buying Software ## Pressure Test with an Open Mind If you’ve moved closer to the decision to buy software, take this opportunity to ask the provider questions and pressure test the most critical functions. You need to ensure the software solves your primary challenge(s).
At the same time, keep an open mind about functionalities outside your checklist since the software might have powerful features available that simply aren’t on your radar. ## Fitting Your Niche Since a commercial software solution is designed to serve a range of customers and users, there’s always a possibility that it will have gaps in your specific niche. The more specialized your bioproduct, bioprocess, analyses, or operations are, the more likely this becomes. So, when engaging a biotech software company, talk through the specific details of workflows with the provider to confirm that their offering applies. ### Focus Questions For Software Providers - Does the tool do what we need and expect of it? - What else does it do? - What are the support costs for the software? - What will it cost to integrate their software with your current infrastructure? - What is the cost of ingesting existing data? - How long is it expected to take to train a team to use it? - How flexible is the software? - What IT certifications do they have? - _Note: SOC 2 Type II, ISO 27001, and 21 CFR 11 are usually the big three._ - Ask for a list of compliance details. - What audit tracing and other security measures are in place? - What failsafes exist? - When new features are released, can you opt out of incorporating them without losing functionality _(in the event they are not compliant)_? ## Additional Considerations for Building Software ## Manage the Life Cycle Biotech companies change fast. But, in building a piece of software, you must recognize that it’s never really “done.” Building one requires an ongoing commitment to maintain it, even as your organization changes rapidly. So, you must manage the risk associated with company changes and staff turnover, especially if you build your own software. Imagine the headache if the only person on your team who knows how to use an important piece of software leaves the company.
Regardless of whether an internal or external team built the software, define who on your team is involved and who “owns” it going forward. In short, you need to know who exactly is responsible for managing upkeep, training new users, and maintaining institutional knowledge of the software. It is also important to note that even built software is not a permanent solution. **“It’s exciting to talk about building a solution. It’s less exciting but equally important to plan how to retire that solution gracefully,” says McCutchen.** You will eventually sunset software. So, you may want to focus less on building the perfect solution and instead on building timely solutions that on-/off-board most effectively. ## User Experience Matters, Especially with Customers Software that helps you interface with your customers or partners can significantly impact your bottom line. User-friendly online portals provide a lot of value very quickly, regardless of the rest of your data infrastructure. For example, a CDMO or CMO with an excellent customer interface for [tech transfer](https://blog.invertbio.com/bioprocess-tech-transfer-navigating-the-data-dilemma/) and client communication is more likely to retain business. ## Outsourcing a Build If you’re trending towards building software, you also can opt to have a custom software development company execute the vision instead of your team. Conceptually, one is not better than the other, but they are different. For outsourcing, you have to have a clear idea of what you want well ahead of time, whereas internally, you can be more agile. Notably, full outsourcing can create headaches if the service contracts are poorly constructed. Usually, this means you will pay a high management fee or need to find a way to maintain it yourself. Since software is never really “complete,” outsourcing a build requires you to keep someone on staff or on retainer to keep fixing it.
### Focus Questions for Building - Who is the primary person or team responsible for managing upkeep, training new users, and building institutional knowledge? - How do you plan to manage the life cycle of new software? - Will your customers ever need to interface with the software? - Ask contract custom software development companies… - What do their standard service contracts look like? - What is the annual cost? - What is the expected lifetime of software compared to the length of the service contract? ## You Got This! If you’re not living and breathing biotech software, the decision to build or buy can feel overwhelming. Though it is a complex decision, it becomes easier as you understand the big picture, clarify your organization’s needs, think through implementation, and ask good questions of providers and your team. We hope these considerations help you navigate this decision more confidently so that you can focus on what you care about most: impactful biology. Otherwise, if you need more help with biotech software or are looking for a better way to manage and analyze bioprocess data, reach out to Invert and [request a demo](https://invertbio.com/?signup=1). --- kind: blog title: "Bioprocess Tech Transfer: Navigating The Data Dilemma" slug: bioprocess-tech-transfer-navigating-the-data-dilemma date: 2025-01-12 author: "Masaki Yamada" category: Industry summary: "The transfer of institutional data and knowledge is critical to the development of biotechnologies, bioproducts, and biotherapeutics." url: https://invertbio.com/blog/bioprocess-tech-transfer-navigating-the-data-dilemma markdown_url: https://invertbio.com/blog/bioprocess-tech-transfer-navigating-the-data-dilemma.md --- # Bioprocess Tech Transfer: Navigating The Data Dilemma The transfer of institutional data and knowledge is critical to the development of biotechnologies, bioproducts, and biotherapeutics. 
At some point in its lifespan, a biotech company will need to perform bioprocess technology transfer to deliver information from one facility to another. It’s an unavoidable step for bringing a bioproduct to market. Usually, this occurs when a company hires a contract development and manufacturing organization (CDMO) or contract manufacturing organization (CMO) for bioprocess development, scale-up, and at-scale biomanufacturing. The [contract biomanufacturing market and capacity continue to grow](https://www.marketsandmarkets.com/PressReleases/biotechnology-ccontract-manufacturing.asp), especially to accommodate [the production needs of innovative and early-stage bioproduct companies](https://cen.acs.org/business/biobased-chemicals/US-aims-close-fermentation-capacity/101/i9) that lack internal infrastructure for bioprocess development and scale-up. Bioprocess tech transfer is also done internally within companies, where processes developed in research and development groups must be passed to their pilot and manufacturing counterparts. However, transferring massive reserves of information is far from a trivial process, often presenting as a [difficult and risky step](https://www.biopharminternational.com/view/handling-risky-business-how-ensure-successful-technology-transfer) along the path to commercialization. Through no fault of bioprocess teams, [tech transfer struggles can add significant expense and delay time to market](https://bioprocessintl.com/business/risk-management/modern-technology-transfer-strategies-biopharmaceutical-companies/). Both can be devastating, particularly for early-stage innovators competing in a challenging market with limited funds.
Though there are [different aspects that can make transferring bioprocesses demanding](https://bioprocessintl.com/manufacturing/manufacturing-contract-services/unraveling-the-complexities-of-tech-transfer/), [sharing organized, detailed, and contextualized bioprocess data remains a core challenge](https://www.biopharminternational.com/view/addressing-the-key-pitfalls-hindering-technology-transfer-success). Making matters worse, as bioproduction approaches continue to diversify and grow more complex, this challenge is multiplied. Despite this, there is a temptation to underestimate tech transfer. As many bioprocess experts would agree, it’s [better to start planning for tech transfer early](https://bioprocessintl.com/business/risk-management/modern-technology-transfer-strategies-biopharmaceutical-companies/) (or better yet, right now) than shortly before kick-off. To encourage thinking ahead, this blog contextualizes the difficulty of bioprocess tech transfer, especially as it relates to managing bioprocess data and building coherent transferable institutional knowledge. After reviewing the key challenges, the blog will also discuss Invert’s framework for improving how companies share vital biomanufacturing information. ## Understanding the Cost and Risks of Tech Transfer First, it helps to acknowledge that tech transfer is just hard. It requires many stakeholders to come together to share and understand complex biological data and bioprocess information, and every detail matters. Complicating the picture, everyone is time-limited and bioprocesses, including required upstream and downstream infrastructure, vary significantly. There is not a one-size-fits-all approach to biomanufacturing, even within common umbrella production formats (like fed-batch microbial fermentation, mammalian cell culture, cell-free systems, etc.). This means there are many differences across bioprocesses and bioproducts as you increase scale.
In addition, and this is something people don’t always expect, you also have to learn your CDMO/CMO team’s capabilities, work habits, preferred terminology, and idiosyncrasies to find effective communication mechanisms. ### The Cost Given wide market and product variability, the exact costs of tech transfer can be hard to pin down. But as an example, [Seqens](https://www.seqens.com/), a small molecule-focused pharmaceutical and specialty ingredient CDMO, once indicated that it’s common to [expect to pay $6,000-$10,000 per week for tech transfer and familiarization](https://www.seqens.com/how-to-keep-cmo-costs-down-during-the-process-optimization-stage-of-drug-development/) (though some, like Seqens, charge a flat rate). Given that [biomanufacturing tech transfer can realistically take 6 to 9 months](https://bioprocessintl.com/business/risk-management/modern-technology-transfer-strategies-biopharmaceutical-companies/), with some transfers going much longer (as opposed to 8 to 16 weeks for small molecules), it becomes clear that making the tech transfer process more efficient can save a lot of expense. ### The Risk Though scale-up is routinely understood to be a hyper-critical juncture, the specific impact of bioprocess tech transfer gets baked into the larger step and is often lost in context. Simply put, a lot of things need to go right during bioprocess tech transfer for a successful scale-up outcome. Large sums of money are spent on bioproduction runs, making this stage a risky and vulnerable time. To put this into clearer context, [one 2019 study from Contract Pharma](https://www.contractpharma.com/issues/2020-01-01/view_features/biopharma-contract-manufacturing-pricing-analysis/) collected 37 reported entries of CMO batch prices for 500-liter GMP biopharmaceutical mammalian cell culture. At the time, batch prices averaged $726,000 ± $149,000. 
The report also indicated that these prices likely excluded the costs of “engineering runs, change orders, raw materials, and consumables.” While the cost of production runs can vary significantly, the [Atos group published a blog](https://atos.net/en/blog/faster-easier-and-cheaper-technology-transfer-a-new-differentiator-for-pharma-and-biotech-companies) earlier this year that shared their own estimates of average batch costs in biotechnology, reaching $2.5 million per batch (including both the tech and validation batches), though scale and bioproduct information were unclear. Hitting snags in productivity or performance at scale, whether due to the bioprocess itself or its execution, can trigger costly rework. Digging through massive amounts of data to find a root cause, doubling back on R&D efforts, and ultimately booking more production runs all lead to increased spend and delayed timelines (especially if the CMO’s capacity is booked out for months). Without a doubt, the need to spend (or raise) additional funds because of inefficiencies or challenges in sharing, deciphering, and utilizing complex bioprocess data, however legitimate, casts a pall over the whole operation. Thus, despite its challenge, the operability of bioprocess data holds central importance. ## The Status Quo of Bioprocess Data Management in Tech Transfer Sharing process information, internal data, analysis workflows, and experimental results lies at the heart of bioprocess tech transfer. To get a new team up to speed for further bioprocess development and scale-up, they need to know what’s been done, what the outcomes were, where the existing bioprocess edges are, and more. Ideally, by the end of tech transfer, these teams are working off the same research knowledge as the original team, such that they can build on it efficiently. ### A Lack of FAIRness However, biotechnology companies amass considerable expertise and institutional context as they research and develop their products. 
Data sets, analyses, or familiar terminology that appear clear to your team may not be as readily understood by an outside party (like a CMO). Plus, the sheer scale of this data can make it difficult to collect and share appropriately, let alone understand. For example, an internal team member may know which spreadsheet tab has the answer they are looking for, but that information is lost in translation once removed from the immediate team. In addition, if your internal group spans multiple teams and sites, the process becomes all the more arduous. A lack of [FAIR data management principles](https://www.nature.com/articles/sdata201618) (**F**indable, **A**ccessible, **I**nteroperable, **R**eusable) at a product company becomes a barrier to working efficiently with CDMOs and CMOs. Without implementing FAIR principles, it becomes exceedingly painful to parse through minimally curated data to bring a contract partner or internal production site up to speed and then transfer their data back out to the product company. Unfortunately, FAIR practices are not as common as you might think, due to the difficulty of implementing them manually. ### Tech Transfer Packages Even data that is accessible and well-structured is not enough to make this process go smoothly. Bioprocess engineers must also properly communicate contextual information about things like process design, performance, and at-scale predictions based on well-characterized workflows. To accomplish this, product companies develop [Tech Transfer Packages](https://www.linkedin.com/pulse/key-tactics-scaling-fermentation-bioprocesses-confidence-omtqc/?trackingId=djl1myh2RvSDT11RY0A1LQ%3D%3D) (TTPs) to consolidate vital bioprocess data and information in a collection of long documents, PDFs, and supporting data in spreadsheets. Manually generating and sharing these static TTPs can create complications, both during external transfer and during data transfer back to the product company. 
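To make the idea of FAIR, contextualized bioprocess data concrete, here is a minimal illustrative sketch of what a machine-readable run record might look like. This is not Invert’s actual data model; every field name below is hypothetical and chosen only to show how identifiers, events, units, and measurements can travel together:

```python
# Hypothetical sketch of a FAIR-style run record (NOT Invert's actual schema).
import json

run = {
    "run_id": "FERM-2024-017",             # Findable: a stable, unique identifier
    "experiment": "fed-batch scale-up",
    "organism": "E. coli BL21(DE3)",
    "scale_liters": 500,
    "events": [                            # context that often lives only in someone's head
        {"t_hours": 0.0, "type": "inoculation"},
        {"t_hours": 18.5, "type": "feed_start", "note": "glucose feed started"},
    ],
    "timeseries": {                        # Interoperable: explicit units, shared time axis
        "od600": {"unit": "AU", "t_hours": [0, 6, 12], "values": [0.1, 2.4, 11.8]},
        "do":    {"unit": "%",  "t_hours": [0, 6, 12], "values": [98, 72, 41]},
    },
}

# Accessible/Reusable: serialize to an open format that any partner,
# internal or external, can parse without knowing which spreadsheet tab to open.
payload = json.dumps(run, indent=2)
```

Because units, timestamps, and events travel with the measurements, a receiving team can query the record programmatically rather than reverse-engineering a spreadsheet.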
### External Bioprocess Tech Transfer Product companies first need to pull together their tech transfer package, including all their bioprocess information and data. The amount of time and effort this takes depends significantly on the level of FAIR principle adherence and site-wide standardization of analysis workflows. Even the most diligent teams that established these principles well in advance still need to neatly compile ALL of this information such that it can be easily understood by external teams. In less ideal scenarios, teams can struggle with inaccessible data spread across many files and varied analysis workflows. As a result, teams can inadvertently cherry-pick their best runs or exclude critical details, creating mismatched expectations, a lack of awareness of failure modes, and more. A bioprocess scientist or engineer should be able to run a model across their entire dataset to get a complete picture of their process. From there, they should be able to select representative runs and easily identify runs that performed differently due to known deviations. Though it should be easy to share process execution and determine the performance landscape, bioprocess engineers simply don’t have the resources, time, or tools to make this happen. TTPs are then sent via email as static files that require careful version control. As additions and edits are made, new versions must be emailed back and forth. Naturally, this requires that all relevant stakeholders have the same access and maintain a keen eye on the versions. As you might expect, messages asking “What version is the latest one that I need to look at?” are all too common. This requires the most plugged-in stakeholders to re-familiarize themselves, confirm the details, and communicate back to the wider group. 
### Transfer Back to the Customer Even after laboring through all that tedious bioprocess data management and TTP development, the customer still needs to ingest information from their external partner(s) and analyze the results against internal runs. Something we hear A LOT at Invert is how manual and time-consuming it is to take data from the CMO, bring it into your existing analysis environment, and analyze the results against your internal, well-characterized process data sets. Since every CMO is different, data packages are all formatted and laid out differently. Plus, these are often very complex data sets: a mix of measurements, observations, execution documentation, and final results. This makes tight timelines and turnaround times more stressful and less effective than they deserve to be. According to an Invert customer, **“We are forced to spend all of our time on data engineering, leaving little to no time on data science.”** Just processing the data package to find insights takes all of their bandwidth instead of actually leveraging the data. ## Tech Transfer with Invert Biotechnology companies need a more modern tech transfer tool to make the process more straightforward. Instead of the status quo, product and contract teams need to be seamlessly speaking the same tech transfer “language.” That language requires managing, processing, and analyzing data in a common, accessible, and more automated environment. Invert’s core focus is on operationalizing bioprocess data and information more effectively. Thus, when designing our [bioprocess data management software](https://www.invertbio.com/), we recognized that it needed to streamline bioprocess tech transfer. Ultimately, a key goal was to transform [tech transfer packages](https://www.pharmoutsourcing.com/Featured-Articles/182576-Technology-Transfer-Packages-for-Contract-Manufacturing-Iterations-During-the-Development-Cycle/) into “living” knowledge hubs. 
Put another way, Invert’s software can act as one central “source of truth” shared between product and contract manufacturing companies. Invert’s software emphasizes sharing information between companies with improved traceability, auto-contextualization, and detailed record keeping: complete with raw data, metadata, and calculated results. As an added benefit of using Invert, CDMOs and CMOs have found that they are able to market a streamlined tech transfer process and provide those benefits to their customers. Invert’s software drives tech transfer through **secure** data sharing, report generation, and improved planning and communication features. On the **data sharing** side, customers can provide contextualized data, including relevant layered data (like events, protocols, time-series data, and scale context), instead of inconsistent and complex spreadsheets, documents, and PDFs. For **reporting**, customers get structured access to contextualized data they can then compare to secure historical data, with complete traceability. To support improved and more direct **planning and communication**, CDMOs and CMOs can share explicit details about the planned tech transfer experiments to confirm alignment and understanding of the appropriate campaigning strategy with the customer. Looking ahead, Invert is also adding a number of features to further streamline tech transfer. Among those, we are building out automated tracking of acceptance criteria and deviation reports for quality tracking, as well as scaling calculators with predicted performance modeling capabilities. If you’re interested in planning ahead for bioprocess tech transfer or want to see how our software can improve your next contract partnership, reach out to [book a demo](https://invertbio.com/?signup=1) today! 
Otherwise, if you are looking for a broader introduction to working with contract manufacturers, you can read [our recent Inside Biomanufacturing newsletter](https://www.linkedin.com/pulse/key-tactics-scaling-fermentation-bioprocesses-confidence-omtqc/) co-written with Liberation Labs. --- kind: blog title: "From Data Lakes to AI: An Interview with Invert’s Head of AI" slug: from-data-lakes-to-ai-an-interview-with-inverts-head-of-ai date: 2025-01-12 author: "Holger Thorup" category: Team summary: "Suhas Guruprasad, the Head of AI at Invert, dives into the evolution of artificial intelligence, distinguishing between the actual capabilities of AI and the often exaggerated hype surrounding it." url: https://invertbio.com/blog/from-data-lakes-to-ai-an-interview-with-inverts-head-of-ai markdown_url: https://invertbio.com/blog/from-data-lakes-to-ai-an-interview-with-inverts-head-of-ai.md --- # From Data Lakes to AI: An Interview with Invert’s Head of AI _Suhas Guruprasad, the Head of AI at Invert, dives into the evolution of artificial intelligence, distinguishing between the actual capabilities of AI and the often exaggerated hype surrounding it. He also explores the potential implications and transformative effects of AI on the bioprocessing industry._ ## You were the Head of Engineering at Zalando, tell me about your role there. What is one thing you built that you’re proud of? **Suhas:** I joined as a Senior Engineer in 2017 and was later promoted to management roles. Initially, my focus was on Data Lake and Big Data technology. I was part of the team that wrote Data Lake v1 at Zalando. It had the capability to process, transform, and ingest terabytes of data in real time! I was always both a mathematician and a computer scientist at heart, so I have had an attraction to AI for as long as I can remember. In school, I’d build toy apps that tried to mimic intelligence, but they failed each time, of course. Now I realize how far we’ve come in terms of AI as a society. 
Primarily, that’s due to the exponential reduction in data storage costs and exponential increase in computing power. At Zalando, I built the team that created one of the first ML platforms in Europe. We had a working end-to-end ML platform all the way back in 2018… that feels so far away now. When I left in late 2022, there were hundreds of ML models trained and deployed every single day on the platform, safely and securely. We reduced the lead time to shipping AI products by months, or in some cases years. The lever that it created was massive. ## What made you decide to join Invert? **Suhas:** When I joined Invert, I’d been working in e-commerce for over 7 years and, by then, running a large organization. I definitely wanted to try something radically new. I guess I was looking for a combination of building something with my own hands and running a team. I was talking to several founders, mostly early-stage start-ups, both in Fintech and Biotech. I’m actually quite passionate about Fintech, so I was very sure I’d end up in one. But luck had it otherwise, and how lucky I’ve been. In my first conversations with both Martin and Holger, I remember coming out of them absolutely impressed. I sensed how smart and passionate they were, and thought that I’d definitely want to be around them. Ultimately, I’m an accelerationist at heart. Biology and its applications are so tangible and magical at the same time. Being able to deploy technology to accelerate the development of life-changing products such as therapeutics, food, and materials is something that I’m so lucky to be able to do. ## Has your lack of a biology background been a challenge at all, or is it also a strength? **Suhas:** It definitely can be a challenge. I certainly believe domain knowledge is essential to building great products. So right now I’m on a mission to study as much biology as possible in as little time as possible. 
Having a vision is only possible when you really know your field inside and out. You know it so well that you’re able to see the future. Although I’m not yet an expert in bioprocessing, Invert is filled with people who are. So we’re really able to lean on each other to develop solutions that address the evolving needs of bioprocess scientists and engineers. Also, to my delight, a lot of the literature coming out of academia is closer to computer science than I expected. I read a lot of these papers, and they’re all talking about things like PCA, neural nets, hybrid models, data quality and cleanliness, Bayesian thinking, and all those usual topics that one would find in a good computer science college course. The additional advantage I believe I have is close to 15 years of shipping software products. So, I come with an absolute focus on shipping software for the biotech industry. ## AI can be a bit ambiguous – what excites you about that tech? How has it been over-hyped? **Suhas:** Most of the AI conversations today are around LLMs, or Large Language Models. If you look a bit at AI history since the 1950s, we have experienced high and low seasons – we call them summers and winters. [The Wikipedia article on AI winter is actually great and I recommend reading it](https://en.wikipedia.org/wiki/AI_winter). We’re typically in 10-year hype cycles with AI and currently, we’re in the hottest of summers. Since the 2010s, a series of neural network architectures have taken us by storm and perform quite well on various tasks. The latest is something called a transformer network, which improves on recurrent networks (RNNs) and long short-term memory networks (LSTMs). Google Translate, for instance, switched to LSTMs back in 2016, and the team achieved in 9 months the same performance the previous team had taken 10 years to reach. So essentially a bunch of people from Google figured out a magic trick in 2017. 
They called this “attention,” as in “hey, pay attention.” Forbes actually has a good article on this for the casual reader called [“Transformers revolutionized AI. What will replace them?”](https://www.forbes.com/sites/robtoews/2023/09/03/transformers-revolutionized-ai-what-will-replace-them/?sh=71e41e659c1f). What we see today, for example with ChatGPT, stems from this little magic trick. If you’d asked me 2 years ago what AI is, I would’ve told you that AI is just a marketing term. You know, it might actually be different today. It’s all so dramatic in AI right now. If you’re following the conversation these days, there are two camps emerging: the accelerationists, led by Marc Andreessen, Yann LeCun, and Andrew Ng; and the doomsayers, including Elon Musk, Eliezer Yudkowsky, and AI pioneer Geoff Hinton, who is often called the Godfather of AI. Imagine, the Godfather of AI calling for a stop on AI research. The accelerationists say we should keep going, and do even more with AI, while the doomsayers are calling for a stop to all AI development. Although, recently we have seen a complete reversal from Elon Musk with the release of Grok, a competitor to ChatGPT. Some of the hype is well deserved — we have for the very first time in history passed many Turing test equivalents. GPT-4 has passed several human-level exams, like the Bar exam with 298/400, LSAT with 163, SAT with 700/800, GRE with 163/170. These are all impressive numbers. So in that sense, everything about AI is exciting right now. It’s a melting pot of political, economic, technological, and anthropological ideas, all happening in real time, at warp speed. ## What are some of the ways you see AI/ML playing a role in biotechnology R&D and MFG in the next 5 years? **Suhas:** I have to obviously start by mentioning Copilot, which we just shipped to all of our customers. 
With Copilot, people are able to interact with bioprocess data in natural language, just like they would with ChatGPT. One of our internal goals was to see how easy it would be for scientists to just think of a question, ask it out loud to Invert, and have Copilot create a set of analyses using large amounts of contextualized bioprocess data. No coding involved. It’s a definite UI/UX breakthrough for data exploration, and very relevant for speeding up experiments as well as removing the barrier between users and their data. The more exciting thing that comes to mind is the proliferation of models like AlphaFold and AlphaMissense. So I’m following the work of Isomorphic Labs religiously. They’re that DeepMind spin-off company which now focuses only on biology problems, with a special focus on AI for drug discovery. If you step back and look at information manipulation as the primary lens, biology is probably running a couple decades behind. We were manipulating bits with vacuum tubes as early as the 1900s, though semiconductors came much later. In contrast, things like recombinant DNA came out in the late 1970s. So one can look at the arc of computer science for clues on what’s possible with manipulating information once the power to manipulate is there. The real challenge though, and I can’t stress this enough, will be to productize and ship AI so that it’s applied to the right problem at the right time. Ideas in AI/ML have existed for a while, but productizing them, and making them available on top of any data size, and at any scale, will be a key differentiator. Those who are able to do this will pull ahead, and those who don’t will fall behind. ## What are some of the challenges of applying AI/ML to bioprocess data? **Suhas:** The first thing that comes to my mind is data quality. I often see poorly organized lab sheets where data is spread across different sheets, workbooks, you name it. Time measurements are not synchronized. Missing data. 
Unstructured data. All sorts of issues with data quality. [At Invert, we actually call this the 6s problem.](https://blog.invertbio.com/the-bioprocess-data-management-challenge/) But to be fair, data quality is an issue in every industry vertical — anywhere there are humans! It’s important to note that all of modern deep learning is possible due to massive amounts of labeled data. A model is only as good as the quality of data that you have; “garbage in, garbage out,” as the saying goes. One of the things that we’re doing at Invert is figuring out how much we can remove that human-in-the-loop. It’s funny when I think about this: in reinforcement learning, human-in-the-loop is so useful as a rewarding mechanism for AI, but for structured data curation, it’s best if humans are not there at all. So we’re writing these agents that directly read measurements from bioreactors. No lab sheets! We’ve come a long way on this by already supporting some of the major reactors in the market like Sartorius, Eppendorf, Solaris, and many others. I think that’s really exciting to me – with a real-time stream of high-quality bioprocess data directly from the reactors, across various scales, imagine the possibilities. Things like automatic anomaly detection with recommended interventions, mid-run performance predictions, and even intelligent process control and scale-up. The other macro challenge I see is in what I call world modeling. For instance, in e-commerce when you’re predicting sales, or in stock markets when you’re predicting stock price, you consider some world model. A model with world events, news, weather, etc. That is always a challenge. You never really know what’s going to happen in the future. Just like world modeling at the macro level, there are a lot of similarities when modeling at the cellular level within a bioreactor environment. 
I see some advances with metabolic modeling, for instance the [CHO Simulator post recently published by Asimov](https://www.asimov.com/blog-post/metabolic-simulator#title1). That really is quite exciting. ## What are you most looking forward to building at Invert or is there a particular project you’re excited about? **Suhas:** The obvious cop-out answer is everything! I really feel excited about the momentum that we have. We’re also not doing any tricks. We’re not an “Uber-for-X”. We’re a bunch of smart people building software for biomanufacturing. This premise, put in the context of how this industry vertical is set to grow, is kind of mouthwatering to me. If you want me to pick something in particular, then I’d say I’m really excited about building all kinds of process intelligence into our offering. Think about it. We automatically get data in. We visualize it beautifully. Now, we also provide intelligence and guidance on the processes. That’s a whole productivity package. The perfect biomanufacturing software platform. It also completes my love story with productivity tooling and reducing lead time. I want biologists to ship their products much faster than the status quo today. It’s the accelerationist mindset. ## What advice would you give to biotech companies who are interested in using ML/AI/Data Science to further accelerate their development? **Suhas:** [Talk to Alex about Invert!](https://calendly.com/a-felt/invert-product-demo?month=2023-11) Right, so one of the biggest mentoring challenges that I faced at Zalando was helping team leads make the right tooling decisions and pick the right tech stack for their product. There was always a build-vs-buy battle — should I build something myself or buy something someone has already made? It actually comes down to a bunch of variables — skills and talent pool, dollar amount left in the bank, product maturity, senior management vision, and so on. 
I also really like the concept of the [Idea Maze from Marc Andreessen](https://a16z.com/tag/idea-maze/). If you’re a leader, you should really know what decision to take next, because you’ve kind of exhausted all other possible ideas in the idea maze. So in that context, I think this is leadership advice. Good leadership is half the job. Whether or not you plan on investing in AI, either internally or externally, make sure you are always maintaining clean and contextualized (labeled) data. It will help your efforts today, and also provide you with that foundation in the future if and when you decide to deploy AI. All else aside, if there’s one key piece of advice I have to give, it would be this — AI is here, and it’s real. The world’s smartest and most talented people are jumping on the AI bandwagon and trying to find every nail to hit with this Thor’s hammer. Before you know it, someone might disrupt you. Therefore, approach AI with a heightened sense of urgency. That would be my advice. --- kind: blog title: "Pioneers in Bioprocessing: Q&A with Alexi Goranov of SCiFi Foods" slug: pioneers-in-bioprocessing-q-a-with-alexi-goranov-of-scifi-foods date: 2025-01-12 author: "Alex Felt" category: Interviews summary: "Exciting news! Invert is launching a Q&A series, Pioneers in Bioprocessing, to chat with experts in bioprocessing and biotechnology to discuss their work, the potential of the bioeconomy at large, and the personal viewpoints of the individuals who make this work possible." 
url: https://invertbio.com/blog/pioneers-in-bioprocessing-q-a-with-alexi-goranov-of-scifi-foods markdown_url: https://invertbio.com/blog/pioneers-in-bioprocessing-q-a-with-alexi-goranov-of-scifi-foods.md --- # Pioneers in Bioprocessing: Q&A with Alexi Goranov of SCiFi Foods _Exciting news!_ **_Invert is launching a Q&A series, Pioneers in Bioprocessing,_** _to chat with experts in bioprocessing and biotechnology to discuss their work, the potential of the bioeconomy at large, and the personal viewpoints of the individuals who make this work possible._ _To lead off the interview series, Invert spoke with_ [_Alexi Goranov, Ph.D._](https://www.linkedin.com/in/alexi-goranov-1292b12a/)_, VP of R&D at SCiFi Foods._ [_SCiFi Foods_](https://scififoods.com/) _is an alternative protein company that_ [_aims to create the world’s first cultivated beef_](https://scififoods.com/about)_, beginning with a burger made of both SCiFi’s beef cells and a proprietary formulation of plant-based ingredients._ [_SCiFi’s hybrid burger_](https://www.vox.com/the-highlight/23378912/meat-animals-beef-cultivated-in-vitro-food-plant-based-animal-welfare-impossible-burger) _aims to remedy the “beef-taste” limitations of plant-based meats while also addressing the_ [_significant cost-of-goods issues_](https://thespoon.tech/cultivated-meat-is-on-sale-but-its-pricey-a-new-study-shows-how-to-bring-the-cost-down/) _that currently plague cultured meat start-ups (especially as it relates to_ [_cell media_](https://gfi.org/resource/analyzing-cell-culture-medium-costs/) _and_ [_manufacturing sites_](https://www.sciencedirect.com/science/article/pii/S2666154322000916?via%3Dihub)_). 
Impressively, SCiFi Foods has already managed to_ [_reduce their cultivated beef production costs by 1,000-fold_](https://www.fooddive.com/news/scifi-foods-cell-based-cultivated-beef-1000-cost-reduction/627122/) _through novel cell line research and development initiatives using targeted CRISPR gene editing._ _Note: this interview was edited for length and clarity._ ## To start us off, tell us a bit about your background and how you ended up coming to SCiFi Foods. My background is in molecular biology and genetics. After coming out of academia, I worked at Zymergen for almost six years as a senior scientist and research director, getting my first exposure to fermentation technology and how important it is for manufacturing goods. From there, I joined SCiFi Foods. It was an exciting new field to dive into, even knowing full well how challenging and difficult it would be. But it felt nice to know that the goal was to get the cells to do what they want to do (which is to grow), as opposed to making the cells produce things they normally wouldn’t, as is common in the broader synthetic biology space. ## In your view, what is the problem with animal agriculture as it stands now? There are several. The intensity of it, especially in industrial agriculture, is very heavy. That’s leading to deforestation and significant greenhouse gas emissions. [Animal meat production](https://thebreakthrough.org/issues/food-agriculture-environment/livestock-dont-contribute-14-5-of-global-greenhouse-gas-emissions) is responsible for something like [12%](https://foodandagricultureorganization.shinyapps.io/GLEAMV3_Public/) to [20%](https://www.nature.com/articles/s43016-021-00358-x) of humanity’s greenhouse gas emissions. So, if all of us collectively care about climate change and want to do something about it, then that is one area where we have a reasonable shot at making a difference. 
Specifically, [beef is the worst offender](https://www.fao.org/documents/card/en/c/cc9029en), especially if you just look at the [feed-to-meat efficiency](https://ourworldindata.org/grapher/energy-efficiency-of-meat-and-dairy-production). So, that seemed to be the thing to focus on, and that’s what we are doing at SCiFi Foods. ## What would you say makes SCiFi Foods’ approach unique? I think we’re starting with the end in mind. From our early techno-economic analysis, it became clear that the process needs to be really, really simple. The first cultivated meat companies started about a decade ago, with [the first lab-grown meat burger in 2013](https://www.bbc.com/news/science-environment-23576143). Originally, a lot of folks started with wanting to produce a tissue, which is a lot more expensive and takes longer. We decided to do something different by creating a product that is not 100% animal cells. Instead, we use some percentage of beef cells in the product, but it’s mostly plant-based. Three years ago, I would say we were probably the only ones thinking that way. Now, a number of other companies are starting to come to the same realization. In addition, we deploy genetic engineering as well. Though genetically modified foods are a little bit of a touchy subject for some, we do feel that if we really want to give this a shot, that was something we had to do. Otherwise, the chance of success would be close to zero, for beef at least. ## Can you give us a peek at some specifics of what you’re working on right now at SCiFi Foods? The R&D work is actually progressing quite fast. We currently have beef suspension cell lines growing without using [microcarriers](https://www.frontiersin.org/articles/10.3389/fnut.2020.00010/full). We also have those cells growing well in a simplified media that lacks various common animal-based media ingredients. 
Now, the team is really working to figure out how to increase the yields, intensify the bioprocess, and reduce expensive components by leveraging our engineering, media, and testing capabilities.

## Recognizing that this is very hard to predict with certainty, when do you think it will be commonplace for people to be grilling burgers from cultured cells?

I’d say at least a couple of years. There are a small handful of companies right now that have the approval to produce. But, I am a little bit skeptical that they can produce at a scale that makes the product easily available, especially to go and purchase at your favorite supermarket. Most likely, they’ll first be available at some restaurants and special events. In addition to availability, there are multiple other considerations when you decide to sell directly to consumers because of supply, labeling, etc.

## Beyond supply and distribution, what are some of the other primary hurdles?

Assuming that everybody gets the regulatory approval, there is also obviously consumer acceptance. For us right now, as far as we understand, [the biggest challenge there is taste](https://gfi.org/resource/consumer-insights/). I think a lot of people are willing to try cultured meats, but are they going to be return customers? I’m a carnivore. I would not give up meat, but I would like to diversify my diet and have a lesser environmental impact.

**“That to me is the question: can cultured meat producers make products that taste similar to animal meats at a low enough cost such that consumers can feel good reaching for a more sustainable meat product?”**

## What would you say makes cultured meat bioprocesses unique compared to other biomanufacturing efforts?

There are a lot of unknowns. Everybody knows how to work with common mammalian cells like CHO and HEK. Though they’re both mammalian, they’re not the same and come with their own little caveats that must be studied and learned for effective application.
Over decades, we’ve come to understand these systems very well. I joke with the R&D team all the time that we are establishing a new model organism. I mean, we literally started with a biopsy from a live, young, and healthy cow, and then we isolated different cell types, and then we proceeded to work with them. And yes, you can go read papers about how some primary cells behave from beef culture, but once we start playing with conditions, developing new phenotypes, and the cells start growing well in suspension in internally developed media, it really becomes its own unique system. We are really in uncharted territory, so it starts with basic things.

## Tell us a bit about how genetic engineering factors into cultured meat at SCiFi Foods.

It allows us to get the cells in a happier state so that we can start asking them to behave the way that we want them to behave. More specifically, some cells are very, very flexible and can actually adapt and start doing crazy things sometimes very quickly. With many cells from small mammals or chickens, you can immortalize them and put them in suspension with very minimal modification. For example, you have the classical 3T3 cell lines from mice and a number of [new reports from chicken](https://www.nature.com/articles/s43016-022-00658-w). For larger mammals, including bovine cells, that’s been trickier. I don’t think many people have succeeded in growing bovine cells in suspension without microcarriers or even immortalizing them very easily. Genetic engineering allows us to introduce several mutations to the cells to _very politely_ nudge them to start adapting and moving in the desired directions.

Importantly, we do not introduce genes from other organisms into our cells, which would make them _bona fide_ transgenic organisms. If we want to do genetic modifications, we either remove a function that already exists or control inherent bovine-specific gene functions.
Basically, we stick to the types of changes that can naturally occur in the actual cell. We’re just speeding it up a little bit. ## How does cell line and bioprocess development relate to lowering the cost of goods for the production of cultured meat and making scale-up more efficient? Since we were developing a brand new system, cell line development was particularly important. First, primary cells don’t divide enough times for us to do a full manufacturing production. We had to develop a beef cell line that was able to grow for more than 40 generations (doublings). That means right off the bat we had to develop cell lines that are suitable for longer growth. In addition, we needed cell lines that could grow in suspension and utilize cheaper media. The bioprocess development work goes hand-in-hand. We all know we need to optimize the process to increase yield and lower costs. This means asking a lot of questions. What’s the optimum temperature? What’s the optimum pH? What’s the cheapest, most effective media? How often do we feed? What process should we use? Should we use perfusion? And so on. You really have to start playing around to find efficiency. ## How does the management of all of that bioprocess data play into successful R&D and commercial efforts? It’s absolutely essential because we need to understand how we make those decisions and what parameters we need to track to make those decisions. Afterwards, we need to be able to go back and ask, “Was that the right decision?” And the other part is that a lot of data comes out of this. You have basically thousands of lines of online data. If you just see this in a spreadsheet, it doesn’t mean a lot. You need to be able to see trends, look at data graphically, and ideally, compare it to multiple previous runs. I can’t hold all of that data in random spreadsheets with multiple tabs unless it’s some sort of summary. It’s important to go back and figure this out so that you can start to determine what to fix. 
**“Keeping data well-collated, accessible, and searchable is absolutely key.”**

Another thing that I learned is that complexity is not always better. Sometimes it’s the simplicity and keeping the things you care about at your fingertips that is the most valuable.

## Can you just tell us a little bit about your experience with [Invert’s bioprocess data management software](https://invertbio.com/)?

I’ll speak mostly from my own personal view. Basically, when there is a bioreactor going, I live on Invert. I always have a window open. It helps me a lot because I can basically track runs as they’re going in real-time, whether I’m in the office or not, and I don’t have to ask my team, “Hey, what happened in the run? How does this compare to the previous one?” I can just basically go and pull the trends, the charts, and the KPI summaries.

With Invert, I have pretty much all of the data in the same place and can look at it in an organized way. That allows me to very quickly get a sense of what’s happening and how things compare, and to ask more intelligent questions of my team. Invert also pulls offline data and combines three different platforms right now for us, so I can compare across platforms. It doesn’t matter what we are running; I can actually pull it and start doing comparisons to ask about equivalency in scale-ups and so on. From my perspective, it’s giving me what I need. It’s keeping it sufficiently simple. It’s easy to access and it’s very easy to select the runs and the various parameters that I need to see.

## What would you say are some of the most exciting things happening in biomanufacturing and biotechnology?

One of the more exciting things that I’m paying attention to right now is bioreactor design, to see if there is anything groundbreaking that’s going to come out. Even some old technologies are regaining popularity.
Everybody, especially investors, is really worried about the cost of manufacturing plants, the cost of bioreactors, CapEx, and cleaning. It’s a huge investment and they are hesitant to make it. In response, there is now a push to make bioreactor manufacturing cheaper and also to make bioreactors cheaper to operate. Unlike biopharmaceutical manufacturers, we are working on producing a commodity where every cent matters.

## How do you think we can better grow the bioeconomy as a community?

Two things that are hard to come by: time and money. I would actually say that the government needs to invest even more heavily and support the bioeconomy. [They are already](https://www.whitehouse.gov/briefing-room/presidential-actions/2022/09/12/executive-order-on-advancing-biotechnology-and-biomanufacturing-innovation-for-a-sustainable-safe-and-secure-american-bioeconomy/), and they are providing some grants. But if you compare it to what is given to other industries, it pales in comparison. There are so many people who are dedicated to the future of this. It’s just a matter of giving us that little bit of financial breathing room to do the magic. That takes a lot of infrastructure and support, which governments are in the best position to supply.

## What valuable lessons have you picked up in your career?

Appreciating folks and making sure everybody understands that this is a group effort is key. There is often a tendency toward singling people out for recognition, and that can lead to pretty bad outcomes very quickly. Though it’s hard, you want to create unity and an environment where people know that the person next to them has their back. It’s collective failures or collective successes.

The second critical aspect is trusting the technical folks doing the work. They have a better understanding of the challenges and opportunities from a scientific and technical perspective. I spend a lot of time gathering feedback on how things are going and how they would propose we move forward.
Another is having backup plans. I don’t want to understate that.

**“Science is difficult to predict, so have multiple contingency strategies for every technical initiative. That is key in R&D. To this day, that’s what gets me out of hot water every single time.”**

## So when you’re not working on developing the future of cultured meat, what do you like to do with your free time?

The one hobby that probably takes up most of my time is bonsai. Well, “hobby” is an understatement; it’s more of an addiction. I have way too many trees in the backyard. At this time of year it’s very busy because it’s the time that you can do almost anything to the tree. I do exhibit them when I find the time to prepare, and I’ve even had trees in national exhibitions. It’s really amazing to think about how I am taking care of trees that are a lot older than me. Some are hundreds of years old, and it’s something that I can pass on for somebody else to enjoy.

It’s biology on one hand, but also, as a child, I grew up on a farm. So, I was always around plants, taking care of and growing stuff. That comes to me very naturally, and I really enjoy it. I don’t think of it as science. I think of it more as an art.

## _Huge thank you to Alexi for taking the time to have a great conversation with us. We hope you enjoyed it as much as we did. Thanks for reading and stay tuned for the next edition of Pioneers in Bioprocessing._

---
kind: blog
title: "Why Have Bioprocess Data Solutions Been Overlooked…Until Now?"
slug: why-have-bioprocess-data-solutions-been-overlooked-until-now
date: 2025-01-12
author: "Masaki Yamada"
category: Industry
summary: "Given the glaring challenge of managing bioprocess data, the question remains: Why have bioprocess data solutions been historically overlooked?"
url: https://invertbio.com/blog/why-have-bioprocess-data-solutions-been-overlooked-until-now
markdown_url: https://invertbio.com/blog/why-have-bioprocess-data-solutions-been-overlooked-until-now.md
---

# Why Have Bioprocess Data Solutions Been Overlooked…Until Now?

Given the glaring challenge of managing bioprocess data, the question remains: Why have bioprocess data solutions been historically overlooked? This is particularly surprising when you consider that the last few decades have resulted in [a lot of powerful software solutions for other aspects of biotechnological research](https://a16z.com/2023/02/14/doing-more-with-moore/), including tools for DNA design (like [Cello](https://www.nature.com/articles/s41596-021-00675-2), [Asimov](https://www.asimov.com/news/computer-aided-design-of-biology), and [Snapgene](https://www.snapgene.com/)), bioinformatics (like those from [Nextflow](https://bioinformaticsworkbook.org/dataAnalysis/nextflow/01_introductionToNextFlow.html#gsc.tab=0), [Latch](https://latch.bio/), [AWS](https://aws.amazon.com/blogs/hpc/helping-bioinformaticians-transition-to-running-workloads-on-aws/), and [Geneious](https://www.geneious.com/)), strain design (like [Ginkgo Bioworks’](https://www.ginkgobioworks.com/offerings/strain-optimization-services/) custom-built recommendation engine), and beyond.

Yet, software tools for managing bioprocess data are largely absent in the biomanufacturing sector, especially for earlier-stage companies powering the synthetic biology sector. As [Prabha Ramakrishnan](https://www.linkedin.com/in/ramakrishnanprabha/), VP of Partnerships & Strategy at Invert, puts it, “fermentation historically has been underpowered with software.”

> **_Fermentation historically has been underpowered with software._**
> Prabha Ramakrishnan, VP of Partnerships & Strategy

This blog explores why bioprocess data management has received minimal dedicated attention from software developers.
In addition, this piece will address Invert’s bioprocess software and how it closes the remaining gap in the biotech software continuum.

## A Lack of Volume

One explanation is that there was not enough volume (until recently) to justify a company devoting the resources towards developing a turnkey bioprocess data solution. With too few customers to sell to, Software-as-a-Service (SaaS) business models struggle to find a path to profitability.

Most major biotech innovations of the past few decades (like genome sequencing, CRISPR, etc.) and biotech IP are upstream of bioprocess development. So, every new biotech company starts with molecular biology and translational research, whereas bioprocess development comes much later. This explains why software developers have heavily weighted their efforts towards research tools. Research software tools enjoy applications much further upstream than bioproduction, resulting in a wider user base made up of academic researchers and many companies engaged in early-stage research and development. Since their application is well ahead of the “[valley of death](https://www.sciencedirect.com/science/article/pii/S2667041022000118),” their makers can de-risk their operations by drawing from a wider pool and remaining unaffected by user commercialization failures. Even when the biotechnology community was much smaller, research software tools could thrive simply because they drew from more of the community. Though biomanufacturing efforts succeeded in the previous decades, there still were not enough companies or products to develop software around.

From a business perspective, building R&D software has historically made more sense because it can serve a much wider audience and has a much larger total addressable market (TAM). Put succinctly, every biotech company does wet lab R&D, but not every biotech company runs bioreactors.
Only recently did the biotech industry grow much more prominent, leading to an explosion in biomanufacturing demand and capacity. Though the valley of death persists and failure rates remain high, more bioproduct companies now reach the commercialization stage. Thus, only now has the market for bioprocess data management software become sizable enough to encourage investment into a new solution.

## Bioprocess & Software Engineering: A Venn Diagram Without Much Overlap

Separate from the available market, it’s quite challenging to build effective bioprocess data management software without a deep understanding of biomanufacturing. Software development depends on engineers able to understand the context of what end-users need from the software. However, it’s already hard enough to recruit software developers, let alone those who also understand bioprocessing at an expert level. Simply put, these two groups only marginally overlap. For those with both knowledge sets, their talents are in demand, making them expensive and difficult to recruit, especially when large tech companies can hire them at high salaries. With limited ability to recruit a team capable of building a proper bioprocess data solution, companies couldn’t create software that solves the challenge across the board.

## Large Biopharma Companies Built Their Own Data Tools

Large pharma and biopharma companies heavily dominated the early stage of modern biomanufacturing, from the 1980s through the 2010s. The first biologic therapeutics (like recombinant human insulin and blockbuster monoclonal antibody therapeutics) required organizations with deep pockets. Though no true commercial bioprocess data solution existed, trailblazing big pharma companies had the resources to build and implement their own data management software to support greater productivity and ensure regulatory rigor.
In addition, the profitability of the resulting approved biologics could greatly offset their capital expense. Once completed, these companies had no incentive to share their software, which meant that new biomanufacturing players would also need to build their own, further entrenching decisions to build instead of buy. Large pharma companies also enjoyed the resources to hire and retain the few individuals with both software development and bioprocess expertise, which minimized attempts from others to harness the talented few to build more broadly applicable bioprocess data solutions.

Even still, it is important to note that these tools didn’t solve every challenging aspect of managing bioprocess data. In effect, they only really met the last generation of data analytics needs well enough to get the first biological drugs to market. Since they designed the software in the context of specific leading biologics, the tools quickly became clunky and obsolete as the industry became more competitive and biologic drugs became more diverse. As the biomanufacturing industry continued to evolve, so too did its data analytics needs. Understanding new bioprocesses and making new bioproducts while keeping costs low required increasingly advanced data analytics and more flexible software.

## Attempts to Solve The Bioprocess Data Management Challenge

As more precision fermentation and other biomanufacturing efforts cropped up, more companies needed to confront the challenge of competently managing the bevy of bioprocess data they generated. As a result of this critical mass, more bioprocess teams attempted to rectify the problem with varying success. Like the big pharma biomanufacturing pioneers, most companies opt to create homegrown solutions to better manage bioprocess data.
In these cases, engineers piece together [patchworks of legacy software systems](https://blog.palantir.com/biomanufacturing-of-tomorrow-requires-a-connected-company-today-5c0e81333a41) and data pipelines, build spreadsheets, and deploy process agreements to force organization, albeit imperfectly. Similarly, some opted to hire information technology companies to construct a solution specifically for them. Regardless, the expense of these custom-built solutions is significant and comes with the additional cost of service agreements to fix new problems and expand capabilities as needed.

Some custom-built systems stemmed from manufacturing execution system (MES) foundations, [which originated in the 1990s](https://www.aptean.com/en-US/insights/blog/what-is-mes). However, these MES systems were designed to drive manufacturing execution for enormous sectors like the petroleum and automotive industries. Thus, they struggled to manage the inherent complexity of biological systems and the many different variables that must be diligently tracked and analyzed.

Over time, some bioprocessing technology companies began offering [Process Analytical Technology (PAT)](https://www.federalregister.gov/documents/2004/10/04/04-22203/guidance-for-industry-process-analytical-technology-a-framework-for-innovative-pharmaceutical) software alongside their hardware. While these tools can manage bioprocess data and perform analytics, they are often hardware-specific and sold as add-ons for specific bioreactor systems. So, maximizing the full impact of PAT software depends on using a single hardware provider, which may not benefit the biomanufacturer or may not be possible given the many different data sources.

In recent years, several electronic laboratory notebook (ELN) and laboratory information management system (LIMS) products have grown popular in research and laboratory settings, including at biotech companies.
As a result, many users opted to rig up their existing ELN and LIMS systems to serve this purpose. To do so, they need to force-fit fermentation/biomanufacturing data to integrate it into the system. Unfortunately, ELN and LIMS companies did not design these products for this purpose. So, while this approach offered some improvement over manual pipelines and reduced costs compared to data management systems built from scratch, these piecemeal ELN/LIMS setups never get to 100% integration, meaning that bottlenecks continue to hamper their efficacy. Furthermore, ELNs and LIMS were [not explicitly designed for environments like biomanufacturing, where different individuals or groups collect and analyze the data](https://scalingbiotech.com/2022/02/23/a-better-eln-wont-solve-your-problems/). So, they can struggle to provide appropriate context from scientific and engineering teams to decision-makers.

## The Modern Solution: Invert

Increasingly, companies (especially those in synthetic biology) are realizing the importance of bioprocess development and commercialization. Only through well-designed bioprocesses can companies make enough product to sell at margins that turn a profit. The more teams take on complex bioprocesses, the more they recognize the importance of accelerating process development, reducing risk along the commercialization path, and increasing the predictability of how their processes scale. More people than ever realize that these capabilities are the primary drivers of whether they will succeed or fail. But, to do this, bioproduct companies need to properly leverage their R&D and scale-up data. Given the limitations of patchwork data systems and existing bioprocess software, the biotech industry needs a more seamless bioprocess data management and analysis solution. So, we made one!
> **_Given the limitations of patchwork data systems and existing bioprocess software, the biotech industry needs a more seamless bioprocess data management and analysis solution._**

Combining the skillsets of both bioprocess and software development experts, Invert takes your bioprocess data from lab to production and shortens Design-Build-Test-Learn (DBTL) cycles by turning scattered data into actionable insights faster than ever. We designed our bioprocess software for intelligent, automated data ingestion no matter the source, allowing users to unify all their bioprocess data with minimal effort. Invert can connect to any bioreactor, off-line equipment, ELNs, LIMS, or other databases to readily sync data between your tools. Invert can easily handle large data sets, allowing users to readily analyze and compare on-line and off-line data across runs, scales, time, and events while creating advanced graphs, calculating derived parameters, and executing statistics. Invert also provides complete process traceability and makes it easy to share data and information across teams and collaborators.

Invert contextualizes bioprocess data while keeping all data and its full historical record secure (SOC 2 compliant and ISO 27001 certified). With end-to-end encryption, you can keep your single source of truth safe. Plus, we designed our software to work with virtually any biological system and target bioproduct. Whether you’re in pharma, alternative protein, synthetic biology, or contract manufacturing, Invert can empower you to improve process outcomes. If you’d like to learn how Invert’s bioprocess software can support your biomanufacturing efforts, reach out today!

---

# Section: Legal

---
kind: legal
title: "Terms of Service"
slug: terms
lastUpdated: 2026-04-16
url: https://invertbio.com/terms
markdown_url: https://invertbio.com/terms.md
---

# Terms of Service

## 1. Introduction

Welcome to Invert, Inc. ("Company", "we", "our", "us").
These Terms of Service govern your use of the public website located at [invertbio.com](https://invertbio.com/) (the "Site"). They do not apply to the Invert platform or other services we make available, which are governed by separate terms with your organization. Our Privacy Policy also governs your use of the Site and explains how information is collected, used, and disclosed. These Terms and the Privacy Policy set forth the agreement between you and the Company regarding your use of the Site. If you have entered into a separate written agreement with the Company (including any order form, SaaS services agreement, statement of work, or similar), that agreement governs your access to and use of the Invert platform and related services, and will prevail over these Terms to the extent of any conflict. By accessing or using the Site, you agree to be bound by these Terms. ## 2. Use of the Site The Site is provided for you to learn about Invert and, where we make them available, to contact us, request information, or use other features we expressly offer on the Site. You agree to use the Site only for lawful purposes and in accordance with these Terms. You are responsible for ensuring that your use of the Site complies with all applicable laws and regulations. You will not misuse the Site by attempting to disrupt, scrape, or gain unauthorized access to our systems, or by submitting false or misleading information. Access to and use of the Invert platform and related services are not governed by these Terms and are subject to your organization's agreement with the Company (and any additional terms presented when accessing those services). ## 3. Authority The Site is intended for business and informational purposes. If you access or use the Site on behalf of an organization (for example, by submitting a form or request in that capacity), you represent and warrant that you have the authority to bind that organization to these Terms. ## 4. 
Data and Privacy Information you provide through the Site (for example, via contact or inquiry forms) is handled in accordance with our Privacy Policy. Data submitted to the Invert platform is governed by your organization's agreement with the Company and is not subject to these Terms. ## 5. Acceptable Use You agree not to use the Site to: - violate any applicable law or regulation - attempt to gain unauthorized access to our systems, accounts, networks, or data - interfere with or disrupt the operation of the Site or our infrastructure (including by introducing malware, denial-of-service activity, or excessive automated traffic such as scraping) - submit false, misleading, or fraudulent information through the Site. We may restrict or block access to the Site for violations of this section. ## 6. Fees and Payment The Site is provided for informational purposes. Fees for the Invert platform and related services are set forth in your organization's agreement with the Company (including any applicable order form). Platform fees and payment terms are not governed by these Terms. ## 7. Third-Party Services The Site may use or link to services operated by third parties (for example, hosting, analytics, or embedded content). We select service providers that support the Site based on security, privacy, and operational requirements and require them to meet appropriate contractual and security obligations. However, these providers operate independently. We do not control their services and are not responsible for their content, policies, or availability. Your use of third-party services may be subject to their own terms and privacy notices. ## 8. Intellectual Property The Site and its content, including text, graphics, logos, and other materials, are owned by or licensed to the Company and are protected by applicable intellectual property laws. You may use the Site for your own informational purposes. 
You may not copy, reproduce, distribute, or create derivative works from the Site or its content without our prior written permission.

## 9. Disclaimer of Warranties

The Site is provided on an "AS IS" and "AS AVAILABLE" basis. We do not warrant that the Site will be uninterrupted or error-free. Information on the Site is provided for general informational purposes only and does not constitute professional advice.

To the fullest extent permitted by applicable law, we disclaim all warranties, express or implied, including implied warranties of merchantability, fitness for a particular purpose, title, and non-infringement, except where such disclaimers are prohibited by law.

## 10. Limitation of Liability

To the fullest extent permitted by applicable law, the Company will not be liable for any indirect, incidental, special, consequential, or punitive damages, or for any loss of profits, revenues, or goodwill, arising out of or relating to the Site or these Terms.

## 11. Termination

We may restrict or block your access to the Site if you violate these Terms or if we determine that such action is necessary to protect the Site, our users, or third parties. You may stop using the Site at any time. Provisions that by their nature should survive termination will remain in effect.

## 12. Governing Law

These Terms are governed by the laws of the State of Delaware, without regard to conflict of law principles.

## 13. Changes to the Terms

We may update these Terms from time to time. The current version will be posted on this page with an updated "Last updated" date. By continuing to use the Site after changes are posted, you agree to the revised Terms.

## 14. Contact

If you have questions about these Terms, contact us at [support@invertbio.com](mailto:support@invertbio.com).

---
kind: legal
title: "Privacy Policy"
slug: privacy
lastUpdated: 2026-04-16
url: https://invertbio.com/privacy
markdown_url: https://invertbio.com/privacy.md
---

# Privacy Policy

## 1. Introduction

Invert, Inc. ("Company," "we," "our," "us") operates [invertbio.com](http://invertbio.com) and the Invert platform (the "Service"). This Privacy Policy describes how we collect, use, and disclose personal information when you use the Service.

## 2. Roles and Scope

When you use our website or interact with us directly, Invert acts as a **data controller**. When you use our platform as part of an organization, Invert generally acts as a **data processor** on behalf of that organization, which controls the data processed through the Service. We may still act as a controller for certain information, such as account, billing, and security information, and information about how we operate and improve the Service, as described in this policy and, where applicable, in our agreements with your organization.

## 3. Information We Collect

### Information you provide

- name and contact details (such as email) and, where relevant, professional or organization details (such as company name)
- account and authentication information
- information and content you or your organization submit in the platform (such as files, configurations, or operational data)
- communications with us

### Information collected automatically

- usage data (such as pages visited and actions taken)
- device and browser information (which may include IP address and approximate location derived from it)
- cookies and similar technologies

We do not intentionally collect sensitive categories of personal data through our website or standard marketing forms. Your organization may submit a wider range of information through the platform; that processing is governed by our agreement with them where applicable.

## 4. How We Use Information

We use personal information to:

- provide and operate the Service, including authentication, accounts, and security
- support and respond to inquiries
- improve and monitor the Service
- meet legal and contractual obligations

Where required by applicable law, we process personal information:

- to perform a contract with you or your organization
- based on our legitimate interests, where permitted
- with your consent, where required
- to comply with legal obligations

When we act as a processor for your organization, we process personal information in accordance with our agreement with that organization and their instructions.

We may send service-related communications. You can opt out of marketing communications at any time.

## 5. Customer Data

Personal information included in customer data processed through the platform is controlled by our customers. In this context, Invert acts as a data processor. We process this information to provide, secure, and support the Service, in accordance with our agreements with the customer and their instructions. Our use of AI and limited metadata is described in Section 6 (AI and Data Use).

We implement logical and technical controls to isolate customer data between organizations.

## 6. AI and Data Use

AI features are optional and only used if enabled by you or your organization, as applicable.

Invert may use limited customer metadata, such as filenames, within narrowly scoped internal classification systems to improve product usability (for example, mapping selection). These systems:

- use limited metadata inputs
- do not process underlying datasets
- do not generate content
- do not affect customer-specific outcomes

Customer data is not used to train general-purpose or cross-customer models unless expressly agreed in writing.

## 7. Sharing of Information

We may share personal information with:

- service providers that support our operations
- affiliates within our corporate group
- legal, regulatory, or law enforcement authorities when required by law
- a successor in connection with a merger, acquisition, or sale of assets involving Invert

We require service providers to process personal information only on our behalf and in accordance with appropriate security and confidentiality obligations. A list of subprocessors is available upon request or in our Trust Center.

## 8. Data Retention

We retain personal information only as long as necessary to:

- provide the Service
- meet legal obligations
- resolve disputes and enforce our agreements

When personal information is no longer needed, we delete it, subject to reasonable technical and operational constraints (such as backups) and legal requirements. Retention of customer data processed on behalf of our customers is governed by our agreements with them.

## 9. International Transfers

We process personal information in the United States and, where we use service providers or infrastructure in other countries, in those locations as necessary to provide the Service. When we transfer personal information across borders, we implement appropriate safeguards required by applicable law, such as standard contractual clauses or equivalent measures.

## 10. Security

We use technical and organizational measures designed to protect personal information, including encryption, access controls, and monitoring. We continuously maintain and improve our security practices.

## 11. Your Rights

Depending on your location, you may have rights to:

- access, correct, or delete your personal information
- restrict or object to certain processing
- request data portability
- withdraw consent where processing is based on consent

To exercise your rights, contact us at [support@invertbio.com](mailto:support@invertbio.com). We may need to verify your identity before responding. If you use the Service through an organization, please contact your organization first; we may only be able to assist as permitted by our role and our agreement with them.

## 12. Cookies

We use cookies and similar technologies to:

- operate and secure the Service
- maintain sessions and user preferences
- analyze usage to improve the Service

Some cookies are essential and required for the Service to function. Others, such as analytics cookies, are optional and used only where permitted by applicable law. You can manage your cookie preferences through the cookie banner or your browser settings, and you can update them at any time.

We do not sell personal information.

## 13. Changes to This Policy

We may update this Privacy Policy from time to time. The "Last updated" date reflects the latest version. We will post updates on this page.

## 14. Contact

If you have questions about this Privacy Policy, contact us at [support@invertbio.com](mailto:support@invertbio.com).