How Two Life Science Leaders Unlocked Pharma 4.0 Success
Life science manufacturers are trying to become more agile and use information more effectively to drive their businesses. Yet the pressures in life science manufacturing are unique. Products have a significant impact on human life, tolerances are tight, regulatory requirements are increasing, and there is a high cost of failure.
These pressures make implementing new technology more challenging than in general industrial verticals. There are constraints from existing processes, tools, and IT systems that must be considered when deploying new systems, including:
- Architectural fragility previously introduced by custom code and point-to-point integrations
- Lack of data standardization and normalization across machinery
- Massive data lakes of machine data without context
- Siloed laboratory, maintenance, and supply chain data
These constraints create an agility paradox, as processes need to be closely controlled, while analytics requires a real-time, iterative approach. These constraints can also lead to rising and unpredictable cloud ingestion and processing costs and inefficient use of human resources. As such, many life science manufacturers are still struggling to scale their Pharma 4.0 use cases beyond pilot.
Yet there are life science manufacturers who are seeing success and leading the way. A panel discussion hosted by ABI Research—featuring digitalization leaders from Alcon and Catalent—offers useful guidance. Here’s what they had to say.
Alcon
Alcon is a pharmaceutical and medical device company specializing in eye care products, including contact lenses and ocular surgical equipment.
Like many manufacturers, Alcon was struggling to scale their Industry 4.0 use cases beyond pilot. Across 17 manufacturing lines, they had thousands of pneumatic cylinders and motors, all sending vast quantities of data directly to a cloud data lake. They initially planned to manually standardize this data in the cloud, but quickly realized the personnel requirements to do so would be prohibitive.
To scale these use cases across the enterprise, they would need:
- To minimize one-off integrations
- A comprehensive, flexible data infrastructure
- All new applications to be at least 50% pre-built
In their search to address these needs, Alcon discovered HighByte Intelligence Hub. Using the Intelligence Hub, they were able to templatize their 17 manufacturing lines, handling months of manual standardization work in a fraction of the time.
This approach also allowed them to first process their data on-premises to reduce cloud ingestion costs and contextualize the data closer to the data source and domain expert. With their newly standardized data, Alcon was able to launch a predictive maintenance program that they could easily adapt to work on additional sites in weeks rather than years.
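The templating approach can be sketched in generic terms. The snippet below is a hypothetical illustration, not HighByte Intelligence Hub's actual API or Alcon's configuration: one standard model is defined once, and each manufacturing line supplies only a tag mapping, so normalization logic is written a single time rather than 17 times.

```python
# Hypothetical sketch of a reusable line template (illustrative only;
# not HighByte's API). The standard model is defined once; each line
# contributes only a map from standard names to its vendor-specific tags.

# Per-line tag maps are the only line-specific configuration (names invented).
LINE_TAG_MAPS = {
    "line_01": {
        "cycle_count": "PLC1.CylCnt",
        "air_pressure_kpa": "PLC1.AirP",
        "motor_temp_c": "PLC1.MtrT",
    },
    "line_02": {
        "cycle_count": "AB2.Count",
        "air_pressure_kpa": "AB2.Press",
        "motor_temp_c": "AB2.Temp",
    },
}

def normalize(line_id: str, raw: dict) -> dict:
    """Apply the line's tag map so every line emits the same payload shape."""
    tag_map = LINE_TAG_MAPS[line_id]
    return {std: raw[src] for std, src in tag_map.items()}

payload = normalize("line_01", {"PLC1.CylCnt": 1042, "PLC1.AirP": 612.0, "PLC1.MtrT": 41.5})
```

Adding an 18th line in this scheme means writing one new tag map, not a new integration, which is what makes the pattern scale.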
Return on Investment
Proving ROI to decision-makers has not been difficult for the Alcon team. To justify these investments, John Patanian, Data Analytics Manager at Alcon, provided the following advice:
- Tie infrastructure investments to other key performance indicators (KPIs) to justify funding requests.
- Demonstrate how infrastructure investments contribute to improved plant performance and other measurable outcomes. For Alcon, this meant highlighting three key areas: Cost reduction through optimization, increased efficiency leading to value-added activities, and improved productivity.
This approach has enabled Alcon to better develop future funding requests for new use cases aimed at increasing efficiency and lowering costs.
Catalent
Catalent is a contract development and manufacturing partner for personalized medicine, drugs, and consumer brands.
Catalent faced a multifaceted challenge with their bioreactor data. They wanted to model data from their bioreactors to make it accessible in their Unified Namespace (UNS), but the data lacked the context and format necessary to be usable by data consumers. To give their data context, they relied on a manual process in which data scientists spent time labeling data. There were two key problems with this approach:
- It was costly for high-value personnel to spend their time labeling data.
- It wasn’t scalable, as simply hiring more personnel doesn’t address the core problem.
While looking for ways to automate data contextualization, Catalent decided to implement HighByte Intelligence Hub. Catalent leveraged the Intelligence Hub to build a replicable model that added the context needed to make their data fit the ISA-95 format of the UNS, freeing their data scientists from almost all manual labeling.
In one case, Catalent used HighByte Intelligence Hub to model data from a set of 48 bioreactors, each generating 100 tags. Manual entry routinely took hours to accomplish, but with the Intelligence Hub, Catalent built a data model that they could replicate for each bioreactor. As a result, they were able to model 24 of the 48 reactors in less than 1 hour, ensuring that data could be ingested by the UNS without manual effort. The model can now be altered and applied across the enterprise for any similar bioreactors.
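The replication pattern can be sketched as follows. This is a hypothetical illustration, not Catalent's or HighByte's actual model: one template function generates an ISA-95-style UNS topic and a unit-specific tag-to-source map, and is instantiated once per reactor, so 48 models come from a single definition.

```python
# Hypothetical sketch of replicating one bioreactor model across units
# (illustrative only; tag and topic names are invented).

def bioreactor_model(unit_id: int) -> dict:
    """One template instantiated per reactor: same tags, unit-specific sources."""
    tags = [f"tag_{i:03d}" for i in range(100)]  # stand-in for 100 real tags
    return {
        # ISA-95-style equipment path: enterprise/site/area/unit
        "uns_topic": f"enterprise/site1/bioreactors/BR{unit_id:02d}",
        "sources": {t: f"BR{unit_id:02d}.{t}" for t in tags},
    }

# 48 reactor models from one template, instead of 48 hand-built mappings.
models = [bioreactor_model(u) for u in range(1, 49)]
```

Changing the template (adding a tag, renaming a path level) then propagates to every reactor on the next instantiation, which is the scalability property described above.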
Return on Investment
Increased productivity and value generation can be difficult to quantify, but Chris Demers, Global Lead for Plant Data and Analytics at Catalent, clearly sees the value. Chris advises that manufacturers measure ROI in digital infrastructure investments by calculating the time savings achieved by eliminating manual data entry. These savings free up highly skilled personnel to focus on critical thinking and problem-solving and ultimately do more with less.
HighByte Intelligence Hub for Life Science Data Architecture
HighByte Intelligence Hub is an Industrial DataOps software solution designed specifically for industrial data modeling, delivery, and governance.
Alcon and Catalent have both implemented HighByte Intelligence Hub as a core element of their architecture. While they have taken different approaches, both companies have improved data access, built a resilient data architecture, and ensured scalability.
Based on learnings from these companies and other life science manufacturers, I recommend the following:
- Measure your digital maturity. It is critical to understanding your current state and guiding your data architecture evolution.
- Find an Industrial DataOps solution that can conform to your existing infrastructure. This is ideal for highly regulated industries like the life sciences, as it will minimize interruptions to critical compliance processes.
- Work with software solutions that provide scalability and maintainability. This will enable rapid deployment and access to information across the enterprise, including vendors and contract manufacturers.
- Work across departments to design a data architecture that’s reusable, including asset model portability.
- Define data usage and payloads before moving data to the cloud. I strongly advocate for pre-processing data before sending it to the cloud to ensure it is in a usable format.
- Define the right balance between edge and cloud computing for your unique organization, with data being sourced, processed, and analyzed in different locations depending on user needs and latency requirements.
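The pre-processing recommendation can be illustrated with a minimal sketch. The payload shape and statistics below are assumptions for illustration, not a prescription from the panel: a high-rate window of raw samples is reduced at the edge to one defined, contextualized record before it is sent to the cloud, cutting ingestion volume and cost.

```python
# Hypothetical edge pre-processing sketch: collapse a window of raw
# samples into one defined cloud payload (field names are invented).
from statistics import mean

def to_cloud_payload(machine_id: str, window: list[float]) -> dict:
    """Reduce a high-rate sample window to a single contextualized record."""
    return {
        "machine_id": machine_id,
        "sample_count": len(window),
        "mean": round(mean(window), 2),
        "min": min(window),
        "max": max(window),
    }

payload = to_cloud_payload("press_07", [10.1, 10.4, 9.9, 10.0])
```

Defining this payload contract up front is what keeps cloud ingestion costs predictable: the cloud receives one summarized record per window instead of every raw sample.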
To learn more about HighByte or Alcon and Catalent’s approaches to building a scalable data architecture, watch the panel discussion or download this ABI Insight report from ABI Research.