HighByte Blog
Read company updates and our technology viewpoints here.
Time to read: 9 minutes

In an earlier blog, “The power of payloads in your unified namespace,” I discussed the use of complex payloads that combine multiple unified namespace (UNS) data streams to make the architecture more responsive to the diverse needs of consuming personas and systems. In this post, I want to show what these complex payloads might look like, how data models can enable a UNS architecture, and how easily HighByte Intelligence Hub can provide consuming systems with the necessary data, when and how it’s needed.

Time to read: 9 minutes

I consistently hear that many manufacturers are drowning in data and struggling to make it useful. Why is that? A modern industrial facility can easily produce more than a terabyte of data each day. With a wave of new technologies for artificial intelligence and machine learning, coupled with real-time dashboards and prescriptive insights, industrial companies should be seeing huge gains in productivity. Unplanned asset and production line maintenance should be a thing of the past. But we know that is not the case.

Access to data does not make it useful. Industrial data is raw and must be made fit for purpose to extract its true value. Furthermore, the tools used to make the data fit for purpose must operate at the scale of an industrial enterprise. For many industrial companies, this is a daunting task requiring alignment of people, process, and technology across a global footprint and supply chain. At HighByte, we’re putting our best foot forward to solve this data architecture and contextualization problem from a technology perspective. But what about people and process? To pull it all together, we recently published a new guide, “Think Big, Start Small, Scale Fast: The Data Engineering Workbook.” The guide provides 10 steps to achieving a scalable data architecture based on the best practices we’ve learned from our customers over the last several years.

Time to read: 7 minutes

The Unified Namespace (UNS) architecture pattern has proven to be an effective means of opening industrial data access to the entire business, but the road to implementation is not without a few speed bumps. First, as industrial companies start to establish their hierarchy and build their UNS, they may find it difficult to get their data to follow their own rules. By its nature, a UNS draws from a multitude of different data sources, most of which present data in unique formats. Even superficially similar assets can format the data they generate in completely different ways, and the differences in data generated by wholly different machines, systems, and PLCs are starker still. To limit problems in creating and operating a UNS, some industrial companies simply publish data from each system and device directly to an MQTT broker under its own topic namespace. This practice does not produce a true UNS, and it offers little of the data accessibility and usability promised by the architectural pattern.

Second, the UNS topic space typically follows the hierarchy: Site, Area, Line, Zone, Cell, and Asset. At each level, the information may include data from multiple systems, including PLCs, SCADA, MES, CMMS, QMS, ERP, and more. On the consuming side, many users have unique needs that the UNS alone may not be able to meet. These challenges are what make consistent, easily scalable abstraction a critical part of your UNS.
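To make that topic hierarchy concrete, here is a minimal sketch of publishing one normalized asset payload to a UNS topic. The broker hostname, topic path, and payload fields are illustrative assumptions, not HighByte Intelligence Hub configuration or output.

```python
# A minimal sketch: publish one modeled payload to a UNS topic that follows
# the Site/Area/Line/Zone/Cell/Asset hierarchy described above.
# Broker hostname, topic path, and field names are all hypothetical.
import json

import paho.mqtt.publish as publish

TOPIC = "Dallas/Packaging/Line1/ZoneA/Cell3/Filler01"

payload = {
    "assetId": "Filler01",
    "state": "Running",
    "pressurePsi": 42.7,
    "timestamp": "2024-05-01T12:00:00Z",
}

# Retained publish so late subscribers immediately get the last known value.
publish.single(
    TOPIC,
    json.dumps(payload),
    hostname="broker.example.com",  # hypothetical broker
    qos=1,
    retain=True,
)
```

The point is the shape: every asset publishes one consistent payload under a predictable topic path, rather than scattering raw tags across ad hoc namespaces.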
Time to read: 7 minutes

The Unified Namespace (UNS) is among the fastest-growing data architecture patterns for Industry 4.0, promising easy publish-subscribe access to hierarchically structured industrial data. At HighByte, we define a UNS as a consolidated, abstracted structure by which all business applications can consume real-time industrial data in a consistent manner. A UNS allows you to combine multiple values into a single, structured logical model that can be understood by business users across the enterprise to make real-time decisions. But many industrial companies are finding that even though they’ve loaded their device telemetry data into their UNS, they are struggling to use it. The UNS’s uniform data standards, hierarchical structure, and publish-subscribe pattern do an excellent job of providing easy, logical access to data, but business and analytics users often discover that they must subscribe to multiple data streams from separate levels of the hierarchy to get what they need for their applications. There are two problems with this approach:

Time to read: 7 minutes

The efforts of standards organizations like the OPC Foundation, Eclipse Foundation (Sparkplug), ISA, CESMII, and MTConnect represent a significant step forward for the advancement of Industry 4.0 in manufacturing. But industry standards only go so far. Businesses need data to tell the story of what is happening, why it is happening, and how to fix it. Multiple pieces of information must be assembled with information from other sources to tell the use case story, just as words must be combined into sentences and sentences combined to form stories. Data standards can’t tell the use case story; they can only provide a dictionary. Standardizing device-level data into structures is key, but it is only the beginning. Data standards alone will not solve your interoperability problems because they don’t provide the use case context you need to make strategic decisions. Here are four key reasons why you still need an Industrial DataOps solution like the Intelligence Hub, even with the introduction or evolution of new standards.

Time to read: 6 minutes

Data modeling is all about standardization. It enables interoperability, shows intent, determines trust, and ensures proper data governance. Given the criticality of usable data at scale for Industry 4.0, many manufacturers have turned to ISA-95, probably the most widely recognized data-modeling standard in the world, for guidance. Created by a standards committee at the International Society of Automation, the ISA-95 specification defines in detail the electronic information exchange between manufacturing control functions and other enterprise functions, including data models and exchange definitions. The purpose of ISA-95 is “to create a standard that will define the interface between control functions and other enterprise functions based upon the Purdue Reference Model.” Per the committee, the goal is to reduce the risk, cost, and errors associated with system integration. Historically, ISA-95 has been the guide for many off-the-shelf and bespoke manufacturing execution systems (MES). Today, ISA-95 also helps industrial organizations implement data integrations that link MES, enterprise resource planning (ERP) systems, IIoT platforms, data lakes, and analytics solutions. It also eases the implementation of a unified namespace (UNS) for enterprise data integration.
The specification defines a hierarchical model for systems, detailed information models, and a data flow model for manufacturing operations management (MOM). Let’s take a look at these three key components in more detail and uncover how the ISA-95 specification can be applied within HighByte Intelligence Hub.

Time to read: 6 minutes

In my last post, “An intro to industrial data modeling,” I shared my definition of a data model and explained why data modeling is important for Industry 4.0. I’d like to take that a step further in this post by explaining why you need a dedicated abstraction layer for data modeling to achieve a data infrastructure that can really scale.
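As a rough illustration of that abstraction-layer idea, the sketch below defines one canonical model and a small mapping per source, so consumers never touch raw source formats. The field names, scaling factors, and sample payloads are all hypothetical, not Intelligence Hub syntax.

```python
# A rough sketch of the abstraction-layer idea: one canonical model with a
# small mapping per source, so consumers never see raw source formats.
# Field names, scaling factors, and sample payloads are all hypothetical.

CANONICAL = ("assetId", "state", "pressurePsi")

def from_plc(raw: dict) -> dict:
    """Map a raw PLC tag payload into the canonical model."""
    return {
        "assetId": raw["tagGroup"],
        "state": "Running" if raw["run_bit"] == 1 else "Stopped",
        "pressurePsi": raw["press"] / 10.0,  # this PLC reports tenths of a PSI
    }

def from_scada(raw: dict) -> dict:
    """Map a SCADA record into the same canonical model."""
    return {
        "assetId": raw["AssetName"],
        "state": raw["Status"].capitalize(),
        "pressurePsi": raw["Pressure_kPa"] * 0.145038,  # kPa to PSI
    }

# Consumers work against one shape, regardless of the source:
for sample in (
    from_plc({"tagGroup": "Filler01", "run_bit": 1, "press": 427}),
    from_scada({"AssetName": "Filler02", "Status": "running", "Pressure_kPa": 250}),
):
    assert tuple(sample) == CANONICAL
    print(sample)
```

Adding a new source then means writing one new mapping, not reworking every consumer.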
Time to read: 7 minutes
The data model forms the basis for standardizing data across a wide range of raw input data. An Industrial DataOps solution like HighByte Intelligence Hub enables users to develop models that standardize and contextualize industrial data. In short, HighByte Intelligence Hub is a data hub with a modeling and transformation engine at its core.
But what exactly is a data model, and why is data modeling important for Industry 4.0? This post aims to address these questions and provide an introduction to modeling data at scale.
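As a preview of the idea, one minimal way to think about a data model is as a named definition with standardized, typed attributes that every instance must follow. This sketch uses generic Python, not Intelligence Hub modeling syntax, and the attribute names are made up.

```python
# A minimal sketch of a data model: a named set of typed attributes
# (the definition) that many instances conform to. Names are illustrative.
from dataclasses import dataclass

@dataclass
class Pump:                 # the model: standardized names, types, and units
    asset_id: str
    flow_lpm: float         # liters per minute
    running: bool

# Instances from two different machines now share one predictable shape.
pump_a = Pump(asset_id="Pump-A", flow_lpm=118.5, running=True)
pump_b = Pump(asset_id="Pump-B", flow_lpm=0.0, running=False)
print(pump_a, pump_b)
```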
Time to read: 6 minutes
Let’s talk about getting OPC data into Microsoft Azure. When you search this phrase on Google, 90% of the results describe the same use case: streaming sensor data to the Cloud.
If your Industry 4.0 solution is streaming sensor data to the Cloud, you're doing it wrong. Let me explain. On the factory floor, we have machines driven by PLCs, and we typically have an OPC server connected to those PLCs that feeds data into an HMI. OPC servers and HMIs work with tags, which are discrete streams of data. For example, one tag might represent pressure and another the on/off state of the machine. When cloud technology like Microsoft Azure first entered the scene, vendors created IoT gateways to connect to the OPC server and send tag streams to the Cloud in JSON format. It was the easiest thing to do, and once that connection was made, we thought we were done.
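To see the difference, consider a rough sketch of the two shapes. The tag identifiers and values below are hypothetical; the point is that the gateway pattern ships disconnected tag streams, while a modeled payload merges them into one structure a cloud consumer can actually use.

```python
# A sketch contrasting raw tag streams with a modeled payload.
# Tag identifiers and values are hypothetical.
import json

# What a tag-by-tag IoT gateway typically sends: one message per tag,
# with no context tying the values together.
tag_messages = [
    {"tag": "ns=2;s=Line1.Press", "value": 42.7},
    {"tag": "ns=2;s=Line1.Run",   "value": 1},
]

# What a modeled payload looks like: the same values merged into one
# structure with context a cloud application can consume directly.
machine_payload = {
    "machine": "Line1",
    "running": bool(tag_messages[1]["value"]),
    "pressurePsi": tag_messages[0]["value"],
}
print(json.dumps(machine_payload))
```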
Time to read: 14 minutes
If you know me well, then you’ve probably heard me say words matter. A shared vocabulary—and a shared understanding of a word’s meaning—is a simple but powerful tool when two bodies approach a problem from different perspectives.
Two bodies that often approach problems, projects, and processes from different perspectives are IT and Operations Technology (OT). While the industrial automation community has been writing and discussing the necessity of IT-OT convergence for nearly a decade, this functional collaboration remains a stumbling block for many industrial companies on their Industry 4.0 journeys. The good news is that the emerging concept of Industrial DataOps can provide some common ground. DataOps is a new approach to data integration and security that aims to improve data quality and reduce time spent preparing data for use throughout the enterprise. Industrial DataOps provides a toolset and a mindset for OT to establish “data contracts” with IT. By using an Industrial DataOps solution, OT is empowered to model, transform, and share plant floor data with IT systems without the integration and security concerns that have long vexed the collaboration.

If we see the value in IT-OT collaboration, the first step is getting these functions to speak the same language. This post aims to document key terms surrounding Industrial DataOps and provide IT and OT with a common dictionary. Some of these definitions are more technical in nature and others are more business oriented. Let’s dive in.
Time to read: 6 minutes
A modern industrial facility can easily produce a terabyte of data each day. With the proliferation of sensors and the recent wave of real-time dashboarding, artificial intelligence, and machine learning technologies, we should be seeing huge productivity gains. Unplanned maintenance of assets and production lines should be obsolete.
But this is not the case. Access to data does not mean it is useful. Industrial data is very raw and must be made “fit for purpose” in order to extract its true value. Furthermore, the tools used to make the data fit for purpose must operate at the scale of an industrial facility. With these realities in mind, I’ve written a practical, seven-step guide for manufacturers and other industrial companies to make their data fit for purpose.