HighByte Blog
Read company updates and our technology viewpoints here.
Time to read: 7 minutes

When I first joined the HighByte team, I knew two things. First, modeling industrial data is immensely powerful. After spending a decade interacting with tags and seeing firsthand how building context from tags in the Cloud is painful, I knew that modeling data at the Edge would be a game changer. The second thing I knew is that we were going to build a lot of connectors. This is par for the course in the industrial world, where a mix of legacy and new equipment is the norm. We started with the most common and generalized standards, like OPC UA, HTTP, MQTT, and SQL, to cast a wide net for connectivity options inside the factory. But it was clear that as we progressed, the market would demand explicit connectors for common systems. That is why I am excited to announce new connectors in version 2.2 for OSIsoft PI System (now part of the AVEVA portfolio), InfluxDB, and Oracle Database. All three connectors support both reading and writing data and interacting with these systems in advanced ways, without needing to read a manual.

Time to read: 10 minutes

The promise of Industry 4.0 has many manufacturing leaders thinking big. They envision a future in which real-time access to data opens the door to unprecedented levels of operational flexibility, predictability, and business improvement. For many, early-stage wins often lead to larger projects that stall or fail to scale because their data infrastructure cannot support the increasing project complexity. Enter Industrial DataOps. DataOps (data operations) is the orchestration of people, processes, and technology to securely deliver trusted, ready-to-use data to all the systems and people who require it.
The first known mention of the term “DataOps” came from technology consultant and InformationWeek contributing editor Lenny Liebmann in a 2014 blog post titled, “DataOps: Why Big Data Infrastructure Matters.” According to Liebmann: “You can’t simply throw data science over the wall and expect operations to deliver the performance you need in the production environment—any more than you can do the same with application code. That’s why DataOps—the discipline that ensures alignment between data science and infrastructure—is as important to Big Data success as DevOps is to application success.”

Time to read: 7 minutes

I love the chaos of an early market like DataOps for Manufacturing. It’s clear that things are changing, but which technologies and approaches will win out is less obvious. In these types of markets, as a solution provider, it’s equally fun to watch them mature. One sign of a maturing market is the type of questions early customers ask about a solution. At first, the questions are variations of “Does it work?” or “How is it different from a, b, or c?” as customers try to understand the solution and how it solves their problem. As the market matures, the questions shift focus to technical requirements like “What’s the performance with 10,000x?” or “Does it support high availability?” Here at HighByte we’re seeing more scale and reliability questions in early engagements, a sign that both the market and the product are maturing. That’s why I’m excited to announce some key features in version 2.1 that make HighByte Intelligence Hub more scalable and reliable to fit the needs of your production environment.

Time to read: 7 minutes

Manufacturers and other industrial companies adopting Industry 4.0 want to make industrial data available at scale across the enterprise to drive business decisions. Yet as these companies connect more processes, systems, and machines, their data modeling and integration needs have become more complex.
Industrial DataOps solutions like HighByte Intelligence Hub provide an answer to this complexity. The software provides a dedicated data modeling management and abstraction layer that helps users streamline their data architecture and reduce the time to deploy new systems. In fact, as companies have expanded their usage of HighByte Intelligence Hub, they’ve begun to implement deployment architectures beyond a single hub. In a recent poll of HighByte Intelligence Hub users, we asked how many instances they plan to run at a single site. The results validated the demand for a multi-hub architecture: half of the respondents expect to deploy two to five hubs per site; nearly one-quarter said they plan to use six to ten hubs per location.

Time to read: 5 minutes

Since releasing HighByte Intelligence Hub version 1.0 in January 2020, our customers have been successfully deploying solutions to simplify the integration of existing operational technology (OT) and new Industry 4.0 solutions that deliver rich information to IT, data scientists, and other stakeholders. Throughout the past year, we have focused on building out a connectivity library that enables users to connect to AWS IoT SiteWise, Azure IoT Hub and Event Hubs, REST, SQL, MQTT / Sparkplug, OPC UA, and CSV files, providing the market with the interoperability needed for Digital Transformation. In addition to connectivity, HighByte Intelligence Hub introduced a no-code approach to modeling assets, systems, processes, or systems of systems that are centrally managed and automatically transformed into a usable format for any one of our connectors. This has enabled customers to scale their plant-to-cloud initiatives in days and weeks, rather than months and years.

Time to read: 7 minutes

Update: HighByte Intelligence Hub has evolved since this blog first published in July 2021. Please visit this post to learn how the Intelligence Hub now provides a complete UNS infrastructure solution.
The unified namespace (sometimes referred to as the UNS or universal namespace) can be an elusive concept for many of us as we move to an Industry 4.0 world. At HighByte, we define the UNS as a consolidated, abstracted structure by which all business applications are able to consume real-time industrial data in a consistent manner. The benefits of a UNS include reduced time to implement new integrations, reduced effort to maintain data integrations, improved agility of integrations, access to new data, and improved data quality and security. We are often asked if HighByte Intelligence Hub is a UNS. The answer depends on your priorities and project scope. We typically see three architectural patterns for implementing the UNS. HighByte Intelligence Hub can play a key role in each approach, either by providing access and structure to the UNS or by acting as the UNS.

Time to read: 6 minutes

Data modeling is all about standardization. It enables interoperability, shows intent, determines trust, and ensures proper data governance. Given the criticality of usable data at scale for Industry 4.0, many manufacturers have turned to ISA-95—probably the most commonly recognized data-modeling standard around the world—for guidance. Created by a standards committee at the International Society of Automation, the ISA-95 specification defines in detail the electronic information exchange between manufacturing control functions and other enterprise functions, including data models and exchange definitions. The purpose of ISA-95 is “to create a standard that will define the interface between control functions and other enterprise functions based upon the Purdue Reference Model.” Per the committee, the goal is to reduce the risk, cost, and errors associated with system integration. Historically, ISA-95 has been the guide for many off-the-shelf and bespoke manufacturing execution systems (MES).
Today, ISA-95 also helps industrial organizations implement data integrations that link MES, enterprise resource planning (ERP) systems, IIoT platforms, data lakes, and analytics solutions. It also eases the implementation of a unified namespace (UNS) for enterprise data integration. The specification defines a hierarchical model for systems, detailed information models, and a data flow model for manufacturing operations management (MOM). Let’s take a look at these three key attributes in more detail and uncover how the ISA-95 specification can be applied within HighByte Intelligence Hub.

Time to read: 6 minutes

Our latest release is packed with new features and capabilities that make common Industry 4.0 use cases not just easy, but fun! Do you need to get SQL data into your Unified Namespace (UNS)? How about data from your test equipment that’s sitting around in CSV files? Maybe you’re moving away from SQL and experimenting with NoSQL alternatives because your data models are in a state of change? Even better, maybe you have your Edge-to-Cloud strategy figured out, and now you’re looking at “Cloud-to-Edge,” trying to get alerts generated by machine learning back to the factory floor. These are just a handful of the use cases we hear from customers and the new use cases we’ve enabled in HighByte Intelligence Hub version 1.4.

Time to read: 6 minutes

In my last post, “An intro to industrial data modeling”, I shared my definition of a data model and why data modeling is important for Industry 4.0. I’d like to take that a step further in this post by explaining why you need a dedicated abstraction layer for data modeling to achieve a data infrastructure that can really scale.
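As a rough illustration of the ISA-95 hierarchical model mentioned above, UNS implementations often map the equipment hierarchy onto a topic path. This is a minimal sketch, not HighByte's implementation, and all level and site names are hypothetical:

```python
# Hypothetical sketch: mapping an ISA-95-style equipment hierarchy onto a
# UNS topic path. Level names follow the common enterprise/site/area/line/cell
# pattern; the example values are invented.

ISA95_LEVELS = ["enterprise", "site", "area", "line", "cell"]

def uns_topic(**levels: str) -> str:
    """Join whichever hierarchy levels were supplied, in ISA-95 order."""
    parts = [levels[name] for name in ISA95_LEVELS if name in levels]
    return "/".join(parts)

topic = uns_topic(enterprise="Acme", site="Portland", area="Packaging",
                  line="Line1", cell="Filler")
print(topic)  # Acme/Portland/Packaging/Line1/Filler
```

A consumer subscribing anywhere in this tree then knows exactly where a value sits in the organization without consulting the source system.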
Time to read: 7 minutes
The data model forms the basis for standardizing data across a wide range of raw input data. An industrial DataOps solution like HighByte Intelligence Hub enables users to develop models that standardize and contextualize industrial data. In short, HighByte Intelligence Hub is a data hub with a modeling and transformation engine at its core.
But what exactly is a data model, and why is data modeling important for Industry 4.0? This post aims to address these questions and provide an introduction to modeling data at scale.

Time to read: 7 minutes

Based on my conversations with more than 500 manufacturing companies and integrators over the past five years, I believe the Industrial Internet of Things (IIoT) will continue to be a paramount part of the manufacturing landscape in 2021. The new year will bring a continued increase in digitalization across enterprises. While we have seen an increase in “digital transformation” initiatives among manufacturing companies for several years, the COVID-19 pandemic and the challenges it created for production, safety, remote access, and supply chain have accelerated the urgency to make digitalization a reality. I also believe IIoT projects will continue to scale because of changes we are seeing in people, processes, and technology. Here are five predictions for 2021.

Time to read: 7 minutes

Industry 4.0 solutions start with the same problem: How do I collect critical data from the factory floor? This sounds easy, but in reality, factory floors are highly heterogeneous environments. It’s common to have a newer, highly connected machine sitting next to a 30-year-old machine with no connectivity at all. This forces teams to get creative. They might use an OPC UA server for one machine, SQL for the next, and retrofit another with new sensors that publish data via REST or MQTT. Each situation is unique, and teams need flexible solutions to leverage the connectivity options they have in place today. That’s why I am excited to announce the release of HighByte Intelligence Hub version 1.3. This release is full of new capabilities that allow our customers to gather data from many sources in the factory, rapidly add context to the data, and reliably deliver it to their platforms of choice.
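A running theme in these posts is adding context to raw data through a model. As a rough sketch of the idea, a data model is a named, reusable structure that raw tag values are mapped into; the model, attribute, and tag names below are hypothetical:

```python
# Minimal sketch of an industrial data model: a reusable definition whose
# instances bind raw tag values to named, typed attributes. Names are invented.
from dataclasses import dataclass

@dataclass
class InjectionMolder:       # the model: a definition shared by all such machines
    machine_id: str
    pressure_psi: float
    running: bool

# An instance of the model, populated from one machine's raw tags.
raw_tags = {"PLC1.Press": 1450.2, "PLC1.Run": 1}
molder = InjectionMolder(
    machine_id="IM-07",
    pressure_psi=raw_tags["PLC1.Press"],
    running=bool(raw_tags["PLC1.Run"]),
)
print(molder.running)  # True
```

Every machine described by the same model presents the same attribute names and types to consumers, regardless of how its underlying tags are named.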
These additional capabilities greatly expand the connectivity options available to our customers. Here are the highlights:

Time to read: 8 minutes

How much time do you spend cleaning data? If your factory is like most connected operations, you probably have tons of raw data streaming from connected devices to existing enterprise systems, bespoke databases, and a cloud data lake. This architecture often leads to inconsistent or even unusable data for several reasons. We know the Cloud is a key tool for digital transformation. It provides the scalability and storage capacity you need to collect and interpret vast amounts of data coming from the operations level. However, by nature, cloud platforms are IT-focused tools. They structure data differently than operational systems, which means IT must spend a lot of time cleaning the data before it can be used. And if the data moves directly to different enterprise systems, multiple teams across the organization will clean the data independently, leading to different versions of the truth.
Time to read: 6 minutes
Let’s talk about getting OPC data into Microsoft Azure. When you search this phrase in Google, 90% of the results describe the same use case: streaming sensor data to the Cloud.
If your Industry 4.0 solution is streaming sensor data to the Cloud, you’re doing it wrong. Now let me explain. On the factory floor, we have machines driven by PLCs, and we typically have an OPC server connected to those PLCs that feeds data into an HMI. OPC servers and HMIs work with tags, which are discrete streams of data. For example, one tag might be for pressure and another might represent the on and off state of the machine. When cloud technology like Microsoft Azure first entered the scene, vendors created IoT gateways to connect to the OPC server and send tag streams to the Cloud in a JSON format. It was the easiest thing to do, and once that connection was made, we thought we were done.

Time to read: 4 minutes

The future of Industry 4.0 is open: open standards, open platforms, and open thinking. In today’s ecosystem, realizing the full potential of Industry 4.0 requires a mesh of products working together to fulfill each layer of the technology stack. Open standards and platforms simplify these integrations and speed up the time-to-value for Industry 4.0 solutions. That is why I am excited to announce the release of HighByte Intelligence Hub version 1.2, enabling our customers to deploy their Industry 4.0 solutions even faster by leveraging additional open standards and platforms.

Time to read: 4 minutes

Communication within a start-up is pretty straightforward. If you have a question about a new product launch, you go directly to the owner or CEO. Problems with a design flaw? Talk to your lead engineer. As that business scales, your lines of communication become more complex. You may need to send information through multiple channels to get an answer. Without an easy way to send or retrieve information, it might get lost or misinterpreted, or you may wait days for an answer. Anyone who has worked in that environment knows the inherent challenges.
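To make the contrast in the OPC post above concrete, here is a minimal sketch of the two payload shapes: the per-tag JSON messages a gateway typically streams, versus a single modeled payload. The tag, asset, and model names are hypothetical:

```python
import json

# What a tag-streaming gateway typically sends: one JSON message per tag,
# with no indication that the two tags describe the same machine.
tag_messages = [
    {"tag": "Press01.Pressure", "value": 72.4},
    {"tag": "Press01.Running", "value": 1},
]

# A modeled approach merges those streams into one named, structured payload.
def to_modeled(asset: str, model: str, messages: list) -> dict:
    values = {m["tag"].split(".")[-1].lower(): m["value"] for m in messages}
    return {"model": model, "asset": asset, "values": values}

payload = to_modeled("Press01", "StampingPress", tag_messages)
print(json.dumps(payload))
# {"model": "StampingPress", "asset": "Press01", "values": {"pressure": 72.4, "running": 1}}
```

The cloud consumer now receives one self-describing object per asset instead of reassembling context from disconnected tag streams.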
Time to read: 4 minutes
Bill is leaving Acme Manufacturing Corporation. And when Bill leaves, he will take with him a tremendous amount of the tribal knowledge that he accumulated over the last 10 years at Acme. Bill has spent the last decade building out all of the industrial data systems and all of the individual connections between these disparate systems. Bill is the only person in the entire facility who has knowledge of the custom connections and interdependencies between OT and IT systems. With Bill leaving, the team at Acme is challenged with picking up the pieces and trying to gather up all of Bill’s tribal knowledge in order to maintain connectivity and prevent system downtime. The OT and IT teams must go deep into the custom code to try to understand and replicate what Bill has done. This is a challenging and cumbersome task, especially when troubleshooting broken integrations.
Time to read: 10 minutes
Most manufacturing companies realize the benefits of leveraging industrial data to improve production and save costs, but they remain challenged as to how to scale up their pilots and small-scale tests to the plant-wide, multi-plant, or enterprise level. There are many reasons for this, including the time and cost of integration projects, the fear of exposing operational systems to cyber-threats, and a lack of skilled human resources.
At the root of all of these problems is the difficulty of integrating data streams across applications in a multi-system and multi-vendor environment, which has required some degree of custom coding and scripting. Standardizing data models, flows, and networks is hard work. Unlike an office environment with its handful of systems and databases, a typical factory can have hundreds of data sources distributed across machine controls, PLCs, sensors, servers, databases, SCADA systems, and historians—just to name a few.

Industrial DataOps provides a new approach to data integration and management. It provides a software environment for data documentation, governance, and security from the most granular level of a machine in a factory, up to the line, plant, or enterprise level. Industrial DataOps offers a separate data abstraction layer, or hub, to securely collect data in standard data models for distribution across on-premises and cloud-based applications. These four use cases illustrate how Industrial DataOps can integrate your role-based operational systems with your business IT systems as well as those of outside vendors such as machine builders and service providers.
Time to read: 5 minutes
Leveraging industrial information to make better decisions, faster, is the ultimate goal of Industry 4.0, Smart Manufacturing, IIoT, and Advanced Analytics solutions. However, as industrial information networks expand to encompass more sensors, devices, and systems, the existing data infrastructure has become overloaded. Manufacturers and other industrial companies require an Industrial DataOps solution that can handle industrial data collection, transformation, and delivery at scale. Scalability is the name of the game.
Time to read: 7 minutes
An executive at an industrial products company once told me that even though his factories are full of similar equipment, he still struggled to access meaningful data from the machines. Each of the plastic injection molding machines had a different way of presenting its data. That meant the company needed custom code for every piece of equipment to obtain meaningful insights.
It’s a common scenario in many industrial environments, where plants may have hundreds of PLCs and machine controllers on disparate machines generating operational data that is unintelligible to the data scientists who must make sense of it. This is where Industrial DataOps comes in. It provides a way to standardize data using common models, or object-oriented approaches, to integrate and manage information coming from multiple sources. Here’s a closer look at the top six signs it’s time to consider an Industrial DataOps architecture for your company.
Time to read: 14 minutes
If you know me well, then you’ve probably heard me say words matter. A shared vocabulary—and a shared understanding of a word’s meaning—is a simple but powerful tool when two bodies approach a problem from different perspectives.
Two bodies that often approach problems, projects, and processes from different perspectives are IT and Operations Technology (OT). While the industrial automation community has been writing and discussing the necessity of IT-OT convergence for nearly a decade, this functional collaboration remains a stumbling block for many industrial companies on their Industry 4.0 journeys. The good news is that the emerging concept of Industrial DataOps can provide some common ground. DataOps is a new approach to data integration and security that aims to improve data quality and reduce time spent preparing data for use throughout the enterprise. Industrial DataOps provides a toolset—and a mindset—for OT to establish “data contracts” with IT. By using an Industrial DataOps solution, OT is empowered to model, transform, and share plant floor data with IT systems without the integration and security concerns that have long vexed the collaboration.

If we see the value in IT-OT collaboration, the first step is getting these functions to speak the same language. This post aims to document key terms surrounding Industrial DataOps and provide IT and OT with a common dictionary. Some of these definitions are more technical in nature and others are more business oriented. Let’s dive in.
Time to read: 4 minutes
In my last post, “Seven steps to making your industrial data fit for purpose”, I briefly covered seven steps that are critical for manufacturers looking to scale their IIoT projects and wrangle data governance. I’d like to use this post to dive deeper into step 4, selecting your integration architecture, which requires diligence during IIoT planning.
Integration architectures fall into two camps: direct application programming interface (API) connections (application-to-application) or integration hubs (DataOps solutions).
Time to read: 6 minutes
A modern industrial facility can easily produce a terabyte of data each day. With the proliferation of sensors and the recent wave of real-time dashboarding, artificial intelligence, and machine learning technologies, we should be seeing huge productivity gains. Unplanned maintenance of assets and production lines should be obsolete.
But this is not the case. Access to data does not mean it is useful. Industrial data is very raw and must be made “fit for purpose” in order to extract its true value. Furthermore, the tools used to make the data fit for purpose must operate at the scale of an industrial facility. With these realities in mind, I’ve written a practical, seven-step guide for manufacturers and other industrial companies to make their data fit for purpose.

Time to read: 8 minutes

The manufacturing industry is experiencing a change so significant it has earned the title of Fourth Industrial Revolution. This transformation was kick-started by the need to become more data driven and then fueled by a number of recent technological advances. Early adopters in factories around the world recognize that industrial data—operations data coming from machines, processes, products, and systems on the plant floor—is gold. More users and systems want access to this data in real time to convert it into valuable information they can act on to predict machine failure, prevent downtime, and improve product quality. In fact, IDC recently projected that there will be 41.6 billion IoT devices in the field generating 79.4 zettabytes of data by 2025. These devices include machines, sensors, and cameras as well as industrial tools. It’s an immense, even overwhelming, volume of data. How can companies leverage it effectively?

Updated: 03/25/2020
Time to read: 9 minutes

Earlier this year, my colleague John Harrington wrote an article for Control Engineering that I think is worth sharing here as well. The article introduces a concept and process that gained popularity as early as the 1970s: Extract, Transform, Load—more commonly known as ETL.
An ETL system extracts data from the source systems, enforces data quality and consistency standards, conforms data so that separate sources can be used together, and finally delivers data in a presentation-ready format so that application developers can build applications and end users can make decisions (Kimball and Caserta, 2004). So why are we still talking about this acronym 50 years later? Because the unique challenges of working with industrial operations data demand a new look at an old concept.
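The Kimball and Caserta definition quoted above can be sketched in a few lines. This is a toy illustration with in-memory lists standing in for real source systems and a warehouse; all field names and values are invented:

```python
# Toy ETL sketch following the steps named above: extract from sources,
# enforce quality, conform disparate shapes and units, deliver to a target.
# Source data and field names are hypothetical.

source_a = [{"temp_f": 72.5, "line": "1"}, {"temp_f": None, "line": "1"}]
source_b = [{"temperature_c": 21.0, "line": "2"}]

def extract():
    """Pull raw rows from each source system."""
    return list(source_a), list(source_b)

def transform(rows_a, rows_b):
    """Enforce quality (drop nulls) and conform both sources to one shape/unit."""
    clean = [r for r in rows_a if r["temp_f"] is not None]
    conformed = [{"line": r["line"], "temp_c": round((r["temp_f"] - 32) * 5 / 9, 1)}
                 for r in clean]
    conformed += [{"line": r["line"], "temp_c": r["temperature_c"]} for r in rows_b]
    return conformed

def load(rows, target):
    """Deliver the presentation-ready rows to the target store."""
    target.extend(rows)

warehouse = []
load(transform(*extract()), warehouse)
print(warehouse)  # [{'line': '1', 'temp_c': 22.5}, {'line': '2', 'temp_c': 21.0}]
```

The "new look" the article argues for keeps these same steps but moves them to the industrial edge, where the raw tag data originates.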