The Industrial DataOps Solution for Industry 4.0

Contextualized data is essential for Industry 4.0. Deploy HighByte Intelligence Hub at the Edge to access, model, transform, and prepare plant floor data for analysis in the Cloud.

HighByte Intelligence Hub

HighByte Intelligence Hub is an Industrial DataOps software solution designed specifically for industrial data modeling, orchestration, and governance.

Built for Industrial Data

HighByte Intelligence Hub was built for the unique qualities of industrial data. The software securely connects devices, files, databases, and systems via open standards and native connections. Use the interface to model streaming data in real time, normalize and standardize data points and data models inherent to diverse machinery, and add context to data payloads that otherwise lack descriptions. Tap into real-time and asset model data from a variety of edge data sources, including machine data, transactional data, and time series (historical) data.

Designed for Scale

HighByte Intelligence Hub was designed for scale. Simplify and accelerate the modeling of tens of thousands of data points from PLCs and machine controllers with reusable models that transform raw data into complex, useful information. Import and export template definitions to quickly replicate common datasets across assets. Efficiently deliver contextualized and correlated information to the applications that require it.

Ideal for Operations

HighByte Intelligence Hub is an ideal solution for manufacturers and other industrial companies because the software was designed for Operational Technology (OT) teams. The platform-agnostic solution runs on-premises at the Edge, scales from embedded to server-grade computing platforms, and offers a code-free user interface. Administrators can network distributed Intelligence Hubs through a single management portal and deploy hubs without downtime.

Pipelines
  • Enable complex data processing
  • Monitor data pipeline health
  • Publish cyclic, event, and batch datasets to targets
Connections
  • Establish inputs and outputs to OT and IT systems
  • Curate streaming, transactional, and historical data
Namespaces
  • Visually design, define structure, and organize contents of a namespace
  • Query across the namespace based on its hierarchy and contents
Enterprise
  • Scale with multi-hub configuration and management
  • Apply CI/CD best practices to data automation with advanced IT features
  • Deploy at the edge, in the cloud, or in containers
Conditions
  • Run calculations, reduce noise, and apply logic to raw data
  • Leverage global functions to define reusable JavaScript snippets
Models
  • Build and import models and instances of products, assets, processes, and systems
  • Standardize, contextualize, and normalize data attributes and payloads across the enterprise

Connect, Condition, Model, and Orchestrate

Codeless Integration
Connections
Connect to inputs and publish outputs with a few clicks—no scripting involved.
Data Conditioning
Conditions
Create aggregates, deadbands, and custom conditions to transform raw data.
Data Modeling
Models
Develop models that standardize and contextualize industrial data.
Orchestrate
Pipelines
Orchestrate the configuration and movement of datasets.

Measurable Benefits for OT, IT, and Line of Business

Accelerate analytics and other Industry 4.0 use cases with a digital infrastructure solution built for scale.
  • Reduce system integration time from months to hours
  • Improve data curation and preparation for AI and ML applications
  • Scale operations metrics and analytics across the enterprise
  • Reduce information wait time for business functions
  • Eliminate time spent troubleshooting broken integrations
  • Empower operators with insights from the Cloud
  • Improve system-wide security and data governance
  • Meet system integrity and regulatory traceability requirements
  • Reduce Cloud ingest, processing, and storage costs and complexity

HighByte Intelligence Hub Solution Brief
VERSION 4.0

Download the Solution Brief to learn more about critical features, measurable benefits, use cases, and technical specifications.

The Features You Need for an Agile Infrastructure

Codeless Integration

Collect and publish data over open standards and native connections, eliminating the need for custom-coded integrations. Easily configure and manage multiple connections and their respective inputs and outputs within the script-free interface. Collect data from SQL and REST source systems using dynamic requests that leverage inputs from other systems. Quickly integrate data from specialty systems and devices. Merge data from multiple systems into a complex modeled payload.
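
As a rough illustration of the dynamic-request idea, the TypeScript sketch below shows a value read from one source (a PLC batch ID) parameterizing a lookup in a second system, with the results merged into a single payload. The field names and the lookupBatch helper are hypothetical stand-ins, not Intelligence Hub APIs.

// Illustrative only: a PLC reading drives a lookup elsewhere, and the two
// results are merged into one payload for delivery to a target system.
const plcReading = { batchId: "B-2291", temperatureC: 78.4 };

// Stand-in for a parameterized SQL or REST request such as:
//   SELECT product, target_temp_c FROM batches WHERE batch_id = ?
async function lookupBatch(batchId: string) {
  return { product: "Widget-A", targetTempC: 80.0, batchId };
}

async function buildPayload() {
  const batch = await lookupBatch(plcReading.batchId);
  return {
    batchId: plcReading.batchId,
    product: batch.product,
    temperatureC: plcReading.temperatureC,
    deviationC: plcReading.temperatureC - batch.targetTempC,
  };
}

buildPayload().then((p) => console.log(JSON.stringify(p, null, 2)));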

Data Conditioning

Collect and condition raw input data, then pass conditioned data to model instances or pipelines. Filter data through a deadband condition to reduce jitter in a source sensor or measurement. Filter data through an aggregate to buffer higher-resolution data and provide statistical calculations (average, min, max, count, and delta) at a slower rate that characterizes the specified time period. Manipulate and transform raw input data into a usable format. Alarm on bad-quality or stale data. Use the built-in transformation engine to standardize and normalize data, resolving mismatches between applications and enabling comparison. The transformation engine lets you perform calculations, execute logic to define new “virtual property” values, and decompose complex strings at the Edge to improve data usability and reduce transmission volume. Define global JavaScript functions or load third-party JavaScript or Node packages, then use them in any expression within the Intelligence Hub.
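
The deadband and global-function ideas can be sketched in plain code. The snippet below is illustrative TypeScript only, not Intelligence Hub configuration; makeDeadband and fahrenheitToCelsius are hypothetical names standing in for a deadband condition and a reusable global function.

// Pass a new value through only when it moves more than `band` away from the
// last reported value, suppressing jitter in a source sensor or measurement.
function makeDeadband(band: number) {
  let lastReported: number | undefined;
  return (value: number): number | undefined => {
    if (lastReported === undefined || Math.abs(value - lastReported) >= band) {
      lastReported = value;
      return value;      // change is large enough: report it
    }
    return undefined;    // inside the deadband: drop the sample
  };
}

// Hypothetical reusable helper, comparable to a global JavaScript function
// available to any expression (here, simple unit normalization).
const fahrenheitToCelsius = (f: number): number => ((f - 32) * 5) / 9;

const deadband = makeDeadband(0.5);
for (const sample of [72.0, 72.1, 72.8, 72.9, 74.2]) {
  const passed = deadband(sample);
  if (passed !== undefined) {
    console.log(`report ${fahrenheitToCelsius(passed).toFixed(2)} C`);
  }
}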

Data Modeling

Represent machines, products, processes, and systems with intelligent data models suited to your needs. Contextualize thousands of industrial data points by merging them with information from other systems, adding metadata, standardizing data attribute names and lists, and normalizing units of measure. Model hundreds of common assets in minutes with templatized inputs and instances, and manage models through an intuitive attribute tree that enables model nesting.
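
As a conceptual sketch of what a reusable model buys you, the TypeScript below defines a generic pump model and binds two instances whose raw sources use inconsistent names and units. The structure is illustrative and does not mirror the Intelligence Hub's internal model format.

// Conceptual sketch of a reusable asset model and two instances.
interface PumpModel {
  assetId: string;
  site: string;
  flowRateLpm: number;           // standardized name and unit (liters per minute)
  dischargePressureBar: number;
  running: boolean;
}

// Raw payloads from two controllers with inconsistent names and units.
const pump1Raw = { FLOW_GPM: 12.4, PRESS_PSI: 43.5, RUN: 1 };
const pump2Raw = { flow_l_min: 46.9, pressure_bar: 3.1, state: "RUNNING" };

// Per-instance bindings normalize units and standardize attribute names.
const pump1: PumpModel = {
  assetId: "PUMP-001",
  site: "Plant-A",
  flowRateLpm: pump1Raw.FLOW_GPM * 3.785,
  dischargePressureBar: pump1Raw.PRESS_PSI * 0.0689,
  running: pump1Raw.RUN === 1,
};

const pump2: PumpModel = {
  assetId: "PUMP-002",
  site: "Plant-B",
  flowRateLpm: pump2Raw.flow_l_min,
  dischargePressureBar: pump2Raw.pressure_bar,
  running: pump2Raw.state === "RUNNING",
};

console.log(JSON.stringify([pump1, pump2], null, 2));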

Data Orchestration

Create data flows for raw data, modeled information, or files between connections on an interval or event basis. Enable store and forward to buffer data to disk if the target connection is lost. Manage data pipelines within HighByte Intelligence Hub, monitoring state and processing metrics at each step. See and be alerted to connection failures and easily monitor the Intelligence Hub at scale using third-party observability platforms.
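
A minimal sketch of the store-and-forward pattern, assuming an in-memory buffer for brevity (the Intelligence Hub buffers to disk): payloads queue while the target is unreachable and flush in order once it recovers. The class and its methods are illustrative, not product APIs.

type Payload = { topic: string; body: unknown; queuedAt: number };

class StoreAndForward {
  private buffer: Payload[] = [];

  constructor(
    private send: (p: Payload) => Promise<void>,
    private isConnected: () => boolean,
  ) {}

  async publish(topic: string, body: unknown): Promise<void> {
    const payload: Payload = { topic, body, queuedAt: Date.now() };
    if (!this.isConnected()) {
      this.buffer.push(payload);   // target lost: hold until it returns
      return;
    }
    await this.flush();            // drain anything queued first, in order
    await this.send(payload);
  }

  async flush(): Promise<void> {
    while (this.buffer.length > 0 && this.isConnected()) {
      await this.send(this.buffer.shift()!);
    }
  }
}

// Example wiring with a stub sender and an always-connected target.
const sf = new StoreAndForward(async (p) => console.log("sent", p.topic), () => true);
void sf.publish("Plant-A/Line-1/oee", { value: 0.87 });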

Data Processing

Use the graphical Pipelines builder to curate complex data payloads for everything from MQTT brokers to historians to data warehouses and track the transformation of data through the pipeline. Use stages to read, filter, buffer, transform, format, and compress payloads. Read within and across Namespace hierarchies using the Smart Query stage. Use the On Change stage to enable event-based delivery and report-by-exception of any data source. Use the Switch stage to introduce conditional logic to your flow and the Model Validation stage to assess incoming data payloads against a model definition.
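
A rough TypeScript sketch of the report-by-exception and conditional-routing ideas behind the On Change and Switch stages; the functions below are illustrative stand-ins, not the Pipelines builder itself.

type Reading = { tag: string; value: number };

// "On Change" idea: report-by-exception, forwarding only values that changed.
function onChange() {
  const last = new Map<string, number>();
  return (r: Reading): Reading | null => {
    if (last.get(r.tag) === r.value) return null;
    last.set(r.tag, r.value);
    return r;
  };
}

// "Switch" idea: route payloads down different branches by a condition.
function switchStage(r: Reading): "alarms" | "telemetry" {
  return r.value > 100 ? "alarms" : "telemetry";
}

const changed = onChange();
const input: Reading[] = [
  { tag: "temp", value: 98 },
  { tag: "temp", value: 98 },   // dropped by On Change
  { tag: "temp", value: 104 },  // routed to "alarms" by Switch
];

for (const r of input) {
  const out = changed(r);
  if (out) console.log(switchStage(out), JSON.stringify(out));
}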

Namespaces

Namespaces provides a dedicated space to visually organize datasets and their relationships. Use Namespaces to design and govern a Unified Namespace (UNS) or any destination system with a hierarchical namespace. Catalog data sources and modeled datasets into a logical hierarchy for seamless integration with many data-consuming entities. Take a modeled, contextualized representation of your operations and use it as the basis to drive the topic namespace in an MQTT broker, the asset hierarchy in a historian, the tag structure in a SCADA or IIoT platform, and more.
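
As an illustration of a namespace hierarchy driving a topic structure, the sketch below flattens a small ISA-95-style tree into MQTT topic paths such as Acme/Plant-A/Packaging/Line-1. The hierarchy levels and names are examples, not a required layout.

interface NamespaceNode {
  name: string;
  children?: NamespaceNode[];
}

const uns: NamespaceNode = {
  name: "Acme",
  children: [
    {
      name: "Plant-A",
      children: [
        { name: "Packaging", children: [{ name: "Line-1" }, { name: "Line-2" }] },
      ],
    },
  ],
};

// Flatten the hierarchy into topic paths, one per leaf node.
function topics(node: NamespaceNode, prefix = ""): string[] {
  const path = prefix ? `${prefix}/${node.name}` : node.name;
  if (!node.children || node.children.length === 0) return [path];
  return node.children.flatMap((c) => topics(c, path));
}

console.log(topics(uns));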

MQTT Broker

Use the embedded MQTT broker, tightly integrated with the Intelligence Hub’s core data integration and contextualization capabilities, to design a namespace that provides a real-time view of the state of the business. The MQTT v3.1.1 and v5 compliant broker was built from the ground up by HighByte. It supports both JSON and Sparkplug payloads and can be quickly enabled in the UI. The broker is a critical component to rapidly building a local unified namespace (UNS) inside the factory.
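
Any standard MQTT client can exercise the broker. The sketch below uses the open-source mqtt.js package to publish and subscribe with a JSON payload; the broker address, topic, and payload fields are assumptions to adapt to your environment.

// Minimal MQTT client sketch; adjust the URL to wherever the broker listens.
import mqtt from "mqtt";

const client = mqtt.connect("mqtt://localhost:1883");

client.on("connect", () => {
  client.subscribe("Acme/Plant-A/Packaging/Line-1");
  client.publish(
    "Acme/Plant-A/Packaging/Line-1",
    JSON.stringify({ oee: 0.87, state: "RUNNING", timestamp: Date.now() })
  );
});

client.on("message", (topic, message) => {
  console.log(topic, JSON.parse(message.toString()));
});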

UNS Client

Use the UNS client to visually discover and interrogate contents residing in any MQTT broker, negating the need for external testing clients. Simply select a connection and instantly visualize the namespace. The UNS Client can automatically detect and visualize message payloads including JSON, Sparkplug, text, and raw binary, and decode Protobuf to make payloads human readable for Sparkplug users. In addition to topic and message inspection, the UNS Client can also publish messages to topics.

REST Data Server

The REST Data Server acts as an API gateway for industrial data residing in OT systems, so any application or service with an HTTP client can securely request OT data in raw or modeled form directly from the Intelligence Hub—without requiring domain knowledge of the underlying systems. This API securely exposes the Intelligence Hub’s connections, models, and instances as well as the underlying values. Connect to the REST Data Server to access the Intelligence Hub as a transactional, request-and-response interface and programmatically browse the full Industrial DataOps infrastructure.
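
Because the server speaks plain HTTP, a few lines of client code are enough to request data. The sketch below uses fetch from any modern JavaScript/TypeScript runtime; the base URL, the /instances/PUMP-001 path, and bearer-token authentication are hypothetical placeholders, not the documented API surface.

// Illustrative HTTP client: request a modeled instance from the gateway.
async function readInstance(baseUrl: string, token: string) {
  const response = await fetch(`${baseUrl}/instances/PUMP-001`, {
    headers: { Authorization: `Bearer ${token}` },  // assumes token auth
  });
  if (!response.ok) {
    throw new Error(`request failed: ${response.status}`);
  }
  return response.json();  // modeled payload, e.g. the pump example above
}

readInstance("https://hub.example.com/data", "<token>")
  .then((payload) => console.log(payload))
  .catch((err) => console.error(err));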

Edge Deployment

Run HighByte Intelligence Hub on your choice of lightweight hardware platforms, including single-board computers, industrial switches, IoT gateways, and industrial data servers. Install as a standalone software package or run as a Docker image to rapidly deploy and upgrade system software components.