
Configure and Move Tailored Datasets with Pipelines

Pipelines are the center of data movement and payload orchestration in the Intelligence Hub. Use Pipelines to curate, apply logic, and optimize datasets for specific applications using a simple but flexible graphical user interface. Build out stages in a pipeline to model, filter, buffer, or transform data flows to optimize delivery for consuming applications and device nodes.

HighByte Intelligence Hub Pipelines

Oversee data movement between systems with Pipelines.

Data Motion

Orchestrate the movement of data on time intervals or logic-based events. Delay the delivery of data or buffer data based on time or size. Maintain the state of data that has been moved.
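The state-maintenance idea behind event-based delivery can be pictured with a small JavaScript sketch. The function and variable names below are invented for illustration and are not the Intelligence Hub's stage API; the snippet simply remembers the last value moved per key and suppresses repeats, which is the essence of on-change (report-by-exception) delivery.

```javascript
// Illustrative sketch only: names are invented, not Intelligence Hub APIs.
// Remember the last value forwarded per key and only move values that changed.
const lastSent = new Map();

function forwardOnChange(key, value) {
  if (lastSent.get(key) === value) {
    return null; // unchanged: suppress delivery
  }
  lastSent.set(key, value);
  return { key, value };
}

console.log(forwardOnChange("line3/speed", 42)); // forwarded
console.log(forwardOnChange("line3/speed", 42)); // null, no change
console.log(forwardOnChange("line3/speed", 45)); // forwarded
```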

Data Processing

Sequentially transform data with pre-built processing stages, custom expressions, or even third-party JavaScript libraries. Process complex event streams of data structures with modeling, validation, and end-to-end observability. Tailor modeled data to meet the needs of multiple consuming applications and services.
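Because transform stages accept custom JavaScript expressions, a reshaping step can be pictured as a small function over the incoming payload. The function name `processEvent` and the field names below are hypothetical examples for illustration, not the product's built-in API.

```javascript
// Hypothetical example of the kind of logic a custom transform expression might apply.
// Function and field names are illustrative assumptions, not Intelligence Hub APIs.
function processEvent(event) {
  const tempC = (event.temperatureF - 32) * 5 / 9; // normalize units
  return {
    machineId: event.machineId,
    timestamp: new Date(event.timestamp).toISOString(),
    temperatureC: Number(tempC.toFixed(2)),
    overTemp: tempC > 85 // derived flag a later conditional stage could act on
  };
}

// Sample payload a pipeline trigger might deliver:
console.log(processEvent({ machineId: "press-07", timestamp: 1714406400000, temperatureF: 190 }));
```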

Data I/O

Compose data extraction routines from simple input reads to sophisticated parameter-based queries, multi-stage lookups, cross references, payload appending, and more. Publish data to destination systems and dynamically drive schema, topics, keys, and identifiers. Define success/failure criteria of an integration and enforce these criteria with custom error handling.
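To make dynamically driven topics and parameter-based queries concrete, here is a hedged JavaScript sketch. The topic layout, field names, and functions are assumptions made for this example only.

```javascript
// Illustrative only: payload values drive a publish topic and the parameters of a
// lookup query. The topic structure and field names are assumptions.
function buildTopic(payload) {
  return `site/${payload.plant}/${payload.line}/${payload.machineId}/state`;
}

function buildQueryParams(payload) {
  // Parameters for a hypothetical work-order lookup keyed by machine and shift start.
  return { machine: payload.machineId, since: payload.shiftStart };
}

const payload = { plant: "plant1", line: "line3", machineId: "press-07", shiftStart: "2024-05-01T06:00:00Z" };
console.log(buildTopic(payload));       // "site/plant1/line3/press-07/state"
console.log(buildQueryParams(payload)); // { machine: "press-07", since: "2024-05-01T06:00:00Z" }
```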

Enable complex data processing with the Intelligence Hub for a variety of use cases.

Simplify Data Pipelines

Moving contextualized data continues to be a key pain point for industrial organizations. Solve this pain point with Pipelines: read data from an input or instance source and publish it to one or many target connections. Pipelines can be triggered by events or on intervals, executing each stage in sequence. Replay pipeline runs to inspect data transformations from stage to stage, react to any errors, and monitor pipeline and stage statistics for delays.

Curate Payloads

Most target systems have limitations on how they can process inbound data. They may be incapable of parsing or filtering structured data. They may consume multiple records in batches or files. Use Pipelines to dynamically break up objects and arrays, discard unnecessary elements to facilitate easy consumption, and buffer data on time or record count. Publish as JSON payloads or as CSV and Parquet files—or compress those files prior to delivery. This enables target systems to efficiently consume industrial data regardless of how source systems produce and transmit it.
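The breakup, size-buffer, and file-formatting ideas can be sketched in plain JavaScript. The functions below are illustrative stand-ins for the built-in stages, with invented record shapes.

```javascript
// Illustrative stand-ins for breakup, size-buffer, and CSV formatting.
// Record shapes and function names are invented for the example.
function breakup(batch) {
  // One event per record instead of a single array payload.
  return batch.records.map(r => ({ ...r, source: batch.source }));
}

function bufferByCount(events, count) {
  // Group events into fixed-size batches for targets that ingest bulk records or files.
  const batches = [];
  for (let i = 0; i < events.length; i += count) {
    batches.push(events.slice(i, i + count));
  }
  return batches;
}

function toCsv(batch) {
  const header = Object.keys(batch[0]).join(",");
  const rows = batch.map(r => Object.values(r).join(","));
  return [header, ...rows].join("\n");
}

const incoming = {
  source: "historian",
  records: [{ tag: "flow", value: 4.2 }, { tag: "pressure", value: 88 }]
};
console.log(bufferByCount(breakup(incoming), 2).map(toCsv));
```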

Engineer Sequential Stages and Conditional Actions

Some datasets must be sourced from multiple systems, where the data returned by one system drives the subsequent read. Based on the values sourced, pipeline data may follow one or more conditional paths, each with its own structuring, transformation, filtering, and writes to target systems. Pipelines can also employ metadata to drive dynamic reads and tailor the presentation of data to the systems consuming it.
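A hedged sketch of this conditional routing, written as ordinary JavaScript rather than the built-in Switch stage; the status values and branch names are invented for illustration.

```javascript
// Illustrative conditional routing: the branch a payload takes depends on a value
// sourced earlier in the pipeline. Status codes and branch names are invented.
function route(event) {
  switch (event.status) {
    case "running":
      return { path: "metrics", payload: { machineId: event.machineId, oee: event.oee } };
    case "faulted":
      return { path: "alerts", payload: { machineId: event.machineId, code: event.faultCode } };
    default:
      return { path: "review", payload: event }; // unrecognized states go to a holding target
  }
}

console.log(route({ machineId: "press-07", status: "faulted", faultCode: "E-113" }));
```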

Model Data in Pipelines

Industrial data is not only produced as primitive tags or points, but also as complex structures. Instead of decomposing these structures and explicitly mapping them into model instances, Pipelines can validate and re-shape incoming data structures to adhere to model definitions. This is ideal for enforcing data quality or modeling many data sources that produce similar data structures with subtle differences.
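The validate-and-reshape idea can be approximated with a small JavaScript check against a model definition. The model format shown here is invented for illustration and is not the Intelligence Hub's modeling schema.

```javascript
// Illustrative validation against a simple model definition. The model format is
// invented for the example, not the Intelligence Hub's modeling schema.
const machineStateModel = {
  machineId: "string",
  state: "string",
  goodCount: "number",
  scrapCount: "number"
};

function validate(payload, model) {
  const errors = Object.entries(model)
    .filter(([attr, type]) => typeof payload[attr] !== type)
    .map(([attr, type]) => `${attr}: expected ${type}`);
  return { valid: errors.length === 0, errors };
}

console.log(validate({ machineId: "press-07", state: "running", goodCount: 120 }, machineStateModel));
// -> { valid: false, errors: [ "scrapCount: expected number" ] }
```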

Manipulate data with these Pipeline Stages.

Common stages: Breakup, Filter, Flatten, Model, Model Validation, Size Buffer, Timed Buffer, Transform
Control stages: On Change, Switch
File Format stages: CSV, Gzip, JSON, Parquet, Zip
I/O stages: Read, Smart Query, Write, Write New
Trigger stages: Event, Flow, Polled

Pipeline Features and Functionality

Use HighByte Intelligence Hub to complete the following tasks:

Trigger pipeline execution based on an event or polling rate
Read inputs and instances or query the Namespace to build the pipeline value payload
Model, filter, buffer, transform, format, and compress data for the unique data consumption needs of target systems
Use the on-change stage to enable event-based delivery and report-by-exception
Persist variables for the duration of a manufacturing event with state management capabilities
Track individual event executions at a granular level, including execution time and success/failure
Use the switch stage to apply switch statement logic within a Pipeline; in the event of a failed write, easily define how to handle and remediate the error with conditional logic
Employ metadata and logic in sequenced stages to dynamically shape the presentation and delivery of data
Compose custom transformation stages with JavaScript expressions to satisfy advanced use cases
Manage and monitor Pipeline data processing stages, status, performance, and individual event executions in real time
Observe state and errors at a glance for each Pipeline stage
Monitor performance with high-level metrics tabulating completions, queues, errors, and more

Ready to try HighByte Intelligence Hub?

Join the free trial program to get hands-on access to all the features and functionality within HighByte Intelligence Hub and start testing the software in your unique environment.