Tailor Modeled Data for Target Systems with Pipelines

Pipelines provide the ability to curate and optimize data payloads for specific applications using a simple but flexible graphical user interface. Build out stages in a Pipeline to buffer, transform, or format data flows and optimize delivery for consuming application and device nodes. Easily adapt and reuse Pipelines instead of maintaining disparate and overlapping data models.

HighByte Intelligence Hub Pipelines

Why Pipelines

When connections are established for an integration, they are created with a specific intent in mind, influencing how data is modeled and moved over time. The original intent might have been to simply consume source data and then move it as-is into a target system. Or perhaps the intent was to blend data with additional context or transform it into some logical structure. But as new target systems are introduced, the project scope likely needs to evolve to address the unique requirements or limitations of how each of those systems consumes data.

Sometimes, the limitation lies in the model itself. Other times, it lies in the presentation and delivery of the model.

In practice, context is introduced as source data traverses organizational hierarchy and systems of record. This data and context, which is governed by a model, needs to be consumable by a wide range of devices and applications. Not only can HighByte Intelligence Hub ingest and model data with context, but it can flexibly adapt its presentation and delivery for target systems. Pipelines retain the semantics of a model while transforming the presentation and delivery to the unique needs of the systems consuming it. To summarize, Pipelines make modeled, contextual data more accessible to more nodes.

Use Cases

Break Up Complex Payloads

Some target systems have limitations in how they process data. They may need to consume data in a curated form, or they may be incapable of parsing or filtering structured data. With Pipelines, Intelligence Hub can dynamically break up objects and arrays and discard unnecessary elements to facilitate easy consumption by application nodes. Instead of constructing one-off integrations for each consumer, Pipelines employ metadata to dynamically curate the presentation of data to the needs of the systems consuming it.
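
As a rough sketch of the idea (not the Hub's actual stage configuration), breaking up a composite payload into per-record messages and dropping unneeded elements looks something like the following, with hypothetical field names:

```javascript
// A composite payload with an array of cell readings is split into one
// record per cell, and elements the target doesn't need are discarded.
const payload = {
  line: "Line1",
  timestamp: "2024-05-01T12:00:00Z",
  internalDiagnostics: {}, // not needed downstream; dropped below
  cells: [
    { id: "CellA", temperature: 71.2, vibration: 0.03 },
    { id: "CellB", temperature: 68.9, vibration: 0.05 },
  ],
};

// Emit one flat record per cell; the diagnostics block is discarded entirely.
const records = payload.cells.map((cell) => ({
  line: payload.line,
  timestamp: payload.timestamp,
  ...cell,
}));

console.log(records);
// [ { line: 'Line1', timestamp: '...', id: 'CellA', temperature: 71.2, ... }, ... ]
```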

Buffer Data

Beyond the presentation of data to target systems, users may also need to consider how that data is delivered. Some systems consume records in batches. Some systems reside on constrained or variable-cost infrastructure that would benefit from ingesting data at a specific cadence. Pipelines in the Intelligence Hub can buffer the delivery of industrial data based on time or record count. This enables efficient consumption by target systems regardless of how source systems produce and transmit industrial data.
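
The buffering concept can be sketched as follows; the publish() callback, the thresholds, and the record shape are all hypothetical stand-ins for a real target connection, not the Hub's API:

```javascript
// Hold records until either maxCount is reached or flushIntervalMs elapses,
// then deliver the whole batch at once.
function createBuffer({ maxCount, flushIntervalMs, publish }) {
  let records = [];
  const flush = () => {
    if (records.length === 0) return;
    publish(records); // deliver the accumulated batch as one payload
    records = [];
  };
  const timer = setInterval(flush, flushIntervalMs);
  return {
    add(record) {
      records.push(record);
      if (records.length >= maxCount) flush(); // count threshold reached
    },
    stop() {
      clearInterval(timer);
      flush(); // deliver anything still buffered
    },
  };
}

// Usage: batches of up to 100 records, or every 5 seconds, whichever comes first.
const buffer = createBuffer({
  maxCount: 100,
  flushIntervalMs: 5000,
  publish: (batch) => console.log(`delivering ${batch.length} records`),
});
buffer.add({ tag: "Temperature", value: 71.2 });
buffer.stop();
```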

Publish Into a Single Payload

Some systems are unable to consume structured data. Instead, they consume data as “flat” lists of name-value pairs. With Pipelines, the Intelligence Hub can “flatten” data structures into a single payload for target systems while preserving model context within the topic or attribute names.
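
A minimal sketch of the flattening idea, assuming dotted attribute names carry the model context (this illustrates the concept, not the Hub's implementation):

```javascript
// Recursively walk a modeled payload and emit name-value pairs whose
// names preserve the model hierarchy as a dotted path.
function flatten(obj, prefix = "", out = {}) {
  for (const [key, value] of Object.entries(obj)) {
    const path = prefix ? `${prefix}.${key}` : key;
    if (value !== null && typeof value === "object" && !Array.isArray(value)) {
      flatten(value, path, out); // recurse into nested objects
    } else {
      out[path] = value; // leaf: emit a name-value pair
    }
  }
  return out;
}

const modeled = {
  Line1: { CellA: { temperature: 71.2, vibration: 0.03 } },
};
console.log(flatten(modeled));
// { 'Line1.CellA.temperature': 71.2, 'Line1.CellA.vibration': 0.03 }
```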

Persist Data

Some manufacturing events, such as downtime or infrastructure updates, can last hours, days, or weeks. When processes resume, the previous event must be available to track machine status and production. The Intelligence Hub is stateful, making it capable of persisting variables from one run to the next, independent of the execution of a Pipeline. When long manufacturing events occur, Pipeline states can ensure that processes resume without incident.
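
Conceptually, stateful persistence across runs works like the following sketch, which assumes a local JSON file as the state store purely for illustration (the Hub manages Pipeline state internally; the file path and event shape here are hypothetical):

```javascript
// Persist the active event so it survives restarts and long gaps between runs.
const fs = require("fs");
const STATE_FILE = "pipeline-state.json"; // hypothetical state store

function loadState() {
  try {
    return JSON.parse(fs.readFileSync(STATE_FILE, "utf8"));
  } catch {
    return { activeEvent: null }; // first run: no prior state exists
  }
}

function saveState(state) {
  fs.writeFileSync(STATE_FILE, JSON.stringify(state));
}

// On each run, resume the open downtime event if one was recorded earlier.
const state = loadState();
if (state.activeEvent) {
  console.log(`resuming event started at ${state.activeEvent.start}`);
} else {
  state.activeEvent = { type: "downtime", start: new Date().toISOString() };
  saveState(state);
}
```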

Pipelines Features and Functionality

Use HighByte Intelligence Hub to complete the following tasks:

Filter, buffer, transform, format, and compress modeled data for the unique data consumption needs of target systems
Use the on-change stage to enable event-based delivery and report-by-exception (see the sketch after this list)
Persist variables for the length of a manufacturing event with state management capabilities
Track individual event executions at a granular level, including execution time and success/failure
Use the switch stage to employ switch-statement logic within a Pipeline; in the event of a failed write, easily define how to handle and remediate the error with conditional logic
Employ metadata and logic in sequenced stages to dynamically shape the presentation and delivery of data
Compose custom transformation stages with JavaScript expressions to satisfy advanced use cases
Manage and monitor Pipeline data processing stages, status, performance, and individual event executions in real time
Observe state and errors at a glance for each Pipeline stage
Monitor performance with high-level metrics tabulating completions, queues, errors, and more
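
As referenced in the list above, the on-change idea reduces to forwarding a value only when it differs from the last one seen. A minimal sketch of report-by-exception, with a hypothetical forward() callback standing in for a target connection:

```javascript
// Track the last value seen per tag and suppress unchanged values.
const lastSeen = new Map();

function onChange(tag, value, forward) {
  if (lastSeen.get(tag) !== value) {
    lastSeen.set(tag, value);
    forward(tag, value); // deliver only when the value actually changed
  }
}

const forward = (tag, value) => console.log(`${tag} -> ${value}`);
onChange("MachineState", "RUNNING", forward); // forwarded (first value)
onChange("MachineState", "RUNNING", forward); // suppressed (no change)
onChange("MachineState", "FAULTED", forward); // forwarded (changed)
```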

Ready to try HighByte Intelligence Hub?

Join the free trial program to get hands-on access to all the features and functionality within HighByte Intelligence Hub and start testing the software in your unique environment.