


More data, less clicks: Meet HighByte Intelligence Hub version 2.4

Torey Penrod-Cambra
Torey Penrod-Cambra is the Chief Communications Officer of HighByte, focused on the company's messaging strategy, market presence, and operational execution. Her areas of responsibility include marketing, public relations, analyst relations, investor relations, and people operations. Torey applies an analytical, data-driven approach to marketing that reflects her academic achievements in both chemistry and ethics. Torey received a Bachelor of Arts in Chemistry from Miami University in Oxford, Ohio and completed post-graduate studies in Bioethics and Health Law at the University of Pittsburgh.
It’s been inspiring to see the wide variety of ways customers are using HighByte Intelligence Hub to conquer Industry 4.0 use cases that previously seemed impossible. From creating contextualized electronic batch reports to improving first run yield, predicting asset maintenance, performing real-time analytics on UNS data, and gaining enterprise-wide performance visibility across multiple sites with different systems—customers are using the Intelligence Hub in increasingly sophisticated ways.
 
With more sophisticated use cases comes the need for more sophisticated tools for scalability and connectivity in the Intelligence Hub.
 
That’s why I am so excited to introduce HighByte Intelligence Hub version 2.4. With new instance and input templates and parameters, global functions, custom conditions, OPC collection, and more, the latest release takes a giant leap forward in terms of scalability and data pipeline automation.
 
I sat down with my friend and colleague John Harrington, Chief Product Officer at HighByte, to learn more about version 2.4 and what these new capabilities will mean for our customers.

Q: Thanks for breaking down the latest release for me, John. Can you give me an overview of what the release includes?

A: I’d be happy to, Torey. “More data, less clicks” has internally become the theme of this release. Version 2.4 includes several new capabilities that let users define common, reusable components across the product, improving speed of deployment and maintainability.
 
The release also includes several new connections—like Modbus, Amazon S3 and Redshift, and Azure Blob Storage—and improves many existing connections.
 

Q: What are templates and parameters, and why are they useful?

A: Prior versions of the Intelligence Hub required users to define Instances for every modeled asset and Inputs for every source of data. The Intelligence Hub version 2.4 allows a single Instance or Input to represent hundreds of assets using templates and parameters.
 
As an example, suppose an OPC UA server has 100 pumps, each with the same 5 tags. A single instance template can be defined with parameters that expand it to cover all 100 pumps. To publish the data from all 100 pumps to the cloud or a UNS, the template can be used in a single flow; if only specific pumps are needed, each pump can be listed individually in the flow. Parameterized Inputs allow users to create a single input for many assets based on the defined parameter values. More data, less clicks.
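As a rough illustration of the idea (in plain JavaScript, not the Intelligence Hub's actual configuration syntax; the tag names and OPC UA node path pattern are hypothetical), a single parameterized template can stand in for hundreds of per-pump inputs:

```javascript
// Plain JavaScript sketch of the parameter-expansion idea, not the
// Intelligence Hub's configuration syntax. The tag names and the
// OPC UA node path pattern below are hypothetical.
const TAGS = ["Flow", "Pressure", "Temperature", "RunStatus", "FaultCode"];
const NODE_TEMPLATE = "ns=2;s=Plant/Pumps/Pump{pumpId}/{tag}";

// One parameter value (the pump id) yields the full set of node ids for that pump.
function expandTemplate(pumpId) {
  return TAGS.map((tag) =>
    NODE_TEMPLATE.replace("{pumpId}", String(pumpId)).replace("{tag}", tag)
  );
}

// 100 pumps x 5 tags = 500 addresses generated from a single template definition.
const allNodeIds = Array.from({ length: 100 }, (_, i) => expandTemplate(i + 1)).flat();
console.log(allNodeIds.length); // 500
```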
 

Q: I can imagine that would be a major time-saver for our customers. Can you tell me more about the other new features designed with speed and scalability in mind?

A: Yes, I want to specifically call out custom conditions, global functions, and OPC collections.
 
There are many cases where raw input data must be transformed or conditioned before it is ready for modeling. Maybe it is an old SOAP API that returns escaped XML, or a more advanced case where a user wants to turn a raw count from a PLC into an infinite counter. Custom conditions allow users to apply JavaScript logic, and maintain state, on a raw input. This provides the ability to transform arrays of data into objects, check whether SQL tables have changed between reads, and much more.
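To make the "infinite counter" case concrete, here is a minimal sketch in plain JavaScript of logic that keeps state across reads so a 16-bit PLC counter that rolls over still produces a monotonically increasing total. The state object and function name are illustrative assumptions, not the product's Custom Condition API:

```javascript
// Plain JavaScript sketch of an "infinite counter": a 16-bit PLC counter
// wraps at 65535, and we want a monotonically increasing total across reads.
const ROLLOVER = 65536;

const state = { lastRaw: null, total: 0 };

function accumulate(rawCount) {
  if (state.lastRaw !== null) {
    const delta = rawCount >= state.lastRaw
      ? rawCount - state.lastRaw                 // normal increment
      : rawCount + ROLLOVER - state.lastRaw;     // counter wrapped around
    state.total += delta;
  }
  state.lastRaw = rawCount;
  return state.total;
}

// Raw readings 65530, 65534, 3 yield running totals 0, 4, 9.
[65530, 65534, 3].forEach((raw) => console.log(accumulate(raw)));
```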
 
In some applications, small JavaScript snippets (unit conversions, array lookups, etc.) get copied and pasted into many expressions. With the addition of Global Functions, users can now define these functions once and use them in any JavaScript expression applied to Instances or Custom Conditions.
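For instance, the kind of helper a Global Function replaces might look like the following plain JavaScript; the function names and lookup table are made-up examples of a unit conversion and an array lookup:

```javascript
// Hypothetical helpers of the kind a Global Function replaces: defined once,
// then reused from any expression instead of being copied and pasted.
function cToF(celsius) {
  // Unit conversion: degrees Celsius to degrees Fahrenheit.
  return celsius * 9 / 5 + 32;
}

function machineState(code) {
  // Array lookup: map a numeric status code to a readable state name.
  const states = ["Stopped", "Starting", "Running", "Faulted"];
  return states[code] ?? "Unknown";
}

console.log(cToF(100));        // 212
console.log(machineState(2));  // "Running"
```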
 
And finally, I want to share more about OPC collections given that OPC UA is one of the most used connectors in our library. As data volumes increase, handling groups of data becomes more important. OPC inputs can now be grouped into collections and handled throughout the Intelligence Hub as a single structure of data. This will be a useful new feature for the majority of our customers.
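Conceptually, a collection turns many individual tag reads into one payload. A simple JavaScript sketch of that grouping (tag names and values are invented for illustration) might look like this:

```javascript
// Conceptual sketch only: individual tag reads collapsed into one structure,
// which is the spirit of handling an OPC collection as a single payload.
const rawReads = [
  { nodeId: "Plant/Pumps/Pump01/Flow", value: 12.4 },
  { nodeId: "Plant/Pumps/Pump01/Pressure", value: 3.1 },
  { nodeId: "Plant/Pumps/Pump01/Temperature", value: 68.0 },
];

// Key the group by the trailing tag name so downstream steps see one object.
const collection = Object.fromEntries(
  rawReads.map(({ nodeId, value }) => [nodeId.split("/").pop(), value])
);

console.log(collection); // { Flow: 12.4, Pressure: 3.1, Temperature: 68 }
```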
 

Q: Since you mentioned connections, what new connectors are now available in the Intelligence Hub?

A: We added Modbus, Amazon S3, Amazon Redshift, and Azure Blob Storage. We also added outbound support (outputs) to the Apache Parquet connector for writing Parquet-formatted files to disk. (Inbound connectivity was first introduced back in version 2.1.) All of these new connections further extend the reach of HighByte Intelligence Hub to key source data and cloud platforms. Let me provide more details about each of them.
 
Many industrial sensors and protocol gateways support Modbus for data communications. The Intelligence Hub now supports connecting to these devices over the Modbus TCP protocol and reading input coils, output coils, input registers, and holding registers.
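As a standalone illustration of those register types (outside the Intelligence Hub), the sketch below reads each of them over Modbus TCP using the open-source modbus-serial Node package; the device address, unit ID, and register addresses are placeholders:

```javascript
// Standalone sketch using the open-source "modbus-serial" Node package,
// not the Intelligence Hub connector. Addresses below are placeholders.
const ModbusRTU = require("modbus-serial");

async function readDevice() {
  const client = new ModbusRTU();
  await client.connectTCP("192.168.0.10", { port: 502 }); // placeholder device
  client.setID(1);                                        // placeholder unit id

  const outputCoils = await client.readCoils(0, 8);            // output coils
  const inputCoils = await client.readDiscreteInputs(0, 8);    // input coils (discrete inputs)
  const inputRegs = await client.readInputRegisters(0, 4);     // input registers
  const holdingRegs = await client.readHoldingRegisters(0, 4); // holding registers

  console.log(outputCoils.data, inputCoils.data, inputRegs.data, holdingRegs.data);
  client.close(() => {});
}

readDevice().catch(console.error);
```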
 
We’ve extended the supported AWS services in the Intelligence Hub to allow publishing files directly to S3 buckets as well as streaming data directly to and from Redshift. Publishing data directly to an Amazon S3 bucket or Redshift database reduces complexity and improves efficiency of delivering data to AWS.
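For a sense of what publishing a payload to an S3 bucket involves, here is a minimal standalone sketch using the AWS SDK for JavaScript (v3); the region, bucket name, and object key are placeholder assumptions, and this is not the Intelligence Hub connector itself:

```javascript
// Minimal standalone sketch with the AWS SDK for JavaScript (v3), shown only
// to illustrate the destination; all identifiers below are placeholders.
const { S3Client, PutObjectCommand } = require("@aws-sdk/client-s3");

async function publishToS3(payload) {
  const s3 = new S3Client({ region: "us-east-1" });   // placeholder region
  await s3.send(new PutObjectCommand({
    Bucket: "example-plant-data",                     // placeholder bucket
    Key: `telemetry/${Date.now()}.json`,              // placeholder object key
    Body: JSON.stringify(payload),
    ContentType: "application/json",
  }));
}

publishToS3({ asset: "Pump01", flow: 12.4 }).catch(console.error);
```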
 
The Intelligence Hub also supports sending data sets, images, videos, or any other file types to Azure Blob Storage. Publishing files to Azure Blob Storage can optimize the efficiency of getting data to Azure and expands the types of data that can be published beyond streaming data.
 
And finally, a few thoughts on why we added outbound connectivity for Apache Parquet file types. Moving high volumes of data to a cloud data lake requires a compact and efficient file format. In many cases, latency is less of an issue than efficiency of delivery. The Intelligence Hub now supports writing streaming data to files in Apache Parquet, an open-source, column-oriented data file format. The Parquet files can then be transported to AWS or Azure at some interval by leveraging the file connector and the respective cloud service. The Parquet format is supported by Amazon S3, Amazon Athena, Azure Blob Storage, Azure Data Factory, and Azure Synapse Analytics.
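To show what a column-oriented Parquet write looks like in practice, here is a small standalone sketch using the open-source parquetjs package; the schema fields and file name are assumptions, and none of this code is required to use the product connector:

```javascript
// Standalone sketch with the open-source "parquetjs" package; schema fields
// and the output file name are illustrative assumptions.
const parquet = require("parquetjs");

async function writeBatch(rows) {
  const schema = new parquet.ParquetSchema({
    assetId:   { type: "UTF8" },
    timestamp: { type: "TIMESTAMP_MILLIS" },
    value:     { type: "DOUBLE" },
  });

  const writer = await parquet.ParquetWriter.openFile(schema, "telemetry.parquet");
  for (const row of rows) {
    await writer.appendRow(row);
  }
  await writer.close(); // the file can then be shipped to S3 or Azure Blob Storage
}

writeBatch([
  { assetId: "Pump01", timestamp: new Date(), value: 12.4 },
  { assetId: "Pump02", timestamp: new Date(), value: 9.7 },
]).catch(console.error);
```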
 

Q: This is a lot to pack into one release! Any final new capabilities you want to share?

A: It’s definitely an impressive release for a “minor” cycle. Last but not least, I want to mention Event Flows. Users can now set flows to Event, so the Intelligence Hub executes them immediately when data is received. This minimizes latency and ensures data is not lost. Event Flows are available for MQTT, Sparkplug, Webhook, OPC UA Subscription, Azure IoT Hub, and Azure Event Hubs inputs.
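The underlying pattern is event-driven rather than polled: work executes the moment a message arrives. A standalone sketch of that pattern with the open-source mqtt Node package (broker URL and topic filter are placeholders) looks like this:

```javascript
// Standalone sketch of the event-driven pattern with the open-source "mqtt"
// Node package: the handler runs the moment a message arrives rather than on
// a polling interval. Broker URL and topic filter are placeholders.
const mqtt = require("mqtt");

const client = mqtt.connect("mqtt://broker.example.com:1883"); // placeholder broker

client.on("connect", () => {
  client.subscribe("plant/line1/+/telemetry"); // placeholder topic filter
});

client.on("message", (topic, payload) => {
  // In an event flow, this is the point where the pipeline executes immediately.
  const data = JSON.parse(payload.toString());
  console.log(`received on ${topic}`, data);
});
```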
 

Additional Resources

I hope you are excited to use these new capabilities in version 2.4 that make it faster and easier to deploy and maintain the Intelligence Hub as your use cases grow in number and sophistication. To learn more, check out these additional resources:


Request a free trial or log in to your existing account to test and deploy the software in your unique environment.
