
Release Notes

HighByte Intelligence Hub Version 4.0

New Features:
  • Migrated all Flows to Pipelines and removed Flows. Pipelines are now self-triggering with new Trigger stages and do not require Flows to run.
  • Added Model and Instance support for inline hierarchy. Hierarchy no longer requires child Models and Instances.
  • Added a Model Validation stage to Pipelines to validate event values against one or more Models.
  • Added a Model stage to Pipelines, allowing Pipelines to use a model definition to reshape an event.
  • Redesigned the Pipeline UX to provide more space and make it easier to configure and work with Pipelines.
  • Added support for a new Ignition Module that allows the Intelligence Hub to connect directly to Ignition and create, read, and write to tag providers, including folders, tags, and UDTs.
  • Added support for Namespaces and Pipeline Smart Query stage.
  • Added support in Instances to allow an attribute to be a simple Reference (e.g., OPC tag) or an advanced JavaScript Expression. References do not require the use of JavaScript and are computed faster.
  • Added support in Instances for an Initialization step that is called before the instance computes. This allows data shared across attributes (e.g., SQL results, default values, functions) to be defined in one place.
  • Added support for running Instances in “Legacy Mode”. Instances in this mode operate like they did in version 3.x with no functional or configuration changes required.
  • Enhanced project export to allow users to provide an optional export file name.
  • Added new settings for PI Asset Metadata inputs to optionally include child assets and attributes.
  • Added acknowledgments to PI Event Frame inputs.
  • Added PI Asset Data Pipes input type to get PI Asset Framework changes.
  • Changed PI Asset inputs to return _name, _model, and _timestamp metadata.
  • Improved PostgreSQL upsert performance when upserting many rows.
  • Added support to the Sparkplug Input for reading Datasets.
  • Added support for using System Variables in MQTT input topics.
  • Added file encoding option to CSV Inputs.
  • Added support to bulk enable and disable all Pipeline Triggers.
  • Added support for Model Attribute descriptions and default values.
  • Updated Instance Attribute default value fields to support JSON format.
  • Updated the Pipeline Transform stage to support hints and code completion.
  • Updated the main navigation and layout of the UI.
  • Updated Connection, Condition, Pipeline, Instance, and Model views to support table sorting.
  • Changed the name of backup configuration files to intelligencehub-configuration-YYYYMMDD-HHMMSS.sss.json.
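The backup file name format above can be reproduced with a short JavaScript sketch. This is only an illustration of the documented YYYYMMDD-HHMMSS.sss pattern; the backupFileName helper and pad function are illustrative names, not HighByte APIs.

```javascript
// Sketch: build a backup file name in the documented
// intelligencehub-configuration-YYYYMMDD-HHMMSS.sss.json format.
// (Illustrative only; not a HighByte API.)
function backupFileName(date) {
  const pad = (n, width = 2) => String(n).padStart(width, "0");
  const stamp =
    `${date.getFullYear()}${pad(date.getMonth() + 1)}${pad(date.getDate())}` +
    `-${pad(date.getHours())}${pad(date.getMinutes())}${pad(date.getSeconds())}` +
    `.${pad(date.getMilliseconds(), 3)}`;
  return `intelligencehub-configuration-${stamp}.json`;
}

console.log(backupFileName(new Date(2024, 5, 11, 18, 50, 0, 7)));
// intelligencehub-configuration-20240611-185000.007.json
```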
Fixes:
  • Fixed a File Output issue where whitespace in a file value could cause issues when decoding base64. The File Connector now removes whitespace before decoding.
  • Removed _model: “ComplexData” from the default JSON output of a value. _model and _name are now only included in the JSON when they have meaningful values (i.e., Model and Instance names).
  • Fixed a bug in the PI Agent connection that caused an authentication error in the logs when first trying to connect.
  • Fixed an issue where a PI Event Frame with a null attribute caused an exception on read.
  • Fixed an issue with OPC UA Alarm inputs where attribute types not defined in namespace zero of the server could not be used.
  • Fixed an issue where the REST Client could not connect to a server that only supported TLSv1.3.
  • Fixed an issue where MSSQL input column names were not escaped correctly, preventing the use of columns named after special MSSQL types (e.g., file).
  • Fixed an issue where the UNS Client could not connect to TLS-enabled cloud brokers because of certificate validation errors.
  • Fixed an issue where data could be lost when Store and Forward was enabled for the Snowflake Streaming connection. The connection now issues one write at a time and waits for completion, avoiding the queue mechanism in the Snowflake Java SDK. To ensure performance is on par with version 3.4, use a Pipeline Buffer stage to issue writes in chunks.
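As an illustration of the File Output fix above: strict base64 decoders can fail on embedded whitespace, so stripping it before decoding, as the File Connector now does, makes the value decodable. A minimal Node.js sketch, where the decodeBase64 helper is an illustrative name, not a HighByte API:

```javascript
// Sketch: strip whitespace from a base64 value before decoding,
// mirroring the File Connector behavior described above.
function decodeBase64(value) {
  const cleaned = value.replace(/\s+/g, ""); // remove spaces, tabs, newlines
  return Buffer.from(cleaned, "base64").toString("utf8");
}

console.log(decodeBase64("aGVs\nbG8=")); // "hello"
```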
Breaking Changes: 
  • Updated minimum JRE requirement from v11 to v21 or greater.
  • Changed the default server installer to place appData outside of the runtime directory. This makes future upgrades easier, but after upgrading from version 3.x, users must take care to either move the appData files to the new directory or change the settings.json file to default appData back to the runtime directory. See the user guide for details.
  • Reworked remote configuration to support future enhancements. Existing version 3.x installations will not work with version 4.0. All installations must be upgraded to version 4.0.
  • Removed v1/instance and v1/model routes from the configuration API. There are new v2/instance and v2/model routes.
  • Removed v1/project routes. Project import and export should now use the v1/project/import and v1/project/export routes, which provide better management of configuration and secrets.
  • The reference {{System.Internal.Datetime}} and other Datetimes are now bound to JavaScript as a Date data type. A JavaScript expression like “{{System.Internal.Datetime}} + 1” will now evaluate to a string of the form “Tue Jun 11 2024 18:50:00 GMT+0000 (GMT)1”. In version 3.x, this would evaluate to a number (Unix epoch time + 1). To keep the previous behavior, update the expression to “{{System.Internal.Datetime}}.getTime() + 1”.
  • Removed the Array Builder helper UI from Instance configuration.
  • Removed global REST Data API token support in settings. User level tokens must be used, which can be scoped further than global tokens.
  • Removed Element Unify Connector and its Model/Instance import support.
  • The hub.withMetadata JavaScript command is not available in Instances with “Legacy Mode” disabled. If quality and timestamps are required for OPC UA tags, enable “Include Metadata” for the OPC UA input.
  • {{System.Flows}} and {{System.Flows.flowName}} references, used to monitor flow statistics and health, are not automatically ported to pipelines and will fail to read. These must be manually changed to {{System.Pipelines}} and {{System.Pipelines.pipelineName}}. Note the data shape has also changed. Pipelines provide more detailed status and statistics.
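The Datetime breaking change above follows from standard JavaScript coercion rules: adding a number to a Date concatenates strings, while .getTime() restores numeric arithmetic. A minimal sketch, using a local Date as a stand-in for {{System.Internal.Datetime}}:

```javascript
// Stand-in for the {{System.Internal.Datetime}} reference, which in 4.0
// binds to JavaScript as a Date rather than a number.
const datetime = new Date(Date.UTC(2024, 5, 11, 18, 50, 0));

const asString = datetime + 1;           // 4.0 behavior: Date coerces to a string
const asNumber = datetime.getTime() + 1; // 3.x-equivalent: Unix epoch ms + 1

console.log(typeof asString); // "string"
console.log(typeof asNumber); // "number"
```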
 Security Patch Updates:
  • Parquet Connection
    • CVE-2024-36114: In some cases, decompression (reading) could cause access to memory outside of the file.
  • Redshift Connection
    • CVE-2024-32888: Fixes an issue where SQL injection was possible if the preferQueryMode=simple JDBC connection property was used in the connection settings.
  • REST Data API Swagger Documentation
    • CVE-2024-45801: Fixes an issue that could enable cross-site scripting (XSS) attacks. As part of this change, model definitions were also removed from the REST Data API documentation.
Patch (4.0.1 2024.11.5.4)
  • Fixed an issue where using Pipeline state could cause native system memory leaks.
  • Fixed an issue where users with Read Execute permission to a Connection using a Tag were not able to read inputs on the Connection.
  • Improved the performance of Flow Triggers configured for Event mode to match the performance of the Event Trigger.
  • Fixed an issue in the InfluxDB Connection with escaping spaces in measurement names and tags.
  • Fixed an issue with blank passwords not working on the initial Ignition Module install.
  • Enhanced the Oracle Database connection to support timestamps with time zones.

 
