
Edge AI + Intelligence Hub: A Match in the Making

Aron Semle
Aron Semle is the Chief Technology Officer of HighByte, focused on guiding the company’s product strategy through research and development and technical evangelism.

In my 15 years in industry, I haven't seen a technology move as quickly as AI. The timeline from writing funny haikus to showing up on the factory floor was a lot shorter than I expected.

AI is exciting, but what’s hype versus what’s real? Where is it being used today? How could it be used in the future, and what are the limitations? This will probably require a few blogs, but to get started, let’s split the use of AI in manufacturing into two camps: IT and OT. The divide lives on.

 

Corporate AI in Manufacturing

IT is a natural place to begin for AI initiatives in large manufacturers for a few reasons:

  • Most AI workloads are executed in the cloud given infrastructure constraints.
  • IT has business data that’s already contextualized and ready for AI.
  • There are many business workflows that could benefit from AI.

Often, these initiatives are strategic and have direct visibility from the CEO. They start with an internal chatbot connected to HR, ERP, and CRM systems, move on to agentic workflows, and then shift focus to the manufacturing floor.

This is where things get interesting. From IT’s standpoint, manufacturing data is just like any other data. All they need to do is get it into the data lake. But we know this isn’t true.

The shop floor looks nothing like a data lake. There is telemetry data from machines, historian data, MES data in SQL, some random CSV files, and most of it lacks context. IT teams run headfirst into all the problems that an Industrial DataOps solution like HighByte Intelligence Hub solves.

Companies that realize this—or already have an Industrial DataOps strategy—move quickly beyond these issues. Companies that don’t, end up creating a solution that works with only telemetry data (for example) and then find out they need other data. Or worse, when they get something working in the first factory, they find out factories 2, 3, and 4 have different technology stacks. But I digress. This isn’t a DataOps blog.

For companies that are having success, what does it look like? They are asking big questions, like the following:

  • Why is plant A outperforming plant B?
  • Plant A made a bad product; what happened?

These are important questions. Longer term, I think AI can certainly help us here, but given the current state of technology, AI struggles with broad questions that require precise answers. It leaves a lot of room for hallucinations.

Much of the AI focus from IT is around supply chain, production planning, and root cause analysis. The scope of the problems and data sets needed to answer them makes repeatable progress slow.

 

Manufacturing Floor AI

The manufacturing floor has a tremendous amount of potential for AI, but it’s also harder to get started for the following reasons:

  • AI either needs to run in the cloud (raising data privacy concerns), or the factory needs hardware to run it locally.
  • Manufacturing data lacks the context and standardization needed to be ready for AI.

Factory initiatives are often driven by a plant manager or local leader who has used AI in their day-to-day. They know where all their data resides, and they are intimately familiar with their processes and problems. What they lack are the tools to iterate quickly with AI.

In comes DataOps (again). Cloud AI and Edge AI have the same problems with industrial data. They need access to contextualized information across many systems. The only difference is there is no data lake in the factory—but that’s OK. DataOps can leave the data in the source systems and expose it over APIs, allowing edge AI to access the data needed for specific tasks.
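To make that concrete, here is a minimal sketch of the pattern described above: data stays in the source systems, and a DataOps layer assembles one contextualized view an AI agent can request on demand. All function and system names here are illustrative stand-ins, not HighByte APIs.

```python
# Illustrative sketch: a DataOps layer leaves data in its source systems
# and serves a single contextualized view over an API. All names are
# hypothetical stand-ins for real SCADA/historian/MES queries.

def get_telemetry(machine_id):
    # Stand-in for a SCADA or historian query.
    return {"temperature_c": 71.4, "vibration_mm_s": 2.1}

def get_work_orders(machine_id):
    # Stand-in for an MES/CMMS SQL query.
    return [{"order": "WO-1042", "status": "open"}]

def contextualized_view(machine_id):
    """Merge source-system data into one model an edge AI agent can request."""
    return {
        "machine": machine_id,
        "telemetry": get_telemetry(machine_id),
        "work_orders": get_work_orders(machine_id),
    }

payload = contextualized_view("press-07")
print(payload["machine"], len(payload["work_orders"]))
```

The key design point is that nothing is copied into a central store: the view is assembled per request, so each factory can wire the same view onto its own systems.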

But just like IT, what happens if OT doesn't use DataOps? It's the same set of issues. If you try to integrate AI directly with data from your SCADA, historian, or even UNS/MQTT, you'll limit the data and context the agent has access to. SCADA systems and historians only hold telemetry data. UNS/MQTT is report by exception, while AI is request/response based, so the two can't integrate directly. But again, I digress. Use DataOps.
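The report-by-exception versus request/response mismatch is easy to see in a few lines. A minimal sketch, assuming nothing beyond standard Python: publish-on-change messages must be cached as last-known values before an agent can query them whenever it wants.

```python
# Sketch of bridging report-by-exception data (UNS/MQTT style) to the
# request/response reads an AI agent needs. Topic names are illustrative.

class LastValueCache:
    """Caches the last published value per topic for on-demand reads."""

    def __init__(self):
        self._values = {}

    def on_message(self, topic, value):
        # Called only when a value changes (report by exception).
        self._values[topic] = value

    def read(self, topic, default=None):
        # Request/response read an agent can make at any time,
        # even if nothing has been published recently.
        return self._values.get(topic, default)

cache = LastValueCache()
cache.on_message("plant/line1/temp", 71.4)
print(cache.read("plant/line1/temp"))
```

This is essentially one of the jobs a DataOps layer does for you, alongside adding context to the cached values.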

With the right tooling, what are OT teams (and sometimes individuals) doing? They are iterating with AI to solve very specific problems, many of which you and I wouldn't be creative enough to come up with. Here are some general examples:

  • “Monitor this process and let me know when it doesn’t look right.”
  • “Get me all the maintenance work orders on this machine, and let me know how long the delay might be if we fix this issue.”
  • “Compare this run to the golden batch and tell me what’s different.”

In practice, the approach is very iterative. Pick a question, gather the data you think you need, run it through AI, and see what you get. Tweak the context, add additional data, change the AI instructions or prompt, and try again. Rinse, wash, repeat.
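The loop above can be sketched in a few lines. This is a deliberately stubbed illustration: `ask_model` stands in for any LLM call, and the "good enough" check stands in for a human reviewing the answer and deciding whether to add more data or adjust the prompt.

```python
# Minimal sketch of the iterate-and-tweak loop: ask, review, add data, retry.
# ask_model() and the review check are stand-ins, not a real LLM integration.

def ask_model(question, context):
    # Placeholder: a real implementation would call an LLM API here.
    return f"answer based on {len(context)} context items"

def iterate(question, context, max_rounds=3):
    answer = ""
    for round_num in range(1, max_rounds + 1):
        answer = ask_model(question, context)
        good_enough = len(context) >= 2  # stand-in for a human review step
        if good_enough:
            return round_num, answer
        context = context + ["additional data source"]  # tweak and try again
    return max_rounds, answer

rounds, answer = iterate("Compare this run to the golden batch", ["telemetry"])
print(rounds, answer)
```

The point is that each round is cheap: the expensive part is deciding what data and context to add next, which is exactly where local process knowledge pays off.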

And when you do find value, it scales. Using DataOps, you end up with an API and an AI model that can accomplish a task. You can bring that same API and AI to the next factory, wire up the API into the factory-specific systems, and now you’ve enabled the same capability.

 

AI Limitations & How To Scale

The biggest limitation of AI in both environments today is hallucinations. AI will not say, “I don’t know.” It makes the next best guess, which is sometimes wrong. This is problematic as we try to use it for less creative tasks in the factory, tasks that need to be deterministic.

To get around this, the industry and broader market are scaling down. That means smaller and more focused LLMs, and many agents focused on specific tasks. This limits the potential for hallucinations but doesn’t solve it outright.

In practice, this looks like a factory machine with many agents. One agent monitors process variables, another looks for incoming work orders, another is responsible for tool changeover, etc. Each agent could use a different LLM trained on knowledge for the specific task.
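The one-narrow-agent-per-task idea can be sketched as a simple dispatch table. Agent logic is stubbed with plain functions here; in a real system each entry would wrap its own model and instructions. All thresholds and topic names are made up for illustration.

```python
# Illustrative sketch: one machine, several narrow agents, each scoped to
# a single task to limit the hallucination surface. Logic is stubbed.

def process_monitor(event):
    # Narrow task: watch process variables only.
    return "alert" if event.get("temperature_c", 0) > 80 else "ok"

def work_order_watcher(event):
    # Narrow task: watch for incoming work orders only.
    return "queue" if event.get("type") == "work_order" else "ignore"

AGENTS = {
    "process": process_monitor,
    "work_orders": work_order_watcher,
}

def dispatch(topic, event):
    """Route a factory event to the single agent responsible for it."""
    return AGENTS[topic](event)

print(dispatch("process", {"temperature_c": 85}))
print(dispatch("work_orders", {"type": "work_order"}))
```

Because each agent sees only its own slice of the data, a wrong guess from one agent stays contained to one task instead of polluting a plant-wide answer.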

 

What Would I Do?

If I were a large industrial company, I’d focus on how best to align IT and OT around AI. My first goal would be to deliver AI to the OT environment as a tool or capability for local workers to rapidly iterate with to solve their day-to-day problems. After this, I’d look to scale out use cases that unlock real ROI and share these across sites. I think companies that take this approach—versus treating AI as an autonomous thing that instructs factories on what to do—are going to be ahead.

 

DataOps & AI

Traditionally, DataOps has been used to move contextualized data to cloud and IT systems. But Edge AI is quickly becoming the largest consumer of DataOps inside the factory walls, adding momentum to a larger trend of OT environments looking more and more like IT. Industry moves slowly, but it will be interesting to see how this new sense of urgency accelerates the merger of the two.

To see what I’m talking about here, watch this short video in which I demonstrate how Industrial AI Agents can interact with manufacturing data using Model Context Protocol (MCP) and HighByte Intelligence Hub.

 

Appendix: LLMs vs Agents vs AI

Here is how I define a few of the key terms used in this post.

  • LLM. A Large Language Model, like the one behind ChatGPT that you may already interact with.
  • Agent. An agent uses an LLM and executes a set of instructions, often reaching out for data one or more times to accomplish a task. Think of an agent as standing in for you, automating what you’d otherwise do manually when interacting with an LLM.
  • AI. In the context of this post, AI means both LLMs and agents.
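The agent definition above boils down to a bounded loop: ask the LLM, fetch whatever data it asks for, feed it back, repeat until done. A minimal sketch with both the LLM and the data source stubbed out; the `FETCH`/`DONE` convention and all names are invented for illustration.

```python
# Sketch of the "agent" definition: a loop that uses an LLM (stubbed) and
# reaches out for data to accomplish a task. Everything here is illustrative.

def llm(prompt):
    # Stand-in for an LLM call: request data until the prompt contains it.
    if "DATA:" in prompt:
        return "DONE: bearing replaced last week; expect a short delay"
    return "FETCH:maintenance_log"

def fetch(source):
    # Stand-in for an API call into a factory system.
    return {"maintenance_log": "bearing replaced last week"}[source]

def run_agent(task):
    """Ask the LLM, fetch the data it requests, and feed it back in."""
    prompt = task
    reply = ""
    for _ in range(5):  # bounded so the agent always terminates
        reply = llm(prompt)
        if reply.startswith("DONE"):
            return reply
        source = reply.split(":", 1)[1]
        prompt += " | DATA: " + fetch(source)
    return reply

print(run_agent("Summarize recent downtime on press-07"))
```

Real agent frameworks dress this up with tool schemas and retries, but the shape of the loop is the same.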
