More Than A Connector

For more than 20 years, companies have built strategies for collecting data from the field using traditional historians, giving engineering, operations management, and even enterprise systems access to historical field data. Many of these systems have evolved to provide various types of data, from time series to events and from tag configurations to hierarchical views of the world, to make it easier for clients to understand the context. Demand for this data has increased, and the acceleration organizations have experienced in recent years is pushing the boundaries of these systems to support the various functions of industrial operations.

The challenge is that many of these systems reside behind firewalls and have hardware or license limitations that constrain the use of that data across multiple use cases.

Current Options

Over the last few years, we have seen new “standards” and “as a service” models introduced to provide various groups with access to field operations data. We see technologies from various parts of the ecosystem:

  • OPC and MQTT brokers provide an agnostic layer for connectivity to underlying systems.

  • Automation companies provide direct access to data from the field to the cloud.

  • Process historians push data into cloud environments to facilitate cloud use cases.

  • Hardware and network device providers package protocol-to-protocol translation to push data from the lower levels of the Purdue model to the cloud.

  • Integration Platform as a Service (iPaaS) offerings leverage an API-oriented architecture to configure data pipelines that can extract, transform, and load data to other destinations.

  • Technology and sensor providers capture data from the field and push it into their own clouds to deliver insights as a service to their customers.

The Fusion Approach

Uptake Fusion’s purpose is to consolidate data from the various industrial data sources so our customers have access to organized data, in the right context, for their analytics applications. In addition, Uptake Fusion provides a layer of abstraction that protects the underlying critical systems from heavy data queries and cybersecurity threats. It is important to note that we can integrate with the options above should our customers choose any of those methods.

Fusion Connectivity Options

Many of our clients are looking for data reliability, provenance, granularity, and consistency to ensure quality data for the various analytics consumers throughout the analytics lifecycle, from experimentation to operationalization and sustainment.

As a result, Uptake Fusion provides additional 1st party connectivity options for process historians such as OSIsoft PI, Rockwell Automation, Emerson, and soon AspenTech InfoPlus.21, to name a few.

For the most part, clients are familiar with “connectors,” “agents,” or simply “interfaces.” Our 1st party connectivity options, however, are more than that. They allow Fusion to establish a layer of protection and act as a single source of truth, as shown by the trending tool below, where data from multiple systems can be visualized together for diagnostics, investigation, or the selection of datasets.

Fusion’s 1st party connectivity options encapsulate best practices to ensure we meet those data requirements across the various systems. These are some of the key capabilities that make those connectivity options more than simple connectors:

  • Data request: Fusion provides a cloud interface for users to explicitly request specific tags and time ranges and/or to sync hierarchies and tag configurations. Fusion enables all of this without manually configuring independent pipelines from source to cloud, which becomes an administrative challenge as deployments scale across systems and sites.

  • Underlying System Protection: Fusion monitors the performance of the underlying system and throttles historical data transfers to give priority to the users at the site. In addition, Fusion provides an emergency-stop mechanism.

  • Outage Handler: As part of data reliability, Fusion provides various methods to handle outages, including mechanisms to store and forward data through network disconnections and to automatically backfill data as required.

  • Streaming and bulk uploading mechanisms: Fusion extracts historical data from the underlying systems and streaming data from their snapshot subsystems. As a result, Fusion collects uncompressed streaming data, which is useful for machine learning use cases.

  • On-change and data compression transfers: To minimize the frequency of data transfers, Fusion can push data only on changes and can compress transfers using zip-like methods, reducing the size of packet transfers and of ingestion in the cloud.

  • Data normalization: Fusion incorporates ways to normalize data differences across sources. Even something as simple as data quality statuses varies from system to system.

  • Configuration: The native connectivity options have a cloud control-plane interface to register and configure the 1st party connectivity to underlying OT data sources. This can be done remotely, allowing users to configure the required interfaces with the various subsystems of each OT data source. For instance, OSIsoft PI has three subsystems we interface with: the snapshot, for streaming uncompressed data; the archive, for access to history; and the Asset Framework, for extracting hierarchies and event frames. Fusion implements vault protection for secrets, so the usernames and passwords used to connect to the OT data sources are protected.
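To make a few of the ideas above concrete, here is a minimal Python sketch of on-change transfer, store-and-forward buffering, and quality-status normalization. This is not Fusion’s implementation: the class, the quality-code mapping, and the tag names are hypothetical, and gzip stands in for whatever zip-like method a real connector would use.

```python
import gzip
import json

# Hypothetical mapping from vendor-specific quality codes to a common scheme.
# Real historians use different conventions; these values are illustrative only.
QUALITY_MAP = {
    "pi:Good": "GOOD",
    "pi:Questionable": "UNCERTAIN",
    "opc:0xC0": "GOOD",
    "opc:0x40": "UNCERTAIN",
    "opc:0x00": "BAD",
}

def normalize_quality(vendor_code: str) -> str:
    """Map a source-system quality code to a normalized status."""
    return QUALITY_MAP.get(vendor_code, "UNKNOWN")

class OnChangeBuffer:
    """Buffers tag readings, forwarding only values that changed, and
    holds batches locally until they can be uploaded (store and forward)."""

    def __init__(self):
        self.last_values = {}   # tag -> last recorded value
        self.pending = []       # readings awaiting upload

    def record(self, tag, value, quality):
        # On-change transfer: skip readings whose value is unchanged.
        if self.last_values.get(tag) == value:
            return False
        self.last_values[tag] = value
        self.pending.append(
            {"tag": tag, "value": value, "quality": normalize_quality(quality)}
        )
        return True

    def flush(self):
        """Compress the pending batch for upload and clear the buffer.
        On upload failure, the caller keeps the payload and retries later."""
        payload = gzip.compress(json.dumps(self.pending).encode("utf-8"))
        self.pending = []
        return payload
```

A caller would `record` readings as they arrive from the snapshot subsystem and `flush` periodically; if the network is down, flushed payloads are kept on disk and forwarded once connectivity returns.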

The Reasoning Behind Fusion Native Connectors

Uptake Fusion is not just a data broker, data pipeline, or interface. Our focus is delivering quality data, streaming and historical, to analytics consumers by acquiring, organizing, and distributing it.

Over the years, we have implemented various standard OPC and MQTT broker interfaces. The reality is that many vendors have not fully implemented the specifications, and as a result we find some of these interfaces somewhat unstable. Moreover, to meet all the requirements above, multiple methods would have to be implemented, integrated, and managed.

We see clients implementing single pipelines that extract aggregated data from the source system and provide it to various endpoints. However, as new data requirements appear from other analytics applications, a new pipeline must be created with a different level of aggregation. This drives load on the source system and introduces inconsistency between the data used by one application and another.
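A toy calculation illustrates how per-pipeline aggregation breeds this inconsistency. The sample values and one-minute bucket size below are invented for illustration: an application that averages the raw samples and one that averages pre-aggregated per-minute values arrive at different answers for the same signal.

```python
# Raw time-series samples: (timestamp in seconds, value). Irregular spacing,
# as is typical for on-change historian data.
raw = [(0, 10.0), (10, 10.0), (50, 40.0), (70, 30.0)]

# Application A works from the raw samples.
true_avg = sum(v for _, v in raw) / len(raw)

# Application B consumes a pipeline that ships 1-minute averages,
# then averages those pre-aggregated values.
buckets = {}
for t, v in raw:
    buckets.setdefault(t // 60, []).append(v)
minute_avgs = [sum(vs) / len(vs) for vs in buckets.values()]
avg_of_avgs = sum(minute_avgs) / len(minute_avgs)

# The two applications now disagree about the same signal's average.
print(true_avg, avg_of_avgs)  # → 22.5 25.0
```

The discrepancy arises because the minute buckets hold unequal numbers of samples, so averaging the averages weights the sparse bucket too heavily; each additional aggregation level compounds the drift.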

Conclusion

Thus, when comparing Fusion to other methods, it is important to consider that the native connectivity options Fusion provides go beyond the capabilities of each of those other options. We believe data requirements should be driven by value, so if you have already implemented any of those options, Fusion can co-exist with them and ingest their data as well. As requirements evolve, you can leverage more of Fusion’s capabilities to support the needs of your industrial data analytics ecosystem.
