The Knowns-Unknowns Approach to Industrial Data and Analytics Strategies

I was fortunate to be invited to participate as a panelist at the ARC Advisory Forum in Orlando. This was in support of our client, Dan King, VP of Operations and R&D at Davey Textile Solutions, who also presented "The New Digital and Connected Asset Management."

Prior to that presentation and panel discussion, we had the opportunity to meet with ReliabilityWeb’s Terrence O’Hanlon and Maura Abad, where we discussed ReliabilityWeb’s Uptime Elements, which you see on the table in the picture below.

One of the most prominent digital transformation opportunities revolves around the application of analytics for predictive maintenance. The Uptime Elements provide a great framework to get started in supporting the asset lifecycle from design to disposal/renewal.

One of the major hurdles often found is the ineffective design, standardization, and implementation of maintenance strategies. Organizations jump into Predictive Maintenance (PdM) strategies using AI/ML analytics capabilities without well-established condition-based monitoring strategies for their critical and auxiliary assets, including, in some industries, balance-of-plant (BoP) equipment.

It is often found that organizations are still running on Original Equipment Manufacturer (OEM) schedules, which are not optimal, as Dan King shared, and do not provide a good indication of asset health based on equipment utilization. However, shifting the entire maintenance strategy toward predictive maintenance can also result in high costs and waste, as parts may be replaced or equipment may be maintained, repaired, or retired too early in its lifecycle. A framework like Uptime can help find the optimal maintenance strategy balance.

These optimal maintenance strategies can include the use of industrial data analytics for predictive and prescriptive maintenance. Through this framework, data strategies can mature and help transform data and knowledge into insights, and even wisdom to a great degree, as many of the Uptime elements (graphic below) are considered best practices in the industry.

THE KNOWNS-UNKNOWNS APPROACH

In a way, the Uptime framework follows the knowns-unknowns approach (see graphic below). This approach consists of mapping what is already known to build up baseline knowledge about the specific operations and assets, e.g., critical assets, failure modes, tasks, and activities, including the frequency and type of inspections or online condition-based monitoring applications. With this baseline, gaps and opportunities can be identified: are adequate tasks executed at the proper frequency for the right duty cycle? Are the proper online instruments, sensors, and analyzers implemented?

In the case of the Uptime elements, this is accomplished using tools like Criticality Analysis, Reliability Engineering, Root Cause Analysis, and Reliability Centered Design, among others. This helps establish proper maintenance strategies, including condition-based monitoring for critical assets and known failures. As part of this process, opportunities can be spotted where predictive analytics can be implemented to improve anomaly and failure detection. Typically, a hypothesis is defined for the development of algorithms, which may include the need for new IoT sensors. At its best, the Uptime framework will help establish a good data strategy baseline, providing a strong foundation for deciding where to drive predictive maintenance initiatives.

KEY CONSIDERATIONS

Garbage in, Garbage out

Remember, most analytics projects fail because the data is garbage or there is not enough of it, so inventorying and assessing the industrial data is recommended before proceeding with any analytics initiative. We often see clients asking for predictive analytics using AI and ML, only to find that they do not have access to the data, the data is not trustworthy, or the data is not granular enough.
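As a rough illustration, the minimal sketch below (in Python, assuming a hypothetical historian export named historian_export.csv with timestamp, tag, and value columns) shows the kind of lightweight readiness check that can be run on existing data before committing to an analytics initiative.

```python
import pandas as pd

# Hypothetical historian export: one row per sample, columns = timestamp, tag, value
df = pd.read_csv("historian_export.csv", parse_dates=["timestamp"])

report = []
for tag, group in df.groupby("tag"):
    group = group.sort_values("timestamp")
    intervals = group["timestamp"].diff().dropna()
    report.append({
        "tag": tag,
        "samples": len(group),
        "missing_values": group["value"].isna().sum(),
        "median_interval_s": intervals.median().total_seconds() if not intervals.empty else float("nan"),
        "last_sample": group["timestamp"].max(),
    })

readiness = pd.DataFrame(report)
# Flag tags that are too sparse for a use case needing roughly 1-minute granularity
readiness["too_coarse"] = readiness["median_interval_s"] > 60
print(readiness)
```

A simple report like this often surfaces the gaps (stale tags, sparse sampling, missing values) long before any model is built.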

Beyond predictive maintenance initiatives

We often forget that reliability, maintenance, and overall asset management are just some of the functions of any industrial operation. Analytics opportunities can be applied to other parts of the operations to target drivers and levers related to energy, environment, process, production, and quality, among others. Many organizations are addressing industrial analytics use cases one at a time or in silos. The knowns-unknowns approach is nimble, collaborative, and generic enough that it can be applied and complemented with other, more specific frameworks such as Uptime. Operations subject matter experts and Analytics Technology (AT), Operational Technology (OT), and Information Technology (IT) experts can work together to share what they know and do not know. It is recommended to work with cross-functional teams to identify 3 to 5 use cases across adjacent functions, for example asset performance with energy management, process optimization, and quality. This helps form a more robust and sustainable data strategy and architecture and identify foundational elements such as an Industrial Analytics Data Hub and Enterprise Data Lakes.

Leverage existing data, but plan for greater granularity

For the most part, we see organizations looking to leverage the data, information, and knowledge they have already established in systems such as historians and start moving that data into their cloud environment. The initial use cases mostly require only aggregated and/or interpolated data, moved from on-premises systems to the cloud using Extract, Transform, Load (ETL) or batch interpolation processes. The reality is that as they progress on the journey of industrial analytics innovation, they will find that the granularity of the data is not enough, that they also need streaming data, or that they are missing data to support additional use cases such as the development of soft sensors or new control algorithms.
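To illustrate what that batch interpolation step typically looks like, here is a minimal Python sketch using pandas; the tag values and the 1-minute grid are made up for the example. Anything finer-grained, or any streaming requirement, would not survive this kind of aggregation.

```python
import pandas as pd

# Hypothetical raw historian samples for a single tag (irregular timestamps)
raw = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2023-01-01 00:00:03", "2023-01-01 00:00:41",
        "2023-01-01 00:02:15", "2023-01-01 00:03:58",
    ]),
    "value": [71.2, 71.5, 72.9, 73.1],
}).set_index("timestamp")

# Batch interpolation: resample onto a regular 1-minute grid before loading to the cloud
interpolated = raw.resample("1min").mean().interpolate(method="time")
print(interpolated)
```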

Missing data? You are not alone!

It is not out of the ordinary to find brownfield industrial facilities that were not designed with an overall digital strategy in mind. These facilities may be operating without adequate instrumentation, sensors, and analyzers to support the various target value drivers and levers, or the instruments may not have been integrated into historians and operations management applications, so the data remains stranded. Using the knowns-unknowns approach, gaps and opportunities can be determined, including new IoT sensors when data is not already available in the historian and/or control system layers. Keep in mind that IoT sensors need to be administered, monitored, and maintained as well, which adds to the Total Cost of Ownership (TCO).

IoT Sensor and Device strategy requires careful consideration

There is a tendency for organizations to jump into purchasing IoT sensors and devices that promise access to the missing information.

The IoT sensor market is very interesting because, even though there are “standards” such as MQTT, OPC-UA, Modbus, DNP3, and IEC, the reality is that environmental and operational requirements still need to be considered. For example, some IoT devices that communicate via wireless cellular networks may not comply with the bandwidth requirements of regional jurisdictions; other devices may not be intrinsically safe; others may require repeaters because electromagnetic fields interfere with communications; and devices in very remote areas with limited power may need a different strategy, such as LoRaWAN, or even Bluetooth in explosive areas.

It is recommended to validate the operational environment, such as power and communication constraints, to help select fit-for-purpose IoT sensor and device supplier(s). A supplier that provides a publish/subscribe API for data ingestion is recommended.
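To make the publish/subscribe pattern concrete, here is a minimal sketch of the device side using MQTT with the paho-mqtt library. The broker host, topic naming, and payload format are hypothetical and would depend on the chosen supplier's ingestion API.

```python
import json
import time
import paho.mqtt.publish as publish  # pip install paho-mqtt

# Hypothetical broker and topic hierarchy; substitute the supplier's actual endpoints
BROKER_HOST = "mqtt.example.com"
TOPIC = "site1/pump-101/vibration"

# Publish one reading; a real device would publish on a schedule or on change of value
reading = {"timestamp": time.time(), "value_mm_s": 4.7, "unit": "mm/s"}
publish.single(TOPIC, payload=json.dumps(reading), qos=1, hostname=BROKER_HOST)
```

On the ingestion side, the data hub or platform simply subscribes to the relevant topics, which is what makes the pattern attractive when many sensors from different suppliers need to feed the same pipeline.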

Leverage fit-for-purpose partners as required

Some OEMs now offer new equipment with embedded sensors and IoT gateways that communicate directly with their cloud, but for existing equipment this may require retrofits or replacements. IoT platforms may offer edge gateway aggregation with mostly fit-for-purpose sensor capabilities that may not be adequate for every operating condition and application. The truth is that several of these IoT providers will be required to support the various parts of the business operations. Oil & Gas upstream environments are very different from midstream or downstream ones.

Some service providers, such as drilling companies, may implement a holistic sensor and instrumentation strategy to provide aggregated data and insights as a service for specific types of applications. See Use Case #1, the Fracking Monitoring application, as an example.

There is no right or wrong answer. Just be aware that there is not one platform in the market that can manage them all for the various applications. Each IoT platform will manage firmware and software upgrades for the devices under its management. This needs to fit the overall data strategy and architecture, as those insights need to be collected for better analytics across the value chain.

Conclusion

The world of industrial analytics is just getting started, so we need structured frameworks to guide requirements. The knowns-unknowns approach is a good way to map those requirements for digital and analytics initiatives. It is based on four major stages or phases, depending on the magnitude of the initiatives:

  • Start with the known knowns: this is the crawling stage and consists of looking at critical events that are known and that have a known explanation as to why they occurred.

For the most part, these events are already captured in historians and control systems as time-series parameters and/or alarms and events.

With this, a good baseline data strategy can be formed to help assess data readiness, including data quality, for the use case in mind. It also provides an initial foundation for gaining visibility into the current state and conditions, and for monitoring metrics and KPIs that support the specific value levers.

  • Move into the known unknowns: think of these as the baby steps. This consists of identifying events that are known, but for which there is no consistent, data-driven explanation of what may have caused them and/or how to prevent them. Start by formulating a hypothesis; it may point to missing data and require collecting data from PLCs, RTUs, or even new IoT sensor devices such as video and audio, to name a few. Understand the environment and operational conditions that these sensors and devices need to meet, especially the communication bands, enclosures, resolution, bandwidth, and power constraints existing in the field.

At this point, the journey of building anomaly detection and/or failure prediction models begins, or of identifying companies that already offer commercial off-the-shelf products that include reliable analytics (a minimal sketch of such a model appears after this list).

Consider getting access to the results and the supporting evidence. Even though the models may indicate that an issue could arise, this does not mean that action needs to be taken immediately. Those insights need to be placed within the context of the operations and priorities, such as financial constraints, people, and equipment availability.

  • Move into the unknown knowns: the jogging starts, often leading to innovation opportunities. This may require a more knowledgeable, expert-driven design with subject matter experts who can point to typical things that have happened in other operations. It may simply be that there are no relevant events or data to support it in a single facility. Sharing data across multiple sites, operations, or partners can help strengthen the datasets for the development of analytical algorithms.

  • Lastly, consider the unknown unknowns: there will always be things that subject matter experts cannot yet explain or understand. Leverage data, information, and knowledge that may be available in other, adjacent areas of the operations. As a simple example, a process optimization SME may not be able to explain high variability without looking at chemical compositions in quality assays or lab survey results.
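For the known-unknowns step above, here is a minimal sketch of the kind of anomaly detection model that might be built first, using a simple rolling z-score in Python on a synthetic bearing-temperature signal. The signal, window size, and threshold are all illustrative assumptions; real models are built on actual historian data, richer features, and the failure hypotheses defined by the subject matter experts.

```python
import numpy as np
import pandas as pd

# Synthetic bearing-temperature signal sampled once per minute (for illustration only)
rng = np.random.default_rng(42)
temps = pd.Series(60 + rng.normal(0, 0.5, 1440))
temps.iloc[900:920] += 6.0  # injected fault-like excursion

# Rolling z-score: flag samples that deviate strongly from the recent operating baseline
window = 120  # two hours of history
baseline = temps.rolling(window).mean()
spread = temps.rolling(window).std()
z = (temps - baseline) / spread
anomalies = z.abs() > 4.0

print(f"{anomalies.sum()} anomalous samples detected")
```

Even a simple baseline like this helps test the hypothesis and confirm whether the available data is granular and trustworthy enough before investing in more sophisticated models.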

Uptake Fusion has been designed as an Industrial Data Analytics Hub to support the knowns-unknowns approach described above. Uptake Fusion can start by consolidating existing industrial data from historians and other operations management systems.

If data is missing, Fusion can acquire industrial data from control-system-level sources and/or ingest data from IoT sensor devices in the field or provided by service providers. As required, data from various sources can be segregated or amalgamated by independent sites or groups of sites so that data can be shared across them. All data in Uptake Fusion can be accessed securely by the various analytics teams and applications across organizational functions (maintenance, process, engineering, quality, environmental, etc.), all within the boundaries of our client's cloud environment as part of their data strategy.

Analytics results can also be ingested and stored in Fusion, or in the client's data lake via Fusion, to ensure that all relevant data is available to prioritize activities and actions, and/or to support other downstream enterprise analytics applications for value chain and supply chain optimization.

Uptake Fusion empowers organizations to have full control over their data strategy first (read "A Step Change from Traditional Methods with a Cloud-Native Industrial Data Strategy First to Empower an Analytics SaaS Ecosystem"), yet it allows data to be shared with third-party analytics SaaS providers (e.g., Snowflake, OneBridge, Seeq, Cognite).

If you have any questions or want to learn more about Uptake Fusion, please don't hesitate to contact us.
