Digital Solution Certainty

An OT data solution is much more than technology; it is a solution delivered to the whole organisation to meet the “data-driven” objective. Although the data management technology is important, in our experience it is only one of four key ingredients needed to deliver a successful digital project.

This is Parasyn’s algorithm for delivering data solutions.

a * b * c * d = successful data-driven outcome

a. The right Technology – it scales, performs, is well supported and cost effective, with no lock-in.

b. The right Implementation – a Systems Team that manages requirements and financial outcomes, delivers the desired capability, and provides continuing lifecycle support to ensure that capability endures.

c. The right IT and Business Integration Technology and landscape – removing any friction in delivering and maintaining the capability across the enterprise.

d. The right impact through organisational change – ongoing human feedback that makes data systems work and keeps them impactful.

Any of the key ingredients can substantially diminish or improve the outcome. It is not a summation where we can afford to be average in any aspect; it is a product of all key elements working in concert. Do the math: if any parameter is small, we get a poor outcome. Many projects fail because we address some but not all of the parameters. The worst part is that we do this on purpose so as not to “bite off too much”, yet in doing so we set ourselves up for failure.
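
To make the arithmetic concrete, here is a minimal Python sketch; the 0–1 scores are purely hypothetical and chosen only to illustrate the multiplicative effect:

    # Hypothetical 0-1 scores for the four key ingredients.
    technology = 0.8      # a. the right technology
    implementation = 0.8  # b. the right implementation
    integration = 0.8     # c. the right IT and business integration landscape
    change = 0.2          # d. organisational change neglected

    average = (technology + implementation + integration + change) / 4
    product = technology * implementation * integration * change

    print(f"average: {average:.2f}")  # 0.65 - looks tolerable on paper
    print(f"product: {product:.2f}")  # 0.10 - the actual, much poorer outcome

A single weak ingredient drags the whole product down, which is exactly why addressing only some of the parameters sets a project up for failure.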

Good OT data

Key Insights Gained from Data-Driven Solutions

We have learned so many things about data projects that it is challenging to keep the list short. Here is our attempt to tackle the big ones.

  • Data design for both greenfield and brownfield solutions is key to establishing a good baseline; however, do not expect standards to be maintained for the life of the information system. Complementary processes are required to manage the integrity of conventions and data quality. Expect engineering practices to be compromised, and when this occurs the integrity of the entire system will come into question, so be prepared, or intentionally design this risk out of the system.
  • Enterprise solutions of any type (not just data-based applications) are far less forgiving when the user base is broad and the context of information cannot be assumed. Even though the data model in the background may be well crafted, it must be useful to the lowest common denominator: the new employee who “knows nothing”!
  • A new generation of data users takes a “kiosk self-service” approach to consuming data and is less tolerant of technology limitations. Appreciating new ways of consuming data is key.
  • Change Management commences at requirements development. Be flexible but formal about meeting stakeholder needs.
  • Understand the funding model for use cases: people want their money well spent, because this is almost always internal funding, which is precious and mostly unrelated to the business’s supply chain.

Lessons Learned about OT data engineering

Edge technology is a selling feature of many new solutions. When vendors do not have control of all layers within the digital solution, they are forced to do edge analysis (data conditioning, for example) in the components inside their own scope. This may be optimal for the vendor but not for the enterprise solution. Further, when devices do not have the processing capability, the analysis must often be performed elsewhere, offline or in higher-performance environments, which also may not be optimal for data conditioning. We must take these factors into consideration when putting the solution together.

Conditioning data is paramount to managing digital solutions. Sometimes it should be done at the source, sometimes closer to the data repository, and sometimes at the client application. Data quality attributes, including known latency, reliability, error tolerances, resolution and sampling rates, must all be considered when designing the data management plan. Wherever data conditioning occurs, it should be flexible enough to optimise these metrics while retaining the raw data wherever possible, so that new data processing initiatives can be undertaken in the future.
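
As a purely illustrative sketch of this principle, the Python snippet below conditions a raw signal by averaging it into fixed windows while the raw samples are retained for future re-processing; the window size and values are hypothetical:

    from statistics import mean

    # Illustrative only: condition a raw OT signal by averaging it into fixed
    # windows, while keeping the raw samples so future initiatives can
    # re-process them with different rules.
    def condition(raw_samples, window=4):
        conditioned = [
            mean(raw_samples[i:i + window])
            for i in range(0, len(raw_samples), window)
        ]
        return raw_samples, conditioned  # raw data is never discarded

    raw = [20.1, 20.3, 19.8, 20.0, 20.2, 20.4, 19.9, 20.1, 20.0, 20.2, 20.3, 20.5]
    raw_kept, downsampled = condition(raw)
    print(downsampled)  # approximately [20.05, 20.15, 20.25] - smaller and easier to consume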

OT data is a jigsaw

Solving the OT to Business data challenge with OT DataBridge

Resilience

OT DataBridge is based on OT design and integrity principles (i.e. safe system design principles). To meet system availability targets (minimal downtime), the design considers the necessary aspects, including the server environment (virtual, on-premise, hosted locally, hosted regionally), application redundancy, protocols, data storage and data stream buffering. These factors should not become constraints on any solution.
The network may not always be “online” (depending on the use case); however, this does not need to translate to data loss. The platform caters for all data acquisition types and conditions data for all consumption types.
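
The sketch below illustrates the buffering principle in Python; the class and names are hypothetical and are not the OT DataBridge API:

    from collections import deque

    # Hypothetical store-and-forward sketch: samples are cached locally while the
    # link is down and flushed in order once connectivity returns, so an offline
    # network does not translate to data loss.
    class StoreAndForwardBuffer:
        def __init__(self, send, max_samples=100_000):
            self.send = send                        # delivers one sample upstream
            self.queue = deque(maxlen=max_samples)  # bounded local cache

        def publish(self, sample, link_up):
            self.queue.append(sample)
            if link_up:
                self.flush()

        def flush(self):
            while self.queue:
                self.send(self.queue.popleft())     # oldest first, order preserved

    # print() stands in for a real upstream writer in this example.
    buffer = StoreAndForwardBuffer(send=print)
    buffer.publish({"tag": "PT-101", "value": 350.2}, link_up=False)  # cached while offline
    buffer.publish({"tag": "PT-101", "value": 351.0}, link_up=True)   # link restored: both flushed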

Good OT data has to live somewhere safe

A customer’s pre-existing system configuration may not be suitable to support a “no loss” system; however, in our experience many systems do have the technical capability to meet this need but have not been implemented to support it. This is a common occurrence.

Platform resilience is simply a matter of allocating sufficient instances of applications and assigning sufficient resources to the production environments. The software applications are not the limiting factor. The platform is easily configured to dual-stream data into multiple environments so that development can occur on real-time data sets, avoiding unnecessary risks to the organisation’s digital infrastructure. The more important data is to the business, the more important it becomes to keep the digital infrastructure always online.

The following example is one method (our preferred one) for safeguarding the integrity of data systems. A deployment may include four environments, as shown, but should include at least two.

A user (typically a data scientist placing a high load on the platform) accesses the Test System and experiments with the environment using actual real-time data. When the experiments are over and the use cases are finalised, the design applied, development complete and testing approved, the new functions are deployed to the pre-production environment and then eventually staged to the production environment. This is done with high levels of confidence that the infrastructure will not be compromised and that the new features will work the first time.
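
A minimal sketch of the dual-streaming idea is shown below; the environment names, endpoints and delivery function are hypothetical, not OT DataBridge configuration:

    # Hypothetical sketch: fan the same real-time sample out to every configured
    # environment so development and testing run against live data without
    # touching production infrastructure.
    ENVIRONMENTS = {
        "test":           "https://test.example.internal/ingest",
        "development":    "https://dev.example.internal/ingest",
        "pre-production": "https://preprod.example.internal/ingest",
        "production":     "https://prod.example.internal/ingest",
    }

    def dual_stream(sample, deliver):
        """Send one sample to every environment endpoint via deliver()."""
        for endpoint in ENVIRONMENTS.values():
            deliver(endpoint, sample)

    # print() stands in for a real delivery mechanism in this example.
    dual_stream({"tag": "FT-201", "value": 12.7},
                deliver=lambda url, s: print(url, s))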

OT Data consumption

Extensibility

OT DataBridge has several open interfaces/APIs. It incorporates the ability to serve up raw or conditioned data or push specific data into multiple consumer platforms.

This is in part achieved by:

  • All conventional automation protocols supported
  • Easy DMZ configuration
  • Encryption supported
  • Redundancy supported
  • Tiered architecture supported
  • Configurable extensibility, “no code”

The configuration of data structures allows the data types to match the organisation’s Enterprise Asset System’s (or any other system’s) nomenclature. The process and control system conventions can also be retained. This is important because it avoids the need to re-engineer the OMCS as a precursor to digital projects; such re-engineering is no longer a prerequisite for our solutions.
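
As an illustration of retaining both conventions side by side, the sketch below carries the control system tag and the enterprise asset name in the same record; all tag names and asset identifiers are hypothetical:

    # Hypothetical sketch: each record keeps the OT/control-system tag alongside
    # the Enterprise Asset System nomenclature, so neither convention has to be
    # re-engineered.
    TAG_MAP = {
        # control system tag -> enterprise asset system name
        "41-PIT-1001.PV": "SITE-A/COMPRESSOR-01/DISCHARGE_PRESSURE",
        "41-TIT-1002.PV": "SITE-A/COMPRESSOR-01/DISCHARGE_TEMPERATURE",
    }

    def to_enterprise(sample):
        """Attach the enterprise name while keeping the original OT tag."""
        return {
            "ot_tag": sample["tag"],
            "enterprise_name": TAG_MAP.get(sample["tag"], "UNMAPPED"),
            "value": sample["value"],
        }

    print(to_enterprise({"tag": "41-PIT-1001.PV", "value": 350.2}))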

The system can be queried for information (served) or the system can deploy the data and in some cases the data structures can also be deployed to other data platforms.

The data structures can be configured using a user interface that supports drag-and-drop extensibility. During the initial system setup, when the organisation’s standards are defined, the data structures are configured to match the business standards while still supporting OT definitions. The extensibility tools are easy to use and suitable for system maintainers to uphold the organisation’s standards, an essential aspect of system integrity and maintainability. With high-quality data sets (structured or unstructured), new data structures can be applied to cater for new use cases.

Getting OT Data out of the Dark

Data Quality and Trust

The integral inputs to all data projects are data quality and the data definitions. Labelling data is only one aspect of the identification that gives it value. The dimensions of data are essential to its consumption. If data dimensioning has not been applied and the data is raw and unconditioned, it can be dimensioned for easier consumption, extending the platform’s use to a wider and much less technical group of users.
At the planning stage, the data design does not usually need to be considered when building use cases, provided the underlying technology is flexible and easily configured. Circling back, the basic assumption is that the data is trusted, which allows the system’s digital footprint to be extended freely. Coupled with flexible technology, the entire platform solution provides confidence for broad expansion.
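
A minimal sketch of attaching quality dimensions to a raw sample, so that consumers can judge how far to trust it, is shown below; the thresholds, limits and field names are hypothetical:

    from datetime import datetime, timezone, timedelta

    # Hypothetical sketch: stamp each raw sample with simple quality dimensions
    # (staleness and range checks) so downstream consumers can judge trust
    # without inspecting the source system.
    MAX_AGE = timedelta(seconds=30)
    VALID_RANGE = (0.0, 1000.0)  # engineering-unit limits for this signal

    def add_quality(sample, now=None):
        now = now or datetime.now(timezone.utc)
        stale = (now - sample["timestamp"]) > MAX_AGE
        in_range = VALID_RANGE[0] <= sample["value"] <= VALID_RANGE[1]
        sample["quality"] = "good" if (in_range and not stale) else "uncertain"
        return sample

    sample = {"tag": "41-PIT-1001.PV", "value": 350.2,
              "timestamp": datetime.now(timezone.utc)}
    print(add_quality(sample)["quality"])  # "good" for a fresh, in-range sample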

Model Predictive Control emulates plant operation  
