In 2007 the Australian Government introduced the National Greenhouse and Energy Reporting Scheme (NGERS), providing the first mandated national reporting guidelines for Australian companies. The instrument for compliance is the National Greenhouse and Energy Reporting Act 2007.
Glencore (then Xstrata), like many organisations, initially reported information consolidated from data sources including complex spreadsheets. In time, software applications and platforms began to emerge, offering out-of-the-box support for the Australian Government's NGERS reporting.
In 2011, Parasyn and Glencore commenced evaluating options for greenhouse gas reporting to meet these regulatory requirements. Enterprise applications were considered, including packages that offered Glencore corporate business benefits beyond NGERS (e.g. the opportunity to reduce OPEX simply by evaluating energy use, or better gas model definitions through data analysis).
Digitisation is a positive step forward, but only the start
When it comes to digitisation, software applications implement a standardised user interface and reporting presentation layer. Software applications also aim to improve the user experience (UX). This is usually very simple to achieve for the basic data entry applications we have all seen on smart devices. When the input data is complex and its quality challenging to qualify, applications themselves can become complex.
Perhaps the most challenging issue with data intensive applications is managing and tracking questionable data. Just because data is digitised, i.e. moved from paper or spreadsheets into a software platform, the usefulness of the data doesn't improve; only access to it does. On its own, Big Data doesn't change the world, but it lays out the platform that may change a few things. Decisions, whether automated or interpreted by humans, are only as good as the information. The information is only as good as the data it is built on.
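The point about tracking questionable data can be made concrete. A minimal sketch (hypothetical names and ranges, not any particular vendor's implementation) of carrying a quality flag alongside each digitised value, so that downstream reporting can distinguish good, suspect and missing data rather than silently trusting every number:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Quality(Enum):
    GOOD = "good"        # value passed validation rules
    SUSPECT = "suspect"  # value present but outside the expected range
    MISSING = "missing"  # no value was recorded

@dataclass
class Reading:
    tag: str                 # e.g. a fuel-meter identifier
    value: Optional[float]
    quality: Quality

def validate(tag: str, value: Optional[float], lo: float, hi: float) -> Reading:
    """Attach a quality flag instead of silently accepting the number."""
    if value is None:
        return Reading(tag, None, Quality.MISSING)
    if not (lo <= value <= hi):
        return Reading(tag, value, Quality.SUSPECT)
    return Reading(tag, value, Quality.GOOD)

readings = [
    validate("diesel_m3", 120.0, 0.0, 500.0),
    validate("diesel_m3", -5.0, 0.0, 500.0),   # negative flow: suspect
    validate("diesel_m3", None, 0.0, 500.0),   # gap in the record
]
usable = [r for r in readings if r.quality is Quality.GOOD]
```

Flagging rather than discarding preserves the audit trail: a reviewer can still see what was rejected and why, which matters when the output feeds regulatory reporting.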
Data often offers more than a singular benefit
After Glencore selected the software application to host the reporting data and provide the user interface, the question of data sources remained. Where is the best place to acquire process and business data, and where is the right place to store it? How can human error and intentional manipulation of data be reduced, or even removed?
Just as enterprise software applications may be used to deliver a host of different functions, to consider data as singular in purpose is a long-term investment mistake. The value of data in the first instance is usually to meet a primary function of the business. The owner of that business function is usually an individual, or a small set of stakeholders, with a perspective focused on achieving the functional requirements. The business case to justify the investment requires tight scrutiny and cost management. This can be a limiting factor if the data has potential use for the whole of business. Nevertheless, there is often scope for expanding the usefulness of emerging data systems when the expected information is codified in terms of how the whole of business can improve. This takes up-front effort.
When data is digitised and presented in a better form for consumption, the possibilities explode. How data is organised plays an important role in adoption, because usability is key to acceptance of change.
Enterprise Application and Enterprise Historian: initially both play a role
The architectural design for the Glencore NGERS solution included Enablon as the Enterprise Application for reporting (Air Quality Model) and using OSI PI as the Enterprise Historian. The historian also functions as the aggregator for raw data collected from remote mine sites.
Implementing enterprise class software applications requires deep consideration of hardware, software functionality, access to data, data storage, cybersecurity, networking, user administration, functional configuration, transitioning data from legacy systems (if they exist), configuration management, systems engineering, human factors engineering and lifecycle management, to name a few. How these all work together should influence the choice of software applications, and they should not be considered in isolation from each other.
When change feels needed, can it be done with ease?
Being forced or snookered into a situation where options are limited is frustrating, and usually very expensive. Designing a system for lifecycle management should include transition-out considerations, providing better options for the next stage.
Transition-out consideration often changes a number of elements of systems design and almost always strengthens the requirements for IP, configuration and product management. It further leads to a higher level of governance. This is not always good for vendors, but it lowers risk for the system owners. No one wants to think about this when they buy something shiny and new, but in today's world, which places high value on good information, the key area of lifecycle management should be data. That data now includes information derived from the raw data, usually what has been published or reported to other parties.
Of prime importance is access to all of the data and whether the data configuration is proprietary or open. Keeping this in check safeguards the investment in data integrity, including the high cost to QA a new system. Often the QA process can take years to complete, and that may be years after the vendor has moved on. These follow-on costs are an important part of lifecycle management. Further, this initial QA investment is repeated, entirely or in part, for each technology refresh cycle if data is not transferred reliably to the next system.
When change is required, do you start again?
As Glencore matured its requirements, the proposal to shift all reporting requirements to the PI platform was conceived. To facilitate this, the investment in OSI PI was more fully realised by adding an application layer using PI ACE. The purpose of the PI ACE application was to perform data calculations based on stringent rules and automate the process of transforming raw data into reporting information. Enablon was transitioned out in terms of NGERS, and PI ACE became the uplift, with manual data entry via the company's ERP (SAP). Having good information available, the result of taking a lifecycle approach to managing information assets, streamlined achieving the transition-out and transition-in goals.
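The kind of stringent, rule-based transformation described above can be illustrated with a short sketch. This is not Glencore's PI ACE configuration; it assumes the simplest NGER-style fuel combustion calculation, where emissions are fuel quantity multiplied by energy content and an emission factor, and the factors shown are placeholders rather than published NGER values:

```python
# Illustrative scope 1 calculation in the style of the simplest NGER method:
#   emissions (t CO2-e) = Q (units) * EC (GJ/unit) * EF (kg CO2-e/GJ) / 1000
# The factors below are placeholders, not published NGER values.

FACTORS = {
    # fuel: (energy content in GJ per unit, emission factor in kg CO2-e per GJ)
    "diesel_kL": (38.6, 70.2),
    "natural_gas_GJ": (1.0, 51.4),
}

def scope1_emissions(fuel: str, quantity: float) -> float:
    """Tonnes of CO2-e from a quantity of fuel, using fixed factors."""
    if quantity < 0:
        # A stringent rule: reject bad input rather than report it.
        raise ValueError(f"negative quantity for {fuel}: {quantity}")
    ec, ef = FACTORS[fuel]
    return quantity * ec * ef / 1000.0

total = scope1_emissions("diesel_kL", 120.0) + scope1_emissions("natural_gas_GJ", 500.0)
```

The value of automating even a calculation this simple is consistency: every site's raw data passes through the same rules, so the reported figure cannot drift with each analyst's spreadsheet.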
In terms of lifecycle management, OSI PI is not solely dedicated to NGERS reporting. Process data from other Glencore facilities is collected and stored in the Enterprise Historian and becomes available for other corporate uses. It made good sense to multipurpose the Enterprise Historian already in place and maintained.
When should you start again?
Configuration Management is key to systems integrity. It applies to artefacts that represent designs, software configurations, system audits, incident management, incident performance management reporting, and source code and database backup systems. Configuration Management most certainly reduces operating costs, and the silent partner is the PR department, which has less to deal with when systems are reliable and do not expose the organisation.
If a system starts from a Proof of Concept (PoC), the approach to getting a successful trial most often overlooks aspects of configuration management and systems engineering. It is a good idea, after a standalone PoC is successfully completed, to start again. Even better, if the PoC forms part of a larger undertaking (project or works program), a strong commitment from stakeholders triggers earlier, staging is considered and a whole-of-life perspective of the PoC is applied. Too often, PoC means "does it functionally work?", not "can it meet the requirements of the wider business, and does it scale?"
Have you ever seen a PoC that tests the full scalability or limit testing of a software application or hardware device? Why not?
Have you ever seen a PoC merge into being an essential OT system for managing critical assets? Too often this is the path, and it almost always leads to uncertainty and poor reliability.
The Parasyn Glencore Enterprise Application Journey
At different stages of the Glencore NGERS lifecycle, Parasyn has played different roles. At crossroads, our role has been designer, integrator, tester and maintainer. This is in contrast to caretaker mode, which is more focussed on data integrity, reporting compliance, application support, extending the system to capture new data sources, and configuration for new standards.
Does Lifecycle Management cost more?
"How much does it cost?", answered at the beginning of a journey, may seem very attractive to the novice. Prudent consideration means weighing full lifecycle costs, including transition-out and replacement costs. Further, given that engineering costs are generally very high, the replacement costs of the software itself pale into insignificance.
The danger in a platform refresh as an activity is that Enterprise Application software license costs become the focus and distort the bigger picture. This is becoming more common as "subscription" licensing becomes prevalent and therefore "sticky". The notion that being able to step away easily from a subscription license arrangement makes it "lower risk to the business" misses the main issue.
For data intensive applications where data validation and system expansion are common practice, the replacement costs over the full lifecycle are magnitudes higher. This leads to the conclusion: to avoid feeling locked into replacing a system because it is no longer functional, assess the lifecycle management practice and determine whether it has been applied with discipline. If it has, transition out should be methodical. If it wasn't, then a repeat (disaster) cycle should be expected. "The greater the pain to maintain" is generally a strong indicator that lifecycle management, and probably configuration management, is the root cause of most of the anguish.