Trusting Data for Action
Can you trust every bit of data and information coming across your desk, your screen, or shared during a meeting? Probably not. How much of it is good? How good is it? One would like to think most of it is good, even quite good, since millions have been invested in measurement and data collection systems, in report development, and in databases and data warehouses to ensure the right data and information are always available at your fingertips. Today, the speed of business is such that regardless of how good, reliable, fresh, and "trustable" the information is, we will use it to take immediate action or to shape future actions and decisions.
Then one day, we spot an inconsistency and start pulling the data, the information, and the reports apart. At this moment we realize the magnitude of the challenge of ensuring data quality throughout the life cycle of each data element that makes its way to a user or a system for immediate or future action. There is a long list of things that can go wrong: a wrong measurement system, poor calibration, undetected defective meters, manual data entry errors, transcription errors, wrong correction factors, engineering unit mismatches, stalled sensor readings, missing or bad data in a file or in a data historian stream, corrupted records, data loss due to communication system failure, wrong date and time references, data from another system not arriving on time, gross errors, poor presentation of the information, and so on.
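Two of the failure modes listed above, stalled sensor readings and implausible values, are among the easiest to screen for automatically. The following is a minimal illustrative sketch (generic checks, not any specific product's implementation; the function names and thresholds are assumptions for the example):

```python
# Illustrative data-quality screens for two failure modes from the list
# above: stalled sensor readings and out-of-range values.

def is_stalled(readings, window=5):
    """Flag a sensor whose last `window` readings are identical -
    often a frozen transmitter rather than a truly constant process."""
    tail = readings[-window:]
    return len(tail) == window and len(set(tail)) == 1

def out_of_range(value, low, high):
    """Flag a value outside the physically plausible range for the meter."""
    return not (low <= value <= high)

flow = [101.2, 101.2, 101.2, 101.2, 101.2]
print(is_stalled(flow))             # True - the reading has not moved
print(out_of_range(-3.0, 0, 500))   # True - a negative flow is implausible
```

Checks like these catch single-signal problems; cross-signal inconsistencies (such as a mass balance that does not close) require the reconciliation techniques discussed later.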
In process and manufacturing, information systems should be implemented with a system of checks and balances to maximize the quality of data and the economic potential of data. Creating an environment where data can be trusted is not an afterthought. Data quality, similar to product quality, needs to be built into our business and data management activities so that we can trust data for action at all times.
At Pimsoft, we care about data. Using Sigmafine® as a foundation, we implement the checks and balances needed to ensure that anyone in the organization relying on process and manufacturing data is getting the best available and technically possible data quality at any time. Sigmafine uses proven methodologies based on physical conservation principles, supported by statistics and engineering standards, to carry out these checks and balances. Our users can testify to the importance and relevance of data quality in maximizing the value of information.
When it comes to data and information, value and quality are inseparable concepts. You cannot maximize the value from information without trust in your data. Pimsoft and Sigmafine deliver sustainable data quality governance and practices that will have a tangible and lasting impact on your business.
If you are interested in knowing more, join us at SFUM 2015 on October 20-22, our upcoming annual Sigmafine Users Meeting. You will hear from users about the value of "trustable" information and learn how Pimsoft delivers data quality to its process and manufacturing customers. Learn more here.
Process Industry Professionals: What is your tolerance for bad data quality & poor information?
Whether it is called "Industry 4.0", "Edge Computing", "Big Data" or "IoT", the tolerance of people, applications and business processes to poor data quality in industrial plants is diminishing rapidly. Modern process industries thrive on readily usable information and credible data to deliver sustainable, tangible business results.
Until recently, Data Quality was relegated to Data Validation and Data Reconciliation projects supporting specific functions such as production and yield accounting, mostly in the hydrocarbon process industries.
It is time to embrace Data Quality by implementing a strategy that will result in a trustable industrial plant dataset.
The targeted outcomes are clear:
- There are three classes of data users we must satisfy:
  - People
  - Software Applications (downstream from sensors)
  - Operational and Business Processes
- Industrial plants are continuously generating different types of data (for instance, streams, events, and transactions) which must be assembled into a coherent, accurate, credible, functional and usable dataset suitable for all classes of data users. We call it the "Industrial Plant Dataset".
- The data users must be connected with the "Industrial Plant Dataset" in a reliable, sustainable way in order to move operations and business forward and redirect when necessary.
The strategy of Pimsoft is based on implementing Sigmafine, a robust system of checks and balances. Using conservation principles, statistics, engineering standards and calculations to monitor and assemble Industrial Plant data, Sigmafine generates a dataset that is coherent, trustable and usable - a dataset ready for business.
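The conservation-based reconciliation this paragraph describes can be illustrated with the classical weighted least-squares formulation from the data reconciliation literature (a generic textbook sketch, not Sigmafine's actual implementation; the unit, flows, and variances below are assumptions for the example):

```python
# Sketch of linear data reconciliation: adjust redundant flow measurements
# so they satisfy a mass balance, weighting each adjustment by the
# variance (trustworthiness) of its meter.
import numpy as np

# Hypothetical unit: feed F splits into products P1 and P2, so F - P1 - P2 = 0.
A = np.array([[1.0, -1.0, -1.0]])   # constraint matrix (mass balance)
y = np.array([100.0, 60.0, 45.0])   # raw meter readings (imbalance = -5)
V = np.diag([4.0, 1.0, 1.0])        # measurement variances (meter quality)

# Classical weighted least-squares reconciliation:
#   x_hat = y - V A^T (A V A^T)^-1 (A y)
r = A @ y                                            # balance residual
x_hat = y - V @ A.T @ np.linalg.solve(A @ V @ A.T, r)

print(x_hat)        # reconciled flows; the least-reliable meter moves most
print(A @ x_hat)    # balance now closes (approximately 0)
```

Note how the feed meter, with the largest variance, absorbs most of the correction: reconciliation distributes the imbalance according to how much each measurement is trusted.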
We implement Sigmafine as an adaptive Enterprise Information Service (EIS) which can evolve with the business and operational needs of data users. Ensuring Data Quality is no longer a task or a function relegated to the desk of a statistician or a production accountant, or to a point solution. Instead, it is an enterprise service that runs autonomously, spans the value chain of the industrial plant, and is accessible to all data users.
Defining Data Quality in the Process Industries
Data Quality is an abstract concept until we are confronted with bad data or information which is not usable, not credible, not presented correctly, not accurate, and so on. Then Data Quality becomes a very concrete experience. While there are many definitions of Data Quality, all of them converge on the same focal point: "fitness for use" by people, applications and business processes.