Process Data from the Ground Up

The “Flux Capacitor” may have made time travel possible in the movie Back to the Future, but the control loop is what makes industrial automation possible. The humble control loop is the foundation of the distributed control system (DCS), and it does its job very well. The DCS receives source data from the measuring devices in the field and uses that data to make control decisions for the process based on parameters that skilled operators set and monitor. That information then goes to the historian and into the enterprise resource planning (ERP) system. Management then reviews the archived reports and makes critical business decisions based on the information shown. But can they trust that information?

Not if data coming into the DCS isn’t reliable or properly conditioned.

To create quality ERP system reports, it’s important to ensure that the basic incoming data is good and ultimately available in a format that properly reveals the necessary information. It must provide the right data, in the right format, to the right people.

Aside from basic signal conditioning, in most cases the DCS doesn’t invent or change its incoming device data; it accepts the data flow as true. Generally, if proper software testing has been done, the DCS can be trusted to perform its calculations reliably.

For now, let’s assume the DCS is healthy and functioning properly. It is far more likely that any errors in the data were introduced before it reached the I/O card. Here are some key points to consider concerning the information coming in from the field devices.

Device Data Capture

Although it sounds simple, it’s critically important to know what data you’re collecting and how it will be used and visualized. For example, let’s look at transfers between tanks. When moving material out of Tank A into Tank B, a DCS could make a calculation to verify that the material leaving Tank A coincides with the material entering Tank B. You can set an alarm or take automatic action if the calculations don’t agree, but if the field devices or sensors are not giving the right information, that’s an issue. The calculations and actions we take are only as reliable as the input data they operate on.
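
To make this concrete, here is a minimal sketch of such a material-balance check; the tag deltas, tolerance and alarm handling are hypothetical, not taken from any particular DCS.

```python
# Hypothetical material-balance check for a Tank A -> Tank B transfer.
# The tolerance and alarm handling are illustrative only.

def transfer_balances(tank_a_delta: float, tank_b_delta: float,
                      tolerance_pct: float = 2.0) -> bool:
    """Return True if the volume that left Tank A matches the volume
    that arrived in Tank B within tolerance_pct percent."""
    volume_out = -tank_a_delta            # Tank A level should have fallen
    volume_in = tank_b_delta              # Tank B level should have risen
    if volume_out <= 0:
        return False                      # nothing actually left Tank A
    mismatch_pct = abs(volume_out - volume_in) / volume_out * 100.0
    return mismatch_pct <= tolerance_pct

# Example: 100.0 units left Tank A, but only 93.5 showed up in Tank B.
if not transfer_balances(tank_a_delta=-100.0, tank_b_delta=93.5):
    print("ALARM: transfer imbalance - check the level sensors on both tanks")
```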

There are several things to consider as the data moves from the collection point in the field through the DCS and eventually to the historian and ERP.

First, let’s look at how the sensor data comes into the DCS. Typically, the sensing element is connected to a transmitter that sends the data to an I/O card over some length of wire. Although wireless technology has been around for years, hardwired sensors are more commonly used, especially in larger facilities. The wire length may be significant, and in some cases the measured changes that matter to the process may be too small to carry reliably over a 4–20mA signal and could get lost in the noise. In that case, the signal would be converted from analog to digital using one of the many commonly used fieldbus protocols, such as Modbus, Profibus, or EtherNet/IP, and then sent to the DCS.

Smart transmitters, of course, are most common in modern systems. They send a conditioned signal straight into the DCS and provide a conduit for diagnostic information, through a communicator at the I/O cabinet or even through the DCS itself, allowing for easier maintenance and calibration to ensure that the incoming data is as reliable as possible. There are also instances, however, especially in the case of thermocouples or RTD temperature sensors, where the signal goes directly into the I/O card without a transmitter or conditioning. Some transmitters can also be connected to the DCS via a fieldbus protocol through a network switch.

The data coming in from the sensors relies on the electrical system, which must be well designed and robust enough to handle the data coming over the wires. Power supply quality and reliability, proper grounding, noise from nearby high-voltage electrical systems, signal loss over long runs of wire, and other factors should all have been considered in the electrical system design to ensure data integrity.

Assuming the electrical system is sound, the sensing devices are installed properly and the wiring is good, we can begin looking at the type of data coming in from the devices: how it behaves, how it will be used and what it indicates to the operators.

Scan Rate and Scan Class

First, look at the I/O scan rates in your DCS. The type of data you are collecting should tell you the scan rate you need. Slow-to-change data does not need to be scanned and written every second, but high-frequency data, such as vibration analysis data, may warrant a sub-one-second scan rate. If you’re bringing in data from a temperature sensor, you may need that information once per second, or perhaps only every 5 seconds.

For fast-moving data, most vendors make sequence of events (SOE) cards, which are I/O cards that scan on the order of a millisecond or faster, to allow visibility into which inputs changed state and when. This information can be important where there are process implications, but most commonly it is used for safety purposes. For example, if an alarm trips or an accident occurs, it’s important to understand the root cause of the issue and be able to see the sequence of events with high time resolution, including when each input changed and which change came first.
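
As a simple illustration of why millisecond timestamps matter, the sketch below sorts a handful of invented digital input changes so you can see which one changed state first; it is not vendor SOE firmware.

```python
# Reconstructing a sequence of events from timestamped digital inputs.
# The tag names and timestamps below are invented for illustration.
from datetime import datetime

events = [
    {"tag": "PUMP_101_TRIP",    "value": 1, "ts": datetime(2024, 1, 5, 3, 12, 7, 482000)},
    {"tag": "LOW_FLOW_SWITCH",  "value": 1, "ts": datetime(2024, 1, 5, 3, 12, 7, 455000)},
    {"tag": "ESD_VALVE_CLOSED", "value": 1, "ts": datetime(2024, 1, 5, 3, 12, 8, 3000)},
]

# Sorting by timestamp shows that LOW_FLOW_SWITCH changed state first.
for e in sorted(events, key=lambda ev: ev["ts"]):
    print(f'{e["ts"].isoformat(timespec="milliseconds")}  {e["tag"]} -> {e["value"]}')
```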

Next, let’s move out of the DCS and into the historian and ERP layers. If you’re collecting real-time data, you must also use the right type of interface. The interface type is determined by the type of data you are collecting, such as whether it is real-time or historized data. Here are some interface recommendations:

  • OPC Data Access (OPC-DA) is recommended for real-time data
  • OPC Historical Data Access (OPC-HDA) is recommended for historized data
  • OLE Database (OLE-DB) Enterprise and OLE-DB Provider
  • ODBC Driver

Another consideration is the scan class and interface configuration. A scan class is a code that historian interfaces use to schedule data collection. You set the scan class in the Interface Configuration Utility when you configure an interface (see Table 1).

Table 1: Scan Class Components List

Period (Scan Frequency)
  • Description: Specifies how often the interface collects data.
  • Optional: No
  • Example: 01:00:00 (get data every hour).

Offset
  • Description: Specifies a start time for data collection. The Data Archive interprets the value starting from midnight of the current day.
  • Optional: Yes
  • Example: 01:00:00,13:00:00 (get data every hour, starting at 1:00 p.m.).

UTC Time
  • Description: Requires that the scheduling be synchronized with UTC. To use it, add “,U” after the scan class. UTC scan classes are not affected by daylight saving time because the scheduling synchronizes with UTC rather than local time. This flag has no effect if the scan-class period is 1 hour or less.
  • Optional: Yes, but recommended
  • Example: 02:00:00,13:00:00,U (get data every two hours, starting at 1:00 p.m. UTC).

Local Time
  • Description: Specifies that during a transition from daylight saving time to standard time the scan-class period is 24 hours, and during a transition from standard time to daylight saving time the scan-class period is 22 hours. To specify local time, add “,L” after the scan class. This setting has no effect when the scan-class period is 1 hour or less.
  • Optional: Yes; using it forces wall-clock scheduling
  • Example: 23:00:00,08:00:00,L (during a transition from daylight saving time to standard time, get data after 24 hours; during a transition from standard time to daylight saving time, get data after 22 hours).
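
To illustrate how a period-and-offset scan class such as 01:00:00,13:00:00 behaves, here is a rough sketch that computes the next scheduled collection time from midnight of the current day. It is only an illustration of the scheduling idea, not the historian’s actual implementation, and it ignores the “,U” and “,L” flags.

```python
# Minimal sketch of period/offset scan-class scheduling, e.g. "01:00:00,13:00:00".
# Illustrative only; it ignores the ",U" and ",L" flags described above.
from datetime import datetime, timedelta

def parse_hms(text: str) -> timedelta:
    h, m, s = (int(part) for part in text.split(":"))
    return timedelta(hours=h, minutes=m, seconds=s)

def next_scan_time(scan_class: str, now: datetime) -> datetime:
    parts = scan_class.split(",")
    period = parse_hms(parts[0])
    offset = parse_hms(parts[1]) if len(parts) > 1 else timedelta(0)
    midnight = now.replace(hour=0, minute=0, second=0, microsecond=0)
    scan = midnight + offset
    while scan <= now:              # step forward one period at a time
        scan += period
    return scan

# "Get data every hour, starting at 1:00 p.m."
print(next_scan_time("01:00:00,13:00:00", datetime(2024, 1, 5, 14, 30)))
# -> 2024-01-05 15:00:00
```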

Data Conditioning

Assuming you’ve met the challenges of setting up good field device incoming data, the DCS now relies on that information to control the process. More often than not, process measurements enter the DCS as a 4–20mA analog signal but must be converted into digital values that the DCS controller uses. Note the resolution of the analog-to-digital converter (ADC) on the I/O card, which can be as low as 12 bits or as high as 24 depending on the make and model.
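
As a rough illustration of what that resolution means, the sketch below scales a 4–20mA signal into engineering units and shows the smallest change different converters can resolve across the span. The 0–500 gallon range is an assumed example, and a real I/O card performs this scaling internally.

```python
# Hypothetical 4-20 mA scaling and ADC resolution illustration.
# A real I/O card performs this conversion internally; the 0-500 gallon
# span is an assumed example.

def ma_to_engineering(ma: float, lo_eu: float, hi_eu: float) -> float:
    """Linearly scale a 4-20 mA signal to engineering units."""
    return lo_eu + (ma - 4.0) / 16.0 * (hi_eu - lo_eu)

print(ma_to_engineering(12.0, 0.0, 500.0))   # 250.0 gallons at mid-scale

# Roughly, the smallest resolvable change if the converter's counts are
# spread across the measurement span (a simplification of real hardware):
for bits in (12, 16, 24):
    print(f"{bits}-bit ADC: ~{500.0 / (2 ** bits):.6f} gallons per count")
```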

Once the data is in the control system, signal conditioning can be a concern. Noise can come either from the process instrument itself or from electrical interference from nearby wiring. For example, differential pressure flow measurements from orifice plates or similar technologies are often noisy simply due to their principle of operation. Filtering techniques can be applied in the DCS to smooth out the measurement from a noisy sensor.

Low-pass filters are often used to filter out high-frequency noise from either the sensor itself or electrical interference. They are typically a good fit in a processing plant, since processes often have dynamics on the order of seconds or minutes, while these types of noise have dynamics in the millisecond range. When you filter a signal, there is always a trade-off between responsiveness and noise rejection – the more noise you reject, the slower your filtered signal will be to catch up with reality. The engineering decision here is a choice between how much noise you need to reject and how quickly you need to see a changing process signal.
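
A common way to express that trade-off is a first-order (exponential) low-pass filter. The sketch below is generic rather than any particular DCS function block, and the scan time and filter time constant are assumed values.

```python
# Generic first-order low-pass filter (exponential smoothing). A larger
# time constant rejects more noise but makes the filtered value slower
# to follow a real process change.

def low_pass(previous_filtered: float, raw: float,
             dt_s: float, time_constant_s: float) -> float:
    alpha = dt_s / (time_constant_s + dt_s)
    return previous_filtered + alpha * (raw - previous_filtered)

# Example: 1-second scan with an assumed 10-second filter time constant.
filtered = 50.0
for raw in [50.0, 58.0, 41.0, 55.0, 49.0]:   # noisy readings
    filtered = low_pass(filtered, raw, dt_s=1.0, time_constant_s=10.0)
    print(round(filtered, 2))
```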

Data Visualization

Knowing the type of data being collected is also important when deciding how it should be presented visually. For example, tank levels may be presented in a trend or as a graphic showing level and temperature. Pumps and motors may be presented graphically with attributes such as pressure, RPM or vibration shown as continuous readings, or trend charts may be used to display those attributes.

In the historian, it may be necessary to show the same data differently for different users. Should the data be shown on a gauge, or should it be shown as a trend? An operator looks at the data differently than a maintenance person or a manager. An operator, for instance, cares about what happened in the last few hours or so, whereas a maintenance person cares about what has happened in, say, the last six weeks, and a manager about the last several months.

These factors will determine setup parameters, such as data compression rates. For example, if you are measuring the level of liquid in a tank that has product being taken out and put in on a regular basis, you will need to set the data compression accordingly. If you are receiving data that is slow to change, the compression settings can be a bit less stringent.

For example, in a tank monitoring situation, once the temperature cools to its set point, it should stay within +/- 5 degrees, so during cooling the data compression can be looser. During the heat-up, however, you will want the compression to be tighter so you can see whether the temperature is still rising or starting to slow down, confirming that the valves are closing, the agitation is still working and so on. If you’re monitoring something like vibration in a motor, you’ll need to know the vibration limits; for example, how much vibration is enough to damage the machine, or at what point the data indicates that the machine needs maintenance.

A different example would be inventory levels. If you are storing lube oil in 60 55-gallon drums and sell them off about every three months, you don’t need that data every 15 seconds. Set the compression so it only writes when the inventory number changes. Most historians allow you to change the compression settings, and even the way the data is collected, so that a value is recorded only when there’s a change.

Exception Deviation and Compression Deviation

As defined for tags used in Rockwell Automation’s FactoryTalk® Historian and the OSIsoft PI Data Archive (historian):

Exception reporting is used to define the precision of a data stream, and the amount of deviation that constitutes a significant change. Most interface programs can execute an exception-reporting algorithm to determine when to send a point value to the Snapshot subsystem. An exception is an event that occurs either:

  • After a specified minimum duration of time since the previous event, while exceeding a specified deviation in value from that event.
  • After a specified maximum duration of time since the previous event.

This means that when activated, exception reporting filters events and stores only periodic values, including duplicates, unless an event represents a significant change in the short-term trend of values. An exception event, both timestamp and value, is sent with the previous event to the Snapshot.

An exception deviation is the deviation in value required to store an event, either as a number of engineering units, or as a percentage of the point’s Span value. The exception deviation should be less than the compression deviation by at least a factor of 2 and is ignored for digital, string and binary large object (BLOB) data type points.

  • Min Time – The minimum time that must elapse after an event before an exception value can be stored.
  • Max Time – The maximum time that can elapse after an event before automatically storing the next event as an exception value. Set the minimum and maximum time values to 0 to turn off exception reporting.
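
A simplified sketch of that rule is shown below; it is an illustration of the exception test described above, not OSIsoft’s actual code, and the parameter values in the example are assumptions.

```python
# Simplified exception-reporting test: pass an event through if it
# deviates from the last reported event by more than the exception
# deviation after the minimum time, or if the maximum time has elapsed.
# Parameter values are illustrative, not vendor defaults.

def is_exception(last_value: float, last_time_s: float,
                 new_value: float, new_time_s: float,
                 exc_dev: float, exc_min_s: float, exc_max_s: float) -> bool:
    elapsed = new_time_s - last_time_s
    if elapsed < exc_min_s:
        return False                 # too soon since the previous event
    if elapsed >= exc_max_s:
        return True                  # force a value through periodically
    return abs(new_value - last_value) > exc_dev

# Example: temperature point with ExcDev = 0.5 deg, ExcMin = 0 s, ExcMax = 600 s.
print(is_exception(100.0, 0.0, 100.2, 30.0, 0.5, 0.0, 600.0))   # False
print(is_exception(100.0, 0.0, 101.0, 30.0, 0.5, 0.0, 600.0))   # True
```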

Once events are sent to the Snapshot subsystem, a compression algorithm can further filter data and reduce storage to only significant values as they are moved into the archive. An event is recorded:

  • After a specified minimum duration of time since the previous event, if it exceeds a specified deviation in value from that event.
  • After a specified maximum duration of time since the previous event.

When activated, compression reporting filters events and stores only periodic values (including duplicates), unless an event represents a significant change in the short-term trend of values.

To turn off compression and archive every event that passes exception reporting, disable the compressing attribute.

For a compression deviation, enter the deviation in value required to record an event, either as a number of engineering units, or as a percentage of the point’s Span value.

For most flows, pressures, and levels, use a deviation specification of 1 or 2 percent of Span. For temperatures, the deviation should usually be 1 or 2 degrees.

  • Min Time – Enter the minimum time that must elapse after an event before a compressed value can be recorded. The minimum time should be set to 0 if exception reporting is activated for the same point.
  • Max Time – Enter the maximum time that can elapse after an event before automatically recording the next event as a compressed value. The recommended maximum time is one work shift (e.g., 8 hours). If this value is too low, the compression effects are too limited to save significant archive space. If this value is too high, useful data may be lost. Events that reach the PI Data Archive server in asynchronous order bypass the compression calculation and are automatically recorded to the archive.

The compression specifications consist of a deviation (CompDev), a minimum time (CompMin), and a maximum time (CompMax).

Events are also archived if the elapsed time is greater than the maximum time. Duplicate values will be archived if the elapsed time exceeds CompMax. Under no circumstances does this cause PI Data Archive to generate events; it only filters events that are externally generated.

The most important compression specification is the deviation, CompDev. For non-numeric tags, CompDev and CompDevPercent are ignored. They will be displayed by applications as zero.
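
To show how CompDev and CompMax reduce the number of archived events, here is a deliberately simplified, deadband-style sketch; the PI Data Archive actually uses a swinging-door compression algorithm, so this is only an illustration of the filtering idea, with assumed values.

```python
# Much-simplified, deadband-style illustration of compression. The PI Data
# Archive actually uses a swinging-door algorithm; this only shows how a
# deviation (CompDev) and a maximum time (CompMax) thin out stored events.

def compress(events, comp_dev: float, comp_max_s: float):
    """events: list of (time_s, value) tuples. Returns events to archive."""
    archived = [events[0]]
    for t, v in events[1:]:
        last_t, last_v = archived[-1]
        if (t - last_t) >= comp_max_s or abs(v - last_v) > comp_dev:
            archived.append((t, v))
    return archived

# Assumed values: CompDev = 1.0 engineering unit, CompMax = 8 hours.
raw = [(0, 50.0), (10, 50.1), (20, 50.2), (30, 51.5), (40, 51.6), (28840, 51.6)]
print(compress(raw, comp_dev=1.0, comp_max_s=28800))
# -> [(0, 50.0), (30, 51.5), (28840, 51.6)]
```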

Users can change the compression settings if they have the proper technical knowledge to do so, or they can ask a qualified automation solutions provider to show them how; it may be possible to make the change internally at little to no cost. It’s important to be properly trained, however, to change compression parameters safely. Remote management and monitoring services (e.g., MAVERICK’s PlantFloor24®) are available for end users who need assistance with process issues and for operators who need expert guidance on data compression or other control issues.

A Good Stream of Data

Process control is easy – and it’s hard. Even the largest, most complicated or most modern system still depends on the data. The best field devices in the world can provide great data, but if that data is coming into an aging system, you still have risk. A knowledgeable automation solutions partner who knows your industry and your technology can offer insights into all these areas to keep your data reliable and help mitigate risk. Meanwhile, we hope this article gives you some new ideas for ensuring that your data stream is good and that you get the right data, in the right format, to the right people so your business decisions can remain sound.

ABOUT THE AUTHORS

Brian E. Bolton

Brian E. Bolton is a consultant for MAVERICK Technologies. He has more than 35 years of experience in chemical manufacturing, including more than 20 years involved with the OSIsoft PI Suite of applications, quality assurance, continuous improvement and data analysis.

Travis Giebler

Travis Giebler is an Application Engineer Consultant at MAVERICK Technologies with 10 years of experience in process controls in the chemicals, pulp and paper, and power generation industries.
