
Process Instruments (Measurement Features)


An overview of key concepts in process instrumentation: measurement precision (a static characteristic that expresses the degree of agreement between the indication of the instrument and the characteristics of the measurand), measurement uncertainty, metrological confirmation, and calibration.


By measurement precision we mean:

  • according to the ISO IMV (International Metrology Vocabulary): “closeness of agreement between indications or measured quantity values obtained by replicate measurements on the same or similar objects under specified conditions”;
  • according to the IEC IEV (International Electrotechnical Vocabulary): “quality which characterizes the ability of a measuring instrument to provide an indicated value close to a true value of the measurand” (note that the IEV calls this quality Accuracy rather than Precision);
  • or we could deduce the following practical definition from the previous ones: “the maximum positive and negative deviations from a specified characteristic curve (usually a straight line), found by testing a measuring instrument under specified conditions and procedures”.

Therefore, the concept of linearity is also inherent in the term measurement precision (although linearity error is very limited in current digital instrumentation), while the concept of hysteresis is not explicitly mentioned (it is nonetheless accounted for, since it falls within the maximum positive and negative deviations found).

Instrument accuracy and precision

Furthermore, the concept of repeatability of the measurement is not included (it is instead considered when precision is verified over several measuring cycles).

Therefore, in the practical verification of the precision of a measuring instrument with a single up-and-down measurement cycle (generally conducted for instruments affected by hysteresis, such as pressure gauges, pressure transducers, load cells, etc.), a calibration curve of the type shown in Figure 1 is obtained. From it we can deduce the concept of measured accuracy (accuracy measured), which must fall within the so-called nominal accuracy (accuracy rated), i.e. the limits within which the imprecision of an instrument is guaranteed by its specification.


For some common types of instruments (such as gauges, resistance thermometers, thermocouples, etc.) this concept of imprecision is also called precision class or accuracy class, which according to the international reference vocabularies ISO IMV and IEC IEV is a: “class of measuring instruments or measuring systems that meet stated metrological requirements that are intended to keep measurement errors or instrumental measurement uncertainties within specified limits under specified operating conditions” (i.e., the measured accuracy must be less than the rated accuracy: see also Figure 1).


Figure 1 – Exemplification of measurement accuracy concepts


The measurement uncertainty of a measuring instrument is a newer concept which takes into account, during calibration, not only the errors or deviations found but also the resolution of the indication and the uncertainty of the measurement standard used in the calibration itself.

Instrument measurement uncertainty

By measurement uncertainty we mean:

  • according to the ISO IMV (International Metrology Vocabulary): “non-negative parameter characterizing the dispersion of the quantity values being attributed to a measurand, based on the information used”;
  • according to the ISO GUM (Guide to the Expression of Uncertainty in Measurement): “result of the estimation that determines the amplitude of the field within which the true value of a measurand must lie, generally with a given probability, that is, with a determined level of confidence”.

From the above definitions we can deduce two fundamental concepts of measurement uncertainty:

  1. Uncertainty is the result of an estimate, which is evaluated according to one of two types:
  • Type A: when the evaluation is done by statistical methods, that is, through a series of repeated observations or measurements.
  • Type B: when the evaluation is done by methods other than statistical, that is, from data found in manuals, catalogs, specifications, etc.

2. The uncertainty of the estimate must be given with a certain probability, which is normally provided in the three following expressions (see also Table 1):

  • Standard uncertainty (u): at the probability or confidence level of 68% (exactly 68.27%).
  • Combined uncertainty (uc): the standard uncertainty of measurement when the result of the estimate is obtained by means of the values of different quantities and corresponds to the summing in quadrature of the standard uncertainties of the various quantities relating to the measurement process.
  • Expanded uncertainty (U): uncertainty at the 95% probability or confidence level (exactly 95.45%), or 2 standard deviations, assuming a normal or Gaussian probability distribution.
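The three expressions above can be sketched in code. This is a minimal Python example (the function names are illustrative, not from the article): a Type A standard uncertainty from repeated readings, a quadrature sum for the combined uncertainty, and a coverage factor for the expanded uncertainty.

```python
import math

def standard_uncertainty(values):
    # Type A evaluation: standard deviation of the mean of repeated observations
    n = len(values)
    mean = sum(values) / n
    s2 = sum((v - mean) ** 2 for v in values) / (n - 1)
    return math.sqrt(s2 / n)

def combined_uncertainty(components):
    # Sum the standard uncertainties of the various quantities in quadrature
    return math.sqrt(sum(u ** 2 for u in components))

def expanded_uncertainty(uc, k=2):
    # k = 2 gives ~95% confidence assuming a normal distribution
    return k * uc

uc = combined_uncertainty([0.03, 0.04])   # ~0.05
U = expanded_uncertainty(uc)              # ~0.10
```

Note that the quadrature sum assumes the contributing quantities are uncorrelated; correlated contributions would require the covariance terms of the full GUM formula.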

Standard uncertainty u(x) (a)

The uncertainty of the result of a measurement expressed as a standard deviation: u(x) ≡ s(x)

Type A evaluation (of uncertainty)

Method of evaluation of uncertainty by the statistical analysis of series of observations

Type B evaluation (of uncertainty)

Method of evaluation of uncertainty by means other than the statistical analysis of series of observations

Combined standard uncertainty uc(x)

Standard uncertainty of the result of a measurement when that result is obtained from the values of a number of other quantities, equal to the positive square root of a sum of terms, the terms being the variances or covariances of these other quantities weighted according to how the measurement result varies with changes in these quantities

Coverage factor k

Numerical factor used as a multiplier of the combined standard uncertainty in order to obtain an expanded uncertainty (normally 2 for a probability ≈ 95% and 3 for a probability ≈ 99%)

Expanded uncertainty U(y) = k · uc(y) (b)

Quantity defining an interval about the result of a measurement that may be expected to encompass a large fraction of the distribution of values that could reasonably be attributed to the measurand (normally obtained by multiplying the combined standard uncertainty by a coverage factor k = 2, i.e. with a coverage probability of 95%)

(a)   The standard uncertainty u(x), i.e. the mean square deviation s(x), if not determined experimentally from a normal or Gaussian distribution, can be calculated using the following relationships:

u(x) = a/√3, for rectangular distributions with an amplitude of variation ±a, e.g. indication errors

u(x) = a/√6, for triangular distributions with an amplitude of variation ±a, e.g. interpolation errors

(b)   The expanded measurement uncertainty U(y), unless otherwise specified, is to be understood as provided or calculated from the combined uncertainty with a coverage factor of 2, i.e. with a 95% probability level.

Table 1 – Main terms & definitions related to measurement uncertainty according to ISO GUM
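The relationships in note (a) translate directly into code. A short Python sketch (function names are illustrative), where a is the half-width of the ±a amplitude of variation:

```python
import math

def u_rectangular(a):
    # Rectangular distribution of half-width a: u = a / sqrt(3)
    # e.g. indication (resolution) errors
    return a / math.sqrt(3)

def u_triangular(a):
    # Triangular distribution of half-width a: u = a / sqrt(6)
    # e.g. interpolation errors
    return a / math.sqrt(6)

# An indication error of +/- 0.05 bar:
u_ind = u_rectangular(0.05)   # ~0.029 bar
```

The triangular distribution yields a smaller standard uncertainty than the rectangular one for the same half-width, reflecting that values near the center are more likely.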


The metrological confirmation is the routine verification and control operation that confirms that the measuring instrument (or equipment) maintains the accuracy and uncertainty characteristics required for the measurement process over time.

By metrological confirmation we mean, according to ISO 10012 (Measurement Management Systems): “set of operations required to ensure that measuring equipment conforms to the requirements for its intended use”, and it generally includes:

  • instrument calibration and verification;
  • any necessary adjustment and the consequent recalibration;
  • the comparison with the metrological requirements for the intended use of the equipment;
  • the labeling of the positive metrological confirmation.
Metrological confirmation for process instrumentation

The metrological confirmation must be guaranteed through a measurement management system which essentially involves the phases of Table 1.

0. Equipment scheduling
1. Identification of the need for calibration
2. Equipment calibration
3. Drafting of the calibration document
4. Calibration identification
5. Are there metrological requirements?
6. Compliance with metrological requirements / 6a. Adjustment or repair / 6b. Adjustment impossible
7. Drafting of the confirmation document / 7a. Review of the confirmation intervals / 7b. Negative verification
8. Confirmation status identification / 8a. Recalibration (phases 2 to 8) / 8b. Status identification
9. Need satisfied / 9a. Need satisfied / 9b. Need not satisfied

Table 1 – Main phases of the metrological confirmation (ISO 10012)
Table 1 highlights three possible paths of metrological confirmation:

  1. the left-hand path, which normally achieves a positive outcome of the metrological confirmation, without any adjustment of the instrument under confirmation, up to phase 6;
  2. the left-hand path followed by the middle one, from phase 6a to 9a, in case of successful adjustment or repair of the instrument under confirmation, whose recalibration satisfies the confirmation: in this case it is only necessary to reduce the confirmation interval;
  3. the left-hand path followed by the right-hand one, from phase 6b to 9b, in case of unsuccessful adjustment or repair of the instrument under confirmation, which does not satisfy the confirmation: the instrument must then be downgraded or discarded.

Metrological confirmation can usually be accomplished in two ways:

  • comparing the Maximum Relieved Error (MRE), i.e. the maximum error found, with the Maximum Tolerated Error (MTE);
  • comparing the Maximum Relieved Uncertainty (MRU) with the Maximum Tolerated Uncertainty (MTU).

With reference to the previous articles, and in particular to the one on calibration, the calibration results of a manometer were evaluated in terms of error and uncertainty, respectively equal to:

  • MRE: ±0.05 bar
  • MRU: 0.066 bar

If the maximum tolerated error and uncertainty were both 0.05 bar, the manometer evaluated in terms of MRE would be compliant, while evaluated in terms of MRU it would not be compliant; it should therefore follow path 2 of Table 1 or, failing that, path 3, and be downgraded.
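Both confirmation checks reduce to the same comparison. A sketch using the manometer figures above (the function name is illustrative, not from the article):

```python
def is_confirmed(found, tolerated):
    # Positive metrological confirmation: the value found during
    # calibration must not exceed the tolerated limit
    return abs(found) <= abs(tolerated)

MTE = MTU = 0.05                   # tolerated error / uncertainty (bar)
print(is_confirmed(0.05, MTE))     # MRE check: True, compliant
print(is_confirmed(0.066, MTU))    # MRU check: False, not compliant
```

The same instrument can therefore pass an error-based confirmation and fail an uncertainty-based one, since the uncertainty also carries the resolution and reference-standard contributions.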


Instrument calibration is the operation of obtaining, under specified conditions, the relationship between the values of a measurand and the corresponding output indications of the instrument under calibration.

By calibration we mean:

  • according to ISO-IMV (International Metrology Vocabulary): “operation that, under specified conditions, in a first step, establishes a relation between the quantity values with measurement uncertainties provided by measurement standards and corresponding indications with associated measurement uncertainties and, in a second step, uses this information to establish a relation for obtaining a measurement result from an indication”;
  • or we could deduce the following practical definition from the previous one: “operation performed to establish a relationship between the measured quantity and the corresponding output values of an instrument under specified conditions”.
Instrument calibration

Calibration should not be confused with adjustment, which means: “set of operations carried out on a measuring system so that it provides prescribed indications corresponding to given values of a quantity to be measured” (ISO IMV).

Hence, adjustment is typically either the preliminary operation before calibration, or the subsequent operation when a loss of calibration of the measuring instrument is found.

Calibration should be performed on 3 or 5 equidistant measuring points, for increasing values and, in the case of instruments with hysteresis phenomena (e.g. manometers), also for decreasing values.

Figure 1 presents the calibration setup, while Table 1 presents the calibration results.


Figure 1 – Calibration setup of a manometer

Applied value (bar)   Relieved values up / down (bar)   Relieved errors up / down (bar)
 0                     0.05 /  0.05                     +0.05 / +0.05
 2                     1.95 /  2.05                     -0.05 / +0.05
 4                     3.95 /  4.05                     -0.05 / +0.05
 6                     5.95 /  6.00                     -0.05 /  0.00
 8                     7.95 /  8.00                     -0.05 /  0.00
10                    10.00 / 10.00                      0.00 /  0.00

Table 1 – Calibration results
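From the (applied value, reading) pairs of Table 1 the relieved errors and the measured accuracy can be recomputed. A short Python sketch, assuming the up/down readings as transcribed in the table:

```python
applied      = [0, 2, 4, 6, 8, 10]
reading_up   = [0.05, 1.95, 3.95, 5.95, 7.95, 10.00]
reading_down = [0.05, 2.05, 4.05, 6.00, 8.00, 10.00]

# Relieved error = reading - applied value, over both half-cycles
errors = [r - a for a, r in zip(applied * 2, reading_up + reading_down)]

# Measured accuracy: the largest positive/negative deviation found
accuracy = max(abs(e) for e in errors)   # ~0.05 bar
```

This reproduces the ±0.05 bar measurement accuracy stated below, and the sign pattern of the errors shows the hysteresis between the up and down half-cycles.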

From the calibration results shown in Table 1, the metrological characteristics of the manometer (or pressure gauge) can be obtained in terms of:

  • Measurement accuracy: that is, the maximum positive and negative error: ±0.05 bar
  • Measurement uncertainty: the instrumental uncertainty, which takes into account the various factors related to the calibration, namely:

Iref   uncertainty of the reference standard     0.01 bar (assumed)
Emax   maximum measurement error found           0.05 bar
Eres   resolution error of the manometer         0.05 bar
from which the combined uncertainty uc is derived by summing these contributions in quadrature (the original formula is given as an image, not reproduced here), and then the expanded uncertainty (U), at the 95% confidence level (i.e. at 2 standard deviations): U = 2 · uc.
Obviously, the measurement uncertainty of the manometer (usually called instrumental uncertainty) is always higher than its measurement accuracy, because it also takes into account the resolution error of the instrument under calibration and the uncertainty of the reference standard used in the calibration process.
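Since the original formula images are not reproduced, the sketch below rebuilds the uncertainty budget under stated assumptions: Iref is taken as already a standard uncertainty, while Emax and Eres are treated as rectangular contributions (divided by √3, per note (a) of the uncertainty table). The author's exact weighting may differ, so the numeric result is indicative only.

```python
import math

i_ref = 0.01   # uncertainty of the reference standard (bar)
e_max = 0.05   # maximum error found during calibration (bar)
e_res = 0.05   # resolution error of the manometer (bar)

# Assumption: rectangular contributions (a / sqrt(3)) summed in quadrature
uc = math.sqrt(i_ref ** 2 + (e_max / math.sqrt(3)) ** 2
               + (e_res / math.sqrt(3)) ** 2)

U = 2 * uc     # expanded uncertainty, coverage factor k = 2 (~95%)
```

Whatever the exact weighting chosen, U comes out larger than the ±0.05 bar accuracy alone, which is the point the paragraph above makes.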

The article describes the standardized analog pneumatic signals (20 to 100 kPa) and electrical signals (4 to 20 mA), as well as the innovative analog and digital hybrid signals HART (Highway Addressable Remote Transducer) and the state of the art of current digital communication protocols commonly called BUS.


Analog Control Signals

The traditional and most commonly used transmission signals are of the following types:

  • Direct current signals (Table 1): for connection between instruments on long distances (i.e. in the field area)
  • Direct voltage signals (Table 2): for connection between instruments on short distances (i.e. in the control room)
(1) Preferential signal

Table 1 – Standardized signals in direct current (IEC 60381-1); the preferential signal is the live-zero 4 to 20 mA range

(1) Voltage signals that can be derived directly from normalized current signals

(2) Voltage signals that can represent physical quantities of a bipolar nature (e.g. -10 to +10 V)

Table 2 – Standardized signals in direct voltage (IEC 60381-2)
A signal different from 0 (live zero) when the variable is at the beginning of the measuring range (true zero) is used in electrical instruments to power the instrument and, in general, to highlight connection losses (as in pneumatic instruments).
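The live zero also makes a broken loop detectable in software. A Python sketch (the 3.8 mA fault threshold and function name are assumptions for illustration, not from the article):

```python
def scale_live_zero(i_mA, lo=0.0, hi=10.0):
    # 4 mA maps to lo, 20 mA maps to hi; a reading near 0 mA can only
    # mean a line break or power loss, never a true-zero measurement
    if i_mA < 3.8:   # below the live zero: connection loss
        raise ValueError("loop failure: current below live zero")
    return lo + (i_mA - 4.0) * (hi - lo) / 16.0

print(scale_live_zero(12.0))   # 5.0 on a 0-10 bar transmitter
```

With a 0-20 mA (true zero) range, by contrast, 0 mA is ambiguous between "variable at zero" and "wire cut", which is why 4 to 20 mA is the preferential signal.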

Moreover, given their characteristics, current signals are used in field instrumentation, while voltage signals are used in technical-room and control-room instrumentation.

Finally, the current signal, compared with the voltage signal, has the advantage of not being affected by the length, and hence the resistance, of the connection line, at least up to certain resistance values, as illustrated in Figure 1.


Figure 1 – Example of the limit of the operating region for field instrumentation, in terms of its connection resistance RL (Ω) versus the supply voltage V

  • Vdc = Actual supply voltage in volt
  • Vmax= Maximum supply voltage, 30 V in this example
  • Vmin= Minimum supply voltage, 10 V in this example
  • RL= Max. load resistance in ohm at the actual supply voltage:
  • RL ≤ (Vdc – 10) / 0.02 (in the example of Figure 1)
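The load-resistance limit above can be computed directly. A sketch using the 4-20 mA full-scale current and the minimum supply voltage of the Figure 1 example:

```python
def max_load_resistance(v_dc, v_min=10.0, i_max=0.02):
    # The full-scale current (20 mA = 0.02 A) must still flow when the
    # line drops the voltage down to the instrument minimum v_min
    return (v_dc - v_min) / i_max

print(max_load_resistance(24.0))   # 700.0 ohm
```

With the standard 24 V dc field supply, the loop (cable plus any sense resistors) may therefore total up to about 700 Ω before the transmitter can no longer drive 20 mA.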

Hybrid Control Signals

Hybrid signals, that is of the analogical-digital protocol type, were standardized “de facto” by a Consortium of Manufacturers as:

HART (Highway Addressable Remote Transducer), which superimposes on the normalized analog signal (4 to 20 mA) a digital signal modulated in frequency according to the Bell 202 standard, with an amplitude of ±0.5 mA and the frequencies found in Table 3. Given the high frequency of the superimposed signal, the added mean energy is virtually zero, so this modulation does not cause any disturbance to the analog signal.

NOTE: Remember that the HART protocol requires a resistance of at least 250 ohms in the output circuit to operate!
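Since Table 3 is not reproduced, the standard Bell 202 frequency-shift keying values are 1200 Hz for logic 1 (mark) and 2200 Hz for logic 0 (space). A tiny illustrative sketch of the bit-to-tone mapping (the function name is an assumption, not a real HART API):

```python
BELL_202_HZ = {1: 1200, 0: 2200}   # FSK tones superimposed at +/- 0.5 mA

def hart_tones(bits):
    # Map a HART bit stream to the sequence of FSK tone frequencies
    # that ride on top of the 4-20 mA analog signal
    return [BELL_202_HZ[b] for b in bits]

print(hart_tones([1, 0, 1]))   # [1200, 2200, 1200]
```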

Table 3 – HART protocol with signals standardized BELL 202

Digital Control Signals

Digital signals were normalized towards the end of the 1990s by the international standard IEC 61158 on fieldbus protocols, but they are still not widely applied, since the standard covers as many as 8 communication protocols. Each digital protocol is essentially characterized by the following features (see Table 4):

  • Transmission encoding: Preamble, frame start, transmission of the frame, end of the frame, transmission parity, etc.
  • Access to the network: Probabilistic, deterministic, etc.
  • Network management: Master-Slave, Producer-Consumer, etc.
(1) Protocol initially designed as the unique standard IEC protocol

Table 4 – Standardized protocols provided for by the International Standard IEC 61158 (among them Foundation Fieldbus; the full table is not reproduced here)

Finally, Figure 2 shows the geographic path of the measurement signals from the “field” to the “control room” through the “technical room”, where the sorting (also called “marshalling”) takes place, along with the transformation of the current signal into a voltage signal for the controller (DCS: Distributed Control System); the signals then flow in digital form to the “control room” for the operator station and video (HMI: Human Machine Interface).

Figure 2 – Typical path of a measurement chain from the field to the control room


The standardized instrument power supplies are:

  • For pneumatic instrumentation: 140 ± 10 kPa (1.4 ± 0.1 bar) (sometimes the normalized pneumatic supply is still given in English units: 20 psi, corresponding to ≈ 1.4 bar)
  • For electrical instrumentation: direct voltage, 24 V dc, for field instrumentation; alternating voltage, 220 V ac, for control-room and technical-room instrumentation

The connection and transmission signals between the various instruments in the measuring and regulating chains are standardized by the IEC (International Electrotechnical Commission):

  • Pneumatic signals (IEC 60382): 20 to 100 kPa (0.2 to 1.0 bar) (sometimes the standardized signal is still given in English units: 3 to 15 psi, ≈ 0.21 to 1.03 bar)
  • Electrical signals (IEC 60381): standardized direct-current and direct-voltage signals (see Tables 1 and 2)

About the Author

Alessandro Brunelli

Author: Dott. Prof. Alessandro Brunelli – Professor of Instrumentation, Automation, and Safety of Industrial Plants

Cavaliere dell’ Ordine al Merito della Repubblica Italiana (OMRI N. 9826 Serie VI)

Author of “Instrumentation Manual” (available in IT):

  • Part 1: illustrates the general concepts of industrial instrumentation, the symbology, the terminology and calibration of measurement instrumentation, the functional and applicative conditions of instrumentation in normal applications and in applications with danger of explosion, as well as the main directives (ATEX, EMC, LVD, MID, and PED);
  • Part 2: deals with the instrumentation for measuring physical quantities (pressure, level, flow rate, temperature, humidity, viscosity, mass density, force, and vibration) and chemical quantities (pH, redox, conductivity, turbidity, explosiveness, gas chromatography, and spectrography), treating the measurement principles, the reference standards, the practical executions, and the application advantages and disadvantages for each quantity;
  • Part 3: illustrates the control, regulation, and safety valves, then the simple regulation techniques in feedback and the coordinated ones in feedforward, ratio, cascade, override, split range, gap control, and variable decoupling; then the Distributed Control Systems (DCS) for continuous processes, the Programmable Logic Controllers (PLC) for discontinuous processes, and the Communication Protocols (BUS); and finally the aspects relating to plant safety systems, from operational alarms to Fire & Gas systems, to ESD shutdown systems and, finally, to the Safety Instrumented Systems (SIS), with graphic and analytical determinations of the Safety Integrity Levels (SIL) and some practical examples.

Download the PDF – Traceability & Calibration Handbook

You can download an excerpt of the “Instrumentation Manual” (Brunelli, 2018-2019) by clicking on the following link:

Traceability & Calibration Handbook (Process Instrumentation) (PDF)

