Calibrating the measuring chain


This article explains how to calibrate a measuring chain. The information is drawn from publications, books, product data sheets, and the operational manuals for our products. It is useful for system technicians at any level as well as for students.

Before discussing calibration options and procedures, the article first defines the terms and concepts involved: the measuring chain, error, calibration, categories of calibration, and standard practices.

The Measuring Chain

This refers to the series of elements of a measuring system constituting a single path for signal flow from the sensor/input transducer to an output element/output transducer.

The measuring system has elements that are assembled to measure quantities and display/present information about their values. These elements include the input transducer, which transforms the quantity being measured into electrical signals (the output can also be pneumatic or mechanical), signal conditioners (filters, isolators, amplifiers), monitoring systems, and output transducers (such as actuators, switches or displays).

Therefore, a measuring chain is no different from a process control system, with the exception that feedback is optional. When performing measurements, feedback of the measuring chain's output can be used to control an industrial process when needed.

A model of a measuring chain is shown below in Figure 1, with blocks for the sensing element, conversion element, manipulation element, data transmission element, data processing element and data presentation element.

Figure 1. A Simple Measuring Chain


Errors in the Measuring Chain

Errors are unavoidable in measuring systems. Error is the difference between the actual output of the measuring chain and the true value of the measured quantity. When designing a measuring chain, knowing the error, or level of uncertainty, is very important because it determines the accuracy with which measurements can be made.

There are two approaches to finding the total error in a measuring chain with several links. The first approach assumes that the errors in each link take their extreme values and act in the same direction; the total chain error is then the simple sum of the individual link errors.

The second approach uses the tools of statistics and probability and, assuming the link errors are independent, yields a much smaller total error: the root-sum-square of the individual link errors, E = √(e₁² + e₂² + … + eₙ²), where eᵢ is the error of the i-th link.
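The two approaches can be compared with a short sketch. The four per-link error values below are invented for illustration, not taken from any real datasheet:

```python
import math

# Per-link errors of a hypothetical four-link chain (sensor, amplifier,
# transmitter, display), expressed as fractions of full scale.
link_errors = [0.005, 0.002, 0.003, 0.001]

# Worst case: assume every link errs fully in the same direction.
worst_case = sum(abs(e) for e in link_errors)

# Statistical (root-sum-square): assume the link errors are independent.
rss = math.sqrt(sum(e * e for e in link_errors))

print(f"worst case: {worst_case:.4f}")  # 0.0110
print(f"RSS:        {rss:.4f}")         # 0.0062, noticeably smaller
```

The root-sum-square figure is always less than or equal to the worst-case sum, which is why the statistical approach gives a less pessimistic (and usually more realistic) estimate of the chain error.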

Furthermore, the two major categories of errors that arise in a measurement system are systematic errors and random errors.

Systematic errors are inherent in the operation of the measuring chain's instruments and can only be eliminated through proper and frequent calibration. They can also be caused by improper handling or operation of an instrument by the system technician.

One can also think of influence quantities as sources of error. One such quantity is the direction in which the measured quantity is changing, which leads to hysteresis in some measuring instruments: the instrument gives different readings for the same value depending on whether that value is approached from below (increasing) or from above (decreasing).

Furthermore, discrepancies can also occur when it is assumed that the chain's response is directly proportional to the quantity; this is called non-linearity. It can be catered for by calibrating at sufficient intermediate points and by using microprocessor-based instrumentation.

Random errors are errors that affect readings but, even when clearly observed, are difficult to explain. Examples of causes of random errors in a measuring chain are electrical noise and irregular changes in temperature. Although these errors appear unpredictable, they often follow a Gaussian (normal) distribution about the mean value, so statistical methods are used to deal with them.
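A small simulation illustrates why statistical treatment works for Gaussian random errors. The noise level and number of readings here are arbitrary choices for the example:

```python
import random
import statistics

random.seed(0)  # fixed seed so the simulation is repeatable

true_value = 100.0
# Simulate 1000 readings corrupted by Gaussian noise (sigma = 0.5 units),
# a stand-in for electrical noise and small temperature fluctuations.
readings = [true_value + random.gauss(0, 0.5) for _ in range(1000)]

mean = statistics.mean(readings)    # lands very close to the true value
stdev = statistics.stdev(readings)  # lands close to the noise level, 0.5

# Averaging n repeated readings shrinks the random error roughly as
# 1/sqrt(n) - this works for random errors, but not for systematic ones.
```

This is also why calibration cannot remove random error: only repetition and averaging reduce it, whereas a systematic offset survives any amount of averaging.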


Calibration

Calibration is the comparison of the reading/output of the measuring chain to a known standard, and the maintenance of the evidentiary chain back to that standard.

According to the International Society of Automation, the word calibration is defined as “a test during which known values of a measurand are applied to the transducer and corresponding output readings are documented under specified conditions”.

The measurand is the quantity, property or condition that is to be measured. The measurands of concern in this article are load, weight, force, and pressure.

Also, the specified general conditions for proper calibration include the ambient conditions, the force standard machine, the technical personnel, and the operating procedures.

The ambient conditions refer to the temperature and the relative humidity in the industrial environment or the laboratory.

The force standard machine refers to the reference standard instrument. It should have a measurement uncertainty that is better than one-third of the nominal uncertainty of the instrument being calibrated. This reference equipment covers the required measuring ranges and levels of uncertainty, and is traceable to the SI units through organizations such as National Calibration Laboratories (NCLs) and National Metrology Institutes (NMIs).
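The one-third rule above amounts to a simple ratio check. A minimal sketch (the function name and the example uncertainty figures are our own, not from any standard):

```python
def reference_is_adequate(ref_uncertainty, device_uncertainty):
    """True if the reference standard's uncertainty is no worse than
    one-third of the nominal uncertainty of the device under calibration."""
    return ref_uncertainty <= device_uncertainty / 3.0

# A 0.05%-uncertainty force standard machine checking a 0.2%-uncertainty
# load cell satisfies the rule; a 0.1%-uncertainty reference does not.
print(reference_is_adequate(0.05, 0.2))  # True
print(reference_is_adequate(0.1, 0.2))   # False
```

The point of the rule is that the reference's own error then contributes little to the comparison, so deviations seen during calibration can be attributed to the device being calibrated.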

The technical personnel should be well-trained and qualified system technicians who perform technical and management procedures according to the manuals of the equipment manufacturers or the laboratory. They must have knowledge of pneumatic, hydraulic, mechanical and electrical instrumentation.

They must have received training in dynamic control theory, analog/digital electronics, microprocessors, and field instrumentation, and they carry out the operating procedures. These operating procedures are described in detail in a later section of this article.


Categories of Calibration

Calibration methods for a measuring chain can be categorized in different ways. The types include zero and span calibration, individual and loop calibration, and formal and field system calibration.

Zero and span calibration is done to eliminate the zero and span errors of the instrument. The zero value is the lower end of the calibration range, while the span is the numerical difference between the lower and upper values of the calibration range. These errors are depicted graphically in Figure 2 below.

Figure 2a. The Span Error
Figure 2b. The Zero Error

Our instruments allow both zero and span adjustments, as can be seen in this manual. The zero adjustment produces a parallel shift of the input-output curve, while the span adjustment changes the slope of the input-output curve.
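The effect of the two adjustments can be sketched as a two-point correction, assuming an ideally linear instrument (the raw readings below are invented for the example):

```python
# Raw readings observed at 0% and 100% of the calibration range:
raw_at_zero = 0.8    # zero error: the output is offset by +0.8 units
raw_at_full = 101.3  # span error: the slope is slightly wrong
ideal_zero, ideal_full = 0.0, 100.0

# The span adjustment sets the slope of the input-output curve;
# the zero adjustment shifts the curve in parallel.
slope = (ideal_full - ideal_zero) / (raw_at_full - raw_at_zero)
offset = ideal_zero - slope * raw_at_zero

def corrected(raw):
    """Apply the zero/span correction to a raw reading."""
    return slope * raw + offset

print(corrected(raw_at_zero))  # 0.0
print(corrected(raw_at_full))  # ~100.0
```

After the correction, the two calibration points read their ideal values; intermediate points are corrected exactly only to the extent that the instrument really is linear, which is why calibration at intermediate points (see the non-linearity discussion above) still matters.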

Individual instrument calibration

Individual instrument calibration is performed on each link of the measuring chain separately. It involves decoupling the chain and working with each instrument, applying inputs from a known source and recording the output at various data points. Loop calibration, on the other hand, is performed on the measuring chain as a whole, with the advantage that it saves time since the entire system is verified at once.

Formal calibration

Formal calibration is performed by approved laboratories, where instruments used for official measurements are periodically calibrated, traced to primary standards and certified. This calibration is performed under controlled temperature, atmospheric pressure, and humidity. Within the chain, formal calibration is performed on the transducers and on the measuring, analyzing and computing instruments.

Field system calibration

Field system calibration is more of a check to ensure that there is no deviation from the normal operation determined during formal calibration, and it is performed mainly for maintenance. It is done not to determine the absolute accuracy of the measurement but its repeatability, and it takes place in the field, where the measuring chain is installed. It is a calibration done at one point in the frequency domain or time domain and at one level.

Tacuna Systems offers load cell calibration services for your equipment prior to shipment, with a calibration certificate, at competitive pricing. This service should be purchased when purchasing a load cell.

Standard Practices for Calibration

Standards are necessary for calibration, as seen in earlier sections. There are basically three types of standards: the primary standard, the secondary standard, and the working standard.

The American Society for Testing and Materials (ASTM) standard ASTM E74 covers practices for the calibration of force-measuring instruments used to verify the force indication of testing machines. It defines each of these standards as follows.

Primary force standard

A primary force standard is a deadweight force applied directly to the measuring chain without any intermediate mechanism such as levers, hydraulic multipliers, or the like. The applied load, force, weight or pressure has been determined by comparison with reference standards traceable to national standards of mass.

Secondary force standard

A secondary force standard is an instrument or mechanism whose calibration has been established by comparison with primary force standards; it must have been so calibrated before it can itself be used for calibration.

Working force standard

Working force standards are the instruments used to verify testing machines. They may be calibrated by comparison with primary force standards or secondary force standards. They are also called the force standard machines.

The General Calibration Options

Three options are available for calibration.

        1. Calibrate the measuring chain over the full calibration range, permanently install the system, and then use a transfer standard (an already calibrated transducer) for subsequent calibrations. Note that the calibration range may differ from the instrument range; the instrument range refers to the full capability of the instrument.
        2. Calibrate the system before installation and design it in such a way that it can be removed anytime for frequent recalibration.
        3. Initially calibrate the measuring chain prior to permanent installation and then not recalibrate throughout the entire lifecycle of the installation.

The General Calibration Procedures

There are different approaches to performing calibration, but the general procedure for a measuring chain, according to the ISA, is as follows.

        1. Set the input signal to 0%, that is, zero or no load. Then adjust the initial scale of the measuring chain being calibrated.
        2. Set the input signal to 100%, that is, to the full-scale output capacity. Then adjust the full scale of the measuring chain being calibrated.
        3. Reset the input signal to 0% and check the system's output reading. If the deviation is more than one-quarter of the rated tolerance on the instrument's datasheet, readjust the scale until it falls within the tolerance. This is the zero adjustment.
        4. Reset the input signal to 100% and check the instrument's output reading. If the deviation is more than one-quarter of the rated tolerance specified in the instrument's datasheet, readjust the scale until it falls within the tolerable range. This is the span adjustment.
        5. Repeat steps 3 and 4 until the zero balance and the full scale are both within one-quarter of their tolerances.
        6. Perform the measuring cycle by changing the input signal in steps of 20% or 25% and recording the instrument's output reading at each step. The loads are thus applied sequentially: 0%, 20%, 40%, 60%, 80%, 100%, 80%, 60%, 40%, 20%, 0%, or 0%, 25%, 50%, 75%, 100%, 75%, 50%, 25%, 0%. The readings should be observed and recorded after a sufficient period of stabilization, also called the settling time.
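The up-then-down loading sequence of step 6 is simple to generate programmatically. A small sketch (the function name is our own):

```python
def loading_cycle(step):
    """Generate the up-then-down loading sequence, in percent of full scale."""
    up = list(range(0, 101, step))  # 0% up to 100% in equal steps
    return up + up[-2::-1]          # back down again, without repeating 100%

print(loading_cycle(25))  # [0, 25, 50, 75, 100, 75, 50, 25, 0]
print(loading_cycle(20))  # [0, 20, 40, 60, 80, 100, 80, 60, 40, 20, 0]
```

Pairing the readings from the upward and downward halves of the cycle at each load level is what later makes the hysteresis of the chain visible.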

The calibration tolerance may be based not only on the manufacturer's specified value but also on other factors, such as the requirements of the process, the capability of the available test equipment, and consistency with similar instruments in the laboratory or field.


Conclusion

Calibration is very important, as it determines the rate at which the instrument's performance drifts and the maximum possible chain error.

The end result of the calibration procedure is the calibration curve, which shows the relationship between the input signal and the response of the system. The measuring chain's sensitivity, linearity and hysteresis characteristics can all be determined from this curve.
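As an illustration, the sensitivity (least-squares slope), non-linearity and hysteresis can be extracted from recorded cycle data as follows. The readings below are invented for the example:

```python
# Hypothetical readings from a 0-100 unit cycle in 25% steps,
# increasing and then decreasing.
inputs_up    = [0, 25, 50, 75, 100]
outputs_up   = [0.1, 24.8, 50.2, 75.3, 99.9]
outputs_down = [99.9, 75.6, 50.6, 25.1, 0.2]  # recorded at 100, 75, ..., 0

n = len(inputs_up)
mean_x = sum(inputs_up) / n
mean_y = sum(outputs_up) / n

# Sensitivity: least-squares slope of the calibration curve
# (output units per input unit).
slope = sum((x - mean_x) * (y - mean_y)
            for x, y in zip(inputs_up, outputs_up)) \
        / sum((x - mean_x) ** 2 for x in inputs_up)
intercept = mean_y - slope * mean_x

# Non-linearity: worst deviation of the data from the fitted straight line.
nonlinearity = max(abs(y - (slope * x + intercept))
                   for x, y in zip(inputs_up, outputs_up))

# Hysteresis: worst disagreement between the increasing and decreasing
# readings at the same load level.
hysteresis = max(abs(u - d)
                 for u, d in zip(outputs_up, reversed(outputs_down)))
```

In practice these figures would be compared against the instrument's datasheet limits; if any of them exceeds its tolerance, the chain needs adjustment or repair before verification.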

Finally, verification of the system can be performed after calibration.