Calibrating the measuring chain


A “measuring chain” is the series of elements of a measuring system creating a single path for signal flow from a sensor/input transducer to an output element/transducer. Naturally, each element has the potential to introduce error. This article discusses the importance of calibration to the accuracy of the measuring chain’s output. It also gives a brief overview of calibration methods.

First, we define or explain the following important terms and concepts:

  • Measuring Chain
  • Error
  • Calibration
  • Categories of Calibration
  • Calibration Standards

Then, calibration procedures are discussed.

The Measuring Chain

A measuring system has elements assembled in a “chain” to measure quantities and display/present information about their values. In this document, we refer to the quantity, property or condition that is to be measured as the “measurand”. The measurands of concern for this article include load, weight, force, and pressure.

The elements of a measuring chain include the following:

  • input transducer (transforms the quantity being measured into an electrical, pneumatic, or mechanical output),
  • signal conditioners (filters, isolators, amplifiers),
  • monitoring systems, and
  • output transducers (actuators, switches or display).

A measuring chain is similar to a process control system, except that feedback is optional. In fact, the output of the measuring chain can be fed back to control an industrial process.

Errors in the Measuring Chain

Errors are unavoidable in measuring systems. “Error” is defined as the difference between the actual value/output from the measuring chain and the true value of the quantity. In designing a measuring chain, knowing the error, or level of uncertainty, is very important as it affects the accuracy of the measurement.

Two major categories of errors arise in a measurement system: systematic errors and random errors.

Systematic errors are errors inherent in the operation of the measuring chain’s instruments and can be mitigated only through proper and frequent calibration. They may also arise from improper handling or operation of the instruments by the system technician. The two main types of systematic errors are zero errors and span errors. A zero error occurs when the measuring chain produces a non-zero output at no load; an instrument with a zero error produces an input-output curve that is parallel to (offset from) the true input-output curve. A span error appears as a slope of the input-output curve that differs from the slope of the true input-output curve, over the range of values the measuring chain is required to produce. Calibration procedures are executed over this range of values.

Random errors are statistical variations (either positive or negative) in the measurement result due to the precision limitations of a device. Examples of causes of random errors in a measuring chain are electrical noise and irregular changes in temperature. Even though these errors appear unpredictable, they often follow a Gaussian (normal) distribution about the mean value, so statistical methods can be used to quantify them. That is, using repeated measurements to find the mean and its deviations, random errors can be expressed as an uncertainty of three standard deviations (a confidence level of approximately 99.7%). Clearly, the more measurements taken, the better the estimate of uncertainty for the measuring chain will be.

In practical applications, the total error is expressed as the known systematic error plus the uncertainty to three standard deviations from the mean.
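The total-error calculation above can be sketched in a few lines of Python. The sample readings and the known systematic offset below are hypothetical values chosen purely for illustration:

```python
import statistics

# Hypothetical repeated readings of a load cell at a fixed 100.0 N reference load.
readings_n = [100.4, 100.6, 100.3, 100.7, 100.5, 100.4, 100.6, 100.5]

known_systematic_error_n = 0.5  # e.g. a documented zero offset, in newtons

mean_reading = statistics.mean(readings_n)
std_dev = statistics.stdev(readings_n)   # sample standard deviation
random_uncertainty = 3 * std_dev         # three-sigma bound (~99.7% confidence)

# Total error = known systematic error + 3-sigma random uncertainty.
total_error = known_systematic_error_n + random_uncertainty
print(f"mean reading: {mean_reading:.3f} N")
print(f"total error:  {total_error:.3f} N")
```

With more repeated readings, the standard deviation estimate (and hence the three-sigma bound) becomes more trustworthy, as noted above.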

Technicians should be mindful of the ways in which errors can be introduced to a measurement chain. Examples of sources of error include:

  • Environmental factors: examples include the aforementioned temperature and electrical noise, as well as other environmental conditions such as air pressure and vibration.
  • Inconsistent methods: each measurement must be conducted with the same set of procedures on the entity being measured to avoid other variables being introduced to the measurement (for example, placement of the measurand in the device or even the rate at which the load is applied).
  • Instrument drift: an instrument, with use, may eventually drift from its original settings, motivating the need for frequent calibration.
  • Lag: an instrument may have a delay in reaching its steady state; if it is used before this point in time, the reading will be off.
  • Hysteresis: similar to lag, a device in the measuring chain may not return to equilibrium because of a “memory” effect that develops with use. This results in a difference between the output readings taken at the same force while the force is increasing and while it is decreasing. For a load cell transducer, for example, it is the difference between the readings as the load is increased (from zero load to rated capacity) and the readings as the load is decreased (from rated capacity to zero load).
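As a sketch of how hysteresis is quantified, the snippet below compares hypothetical ascending and descending readings taken at the same load points and reports the largest difference; all numbers are invented for illustration:

```python
# Hypothetical load-cell readings (mV/V) at the same applied loads,
# once while increasing the load and once while decreasing it.
loads_pct = [0, 25, 50, 75, 100]
ascending = [0.000, 0.501, 1.003, 1.504, 2.000]   # zero load -> rated capacity
descending = [0.004, 0.507, 1.008, 1.507, 2.000]  # rated capacity -> zero load

# Hysteresis at each load point is the descending minus the ascending reading;
# the figure usually quoted is the largest such difference over the range.
differences = [d - a for a, d in zip(ascending, descending)]
max_hysteresis = max(abs(d) for d in differences)
print(f"max hysteresis: {max_hysteresis:.3f} mV/V")
```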


Calibration

Calibration is the comparison of the reading/output from the measuring chain to a known standard, and the maintenance of the evidentiary chain from that standard.

According to the International Society of Automation (ISA), the word calibration is defined as “a test during which known values of a measurand are applied to the transducer and corresponding output readings are documented under specified conditions”.

The specified general conditions for proper calibration include

  • the ambient conditions,
  • the force standard machine,
  • the technical personnel, and
  • the operating procedures.

As stated previously, variations in these conditions introduce random errors, making the uncertainty of the overall measuring chain more difficult to quantify precisely.

The ambient conditions refer to the temperature and the relative humidity in the industrial environment or the laboratory.

The force standard machine refers to the reference standard instrument used to verify testing machines. It should have a measurement uncertainty that is better than one-third of the nominal uncertainty of the instrument being calibrated. This reference equipment covers the required measuring ranges at the desired levels of uncertainty, traceable to the SI units through national calibration laboratories and national metrology institutes (NMIs).
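The one-third rule above amounts to a simple comparison, sketched below with hypothetical uncertainty figures:

```python
# Check of the "one-third" rule: the reference (force standard machine)
# uncertainty should be better than one third of the nominal uncertainty
# of the instrument being calibrated. Numbers are hypothetical, in % of full scale.
instrument_uncertainty_pct = 0.15  # nominal uncertainty of the device under calibration
reference_uncertainty_pct = 0.04   # uncertainty of the force standard machine

ok = reference_uncertainty_pct < instrument_uncertainty_pct / 3
print("reference standard acceptable:", ok)
```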

The technical personnel should be well-trained, qualified system technicians who perform technical and management procedures according to the equipment manufacturer’s manual or the laboratory manual. They must have knowledge of pneumatic, hydraulic, mechanical and electrical instrumentation, and must have received training in dynamic control theory, analog/digital electronics, microprocessors, and field instrumentation. Beyond technical knowledge, the technician must have keen attention to detail to ensure procedures are strictly followed and that all steps and results are clearly documented, to ensure the integrity of the calibration results.

Qualified technical personnel carry out the operational procedures, described later in this article.

Every calibration should be performed to a specified tolerance, or acceptable deviation from the actual value. The calibration tolerance may not only be based on the manufacturer’s specified value but also on other factors such as the requirements of the process, the capability of available test equipment and the consistency with similar instruments at the laboratory or field.

To ensure the best calibration results for our systems, Tacuna Systems recommends our load cell calibration services.

Categories of Calibration

Calibration methods for a measuring chain can be categorized as zero and span calibration, individual instrument and loop calibration, and formal and field system calibration.

Zero and span calibration

Zero and span calibration are done to eliminate the zero and span errors of the instrument. Our instruments allow for both zero and span adjustments as can be seen in this manual. Zero adjustments are used to produce the parallel shift of the input-output curve while span adjustments are used to change the slope of the input-output curve.
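The combined effect of zero (offset) and span (slope) adjustments can be illustrated with a simple two-point linear correction. The function and its numbers below are a hypothetical sketch, not the procedure for any particular instrument:

```python
def two_point_correction(raw, raw_zero, raw_span, true_zero=0.0, true_span=100.0):
    """Map a raw reading onto the true scale using readings taken at no load
    (raw_zero) and at full load (raw_span).

    The zero adjustment shifts the input-output curve; the span adjustment
    scales its slope.
    """
    slope = (true_span - true_zero) / (raw_span - raw_zero)  # span (slope) correction
    return true_zero + slope * (raw - raw_zero)              # zero (offset) correction

# Hypothetical instrument: reads 0.8 at no load and 101.9 at a true load of 100.0.
print(two_point_correction(0.8, raw_zero=0.8, raw_span=101.9))    # corrected no-load reading
print(two_point_correction(51.35, raw_zero=0.8, raw_span=101.9))  # corrected mid-scale reading
```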

Individual instrument and loop calibration

Individual instrument calibration is performed on each link of the measuring chain separately, by decoupling the chain. It requires applying inputs from a known source and recording the output at various data points. Loop calibration, on the other hand, is performed on the measuring chain as a whole; it has the advantage of saving time, since the entire system is verified at once.

Formal calibration

Formal calibration is performed by approved laboratories where instruments used for official measurements are periodically calibrated, traceable to primary standards and certified. This calibration is performed under controlled temperature, atmospheric pressure, and humidity. In the chain, formal calibration is performed on the transducers, measuring, analyzing and computing instruments.

Field system calibration

Field system calibration is performed for maintenance in the field where the measuring chain is installed, to ensure that there is no deviation from the normal operation determined during formal calibration. Field calibration is done, not to determine the absolute accuracy of the measurement, but to determine the repeatability of the measurement.

Tacuna Systems offers load cell calibration services for its equipment prior to shipment. Customers receive a calibration certificate with their product when this service is added to a load cell purchase.

Calibration Standards

Standards are necessary for calibration as seen in earlier sections. There are basically three types of standards, the primary standard, the secondary standard, and the working standard.

The American Society for Testing and Materials (ASTM) standard ASTM E74 describes practices for the calibration of force-measuring instruments used to verify the force indication of testing machines. The following are the definitions of each of these standards.

Primary force standard

A primary force standard is a dead-weight force that is applied directly to the measuring chain without any intermediate mechanism such as levers, hydraulic multipliers, or the like. The applied load, force, weight or pressure has been determined by comparison with reference standards traceable to national standards of mass.

Secondary force standard

A secondary force standard is an instrument or mechanism whose calibration has been established by comparison with the primary force standards. Therefore, in order to utilize a secondary force standard for calibration, it must have been calibrated by comparison with primary force standards.

Working force standard

Working force standards are the instruments used to verify testing machines. They may be calibrated by comparison with primary force standards or secondary force standards. They are also called the force standard machines.

General Calibration Procedures

A measuring chain should generally be calibrated prior to installation. Once the system is installed, the calibration options are as follows:

  1. For a permanent installation, use a transfer standard to perform calibration once installed. The transfer standard refers to an already calibrated transducer. Note the calibration range may differ from the instrument range, the latter referring to the capability of the instrument (often greater than the desired measurement range or calibration range).
  2. Design the installed system in such a way that it can be removed anytime for frequent recalibration.

There are different approaches to performing calibration, but the general steps for calibrating a measuring chain are listed below.

  1. Set the input signal to 0%, that is zero or no-load. Then adjust the initial scale of the measuring chain to be calibrated.
  2. Next, set the input signal to 100%, or the full-scale output capacity. Then adjust the full scale of the measuring chain to be calibrated.
  3. Reset the input signal back to 0% and check the system’s output readings. If the value is more than one-quarter of the rated value on the instrument’s datasheet, readjust the scale until it falls within the tolerance level. This is also called the zero adjustment.
  4. Reset the input signal to 100% and check the instrument’s output reading. If the value is more than one-quarter of the rated value specified in the instrument’s datasheet, then readjust the scale until it falls within a tolerable range.
  5. Repeat steps 3 and 4 until the zero-balance and the full scale are within the tolerance of one-quarter of their nominal values. This is the span adjustment.
  6. Perform the measuring cycle by changing the input signal in increments of 20% or 25% and recording the instrument’s output reading throughout the span of calibration values. In other words, the loads are applied sequentially as 0%, 20%, 40%, 60%, 80%, 100%, 80%, 60%, 40%, 20%, 0% or as 0%, 25%, 50%, 75%, 100%, 75%, 50%, 25%, 0%. Each reading should be observed and recorded after a sufficient period of stabilization, also called the settling time, to avoid lag errors.
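The measuring cycle in step 6 can be sketched as a short script. The `read_output` function below is a hypothetical stand-in for the real instrument interface, simulating a linear device:

```python
STEP_PCT = 25  # a step of 20 would give the 0%, 20%, 40%, ... sequence instead

# Up-then-down load sequence: 0, 25, 50, 75, 100, 75, 50, 25, 0.
up = list(range(0, 101, STEP_PCT))
sequence = up + up[-2::-1]

def read_output(load_pct):
    # Simulated instrument: linear response with a full-scale output of 2.0 mV/V.
    return 2.0 * load_pct / 100.0

calibration_data = []
for load in sequence:
    # In a real procedure, wait for the settling time here before reading.
    calibration_data.append((load, read_output(load)))

print(calibration_data)
```

Recording both the ascending and descending halves of the sequence is what makes it possible to evaluate hysteresis from the resulting data.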


Conclusion

Calibration of a measuring chain is a necessary part of its introduction into a process, as well as of its maintenance over the course of its use.

The end result of the calibration procedures is the calibration curve, which shows the relationship between the input signal and the response of the system. Information relating to the measuring chain’s sensitivity, linearity and hysteresis characteristics can be determined from this calibration curve.
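As a sketch of how such characteristics are extracted, the snippet below fits a straight line to hypothetical calibration data; the slope is the sensitivity, and the largest deviation of the data from the fitted line is a simple nonlinearity figure. All data values are invented for illustration:

```python
loads = [0, 25, 50, 75, 100]                   # applied load, % of capacity
outputs = [0.001, 0.502, 1.005, 1.503, 2.001]  # instrument output, mV/V

# Least-squares straight-line fit: output = intercept + slope * load.
n = len(loads)
mean_x = sum(loads) / n
mean_y = sum(outputs) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(loads, outputs))
         / sum((x - mean_x) ** 2 for x in loads))
intercept = mean_y - slope * mean_x

# Nonlinearity: largest deviation of the data from the best-fit line.
nonlinearity = max(abs(y - (intercept + slope * x)) for x, y in zip(loads, outputs))
print(f"sensitivity: {slope:.5f} mV/V per % load, nonlinearity: {nonlinearity:.4f} mV/V")
```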

For best results, Tacuna Systems recommends the purchase of our calibration services with every load cell purchase.