# Calibration

Calibration is the procedure used to establish a relationship between the values indicated by a measuring instrument and the corresponding values realized by standards of known accuracy under specified conditions. More formally, calibration is an operation, performed under specified conditions, which in a first step establishes a relation between the quantity values, with their respective measurement uncertainties, provided by measurement standards, and the corresponding indications, with their associated measurement uncertainties, and in a second step uses this information to establish a relation for obtaining a measurement result from an indication.

In practice, it refers to the process of establishing the characteristic relationship between the values of the physical quantity applied to the instrument and the corresponding positions of the index, or of creating a chart of the quantity being measured versus the readings of the instrument.

If the instrument has an arbitrary scale, the indication must be multiplied by a factor, referred to as the scale factor, to obtain the nominal value of the measured quantity. If the values of the variable involved remain constant (not time-dependent) while calibrating a given instrument, the procedure is known as static calibration, whereas if the value is time-dependent, or time-based information is required, it is called dynamic calibration. Dynamic calibration determines the relationship between an input of known dynamic behavior and the measurement system output.
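The scale-factor idea can be sketched numerically. The snippet below is a minimal illustration, not a prescribed procedure: the applied standard values and instrument indications are invented, and the scale factor is estimated by a least-squares fit through the origin.

```python
# Sketch of a static calibration: estimating the scale factor of an
# instrument with an arbitrary scale. All values below are hypothetical.

applied = [0.0, 10.0, 20.0, 30.0, 40.0]    # known values applied via standards
indicated = [0.0, 4.9, 10.1, 15.0, 19.9]   # instrument readings (arbitrary scale)

# Least-squares scale factor through the origin, so that
# measured_value = k * indication.
k = sum(a * i for a, i in zip(applied, indicated)) / sum(i * i for i in indicated)

def to_measured_value(indication):
    """Convert an arbitrary-scale indication into the measured quantity."""
    return k * indication
```

With these numbers the fitted scale factor comes out close to 2, so an indication of 10 scale divisions corresponds to a measured value of about 20 units.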

The main objective of all calibration activities is to ensure that the measuring instrument meets its accuracy objectives. The general calibration requirements of measuring systems are as follows:

• accepting calibration of the new system;
• ensuring traceability of standards for the unit of measurement under consideration;
• carrying out calibration periodically, depending on usage, or when the instrument is used after storage.

Calibration can be used to determine the metrological characteristics of the instrument (e.g., accuracy, repeatability, reproducibility, linearity) needed to define its functionality, or to verify that it meets requirements. It also reveals the variation of the value of the quantity. Calibration is further used to determine the transfer characteristics of the instrument. In fact, many measuring instruments are transducers: they transform the measured quantity into a signal (typically electrical) that can be more easily read and processed by dedicated indicators. Knowing the relationship between the measured quantity and the generated signal allows the correct setting of the corresponding indicator, and therefore an exact reading of the measured quantity. This operation is often known among practitioners as determining the sensitivity of the instrument.
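The sensitivity determination described above can be sketched as fitting the slope of the output signal versus the applied quantity. The example below is illustrative only: the pressure transducer, its voltage output, and all values are hypothetical.

```python
# Sketch: determining the sensitivity of a transducer as the slope of its
# output signal versus the applied quantity. A hypothetical pressure
# transducer with a voltage output is assumed; all numbers are invented.

pressures = [0.0, 100.0, 200.0, 300.0]   # applied quantity, kPa
voltages = [0.02, 1.01, 2.00, 3.02]      # transducer output signal, V

n = len(pressures)
mean_p = sum(pressures) / n
mean_v = sum(voltages) / n

# Ordinary least-squares slope: the sensitivity, in V/kPa.
sensitivity = (sum((p - mean_p) * (v - mean_v) for p, v in zip(pressures, voltages))
               / sum((p - mean_p) ** 2 for p in pressures))
# Intercept: the zero offset of the transducer, in V.
offset = mean_v - sensitivity * mean_p
```

Here the fitted sensitivity is close to 0.01 V/kPa; once known, it lets the indicator be set so that the electrical signal is read back directly as pressure.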

Calibration is achieved by comparing the measuring instrument with the following:

• a primary standard;
• a known source of input;
• a secondary standard that possesses a higher accuracy than the instrument to be calibrated.

During calibration, the dimensions and tolerances of the gauge, or the accuracy of the measuring instrument, are checked by comparison with a standard instrument or gauge of known accuracy. If deviations are detected, suitable adjustments are made to the instrument to ensure an acceptable level of accuracy.

The limiting factor of the calibration process is repeatability, because it is the only characteristic error that cannot be calibrated out of the measuring system; it therefore bounds the overall measurement accuracy. Repeatability can thus be regarded as the minimum uncertainty that exists between a measurand and a standard.

The conditions that exist during calibration of the instrument should be similar to the conditions under which actual measurements are made. The standard used for calibration should normally be one order of magnitude more accurate than the instrument to be calibrated. When greater accuracy is desired, it becomes imperative to know all the sources of error so that they can be evaluated and controlled.
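The "one order of magnitude" rule can be expressed as a simple ratio check. The sketch below assumes an illustrative 10:1 threshold and invented uncertainty figures; it is not a normative procedure.

```python
# Sketch of the "one order of magnitude" rule: verify that the reference
# standard's uncertainty is small enough relative to the instrument being
# calibrated. The 10:1 threshold and the example figures are illustrative.

def accuracy_ratio(instrument_uncertainty, standard_uncertainty):
    """Ratio of instrument uncertainty to standard uncertainty (higher is better)."""
    return instrument_uncertainty / standard_uncertainty

# Hypothetical case: instrument uncertain to 0.5 units, standard to 0.05 units.
ratio = accuracy_ratio(instrument_uncertainty=0.5, standard_uncertainty=0.05)
adequate = ratio >= 10  # one order of magnitude
```

If the ratio fell below the threshold, a more accurate standard would be needed, or the contribution of the standard's uncertainty would have to be evaluated explicitly.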

## Calibration methodologies

Calibration methodologies can be basically divided into three types:

### Calibration by comparison

In this method, the instrument under calibration measures the same quantity as a reference (“sample”) instrument. The accuracy of the instrument under calibration is determined by comparing the two measurement results. For example, a pressure gauge can be calibrated by connecting it to a hydraulic circuit where a “sample” gauge has also been installed. The same quantity, the pressure, is measured by both instruments, and analysis of the differences between the two measurements allows the accuracy of the instrument under calibration to be evaluated.
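The pressure-gauge comparison can be sketched as follows. All readings are invented for illustration: at each calibration point both gauges read the same applied pressure, and the differences give the instrument's error (and hence its correction).

```python
# Minimal sketch of calibration by comparison: the instrument under
# calibration and a reference ("sample") gauge read the same applied
# pressure. All readings below are hypothetical.

reference_readings = [100.0, 200.0, 300.0]   # "sample" gauge, kPa
unit_readings = [100.8, 201.5, 302.1]        # instrument under calibration, kPa

# Error at each calibration point (instrument reading minus reference);
# the correction to apply is its negative.
errors = [u - r for u, r in zip(unit_readings, reference_readings)]
corrections = [-e for e in errors]
max_error = max(abs(e) for e in errors)
```

The resulting table of corrections (or the maximum error) is what would be recorded on the calibration certificate of the gauge.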

### Calibration by substitution

In this calibration method, the sample directly generates the quantity to be measured by the instrument under calibration. Its accuracy is determined by comparing the nominal value of the generated quantity with the measurement results. For example, a balance can be calibrated by taking measurements on “sample” weights. In this case, the sample itself provides a quantity of nominal value, the weight, and the accuracy of the balance is evaluated from the difference between the balance reading and the nominal weight of the sample.
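The balance example can be sketched as a table of deviations from the nominal values of the standard weights. The nominal values, readings, and tolerance below are hypothetical.

```python
# Sketch of calibration by substitution: a balance is checked against
# standard ("sample") weights of known nominal value. All numbers are
# invented for illustration.

nominal_weights = [50.0, 100.0, 200.0]   # nominal values of the samples, g
balance_readings = [50.2, 100.3, 200.7]  # balance indications, g

# Deviation of each reading from the nominal value of the standard.
deviations = [b - n for b, n in zip(balance_readings, nominal_weights)]

def within_tolerance(tolerance):
    """True if every calibration point deviates by less than `tolerance`."""
    return all(abs(d) < tolerance for d in deviations)
```

With these numbers the balance passes a 1 g tolerance but fails a 0.5 g one, which is exactly the kind of pass/fail verdict a substitution calibration produces.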

### Direct calibration

This method is complementary to the previous one and is intended for the calibration of reference items. The item under calibration directly generates the quantity that is measured by the sample instrument. Its accuracy is determined by comparing its nominal value with the measurement made by the sample. For example, a weight can be calibrated by taking a measurement on a “sample” balance. In this case, the item under calibration generates the quantity, the weight, and its accuracy is evaluated from the difference between the balance reading and the item's nominal weight.

