# Interpretation of the Calibration Statistics

In our WebShop the Calibration statistics are shown as (example):
```
Calibration Set (36)
R2      = 0.98724
RPD     = 8.8525
RMSEC   = 0.3308
SEC     = 0.3355
Bias    = 0.0000

Test Set (29)
R2      = 0.98792
RPD     = 9.0970
RMSEP   = 0.3249
SEP     = 0.3304
Bias    = 0.0140
```

## What does that mean?

The data is split into three independent sets.
The “Reference vs. Predicted” plot shows the three sets in different colors:

- the Calibration Set (CSet), used to build the calibration,
- the Validation Set (VSet), used to determine the calibration parameters, and
- the Test Set (TSet), used to measure the calibration performance.
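As a sketch, such a three-way split could look like the following Python snippet. The 60/20/20 ratio, the `split_spectra` name, and the random assignment are illustrative assumptions, not the WebShop's actual procedure:

```python
import random

def split_spectra(indices, seed=0):
    """Randomly assign spectra to three independent sets.

    Returns (cset, vset, tset). The 60/20/20 split ratio is only
    an illustrative assumption.
    """
    rng = random.Random(seed)
    shuffled = list(indices)
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_cal = int(0.6 * n)   # Calibration Set
    n_val = int(0.2 * n)   # Validation Set
    cset = shuffled[:n_cal]
    vset = shuffled[n_cal:n_cal + n_val]
    tset = shuffled[n_cal + n_val:]  # remainder: Test Set
    return cset, vset, tset
```

Because the Test Set takes no part in building the calibration, its statistics are an honest measure of performance on new data.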

The statistical values are listed for the CSet and the TSet as follows:

## Calibration Set (the number of spectra per set)

```
R       = Poor 0.0 - 1.0 Excellent
correlation coefficient or coefficient of correlation,
how close the data are to the fitted regression line.

R2      = Poor 0.0 - 1.0 Excellent
R-squared value or coefficient of determination.
R2 = R * R :  determination (R2) is the square of the correlation (R).

RPD     = Ratio of Performance to Deviation: degrees of merit for the application of NIR spectroscopy.

RPD value       Rating              NIR Application

0.0 - 1.99      Very poor           Not recommended
2.0 - 2.49      Poor                Rough screening
2.5 - 2.99      Fair                Screening
3.0 - 3.49      Good                Quality control
3.5 - 4.09      Very good           Process control
4.1 - ∞         Excellent           Any application

RMSEC   = Accuracy      = total error       : Root Mean Square Error of Calibration
SEC     = Precision     = random error      : Standard Error of Calibration = Sdev(x-y): as small as possible (around the standard deviation of the reference method)
Bias    = Trueness      = systematic error  : by definition 0 for the calibration.
```
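The definitions above translate directly into code. Here is a minimal Python sketch (the function name `calibration_stats` is illustrative; it assumes `x` = reference, `y` = predicted, SEC as the n−1 sample standard deviation of the residuals, and RPD = Sdev(reference) / SEC):

```python
from statistics import mean, stdev

def calibration_stats(reference, predicted):
    """Compute the statistics from the report above.

    Bias = mean(x - y)               : systematic error
    RMSE = sqrt(mean((x - y)^2))     : total error
    SE   = Sdev(x - y)               : random error (n-1 denominator)
    R2   = squared Pearson correlation of reference vs. predicted
    RPD  = Sdev(reference) / SE
    """
    n = len(reference)
    residuals = [x - y for x, y in zip(reference, predicted)]
    bias = mean(residuals)
    rmse = (sum(d * d for d in residuals) / n) ** 0.5
    se = stdev(residuals)                       # Sdev(x - y)
    mx, my = mean(reference), mean(predicted)
    sxy = sum((x - mx) * (y - my) for x, y in zip(reference, predicted))
    sxx = sum((x - mx) ** 2 for x in reference)
    syy = sum((y - my) ** 2 for y in predicted)
    r2 = (sxy * sxy) / (sxx * syy)
    rpd = stdev(reference) / se
    return {"R2": r2, "RPD": rpd, "RMSE": rmse, "SE": se, "Bias": bias}
```

For the Test Set the same formulas apply, with RMSEC/SEC replaced by RMSEP/SEP.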

## Test Set (the number of spectra per set)

```
R       = Correlation   , as above
R2      = Determination , as above
RPD     = Applicability , as above

RMSEP   = Accuracy      = total error       : Root Mean Square Error of Prediction
SEP     = Precision     = random error      : Standard Error of Prediction = Sdev(x-y) : as small as possible (around the standard deviation of the reference method)
Bias    = Trueness      = systematic error  : around 0
```
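Note that the three Test Set error values are not independent: if SEP is the bias-corrected standard deviation of the residuals (n−1 denominator), they satisfy n·RMSEP² = (n−1)·SEP² + n·Bias². A quick check against the example figures above (assuming exactly these definitions):

```python
# Check: n * RMSEP^2 = (n - 1) * SEP^2 + n * Bias^2,
# using the Test Set figures from the example report above.
n, sep, bias = 29, 0.3304, 0.0140
rmsep = (((n - 1) * sep**2 + n * bias**2) / n) ** 0.5
print(rmsep)  # ~0.32496: matches the reported RMSEP = 0.3249 up to rounding
```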

## Simplified summary

```
Look at
Test Set  RPD       : for rating and applicability
and
Test Set  RMSEP     : for comparison with the reference method
```
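The RPD rating table above can be turned into a small lookup, for example (the function name `rpd_rating` is illustrative):

```python
def rpd_rating(rpd):
    """Map an RPD value to the rating / NIR-application table above."""
    table = [
        (2.0,          "Very poor", "Not recommended"),
        (2.5,          "Poor",      "Rough screening"),
        (3.0,          "Fair",      "Screening"),
        (3.5,          "Good",      "Quality control"),
        (4.1,          "Very good", "Process control"),
        (float("inf"), "Excellent", "Any application"),
    ]
    for upper, rating, application in table:
        if rpd < upper:
            return rating, application
```

With the example report above, `rpd_rating(9.0970)` returns `("Excellent", "Any application")`.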