Procedures for NIR calibration – Creating NIR spectroscopy calibration models

Do you know the effect? Everyone prefers to try out their favorite data pretreatments, often in combination, and tends to pick the same wavelength selections based on a visual inspection of the spectra.

Typically, six to ten such combinations are tried until one of them becomes the favorite calibration model, which is then optimized further. Only at this stage do outliers suddenly show up, because now one digs deeper into the data and gets to know the spectra, the number of outliers and the extreme values.

Next, the focus shifts to the number of components (principal components, latent variables, factors), making sure neither to over-fit nor to under-fit. The whole process takes a few hours, and in the end one is content with the model found.
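To make the component choice concrete, here is a minimal Python sketch of selecting the number of latent variables by cross-validation; it assumes scikit-learn is available and uses random placeholder data, so the numbers are illustrative only.

# Minimal sketch: choose the number of PLS latent variables by cross-validation.
# X and y are random placeholders for NIR spectra and reference values.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 200))                        # placeholder spectra
y = X[:, 50] * 2.0 + rng.normal(scale=0.1, size=60)   # placeholder reference values

rmsecv = []
for n_comp in range(1, 11):
    y_cv = cross_val_predict(PLSRegression(n_components=n_comp), X, y, cv=10)
    rmsecv.append(float(np.sqrt(np.mean((y - np.ravel(y_cv)) ** 2))))

# A simple rule: take the component count with the lowest RMSECV; in practice
# the first clear minimum is often preferred to avoid over-fitting.
best = int(np.argmin(rmsecv)) + 1
print("RMSECV per component count:", np.round(rmsecv, 4))
print("Suggested number of latent variables:", best)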

But what would happen if all the variants tried at the beginning were re-evaluated and compared after removing the outliers found later? Would the results be better than the previously chosen model? Why does nobody try it? Because it is cumbersome and would take hours again?

We have developed software that simplifies this, so that the number of model variations can be increased as desired. The generation of variants is automated by an intelligent control system, as are the optimization and comparison of the models and, finally, the selection of the best calibration model.

Our software includes all the commonly known data pretreatment (pre-processing) methods and can combine them in a sensible way. Many pretreatments depend directly on the wavelength selection; normalization, for example, determines its scaling factors within a wavelength range, so pretreatments have to be combined with the wavelength ranges. This yields a large number of sensible model settings, all of which are calculated and optimized. For the automatic selection of the relevant wavelength ranges, different methods based on the spectral intensities are used: regions with total absorption are excluded, for example, and the often interfering water bands can be removed or retained.
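For illustration, a minimal Python sketch of such variant generation follows; the pretreatments (SNV and a Savitzky-Golay first derivative) and the index-based wavelength windows are assumptions chosen for the example, not our actual rule set.

# Minimal sketch: generate model variants as pretreatment x wavelength-range
# combinations. Pretreatments and windows are illustrative assumptions.
from itertools import product

import numpy as np
from scipy.signal import savgol_filter


def snv(spectra):
    """Standard normal variate: center and scale each spectrum individually."""
    centered = spectra - spectra.mean(axis=1, keepdims=True)
    return centered / centered.std(axis=1, keepdims=True)


def sg_first_derivative(spectra):
    """Savitzky-Golay first derivative (window and order chosen arbitrarily)."""
    return savgol_filter(spectra, window_length=11, polyorder=2, deriv=1, axis=1)


PRETREATMENTS = {"none": lambda s: s, "snv": snv, "sg1": sg_first_derivative}
# Hypothetical wavelength windows, given as column index ranges of the spectra.
WAVELENGTH_RANGES = {"full": slice(0, 200), "lower": slice(0, 120), "upper": slice(120, 200)}


def generate_variants(spectra):
    """Yield (label, variant spectra) for every pretreatment/range combination."""
    for (p_name, p_func), (w_name, w_slice) in product(PRETREATMENTS.items(),
                                                       WAVELENGTH_RANGES.items()):
        # Cut the wavelength window first, so range-dependent pretreatments
        # (e.g. the SNV scaling factors) are computed on the selected region only.
        yield f"{p_name}+{w_name}", p_func(spectra[:, w_slice])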

Across all calculated model variations, a summary outlier analysis can be performed. If new outliers (hidden outliers) are discovered, all previous models can automatically be recalculated, optimized and compared without them.
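One possible shape of such a summary outlier analysis is sketched below in Python; the fixed PLS model, the 3-sigma residual limit and the 50 % share threshold are assumptions made for the example.

# Minimal sketch: flag "hidden outliers" as samples whose cross-validation
# residual is extreme in a large share of the model variants.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict


def hidden_outliers(variants, y, share=0.5, sigma=3.0):
    """Return sample indices that are residual outliers in >= `share` of the variants.
    Assumes each variant matrix has at least 5 wavelength columns."""
    flags = np.zeros((len(variants), len(y)), dtype=bool)
    for i, X in enumerate(variants):
        y_cv = np.ravel(cross_val_predict(PLSRegression(n_components=5), X, y, cv=10))
        residuals = y - y_cv
        flags[i] = np.abs(residuals) > sigma * residuals.std()
    return np.where(flags.mean(axis=0) >= share)[0]


# All previous variants could then be recalculated on the reduced sample set:
#   keep = np.setdiff1d(np.arange(len(y)), hidden_outliers(variants, y))
#   variants_clean = [X[keep] for X in variants]; y_clean = y[keep]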

From this large number of calculated models with their statistical quality measures (prediction performance), the optimum calibration can now be selected. For this purpose, the models are not simply sorted by the prediction error (SEP, RMSEP) or the coefficient of determination (R²); instead, several statistics and test values are used jointly for the final assessment of the optimal calibration.
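One simple way to combine several statistics into a joint ranking is sketched below in Python; the criteria used (RMSEP, R², number of latent variables) and their equal weighting are assumptions for the example, not the exact rule set we apply.

# Minimal sketch: rank calibration candidates by several statistics jointly.
import numpy as np


def rank_models(stats):
    """stats: list of dicts with keys 'rmsep', 'r2', 'n_latent'.
    Returns model indices ordered best-first by the summed criterion ranks."""
    rmsep = np.array([s["rmsep"] for s in stats])
    r2 = np.array([s["r2"] for s in stats])
    n_lv = np.array([s["n_latent"] for s in stats])
    # Rank each criterion separately: low RMSEP, high R2 and few latent
    # variables (parsimony) are preferred; then sum the per-criterion ranks.
    composite = (rmsep.argsort().argsort()
                 + (-r2).argsort().argsort()
                 + n_lv.argsort().argsort())
    return np.argsort(composite)


models = [{"rmsep": 0.21, "r2": 0.95, "n_latent": 8},
          {"rmsep": 0.22, "r2": 0.94, "n_latent": 4},
          {"rmsep": 0.30, "r2": 0.90, "n_latent": 3}]
print(rank_models(models))   # model indices, best first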

We have thus created a platform for highly automated calibration work on a scale that could never be achieved manually with commercial software.

We therefore offer the largest number of modeling calculations matched to your application problem and choose the best calibration for you!

This means our results are delivered faster, are more accurate, robust and objective (person-independent), and are easy for you to apply.

You keep full control over the models we supply, because we provide a clearly structured and detailed blueprint of the complete calibration, with all settings and parameters and all the necessary statistical characteristics and graphics.

Using this blueprint, you can reproduce, understand, compare and adjust the quantitative calibration model in the software you already use. You keep everything under control, from model creation through model validation to model refinement.

Your privacy is very important to us. The NIR data you provide us for the custom calibration development remains your property, of course, and is deleted on our side once the job is completed.

Interested? Then do not hesitate to contact us.

How to develop near-infrared spectroscopy calibrations in the 21st Century?


The Problem

Calibration modeling is a complex and very important part of NIR spectroscopy, especially for quantitative analysis. If the model is badly designed, even the best instrument precision and the highest data quality will not yield good and robust measurement results. In addition, NIR spectroscopy requires periodic recalibration and validation.


How are NIR models built today?

In typical industrial use, a single person is responsible for developing the models (see survey). He or she uses chemometric software with a click-and-wait workflow: adjust all the settings for the chosen algorithms in dialogs, wait for calculations and graphics, then think about the next modeling step, all within limited time. Can we expect to find the best usable or optimal model that way? How to develop near-infrared spectroscopy calibrations in the 21st century?


Our Solution

Why not put all the knowledge a good model builder uses into software and let the machines run through the possible calculations and present the results? Design the software so that the domain knowledge is built in, not just the machine learning algorithms, and make it possible to scale the calculations to multi-core computers and up to cloud servers. In short: extend the chemometric software with the domain knowledge and make as much computing power available as needed.
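A minimal Python sketch of the scaling idea follows, using only the standard library's process pool; the evaluate_variant() helper and the placeholder data are assumptions for the example, and a cluster or cloud back end would simply replace the executor.

# Minimal sketch: evaluate calibration variants in parallel over CPU cores.
# evaluate_variant() is a stand-in for fitting and validating one variant.
from concurrent.futures import ProcessPoolExecutor

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict


def evaluate_variant(variant):
    """Cross-validate one (label, X, y) variant and return its RMSECV."""
    label, X, y = variant
    y_cv = np.ravel(cross_val_predict(PLSRegression(n_components=5), X, y, cv=10))
    return label, float(np.sqrt(np.mean((y - y_cv) ** 2)))


def evaluate_all(variants, max_workers=None):
    """Run all variants in parallel; a cluster/cloud back end would swap the executor."""
    with ProcessPoolExecutor(max_workers=max_workers) as pool:
        return dict(pool.map(evaluate_variant, variants))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 200))                    # placeholder spectra
    y = X[:, 10] + rng.normal(scale=0.1, size=60)     # placeholder reference values
    print(evaluate_all([("full", X, y), ("lower", X[:, :100], y)]))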

As it was since the beginning

User  → Chemometric Software → one Computer → some results to choose from

==> User’s time is spent clicking and waiting to create results

Our Solution

User → (Domain Knowledge → automated Chemometric Software) → many Computers → the best models

==> User’s time is used to study the best models and reason about the product / process

Note that the “Domain Knowledge” here complements the user’s product and process knowledge, so things get done right and efficiently.


Scaling at three layers

  • Knowledge: use the domain knowledge to drive the chemometric software
  • Chemometric software: support many machine learning algorithms and data pre-processing methods, and run them automatically
  • Computer: support multi-core calculations and scale out to the cloud

The hard part, of course, is aggregating the needed domain knowledge and transforming it into software. The domain knowledge for building chemometric NIR spectroscopic models is well known, but it is huge and spans multiple disciplines. Knowledge-driven computing helps find the needle in the haystack, and it is scaling that makes this possible. See Proof of Concept.


New possibilities

  • NIR users get help to work more efficiently and to obtain better models.
  • New types of applications for NIR can be discovered.
  • NIR applications can be evaluated as replacements for conventional analytical methods.
  • Seemingly hopeless calibration development efforts can be restarted.
  • Higher model accuracy and robustness can be delivered.
  • The experimental data part of your application study can be automated.
  • Person-independent optimization reveals new solutions, because it is not limited by a single mindset: it combines all the aggregated knowledge and its combinations.
  • Software-independent optimization reveals new solutions, because vendor-specific limitations and missing algorithms do not apply: all openly available algorithms and their permutations can be combined.
  • Computing service is included.

Contact us for trial

Your NIR data is modeled with thousands of different, useful calibration models, and you get the best of them! That has never before been possible in such an easy and fast way. See How it works.

Proof of Concept

Chemometric software competitions (also known as shootouts) are a good way to benchmark algorithms, software and knowledge against the other experts in the field.

Keep in mind that the prediction results may be produced with any kind of software and with the newest algorithms.

And we simply use PLS to generate models that can be used in all NIR software systems, because PLS is a quasi-standard supported by all major chemometrics software.
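For illustration, a minimal Python sketch of such a plain PLS calibration with scikit-learn follows; the random data, the train/validation split and the number of components are placeholder assumptions.

# Minimal sketch: a plain PLS calibration, standing in for the PLS
# implementation of any chemometrics package. Data are random placeholders.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 200))                        # placeholder spectra
y = X[:, 10] * 3.0 + rng.normal(scale=0.1, size=80)   # placeholder reference values

X_cal, X_val, y_cal, y_val = train_test_split(X, y, test_size=0.25, random_state=0)

pls = PLSRegression(n_components=4).fit(X_cal, y_cal)
y_pred = np.ravel(pls.predict(X_val))
rmsep = float(np.sqrt(np.mean((y_val - y_pred) ** 2)))
print(f"RMSEP on the validation set: {rmsep:.4f}")

# The fitted regression coefficients can be exported and re-used in any other
# NIR software that supports PLS models.
print("Number of regression coefficients:", pls.coef_.size)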

Our software framework has achieved very good results so far, winning gold (rank #1) and silver (rank #2) at well-known international NIR chemometric software shootouts*; the competitions are held every two years.

Rank / Competitors    Competition / Conference    Year
#1 / 1 **             Kaji / ANISG                2014
#1 / 150              Kaji / ANISG                2012
#2 / ???              IDRC / IDRC                 2012

The Kaji Competition

A set of NIR spectral data will be available for downloading from the ANISG website and contestants will be asked to find and explain a “best” chemometric model to robustly predict samples of the same type.
A panel will select the three “best” entries based on the predicted results and spectroscopic explanation of the products and attributes of interest. 

http://www.anisg.com.au/the-kaji-competition


The IDRC Competition

The Software Shootout has been a staple of the IDRC. It is a competition amongst participants of the conference that aims at determining the person who developed the best model and obtained the lowest prediction error for a particular problem.
Every IDRC, a new challenge is proposed to participants. The challenge consists of a data set with a calibration set, a test set and a validation set.
Participants are given target values for the calibration and test sets but must do their best to develop a model that will predict the validation set as accurately and precisely as possible. Challenges from all sorts of fields of NIRS have been used (agriculture, biomedical, pharmaceutical, soil, …).

IDRC


*) The author was unable to present the results at the conferences, so these rankings are not official, but they were confirmed by the shootout organizers. Thanks go to Benoit Igne, organizer of the IDRC 2012 shootout, and Steve Holroyd, organizer of the Kaji Competition at the ANISG Conference 2012.

Conclusion

Our chemometric software framework can significantly reduce the time spent on NIR method development and fine optimization. The time saving is achieved through highly automated experiments and the use of cloud computing. Calibrations are built and evaluated using automated good-practice protocols, resulting in useful, precise and robust calibrations. The high number of experiments enables a deep screening of the solution domain to find the optimum calibration settings, something currently unavailable in standard chemometric software.

**) We were the only participant to complete all four competition tasks (four times more than usual) in the short time available and to submit fully documented results. After the competition it was disclosed that the data originated from forages and that the constituents were dry matter, organic matter digestibility, protein and ash. Thanks go to Daniel Cozzolino, Kaji 2014 Competition organizer.

Recent advanced chemometric methods

Are you searching for recent advanced chemometric methods to get better calibration models for NIR? Methods and algorithms like:
  • Artificial Neural Networks (ANN)
  • General Regression Neural Networks (GR-NN)
  • RBF Neural Networks (RBF-NN)
  • Support Vector Machines (SVM)
  • Multiway Partial Least Squares (MPLS),
  • Orthogonal PLS (OPLS), (O-PLS), OPLS-AA, OPLS-ANN
  • R-PLS, UVE-PLS, RUVE-PLS, LOCAL PLS
  • Hierarchical Kernel Partial Least Squares (HKPLS)
  • Random Forest (RF)
  • etc.
and data pre-processing methods like
  • Extended Multiplicative Signal Correction (EMSC)
  • Orthogonal Signal Correction (OSC)
  • Dynamic Orthogonal Projection (DOP)
  • Error Removal by Orthogonal Subtraction (EROS)
  • External Parameter Orthogonalization (EPO)
  • etc.
that are partly available as modules for software packages like Matlab, Octave, R-Project, etc. But why invest a lot of time and money in new tools? Have you really tried hard to optimize your calibrations with standard chemometric methods like Partial Least Squares (PLS), Principal Component Regression (PCR) and Multiple Linear Regression (MLR), which are available in all chemometric software packages? Are you sure you have applied all the good rules and exhausted the optimization possibilities? Get it done right with the compatible standard methods: we specialize in the optimization and development of NIR calibrations, so let us help you and give us a try!