Forewarned is Forearmed: Error and risk minimization in process analysis – Part 3


In the course of life, each of us learns to trust our gut feelings or our experiences to avoid situations that seem dangerous or risky. You quite literally sense potential dangers with an uneasy feeling. Who hasn’t painfully learned that touching a hot stove top isn’t a good idea? Or who voluntarily goes outside during a tornado?

While humans can rely on their intuition and learned patterns to avoid dangers or use protective strategies, this is far more complicated with electronic systems or machines. All components of a system must be in a permanently safe state. Failures and malfunctions of individual components can have devastating consequences for production processes and the safety of the operators.

An example of this is the Seveso disaster of 1976, in which the highly toxic dioxin TCDD escaped as the result of an uncontrolled reaction and caused lasting damage to flora and fauna. In response to this and other major chemical accidents, the European Seveso III Directive came into force in 2012 to control major-accident hazards and prevent such disasters.

Have you read Part 1 and Part 2 of our «Advantages of PAT (Process Analytical Technology)» series? If not, find them here!

Recognize, master, and avoid errors

Process engineering systems that are operated continuously contain countless components that can wear out or fail during their life cycle. If the measuring, control, or regulating circuit is affected, failures can cause immense damage. Under no circumstances may humans or the environment be exposed to any kind of danger. For this reason, the functional safety of the components must be guaranteed, and their risk and hazard potential must be analyzed in detail.

The service life of mechanical components can be evaluated by observing mechanical wear and tear. However, the aging behavior of electronic components is difficult to assess. A unit of measure that makes risk reduction and thus functional safety quantifiable is the so-called «Safety Integrity Level» (SIL). 

The procedure is as follows:

  1.   Risk analysis
  2.   Realization of risk reduction
  3.   Evidence that the realized risk reduction corresponds at least to the required risk reduction
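To make the third step concrete: IEC 61508 defines SIL bands for low-demand mode in terms of the average probability of failure on demand (PFDavg). The following Python sketch maps a PFDavg to its SIL; the band limits follow the standard, while the function itself is purely illustrative and not part of any product.

```python
def sil_from_pfd(pfd_avg):
    """Map an average probability of failure on demand (PFDavg) to a
    Safety Integrity Level for low-demand mode, per IEC 61508.
    Returns None if the PFD is too high to reach even SIL 1."""
    bands = [
        (1e-5, 1e-4, 4),  # SIL 4: highest required risk reduction
        (1e-4, 1e-3, 3),
        (1e-3, 1e-2, 2),
        (1e-2, 1e-1, 1),
    ]
    for low, high, sil in bands:
        if low <= pfd_avg < high:
            return sil
    return None

print(sil_from_pfd(5e-3))  # a PFDavg of 0.005 corresponds to SIL 2
```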

«Process analysis systems are part of the entire safety cycle of a manufacturing plant and therefore only one component whose risk of malfunctions and failures must be considered in an assessment.»

Risk assessment: A process is considered safe if the current risk has been reduced below the level of the tolerable risk. If safety is ensured by technical measures, this is called functional safety.

Significance for process analysis systems

Errors can happen anywhere, and can never be completely excluded. To minimize possible errors, it is therefore necessary to estimate the risk of occurrence and the damage to be expected from it as part of a risk analysis. A distinction must be made here between systematic and random errors.

Systematic errors are potentially avoidable and are caused, for example, by software errors or configuration deficiencies. Accordingly, they already exist during or prior to commissioning.

In contrast, random errors are difficult to avoid because they occur unpredictably. Nevertheless, the error rate or failure probability can be determined statistically and experimentally.

Random errors usually result from the hardware and occur during operation. Ultimately, systematic errors should be avoided, and random errors should be mastered to ensure trouble-free functionality.
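For random hardware failures, the statistically determined failure rate is often modeled as constant over the useful life, which leads to the exponential reliability function R(t) = exp(-λt) with λ = 1/MTBF. A minimal sketch; the MTBF value below is an illustrative assumption:

```python
import math

def reliability(t_hours, mtbf_hours):
    """Probability that a component survives t_hours without a random
    failure, assuming a constant failure rate lambda = 1/MTBF
    (exponential reliability model)."""
    failure_rate = 1.0 / mtbf_hours
    return math.exp(-failure_rate * t_hours)

# An electronic module with an assumed MTBF of 100,000 h,
# operated 24/7 for one year (8760 h):
print(round(reliability(8760, 100_000), 3))  # ~0.916
```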

Process analysis systems are the link between manual laboratory analysis and the industrial process. In applications where continuous and fully automatic monitoring of critical parameters is required, process analyzers are indispensable. Because analysis conditions in the laboratory differ from those directly in the process, transferring measurement technology from the lab to the process poses several challenges. The decisive factors are the working and environmental conditions (e.g., high temperatures, corrosive atmospheres, moisture, dust, or potentially explosive environments), which the process analyzers must withstand in terms of their design, construction materials, and the reliability of their components. The analyzer automatically and continuously transmits system and diagnostic data so that impending hardware or software failures can be addressed with preventive measures. This significantly reduces the chance of random errors occurring.

General process analyzer setup

a) Analyzer Setup

Process analyzers have been specially developed for use in harsh and aggressive industrial environments. The IP66 protected housing is divided into two parts, and consists of separate wet and electronic parts. The electronics part contains all components relevant to control and operate the process analyzer. Modular components like burettes, valves, pumps, sampling systems, titration vessels, and electrodes can be found in the analyzer wet part. Representative samples can thus be taken from the process measuring point several meters away. The analysis procedure, the methods to be used, and method calculations are freely programmable.

A touchscreen with intuitive menu navigation allows easy operation, so that production processes can be optimized at any time. The course of the measurement is graphically represented and documented over the entire determination, so that the analysis process is completely controlled. The measurement results can be generated 24/7 and allow close and fully automatic monitoring of the process. Limits, alarms, or results are reliably transferred to the process control system.

When operating the analyzer, there is a risk that software errors lead to failures. To recognize this early, the system performs self-diagnostic procedures as soon as it is powered on and also during operation. These include, e.g., checking pumps and burettes, checking for leaks, or checking the communication between the I/O controller, the human interface, and the respective analysis module.

b) Sensors

The central component of a process analyzer is the measurement technique in use. Sensors and electrodes must meet several requirements, such as chemical resistance, ease of maintenance, robustness, and precision. The safety-related risk arises from the possibility that measurement sensors fail due to aging, or become damaged and subsequently deliver incorrect measurement results.

Failure of the electrode, contamination, or damage must be reported immediately. With online analysis systems, the analysis is performed in an external measuring cell. In addition, recurring calibration and conditioning routines are predefined and are performed automatically. The status of the electrode is continuously monitored by the system.

Between measurements, the electrode is immersed in a membrane-friendly storage solution that prevents drying out and at the same time regenerates the swelling layer. The electrode is therefore always ready for use and does not have to be removed from the process for maintenance. This enables reliable process control even under harsh industrial conditions.

c) Analysis

Process analyzers must be able to handle samples for analysis over a wide concentration range (from % down to trace levels) without causing carry-over or cross-sensitivity issues. In many cases, different samples from several measuring points are determined in parallel in one system using different analysis techniques. The sample preparation (e.g., filtering, diluting, or wet chemical digestion) must be just as reliable and smooth as the fully automatic transfer of results to the process control system so that a quick response is possible.

Potential dangers for the entire system can be caused by incorrect measurement results. To minimize this risk, a detector notifies the system whether sample is present in the vessel. Diagnostic data such as the initial potential of the analysis, the shape of the titration curves, or the color development in photometric measurements are continuously recorded and interpreted. Results can be verified by reference analysis, or their plausibility can be checked using standard and check solutions.
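A plausibility check against a standard or check solution can be as simple as a tolerance band around the nominal value. A minimal sketch; the 2% tolerance is an illustrative assumption, since acceptance limits depend on the application:

```python
def check_standard_ok(measured, nominal, tolerance_percent=2.0):
    """Accept a check-standard result if it lies within a relative
    tolerance band around the nominal value."""
    return abs(measured - nominal) <= nominal * tolerance_percent / 100.0

print(check_standard_ok(10.15, 10.0))  # deviation 1.5% -> True
print(check_standard_ok(10.50, 10.0))  # deviation 5.0% -> False
```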

Detect errors before they arise

The risk assessment procedures that are carried out in the context of a SIL classification for process engineering plants are ultimately based on mathematical calculations. However, in the 24/7 operation of a plant, random errors can never be completely excluded. Residual risk always remains. Therefore, the importance of preventive maintenance activities is growing immensely in order to avoid hardware and software failures during operation.

A regular check of the process analyzer and its diagnostic data is the basic requirement for permanent, trouble-free operation. With tailor-made maintenance and service concepts, the analyzer is supported by certified service engineers over the entire life cycle. Regular maintenance plans, application support, calibration, or performance certificates, repairs, and original spare parts as well as proper commissioning are just a few examples.

Advantages of preventive maintenance from Metrohm Process Analytics

  • Preservation of your investment
  • Minimized risk of failure
  • Reliable measurement results
  • Calculable costs
  • Original spare parts
  • Fast repair
  • Remote Support

In addition, transparent communication between the process control system and the analyzer is also relevant in the context of digitalization. The collection of performance data from the analyzer to assess the state of the control system is only one component. The continuous monitoring of relevant system components enables conclusions to be drawn about any necessary maintenance work, which ideally should be carried out at regular intervals. The question arises as to how the collected data is interpreted and how quickly it is necessary to intervene. Software care packages help to test the software according to the manufacturer’s specifications and to perform data backups and software maintenance.

«Remote support is particularly important in times when you cannot always be on site.»

In real emergency situations in which rapid error analysis is required, manufacturers can easily support the operator remotely using remote maintenance solutions. The system availability is increased, expensive failures and downtimes are avoided, and the optimal performance of the analyzer is ensured.

Read what our customers have to say!

We have supported customers even in the most unlikely of places⁠—from the production floor to the desert and even on active ships!

Post written by Dr. Kerstin Dreblow, Product Manager Wet Chemical Process Analyzers, Deutsche Metrohm Prozessanalytik (Germany).

The role of process automation in an interconnected world – Part 2


The following scenario sounds like a fictional dystopian narrative, but it is a lived reality. A catastrophe, much like the current COVID-19 crisis, is dramatically impacting society. Normality as we knew it has suddenly changed: streets are deserted, shops are closed, and manufacturing is reduced or at a complete standstill. But what happens to safety-related systems, e.g., in the pharmaceutical or food industry, which must not stand still and are designed in such a way that they cannot fail? How can the risk of breakdowns and downtimes be minimized? And in the event of failure, how can the damage to people and the environment be limited, or operations maintained at all?

Digitalization: curse or blessing? 

When considering process engineering plants, one is repeatedly confronted with buzzwords such as «Industry 4.0», «digitalization», «digital transformation», «IoT», «smart manufacturing», etc. The topic is controversial and often framed as an either-or dichotomy: humans versus machines, with all the fears that choice implies. No matter what name you give to digitalization, all of these terms have one thing in common: intelligently networking separate locations and processes in industrial production using modern information and communication technologies. Process automation is a small but important building block that needs attention. Data can only be consistently recorded, forwarded, and reproduced with robust and reliable measurement technology.

For some time now, topics including sensors, automation, and process control have been discussed in the process industry (PAT) with the aim of reducing downtimes and optimizing the use of resources. However, it is not just about the pure collection of data, but also about its meaningful interpretation and integration into the QM system. Only consistent assessment and evaluation can lead to a significant increase in efficiency and optimization.

This represents a real opportunity to maintain production processes with reduced manpower in times of crisis. Relevant analysis results are transferred to the process fully automatically. This enables high availability and rapid intervention, as well as assurance of high quality requirements for both process security and process optimization. In addition, online monitoring of all system components and preventive maintenance activities effectively counteract failures.

Digitally networked production plants

Even though digitalization is relatively well-established in the private sector under the catchphrase «smart home», in many production areas the topic is still very much in its infancy. Networking different processes intelligently places high demands on the equipment involved. Process analysis systems make a major contribution to the analysis of critical parameters, and forwarding their data to the control room is crucial for process control and optimization. To reflect the state of the art, process analysis systems must meet the following requirements:

Transparent communication / operational maintenance

Processes must be continuously monitored and plant safety guaranteed. Downtimes are associated with high expenditure and costs and therefore cannot be tolerated. In order to effectively minimize the risk of failures, device-specific diagnostic data must be continuously transmitted as part of the self-check, or failures must be prevented with the help of preventive maintenance activities. Ideally, the response must be quick, and faults remedied without having to shut down the system (even remotely).

Future-proof automation

If you consider how many years (or even decades) process plants are in operation, it is self-explanatory that extensions and optimizations must be possible within their lifetime. This includes both the implementation of state-of-the-art analyzers and the communication between the systems.

Redundant systems

In order to prevent faults from endangering the entire system operation, redundancy concepts are generally used.

Practical example: Smart concepts for fermentation processes

Fermenters or bioreactors are used in a wide variety of industries to cultivate microorganisms or cells. Bacteria, yeasts, mammalian cells, or their components serve as important active ingredients in pharmaceuticals or as basic chemicals in the chemical industry. In addition, bioreactors assist degradation processes in wastewater treatment, and brewing kettles in beer production can also be considered a kind of bioreactor. In order to meet the high requirements for product yield and to maintain the ideal conditions for proper metabolism, critical parameters have to be checked closely, and often.

The conditions must be optimally adapted to those of the organism’s natural habitat. In addition to the pH value and temperature, this also includes the composition of the matrix, the turbidity, and the content of O2 and CO2. Creating optimal environmental conditions is crucial for successful cultivation of the organisms. Even the smallest deviations can have devastating consequences for their survival and cause significant economic damage.

As a rule, many of the parameters mentioned are measured directly in the medium using inline probes and sensors. However, their application has a major disadvantage: mechanical loads (e.g., glass breakage) or solids can lead to rapid material wear and contaminated batches, resulting in high operational costs. With the advent of smart technologies, online analysis systems and maintenance-free sensors have become indispensable for ensuring the survival of the microorganisms. In this way, reliable measured values are delivered around the clock and transferred directly to all common process control systems or integrated into existing QM systems.

Rather than manual offline measurement in a separate laboratory, the analysis is moved to an external measuring cell. The sample stream is fed to the analysis system by suction with peristaltic pumps or bypass lines. Online analysis not only enables the possibility of 24/7 operation and thus a close control of the critical parameters, but also the combination of different analysis methods and the determination of further parameters. This means that several parameters as well as multiple measuring points can be monitored with one system.

The heart of the analysis systems is the intelligent sensor technology, whose robustness is crucial for the reliable generation of measured values.

pH measurement as a vital key parameter in bioreactors

Knowledge of the exact pH value is crucial for the product yield, especially in fermentation processes. The activity of the organism and its metabolism are directly dependent on the pH value. The ideal conditions for optimal cell growth and proper metabolism are within a limited pH tolerance range, which must be continuously monitored and adjusted with the help of highly accurate sensors.

However, the exact measurement of the pH value is subject to a number of chemical, physical, and mechanical influencing factors, which means that the determination with conventional inline sensors is often too imprecise and can lead to expensive failures for users. For example, compliance with hygiene measures is of fundamental importance in the pharmaceutical and food industries. Pipelines in the production are cleaned with solutions at elevated temperatures. Fixed sensors that are exposed to these solutions see detrimental effects: significantly reduced lifespan, sensitivity, and accuracy.

Intelligent and maintenance-free pH electrodes

Glass electrodes are most commonly used for pH measurement because they are still by far the most resistant, versatile, and reliable solution. However, in many cases changes due to aging processes or contamination in the diaphragm remain undetected. Glass breakage also poses a high risk, because it may result in the entire production batch being discarded.

The aging of the pH-sensitive glass relates to the change in the hydration layer, which becomes thicker over time. The consequence is a sluggish response, drift effects, or a decrease in slope. In this case, calibration or adjustment with suitable buffer solutions is necessary. Especially if no empirical values are available, short calibration intervals are recommended, which significantly increases the maintenance effort.
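A typical health check derived from such a calibration is the electrode slope: at 25 °C the theoretical Nernst slope is 59.16 mV per pH unit, and an aged electrode falls noticeably below it. The sketch below illustrates such a check; the two-buffer setup and the 95% acceptance limit are illustrative assumptions, since acceptance criteria vary by SOP:

```python
def electrode_ok(e_ph4_mv, e_ph7_mv, min_slope_percent=95.0):
    """Check an electrode's slope from a two-point calibration with
    pH 4 and pH 7 buffers. Theoretical Nernst slope at 25 degC
    is 59.16 mV per pH unit."""
    slope = abs(e_ph4_mv - e_ph7_mv) / 3.0  # 3 pH units between buffers
    return 100.0 * slope / 59.16 >= min_slope_percent

print(electrode_ok(171.0, 0.0))  # 57.0 mV/pH is ~96% of theoretical -> True
print(electrode_ok(150.0, 0.0))  # 50.0 mV/pH is ~85% -> False
```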

With online process analyzers, the measurement is transferred from the process to an external measuring cell. This enables a long-lasting pH measurement to be achieved with an accuracy that is not possible with classic inline probes.

In many process solutions, measurement with process sensors takes place directly in the medium. This inevitably means that the calibration and maintenance of electrodes is particularly challenging in places that are difficult to access, leading to expensive maintenance work and downtimes. Regular calibration of the electrodes is recommended, especially when used under extreme conditions or on the edge of the defined specifications.

If the measurement is carried out with online process analyzers, then calibration, adjustment and cleaning are carried out fully automatically. The system continuously monitors the condition of the electrode. Between measurements, the electrode is immersed in a membrane-friendly storage solution that avoids drying out, and at the same time prevents the hydration layer from swelling further as it does not contain alkali ions. The electrode is always ready for use and does not have to be removed from the process for maintenance work.

The 2026 pH Analyzer from Metrohm Process Analytics is a fully automatic analysis system, e.g., for determining the pH value as an individual process parameter.

Maintenance and digitalization

In addition to the automatic monitoring of critical process parameters, transparent communication between the system and the analyzer also plays a decisive role in terms of maintenance measures. The collection of vital data from the analyzer to assess the state of the system is only one component. The continuous monitoring of relevant system components enables conclusions to be drawn about any necessary maintenance work. For example, routine checks on the condition of the electrodes (slope / zero point check, possibly automatic calibration) are carried out regularly during the analysis process. Based on the data, calibration and cleaning processes are performed fully automatically, which allow robust measurement even at measuring points that are difficult to access or in aggressive process media. This means that the operator is outside the danger zone, which contributes to increased safety.


The linking of production processes with digital technology holds particularly large potential and contributes to the economic security of companies. In addition, the pressure on companies to face the demands of digitalization in production is growing steadily. In the area of fermentation processes, for example, the survival of the microorganisms is ensured by closely monitoring relevant parameters. Intelligent systems increase the degree of automation and can make the process more efficient along the entire value chain.

Find out in the next installment how functional safety concepts help you act before a worst-case scenario comes true, in which errors occur and systems fail.

Want to learn more about the history of process analysis technology at Metrohm? Check out our previous blog posts:


Post written by Dr. Kerstin Dreblow, Product Manager Wet Chemical Process Analyzers, Deutsche Metrohm Prozessanalytik (Germany).

Moisture Analysis – Karl Fischer Titration, NIRS, or both?


In addition to the analysis of the pH value, weighing, and acid-base titration, measurement of water content is one of the most common determinations in laboratories worldwide. Moisture determination is important for nearly every industry, e.g., for lubricants, food and feed, and pharmaceuticals.

Figure 1. Water drops in a spider web

For lubricants, the water concentration is important to know because excess moisture accelerates wear and tear of the machinery. For food and feed, moisture content must be within a narrow range so that the food neither tastes dry or stale nor provides a breeding ground for bacteria and fungi, which would result in spoilage. For pharmaceuticals, the water content in solid dosage forms (tablets) and lyophilized products is monitored closely; for the latter, regulations state that the moisture content must be below 2%.

Karl Fischer Titration

Karl Fischer (KF) titration for water determination was introduced back in the 1930s, and to this day it remains one of the most tried and trusted methods. It is a fast and highly selective method, which means that water, and only water, is determined. KF titration is based on the following two reactions.

In the first reaction, methanol and sulfur dioxide react to form the respective ester, which is neutralized by a base (RN):

CH3OH + SO2 + RN → [RNH]SO3CH3

Upon addition of iodine, the ester is oxidized to the sulfate species in a water-consuming reaction:

[RNH]SO3CH3 + I2 + H2O + 2 RN → [RNH]SO4CH3 + 2 [RNH]I

The reaction finishes when no water is left.

Figure 2. Manual sample injection for volumetric KF Titration

KF titration can be used for the determination of the water content in all sample types: liquids, solids, slurries, or even gases. For concentrations between 0.1% and 100%, volumetric KF titration is the method of choice, whereas for lower moisture content between 0.001% and 1%, coulometric KF titration is recommended.
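These ranges can be turned into a simple decision rule. Note that the volumetric and coulometric ranges overlap between 0.1% and 1%, where either technique works; the sketch below (an illustrative assumption, not an official guideline) prefers coulometry in the overlap region:

```python
def suggest_kf_technique(water_percent):
    """Suggest a KF technique from the expected water content, following
    the ranges in the text (volumetric: 0.1-100%, coulometric: 0.001-1%)."""
    if water_percent < 0.001:
        return "below the typical coulometric range"
    if water_percent <= 1.0:
        return "coulometric"  # also covers the 0.1-1% overlap
    return "volumetric"

print(suggest_kf_technique(0.05))  # -> coulometric
print(suggest_kf_technique(12.0))  # -> volumetric
```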

Depending on the sample type, its water content, and its solubility in the KF reagents, the sample can either be added directly to the titration vessel or may first need to be dissolved in a suitable solvent. Suitable solvents are those which do not react with the KF reagents — therefore aldehydes and ketones are ruled out. If the sample is dissolved in a solvent, a blank correction with the pure solvent also needs to be performed. For the measurement, the sample is injected directly into the titration vessel using a syringe and needle (Fig. 2). The endpoint is detected by a polarized double Pt pin electrode, and from this the water concentration is directly calculated.
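The calculation behind a volumetric determination is straightforward: the consumed titrant volume (blank-corrected when a solvent is used) multiplied by the titer gives the mass of water. A sketch with hypothetical example values:

```python
def kf_water_percent(v_titrant_ml, titer_mg_per_ml, sample_mass_mg, v_blank_ml=0.0):
    """Water content in % from a volumetric KF titration.
    titer = water equivalent of the titrant in mg H2O per mL;
    v_blank_ml = titrant consumed by the pure solvent (blank)."""
    water_mg = (v_titrant_ml - v_blank_ml) * titer_mg_per_ml
    return 100.0 * water_mg / sample_mass_mg

# Hypothetical values: 2.50 mL consumed, titer 5 mg/mL, 1 g sample, 0.10 mL blank
print(round(kf_water_percent(2.50, 5.0, 1000.0, v_blank_ml=0.10), 2))  # 1.2 %
```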

Insoluble or hygroscopic samples can be analyzed using the gas extraction technique with a KF oven. Here, the sample is sealed in a small vial, and the water is evaporated by heating and subsequently carried to the titration cell.

Figure 3. Fully automated KF Titration with the Metrohm 874 KF Oven Sample Processor

For more information, download our free Application Bulletins: AB-077 for volumetric Karl Fischer titration and AB-137 for coulometric Karl Fischer analysis.

If you would like some deeper insight, download our free monograph: “Water determination by Karl Fischer Titration”. 

Near-infrared spectroscopy

Near-infrared spectroscopy (NIRS) is a technique that has been used for myriad applications in the areas of food and feed, polymers, and textiles since the 1980s. A decade later, other segments began using this technique, such as pharmaceutical, personal care, and petroleum products.

NIRS detects overtones and combination bands of molecular vibrations. Among the typical vibrations in organic molecules for functional groups such as -CH, -NH, -SH, and -OH, it is the -OH moiety which is an especially strong near infrared absorber. That is also the reason why moisture quantification is one of the key applications of NIR spectroscopy.

For a further explanation, read our previous blog entry on this subject: Benefits of NIR spectroscopy: Part 2.

NIR spectroscopy is used for the quantification of water in solids, liquids, and slurries. The detection limit for moisture in solids is about 0.1%, whereas for liquids it is in the range of 0.02% (200 mg/L). However, in special cases (e.g., water in THF), moisture detection limits of 40–50 mg/L have been achieved.

This technique does not require any sample preparation, which means that samples can be used as-is. Solid samples are measured in high quality disposable sample vials, whereas liquids are measured in high quality disposable cuvettes. Figure 4 displays how the different samples are positioned on the analyzer for a measurement.

Detailed information about the NIRS technique has been described in our previous blog article: Benefits of NIR spectroscopy: Part 1.

Figure 4. Solid (left) and liquid (right) sample positioning for NIR measurements

NIRS is a secondary technique, meaning it can only be used for routine analysis for moisture quantification after a prediction model has been developed. This can be understood by an analogy to HPLC, for which measuring standards to create a calibration curve is among the initial steps. The same applies to NIRS: first, spectra with known moisture content must be measured and then a prediction model is created.

The development of prediction models has been described in detail in our previous blog article: Benefits of NIR spectroscopy: Part 3.

The schematic outline is shown in Figure 5.

Figure 5. Workflow for NIR Method implementation for moisture analysis

For creation of the calibration set, around 30–50 samples need to be measured with both NIRS and KF titration, and the values obtained from KF titration must be linked to the NIR spectra. The next steps are model development and validation (steps 2 and 3 in Figure 5), which are quite straightforward for moisture analysis. Water is a strong NIR absorber, and its peaks are always around 1900–2000 nm (combination band) and 1400–1550 nm (first overtone). This is shown in Figure 6 below.
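Conceptually, the model development in these steps is a regression of the KF reference values onto spectral features. The NumPy-only sketch below uses simulated data and a single water-band absorbance with ordinary least squares as a stand-in; real models use full spectra and chemometric methods such as PLS:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40                                       # calibration samples
moisture = rng.uniform(0.5, 5.0, n)          # KF reference values (% water)
# Simulated absorbance of the ~1900-2000 nm water band (+ measurement noise)
band = 0.05 * moisture + rng.normal(0.0, 0.002, n)

# Model development: fit absorbance -> moisture (least squares as a PLS stand-in)
A = np.column_stack([band, np.ones(n)])
(slope, intercept), *_ = np.linalg.lstsq(A, moisture, rcond=None)

predicted = slope * band + intercept
rmse = float(np.sqrt(np.mean((predicted - moisture) ** 2)))
print(f"calibration RMSE: {rmse:.3f} % water")
```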

Figure 6. NIR Spectra of moisturizing creams, showing the absorptions related to H2O at 1400–1550 nm and 1900–2000 nm

After creation and validation of the prediction model, near-infrared spectroscopy can be used for routine moisture determination of that substance. The results for moisture content will be obtained within 1 minute, without any sample preparation or use of chemicals. Also, the analyst does not need to be a chemist, as all they need to do is place a sample on the instrument and press start.

You can find even more information about moisture determination by near-infrared spectroscopy in polyamides, caprolactam, lyophilized products, fertilizers, lubricants, and ethanol/hydrocarbon blends below by downloading our free Application Notes.

Your choice for moisture measurements: KF Titration, NIRS, or both!

As summarized in Table 1, KF Titration and NIR Spectroscopy each have their advantages. KF Titration is a versatile method with a low level of detection. Its major advantage is that it will always work, no matter if you have a sample type that you measure regularly or whether it is a sample type that you encounter for the first time.

Table 1. Overview of characteristics of moisture determination via titration and NIR spectroscopy

NIR spectroscopy requires a method development process, meaning it is not suitable for sample types that always vary (e.g., different types of tablets, different types of oil). NIRS however is a very good method for sample types that are always identical, for example for moisture content in lyophilized products or for moisture content in chemicals, such as fertilizers.

For the implementation of a NIR moisture method, it is required that samples are measured with KF titration as the primary method for the model development. In addition, during the routine use of a NIR method, it is important to confirm once in a while (e.g., every 50th or every 100th sample) with KF Titration that the NIR model is still robust, and to ensure that the error has not increased. If a change is noticed, extra samples need to be added to the prediction model to cover the observed sample variation.
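This periodic cross-check can be formalized as a running comparison of NIR predictions against the KF reference results, flagging the model once the error grows past the limit established during validation. A minimal sketch; the limit value below is an illustrative assumption:

```python
import math

def model_still_valid(nir_predictions, kf_references, rmse_limit):
    """Compare NIR predictions with periodic KF check measurements of the
    same samples; return False once the RMSE exceeds the validation limit."""
    errors = [(p - r) ** 2 for p, r in zip(nir_predictions, kf_references)]
    rmse = math.sqrt(sum(errors) / len(errors))
    return rmse <= rmse_limit

print(model_still_valid([1.9, 2.1, 2.4], [2.0, 2.0, 2.0], rmse_limit=0.3))  # True
```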

In conclusion, both KF Titration and NIR spectroscopy are powerful techniques for measuring moisture in an array of samples. Which technique to use depends on the application and the individual preference of the user.

For more information

Download our free whitepaper:

Karl Fischer titration and near-infrared spectroscopy in perfect synergy

Post written by Dr. Dave van Staveren (Head of Competence Center Spectroscopy), Dr. Christian Haider (Head of Competence Center Titration), and Iris Kalkman (Product Specialist Titration) at Metrohm International Headquarters, Herisau, Switzerland.

Benefits of NIR spectroscopy: Part 4


This blog post is part of the series “NIR spectroscopy: helping you save time and money”.

How pre-calibrations assist quick implementation of NIRS

This is part four in our series about NIR spectroscopy. In this installment, we outline the cases in which NIRS can be implemented directly in your laboratory without any method development, meaning your instrument is immediately operational and delivers accurate results right from day one. At the end of this blog post, we provide an overview of several applications for which this is possible.

In our last installment (Part 3: How to implement NIRS in your laboratory workflow), we showed how a newly received NIR spectrometer can become operational with a real application example. This process is depicted here in Figure 1.

The majority of work consists of creating a calibration set. Approximately 40–50 samples across the expected parameter range must be measured by a primary method, and resulting values need to be linked to the NIR spectra recorded for the same samples (Fig. 1: Step 1).

Thereafter, a prediction model needs to be created by visually identifying the spectral changes and correlating these changes to the values obtained from the primary method (Fig. 1: Step 2). After validation by the software, the prediction model is available for use in routine measurements.
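Conceptually, step 2 correlates spectral features with primary-method values through a regression model. The sketch below illustrates the idea with a plain least-squares fit on synthetic numbers; real chemometric software uses more sophisticated methods such as PLS regression, and every value here is invented for illustration.

```python
import numpy as np

# Sketch only: a linear model linking absorbance features to reference
# values, standing in for the PLS models used in chemometric software.
# The "spectra" and reference values below are synthetic illustrations.

# Four spectra, each reduced to three absorbance features
X = np.array([
    [0.10, 0.20, 0.30],
    [0.15, 0.25, 0.35],
    [0.20, 0.30, 0.40],
    [0.25, 0.35, 0.45],
])
y = np.array([1.0, 1.5, 2.0, 2.5])  # values from the primary method

# Fit coefficients (with intercept) by least squares
A = np.hstack([X, np.ones((X.shape[0], 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict(spectrum):
    """Predict the parameter value for a new spectrum."""
    return float(np.dot(spectrum, coef[:-1]) + coef[-1])
```

Once such a model is validated, new spectra can be predicted in seconds, which is the basis of the routine measurements described above.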

Figure 1. Workflow for NIR method implementation.

The process described above requires some effort and is of significant duration because in many cases, the samples spanning the concentration range first need to be produced and collected. Therefore, it would be very beneficial if steps 1 and 2 could be omitted so that the analyzer can be used immediately from day 1.

This is not just wishful thinking, but rather the reality for specific applications with the use of pre-calibrations.

What are pre-calibrations?

Pre-calibrations are prediction models that can be deployed immediately, and provide satisfying results right from the beginning. These models are based on a large number (between 100–600) of real product spectra covering a wide parameter range.

This means that steps 1 and 2 (Figure 1) are not required; instead, the pre-calibration prediction model can be used directly for routine analysis, as illustrated in Figure 2.

Figure 2. Workflow for NIR method implementation with a pre-calibration.

How do pre-calibrations work?

Each pre-calibration comes as a digital file that must be imported into the Metrohm Vision Air software. After installation of a new instrument (including the Vision Air software), a method needs to be created containing measurement-specific settings, such as measurement temperature and which sample vessel is used, followed by importing the pre-calibration and linking it to the method.

That’s all that is needed!

The instrument is now ready to deliver reliable results for routine measurements. It is advised to measure a few control samples of known values to confirm that the pre-calibration provides acceptable results.

Optimizing the pre-calibration

In some cases, the results obtained on control samples with the pre-calibration are not completely acceptable. There could be various reasons for this and in general, three different cases can be distinguished: 

  1. The results obtained with the control samples deviate only slightly from the expected values.
  2. The results are acceptable, but the standard error is somewhat on the larger side.
  3. The results deviate significantly.

Below we will go through each of these cases and provide recommendations:

    Case 1:

    The results obtained with the control samples deviate only slightly from the expected values.

    If the values obtained from the control samples deviate only slightly, a slope-bias correction is the recommended solution. The process is illustrated in Figure 3. In the top diagram, you see that the values from the pre-calibration deviate consistently over the whole range. In this situation, it is possible to perform a slope-bias correction on the prediction model in the Vision Air software. After this has been done, the results fit very well (Fig. 3, bottom).

    Figure 3. Top: correlation between measured control samples (orange dots) and the pre-calibration prediction model (blue line). Bottom: correlation between the values after slope-bias correction (orange dots) and the pre-calibration prediction model (blue line).
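The correction in Case 1 amounts to fitting a straight line between predicted and reference values and applying it to future predictions. Below is a minimal pure-Python sketch with invented numbers; it is our own illustration, not the Vision Air algorithm.

```python
# Sketch: slope-bias correction of pre-calibration predictions against
# control-sample reference values (least-squares fit of
# reference = slope * predicted + bias). All values are illustrative.

def slope_bias(predicted, reference):
    n = len(predicted)
    mean_p = sum(predicted) / n
    mean_r = sum(reference) / n
    cov = sum((p - mean_p) * (r - mean_r) for p, r in zip(predicted, reference))
    var = sum((p - mean_p) ** 2 for p in predicted)
    slope = cov / var
    bias = mean_r - slope * mean_p
    return slope, bias

def correct(value, slope, bias):
    return slope * value + bias

# Control samples: the pre-calibration consistently reads 0.5 too high
pred = [10.5, 20.5, 30.5]
ref = [10.0, 20.0, 30.0]
s, b = slope_bias(pred, ref)  # here: slope 1.0, bias -0.5
```

Applying `correct()` to subsequent predictions reproduces the consistent shift seen in the top diagram of Figure 3 being removed in the bottom one.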

    Case 2:

    The results are acceptable, but the standard error is somewhat on the larger side.

    In most cases, this behavior is observed if the range of the pre-calibration is much larger than the range that the analyst is interested in.

    Consider, for example, measurement of a value at the lower end of the overall range. The error of the pre-calibration (SECV, the standard error of cross-validation) is calculated over the entire range, and therefore its relative impact is much larger on values at the lower end than on values in the middle of the complete range. This is exemplified in Figure 4 and Table 1.

    Figure 4. Pre-calibration correlation plot of the kappa number (a pulp & paper parameter) over the extended range 0–200 (left), and the smaller range 0–36 (right).
    Table 1. Figures of merit for the different regions of the pre-calibration from Figure 4. Note the much smaller SECV for the range 0–36 compared to the SECV for the full range of 0–200.

    The recommended action in this case is to remove certain ranges of the pre-calibration, leaving in only the range of interest.

    From Table 1, it is clear that the SECV for the whole range (0–200) is much higher than the SECV of the smaller range (0–36). This means that when removing the samples corresponding to the higher ranges from the pre-calibration (leaving only the range of 0–36 in), the resulting modified pre-calibration gives a lower SECV.
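The effect of trimming the range can be illustrated numerically. In this sketch, secv() is a simplified stand-in (a root-mean-square error over paired values, not the full cross-validation procedure), and the kappa-number data are invented for illustration.

```python
import math

# Sketch: why restricting the calibration range lowers the average error.
# secv() is a simplified stand-in for the standard error of cross-validation.

def secv(predicted, reference):
    n = len(predicted)
    return math.sqrt(sum((p - r) ** 2 for p, r in zip(predicted, reference)) / n)

# Illustrative kappa-number data: absolute errors grow with the value
reference = [5, 20, 35, 100, 150, 200]
predicted = [5.5, 20.8, 34.2, 104, 144, 207]

full = secv(predicted, reference)         # whole range, 0-200
low = secv(predicted[:3], reference[:3])  # restricted range, 0-36
```

With these numbers, `low` is far smaller than `full`, mirroring the behavior of the real SECV values in Table 1 when the 0–36 range is kept and the higher ranges are removed.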

    Case 3:

    The results deviate significantly.

    There could be several reasons behind this, so we will select two examples.

    In the first example, consider the possibility that the provided samples for analysis are proprietary. For instance, certain manufacturers produce unique, patented polyols. These proprietary substances are not included among the standard collection of sample spectra in the pre-calibration. Thus, the pre-calibration does not provide acceptable results for such proprietary samples.

    Another example is shown in Figure 5. Here it can be observed that the values from the primary method (blue dots) deviate significantly from the values obtained from the pre-calibration model.

    This example is taken from a real customer case. At first, we were puzzled when checking the results, but the reason became clear after speaking with the customer: they had chosen to measure the primary values (hydroxyl number) via manual titration and not, as recommended, with an automatic titrator from Metrohm.

    Figure 5. Correlation between measured control samples (blue dots) and the pre-calibration model (dotted red line) for the hydroxyl number in polyols. This data is based on a real customer example.

    Therefore, the unsatisfying fit of the control samples is due to the poor accuracy of manual titration and has nothing to do with the quality of the pre-calibration.

    Looking for your pre-calibration?

    Metrohm offers a selection of pre-calibrations for a diverse collection of applications. These are listed in Table 2 together with the most important parameters of the pre-calibration. Click on the links to get more information.

    Metrohm NIRS pre-calibration options

    Pre-calibration Selected Important Parameters
    Polyols Hydroxyl number (ASTM D6342)
    Gasoline RON, MON, anti-knock index, aromatics, benzene, olefins
    Diesel Cetane index, density, flash point
    Jet Fuel Cetane index, density, aromatics
    Palm oil Iodine value, free fatty acids, moisture
    Pulp & Paper Kappa number, density, strength parameters
    Bio-methane Potential (BMP) BMP (of biological waste)
    Polyethylene (PE) Density, intrinsic viscosity
    Polypropylene (PP) Melt Flow Rate
    Polyethylene Terephthalate (PET) Intrinsic viscosity, acid number, and others
    Polyamide (PA 6) Intrinsic viscosity, NH2 and COOH end groups
    Table 2. Overview of available pre-calibrations for the Metrohm Vision Air software.


    Pre-calibrations are prediction models based on a large number of real product spectra. These allow users to skip the initial model development part and make it possible to use the instrument from day one, saving both time and money.

    To learn more about pre-calibrations for selected NIRS applications, come visit our website!

    Post written by Dr. Dave van Staveren, Head of Competence Center Spectroscopy at Metrohm International Headquarters, Herisau, Switzerland.

    To automate or not to automate? Advantages of PAT – Part 1


    I have to admit that the technological world of process analysis seemed foreign to me for a while. When I first heard about process automation, I imagined futuristic robots doing the work, much like in modern science fiction films. Perhaps many people have the same impression.

    There is often a great deal of uncertainty about what the expression «we automate your process» actually means. In this blog series, I want to show you that process analytical technology (PAT) is less complicated than expected and offers several advantages for users.

    What does process analytical technology (PAT) mean? 

    I was once told in conversation:

    «Process analytics is for everyone who believes that they don’t need it.»

    There is definitely truth in this statement, and it certainly shows the abundance of application possibilities. At the same time, it should be considered that in the future, users of process analytical technology will not only invest in conventional measurement technologies (e.g., direct measurement, TDLAS, GC), but also increasingly in the determination of substance properties and material compositions.

    Pollutants (gases and aerosols) in ambient air are especially harmful to human health. These substances can be monitored continuously and reliably by process analyzers.

    PAT serves to analyze, optimize, and ultimately control processes and their critical parameters. This control makes a major contribution to quality assurance and the overall process reliability at the manufacturer. Thinking back to some well-known chemical disasters (e.g. Minamata, Toulouse, or Tianjin) in which poisonous substances were released, causing immense damage to people and the environment, the importance regarding regular monitoring of critical parameters becomes abundantly clear. The list of analytes that can and must be monitored is long, ranging from contamination in wastewater due to municipal or industrial wastewater treatment plants, to pharmaceutical agents, to gases and aerosols in the ambient air.

    From Lab to Process

    Considering the history of manufacturing and other industrial processes, it is clear that the ultimate goal is to increase throughput in ever-shorter timeframes, with an eye on safety measures and minimization of costs where possible. Independence through automation and fast, reliable data transfer is a high priority.

    In order to make the process economically viable along the entire value chain, the resulting products should be manufactured at the highest quality in a short time and with minimal raw material and energy usage. For 24/7 operations in particular, knowledge of the composition of the starting materials and intermediate products (or rather, any impurities) is essential for optimal process control and reliability.

    How can reliable process monitoring be ensured around the clock? Very few companies run in-house laboratories with a true three-shift operation; most send their samples to external laboratories. Additionally, samples are sometimes taken at long time intervals. This carries various risks.

    On one hand, the time lost between the sampling event and receiving the results from the analysis is enormous. It is only possible to react to fluctuations and deviations from target concentrations or limit values with a certain delay. On the other hand, working and environmental conditions are not comparable and can lead to changes in the sample. Oxidation, pressure or temperature changes, introduction of moisture, and many other factors can change a sample’s original properties during transport, waiting periods, and manual laboratory analysis.

    Example trend graph comparing process deviations mitigated by manual control (grey) and fully automatic process control (orange) via PAT.

    Process analyzers: automated operation around the clock

    Analyses that are usually carried out manually can be automated by using industrial process analyzers. Samples are automatically taken from critical points in the production process and processed further. The information obtained is used to control the process without delay, as the data can be transferred immediately to a central computing system at the plant. Automated analysis right at the sampling point increases the accuracy and reproducibility of the data.

    In practice, this entails rerouting a partial stream from the process in question to be fed to the analyzer by means of valves, peristaltic pumps, or bypass lines. Each sample is therefore fresh and correlates to the current process conditions. Probes can also be integrated directly into the process for continuous inline measurement.

    The analysis is performed using common titration, spectroscopy, ion chromatography, or electrochemical methods known from the laboratory, which are optimally integrated into the process analyzer for each individual application requirement. The methods can be used in combination, allowing several measuring points to be monitored in parallel with one system. Thanks to the process analyzers that are specifically configured and expandable for the application, the optimal conditions for stable process control are obtained.

    Spectroscopic methods have become particularly well-established in recent years for process analysis and optimization purposes. In contrast to conventional analysis methods, near-infrared (NIR) spectroscopy shows a number of advantages, especially due to the analysis speed. Results can be acquired within a few seconds and transferred directly to the chemical control system so that production processes can be optimized quickly and reliably. Samples are analyzed in situ, completely without the use of chemicals, in a non-destructive manner, which means further added value for process safety.

    The many advantages of PAT

    Automation in the context of process analysis technology does not always have anything to do with futuristic robots. Instead, PAT offers companies a number of advantages:


    • Fully automatic, 24/7 monitoring of the process
    • Timely and automatic feedback of the analysis results to the system control for automatic process readjustment
    • Reduction in fluctuations of product quality
    • Increased process understanding to run production more efficiently
    • Independence from your own laboratory (or contract lab)
    • Complete digital traceability of analysis results
    • Total solution concepts including sample preconditioning, saving time and increasing safety

    What’s next?

    In our next post in this series, you will discover the role process analysis technology plays in digital transformation with regard to «Industry 4.0».

    Want to learn more about the history of process analysis technology at Metrohm? Check out our previous blog post:

    Read what our customers have to say!

    We have supported customers even in the most unlikely of places: from the production floor to the desert and even on active ships!

    Post written by Dr. Kerstin Dreblow, Product Manager Wet Chemical Process Analyzers, Deutsche Metrohm Prozessanalytik (Germany), with contributions from Dr. Alyson Lanciki, Scientific Editor at Metrohm International Headquarters (Switzerland).