Upgrade your lab skills online


At the moment, times are strange: many people are staying home to keep each other safe and healthy. Some of you are still able to work in your offices or laboratories, while others are trying to find constructive ways to stay focused and connected.

During this time, one way to keep your skills sharp, or even to learn new ones, is by watching informative webinars. Level up in your laboratory expertise!

Below, we have a selection of some excellent free webinars from Metrohm to keep you on top of your game – no matter which technique you use. Application examples, practical information on handling, care, and troubleshooting, and more – our webinars provide very useful information dealing with various techniques and industries.

We offer several on-demand webinars about subjects such as the fundamentals of titration, troubleshooting, and the synergy between titration and near-infrared spectroscopy (also see our related blog post on this topic).

This segment of titration (Karl Fischer titration) is especially important for accurate moisture determinations.

On-demand webinars available include fundamentals and troubleshooting, as well as others for more in-depth knowledge.

NIRS is a fast, nondestructive, reagent-free technique, used in several markets (e.g., pharmaceuticals, petrochemicals, polymers, and personal care).

We have many interesting webinars focused not only on these industries, but also on quality control, process analytical technology (PAT), and the combination of NIRS with the primary method of titration (also see our related blog post on this topic).

Raman spectroscopy is a handy tool for quick, reagent-free identification of raw materials, illicit substances, and hazardous chemicals – even from a distance.

Watch this webinar to learn how accurate, reliable, and portable screening tools can help to detect substandard and falsified medical products.

Aside from providing information about how Metrohm ion chromatography (IC) can be used for multiple applications in different markets, we also offer free webinars about sample preparation and automatic calibration to help save you valuable time when you’re back in the lab!

The measurement of pH is one of the most commonly performed determinations in chemical analysis. Why not learn some of the basics, or perhaps some troubleshooting techniques with our free webinars to impress your colleagues? If you are looking to avoid the most common mistakes in pH measurement, be sure to check out our blog post as well.

Our electrochemistry webinars cover a variety of topics to enhance your knowledge in this area. From corrosion analysis to electrocatalysis research, we have you covered.

If you’re more interested in screen-printed electrodes (SPEs) and biosensing applications, we have something for you, too!

I hope you find these webinars informative. If you’re interested in further educational opportunities from Metrohm, check out the Metrohm Academy. Stay safe, stay healthy, and always keep learning!

Post written by Dr. Alyson Lanciki, Scientific Editor at Metrohm International Headquarters, Herisau, Switzerland.

Benefits of NIR spectroscopy: Part 4


This blog post is part of the series “NIR spectroscopy: helping you save time and money”.

How pre-calibrations assist quick implementation of NIRS

This is part four in our series about NIR spectroscopy. In this installment, we outline the cases in which NIRS can be implemented directly in your laboratory without any method development. For these applications, your instrument is immediately operational and delivers accurate results right from day one. At the end of this blog post, we provide an overview of several applications for which immediate results are possible.

The following topics will be covered in the rest of this post (click to jump to the topic):

Introduction

In our last installment (Part 3: How to implement NIRS in your laboratory workflow), we showed how a newly received NIR spectrometer can become operational with a real application example. This process is depicted here in Figure 1.

The majority of work consists of creating a calibration set. Approximately 40–50 samples across the expected parameter range must be measured by a primary method, and resulting values need to be linked to the NIR spectra recorded for the same samples (Fig. 1: Step 1).

Thereafter, a prediction model needs to be created by visually identifying the spectral changes and correlating them with the values obtained from the primary method (Fig. 1: Step 2). After validation by the software, the prediction model is available for use in routine measurements.

Figure 1. Workflow for NIR method implementation.
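A minimal sketch of steps 1 and 2 in Python may help make this concrete. It uses synthetic spectra and ordinary least squares on a hand-picked spectral region; the chemometrics in the actual Vision Air software are more sophisticated, and all names and numbers below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1 (simulated): 45 samples spanning 0.35-1.5 % moisture, each with a
# synthetic NIR spectrum whose "water band" scales with the reference value.
n = 45
moisture = rng.uniform(0.35, 1.5, n)                  # primary-method values (%)
channels = np.arange(100)
band = np.exp(-0.5 * ((channels - 60) / 8) ** 2)      # synthetic absorption band
spectra = np.outer(moisture, band) + rng.normal(0, 0.01, (n, 100))

# Step 2: correlate the visually identified region (channels 55-65) with the
# reference values using ordinary least squares.
X = np.column_stack([np.ones(n), spectra[:, 55:66]])
coef, *_ = np.linalg.lstsq(X, moisture, rcond=None)

predicted = X @ coef
sec = np.sqrt(np.sum((predicted - moisture) ** 2) / (n - 1))
print(f"SEC = {sec:.4f} %")
```

Once such a model is validated against held-out samples, new spectra can be predicted with a single matrix product.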

The process described above requires some effort and is of significant duration because in many cases, the samples spanning the concentration range first need to be produced and collected. Therefore, it would be very beneficial if steps 1 and 2 could be omitted so that the analyzer can be used immediately from day 1.

This is not just wishful thinking, but rather the reality for specific applications with the use of pre-calibrations.

What are pre-calibrations?

Pre-calibrations are prediction models that can be deployed immediately and provide satisfactory results right from the beginning. These models are based on a large number (100–600) of real product spectra covering a wide parameter range.

This means that steps 1 and 2 (Figure 1) are not required; instead, the pre-calibration prediction model can be used directly for routine analysis, as illustrated in Figure 2.

Figure 2. Workflow for NIR method implementation with a pre-calibration.

How do pre-calibrations work?

Each pre-calibration comes as a digital file that must be imported into the Metrohm Vision Air software. After installation of a new instrument (including the Vision Air software), a method needs to be created containing measurement-specific settings, such as measurement temperature and which sample vessel is used, followed by importing the pre-calibration and linking it to the method.

That’s all that is needed!

The instrument is now ready to deliver reliable results for routine measurements. It is advised to measure a few control samples of known values to confirm that the pre-calibration provides acceptable results.

Optimizing the pre-calibration

In some cases, the results obtained on control samples with the pre-calibration are not completely acceptable. There could be various reasons for this and in general, three different cases can be distinguished: 

  1. The results obtained with the control samples deviate only slightly from the expected values.
  2. The results are acceptable, but the standard error is somewhat on the larger side.
  3. The results deviate significantly.

Below we will go through each of these cases and provide recommendations:

    Case 1:

    The results obtained with the control samples deviate only slightly from the expected values.

    If the value obtained from the control samples deviates only slightly, a slope-bias correction is the recommended solution. The process is illustrated in Figure 3. In the top diagram, you see that the values from the pre-calibration deviate consistently over the whole range. In this situation, it is possible to perform a slope-bias correction on the measured model in the Vision Air software. After this has been done, the results fit very well (Fig. 3 – bottom).

    Figure 3. Top: correlation between measured control samples (orange dots) and the pre-calibration prediction model (blue line). Bottom: correlation between the values after slope-bias correction (orange dots) and the pre-calibration prediction model (blue line).
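In code, a slope-bias correction amounts to fitting a straight line through the predicted-versus-reference points and inverting it. The control sample values below are invented; this is a sketch of the idea, not the Vision Air implementation.

```python
import numpy as np

# Hypothetical control samples: the pre-calibration predictions deviate
# consistently from the reference values (slope 1.1, bias 0.5).
reference = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
predicted = 1.1 * reference + 0.5

# Fit slope and bias, then invert the relation to correct the predictions.
slope, bias = np.polyfit(reference, predicted, 1)
corrected = (predicted - bias) / slope

print(corrected)  # matches the reference values
```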

    Case 2:

    The results are acceptable, but the standard error is somewhat on the larger side.

    In most cases, this behavior is observed if the range of the pre-calibration is much larger than the range that the analyst is interested in.

    Consider, for example, a measurement at the lower end of the overall range. The error of the pre-calibration is calculated over the entire range, and therefore the impact of the average error (SECV = standard error of cross validation) is much larger on values at the lower end than on values in the middle of the complete range. This is exemplified in Figure 4 and Table 1.

    Figure 4. Pre-calibration correlation plot of the kappa number (a pulp & paper parameter) over the extended range 0–200 (left), and the smaller range 0–36 (right).
    Table 1. Figures of merit for the different regions of the pre-calibration from Figure 4. Note the much smaller SECV for the range 0–36 compared to the SECV for the full range of 0–200.

    The recommended action in this case is to remove certain ranges of the pre-calibration, leaving in only the range of interest.

    From Table 1, it is clear that the SECV for the whole range (0–200) is much higher than the SECV of the smaller range (0–36). This means that when removing the samples corresponding to the higher ranges from the pre-calibration (leaving only the range of 0–36 in), the resulting modified pre-calibration gives a lower SECV.
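The effect can be illustrated numerically. With simulated cross-validation residuals that grow with the kappa number, restricting the model to the range of interest lowers the SECV; all values below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated reference values over the full range 0-200, with residuals whose
# spread grows with the kappa number (purely illustrative).
kappa = rng.uniform(0, 200, 300)
residuals = rng.normal(0, 0.02 * np.maximum(kappa, 5))

def secv(res):
    # Standard error of cross validation (root mean squared residual).
    return np.sqrt(np.mean(res ** 2))

secv_full = secv(residuals)                 # range 0-200
secv_low = secv(residuals[kappa <= 36])     # restricted to the range of interest

print(f"SECV 0-200: {secv_full:.2f}, SECV 0-36: {secv_low:.2f}")
```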

    Case 3:

    The results deviate significantly.

    There could be several reasons behind this, so we will select two examples.

    In the first example, consider the possibility that the provided samples for analysis are proprietary. For instance, certain manufacturers produce unique, patented polyols. These proprietary substances are not included among the standard collection of sample spectra in the pre-calibration. Thus, the pre-calibration does not provide acceptable results for such proprietary samples.

    Another example is shown in Figure 5. Here it can be observed that the values from the primary method (blue dots) deviate significantly from the values obtained from the pre-calibration model.

    This example is taken from a real customer case.

    At first, we were a bit puzzled when checking the results, but the reason became clear after speaking with our customer. They had chosen to measure the primary values (hydroxyl number) via manual titration and not, as recommended, with an automatic titrator from Metrohm.

    Figure 5. Correlation between measured control samples (blue dots) and the pre-calibration model (dotted red line) for the hydroxyl number in polyols. This data is based on a real customer example (click to enlarge).

    Therefore, the reason that the fit of the control samples is unsatisfying is due to the poor accuracy of manual titration of the control samples and has nothing to do with the quality of the pre-calibration.

    Looking for your pre-calibration?

    Metrohm offers a selection of pre-calibrations for a diverse collection of applications. These are listed in Table 2 together with the most important parameters of the pre-calibration. Click on the links to get more information.

    Metrohm NIRS pre-calibration options

    Pre-calibration | Selected important parameters
    Polyols | Hydroxyl number (ASTM D6342)
    Gasoline | RON, MON, anti-knock index, aromatics, benzene, olefins
    Diesel | Cetane index, density, flash point
    Jet fuel | Cetane index, density, aromatics
    Palm oil | Iodine value, free fatty acids, moisture
    Pulp & paper | Kappa number, density, strength parameters
    Biomethane potential (BMP) | BMP (of biological waste)
    Polyethylene (PE) | Density, intrinsic viscosity
    Polypropylene (PP) | Melt flow rate
    Polyethylene terephthalate (PET) | Intrinsic viscosity, acid number, and others
    Polyamide (PA 6) | Intrinsic viscosity, NH2 and COOH end groups
    Table 2. Overview of available pre-calibrations for the Metrohm Vision Air software.

    Conclusion

    Pre-calibrations are prediction models based on a large number of real product spectra. These allow users to skip the initial model development part and make it possible to use the instrument from day one, saving both time and money.

    To learn more

    about pre-calibration for selected NIRS applications,

    come visit our website!

    Post written by Dr. Dave van Staveren, Head of Competence Center Spectroscopy at Metrohm International Headquarters, Herisau, Switzerland.

    Improving your conductivity measurements

    Improving your conductivity measurements

    Have you ever performed a conductivity measurement and obtained incorrect results? There are several possible reasons for this. In this post, I want to show you how you may overcome some of these issues.

    By itself, conductivity measurement is performed quite easily: one takes a conductivity cell and a suitable measuring device, inserts the cell into the sample solution, and reads off the displayed value. However, there are some challenges, such as choosing the right sensor, the temperature dependence of conductivity, or CO2 uptake, which can falsify your results.

    The following topics will be covered in the rest of this post (click to jump to the topic):

     

    So many measuring cells – which one to use?

    The first and most important question about conductivity measurement is: which sensor is the most suitable for your application? The measuring range is dependent on the cell constant of your conductivity cell, and therefore this choice requires a few considerations:

    • What is the expected conductivity of my sample?
    • Do I have a broad range of conductivities within my samples?
    • What is the amount of sample I have available for measurement?

    There are different types of conductivity measuring cells available on the market. Two-electrode cells have the advantage that they can be constructed with a smaller geometry and are more accurate at low conductivities. Four-electrode cells, on the other hand, are not influenced by polarization, have a larger linear range, and are less sensitive to contamination.

    Figure 1 below shows you the wide application range of sensors with different cell constants. As a general rule: Sensors with a low cell constant are used for samples with a low conductivity and sensors with high cell constants should be used for high conductivity samples.

    Figure 1. Illustration of the range of applications for different conductometric measuring cells offered by Metrohm (click to enlarge).

    To get more information, check out our Electrode finder and select «conductivity measurement».

    Determination of the cell constant

    Each conductivity cell has its own cell constant, which therefore needs to be determined regularly. The nominal cell constant depends on the effective area of the platinum contacts and the distance between the two surfaces:

    c = d_electrodes / A_eff

    c : cell constant in cm-1
    A_eff : effective area of the electrodes in cm2
    d_electrodes : distance between the electrodes in cm

    However, no sensor is perfect, and the effective cell constant does not exactly agree with the nominal one. Thus, the effective cell constant is determined experimentally by measuring a suitable standard: the measured conductance is compared to the theoretical conductivity of the standard:

    c = γ_theor / G_meas

    c : cell constant in cm-1
    γ_theor : theoretical conductivity of the standard at the reference temperature in S/cm
    G_meas : measured conductance in S
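As a quick numerical sketch: the theoretical conductivity of a 0.01 mol/L KCl standard at 25 °C is a tabulated 1413 µS/cm, while the conductance reading below is invented.

```python
# Effective cell constant from a one-point measurement of a standard.
gamma_theor = 1413e-6   # theoretical conductivity of 0.01 mol/L KCl in S/cm
g_meas = 1.56e-3        # measured conductance in S (hypothetical reading)

cell_constant = gamma_theor / g_meas  # in cm^-1
print(f"c = {cell_constant:.3f} cm^-1")
```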

    As the measuring cell ages, its properties might change, and changing its properties also means changing its cell constant. Therefore, it is necessary to check the cell constant with a standard from time to time and to redetermine it if necessary.

    Temperature dependency of the conductivity

    Have you ever asked yourself why conductivity values in the literature normally refer to 20 °C or 25 °C? The reason is that conductivity is strongly temperature-dependent: the deviation is approximately 2%/°C, which makes it difficult to compare values measured at different temperatures. Therefore, make sure you either measure in a thermostatted vessel or use a temperature compensation coefficient.

    What is a temperature compensation coefficient anyway?

    The temperature compensation coefficient is a correction factor, which will correct your measured value at a certain temperature to the defined reference temperature. The factor itself depends on the sample matrix and is different for each sample.

    For example, if you measure a value of 10 mS/cm at 24 °C, then the device will correct your value with a linear correction of 2%/°C to 10.2 mS/cm to the reference temperature of 25 °C. This feature of linear temperature compensation is very common and is implemented in most devices.

    However, the temperature dependence is not linear for every sample. If linear temperature compensation is not accurate enough, you can also record a temperature compensation function: measure the conductivity of your sample at different temperatures and then fit a polynomial function through the measured points. This polynomial function is used for future temperature corrections, yielding more accurate results.
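Both approaches can be sketched in a few lines of Python. The linear part reproduces the 2 %/°C example above; the polynomial part fits invented readings of one sample at several temperatures and evaluates the fit at the reference temperature.

```python
import numpy as np

# Linear compensation: correct a reading of 10 mS/cm at 24 °C to the 25 °C
# reference using a 2 %/°C coefficient.
measured, t_meas, t_ref, alpha = 10.0, 24.0, 25.0, 0.02
corrected = measured * (1 + alpha * (t_ref - t_meas))
print(corrected)  # ~10.2 mS/cm

# Polynomial compensation: fit conductivity vs. temperature for one sample
# (hypothetical readings in mS/cm), then evaluate at the reference temperature.
temps = np.array([15.0, 20.0, 25.0, 30.0, 35.0])
conds = np.array([8.1, 9.0, 10.0, 11.1, 12.4])
poly = np.polynomial.Polynomial.fit(temps, conds, deg=2)
print(round(float(poly(25.0)), 2))
```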

    Figure 2. The blue curve shows the actual conductivity (mS/cm) and the orange line a linear temperature compensation. The temperature compensation coefficient here varies from 2.39–4.04 %/°C.

    And… what about the conductivity standard?

    Which standard do I have to choose?

    In contrast to pH calibration, the conductivity cell only requires a one-point calibration. For this purpose, you need to choose a suitable standard, which has a conductivity value in the same range as your sample and is inert towards external influences.

    As an example, consider a sample of deionized water, which has an expected conductivity of approximately 1 µS/cm. If you calibrate the conductivity cell with a higher conductivity standard around 12.88 mS/cm, this will lead to an enormous error in your measured sample value.

    Most conductivity cells will not be suitable for both ranges. For such low conductivities (1 µS/cm), it is better to use a 100 µS/cm conductivity standard. While lower conductivity standards are available, proper handling becomes more difficult, because at such low conductivities the influence of CO2 uptake increases.

    Last but not least: To stir or not to stir?

    This is a controversial question, as stirring has both advantages and disadvantages. Stirring enables your sample solution to be homogeneous, but it might also enhance the carbon dioxide uptake from ambient air.

    Either way, it does not matter whether you choose to stir or not, as long as the same procedure is applied each time, both for the determination of the cell constant and for the measurement of your sample. Personally, I recommend stirring slightly, because a stable value is reached faster and the effect of carbon dioxide uptake is almost negligible.

    To summarize, it is quite easy to perform conductometric measurements, but some important points should be considered thoroughly before starting the analysis: the temperature dependence, the choice of a suitable conductometric measuring cell, and the choice of calibration standard. Otherwise, false results may be obtained.

    Curious about conductivity measurements?

    Read through our free comprehensive monograph:

    Conductometry – Conductivity measurement

    Additionally, you can download our free two-part Application Bulletin AB-102 – Conductometry below:   

    Post written by Iris Kalkman, Product Specialist Titration at Metrohm International Headquarters, Herisau, Switzerland.

    Tips and Tricks for IC Columns

    Tips and Tricks for IC Columns

    Monitoring and maintaining column performance

    One of the basic requirements for ensuring reliable chromatographic analyses is a high-performance separation column. Ion chromatography (IC) users should regularly check the performance of their column. This way, if a drop in performance becomes apparent, steps can be taken in good time to restore or maintain the proper functioning of the column, reducing downtimes in sample throughput. In this blog post, we explain how you can assess column performance, which parameters you should monitor, and which measures you can take to ensure excellent column performance.

     

    First-time use of a new separation column

    When you use a column for the very first time, we recommend that you check its initial performance. The Certificate of Analysis (CoA), which you receive with every purchase of a Metrohm column, is your source of reference here. Record a chromatogram and use the analysis conditions specified in the CoA: these include flow rate, temperature, eluent (mobile phase), analyte concentration, sample loop size, and suppression.

    You can evaluate the column’s performance by comparing some of the result parameters with the values listed in the CoA (e.g. retention time, theoretical plates, asymmetry, resolution, and peak height).

    Regular monitoring of column performance

    Columns that are already in use should be monitored regularly, too! We recommend carrying out these tests with check standards under the application conditions you normally use, because performance varies depending on the type of analysis, the associated analysis conditions, and the instrumental setup. If a reduction in performance is observed, the requirements of the application determine whether the column can still be used.

    Below, we explain how to determine your column performance based on five performance indicators. You will also find out how you can prevent or rectify a decline in performance.

    Click to jump directly to a topic:

     

    Backpressure

    Monitor the backpressure: When you use your new column for the first time, save the backpressure value under the analysis conditions of your application as a reference («common variable» in MagIC Net). Then use the user-defined results to monitor the difference between the initial backpressure and the one displayed during the current determination.

    If you identify an increase in the backpressure in comparison with the saved initial value, this indicates that particles have been deposited in either the guard column or the separation column. If the measured increase is greater than 1 MPa, action must be taken. First, check which of the columns is affected (guard vs. separation). A contaminated guard column should simply be replaced: trapping particles before they reach the separation column is exactly its job. If the separation column is affected, remove it from the system, turn it around, reinstall it, and rinse it for several hours in this reversed flow direction. If this doesn't help, we strongly recommend considering a column replacement. This becomes essential once the maximum permitted backpressure for the column is reached.
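The check itself is a simple comparison against the saved reference value; a minimal sketch (both pressures invented):

```python
# Compare the current backpressure with the saved initial value
# (the "common variable" saved in MagIC Net at first use).
initial_mpa = 8.2   # hypothetical reference backpressure
current_mpa = 9.5   # hypothetical current reading

increase = current_mpa - initial_mpa
if increase > 1.0:
    print(f"Backpressure up {increase:.1f} MPa: check guard and separation columns")
```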

    Retention time

    To track changes in the retention time (which signal a decrease in column performance), the retention time of the last analyte peak is monitored in the chromatogram. Sulfate, for example, is suitable for this, as it usually elutes right at the end of standard anion chromatograms. Here too, work with a common variable in MagIC Net to save the initial value.

    Unstable retention times can be caused by carbon dioxide introduced from the ambient air or from air bubbles present in the eluent. Luckily, these problems can be resolved easily (see Table 1).

    Table 1. Preventing and correcting performance loss in IC columns (click to enlarge).

    The column may also have lost some capacity. This capacity loss can be caused by the presence of high-valency ions which are difficult to remove due to their strong attraction to the stationary phase. The column should then be regenerated in accordance with the column leaflet to remove any contamination. If this doesn’t lead to any improvement, then consider replacing the column depending on the requirements of the application, particularly in the event of progressive capacity loss.

    Capacity loss can also occur if the functional groups are permanently detached from the stationary phase. In such a case, the column cannot be regenerated and must be replaced.

    Resolution

    Monitor the chromatographic resolution by comparing measurements from a predefined check standard with an initial reference value. If the resolution is R > 1.5, the signal is considered baseline-separated (see illustration below). However, in cases involving highly concentrated matrices and for peaks that are more widely spread, the resolution value must be higher to ensure baseline separation.
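For reference, the resolution between two neighboring peaks can be calculated from their retention times and baseline peak widths using the common formula R = 2(t2 - t1)/(w1 + w2); the values below are invented.

```python
# Chromatographic resolution between two adjacent peaks.
t1, w1 = 5.2, 0.40   # retention time and baseline width of peak 1 (min)
t2, w2 = 6.1, 0.45   # retention time and baseline width of peak 2 (min)

resolution = 2 * (t2 - t1) / (w1 + w2)
print(f"R = {resolution:.2f}")  # R > 1.5 means baseline-separated
```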

    If a loss of resolution occurs, first make sure that it is not caused by the eluent or the IC system. Once these have been ruled out, it is possible that the adsorptive effect of contaminations in the guard column or separation column may be responsible. A contaminated guard column should be replaced. If the cause of the problem is found to be the separation column, this should be regenerated in accordance with the column leaflet to free it from any organic or inorganic contamination. If the loss of resolution progresses, a column replacement is inevitable.

    Theoretical plates

    Save the initial number of theoretical plates in MagIC Net as a common variable, as mentioned earlier for other parameters. Usually, the last eluting peak is used – in anion chromatograms, sulfate would yet again prove to be a suitable candidate. Theoretical plates also depend on the analyte concentration. Therefore, it is ideal to monitor this parameter during check standard measurements and not during sample measurements. You can track the development of any changes to the number of theoretical plates via the user-defined results in MagIC Net.

    A decrease in the theoretical plates can suggest dead volume in the IC system (see Table 1). A low number of theoretical plates may also be observed if the column has been overloaded by a high salt concentration in the sample matrix, for instance. If the theoretical plates decrease by more than 20%, this indicates that column performance is declining. Depending on the requirements of the application, action may need to be taken.

    If the guard column is the reason for the drop in performance, it should be replaced. If the problem is with the separation column, we recommend regenerating the column in accordance with the column leaflet to eliminate any organic or inorganic contamination. If this doesn’t help, you should consider replacing the column, particularly if a trend toward lower theoretical plates is observed.
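Using the common half-height formula N = 5.54 * (tR / w_half)^2, the 20 % criterion mentioned above can be checked as follows (the widths are invented):

```python
# Theoretical plate count from retention time and peak width at half height.
def plates(t_r, w_half):
    return 5.54 * (t_r / w_half) ** 2

n_initial = plates(12.0, 0.30)  # reference value saved at first use
n_current = plates(12.0, 0.35)  # broader peak at the same retention time

drop = 1 - n_current / n_initial
print(f"plates: {n_current:.0f}, drop: {drop:.0%}")  # a drop > 20 % is critical
```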

    Asymmetry

    Determine the initial asymmetry of the analytes by measuring a predefined check standard under the analysis conditions of your application. Save it as a common variable, then track the user-defined results to observe the development of asymmetry over time. The maximum acceptable values for the asymmetry vary depending on the analyte. For example, calcium and magnesium peaks initially present relatively high asymmetry values.

    Asymmetry is defined as the distance from the centerline of the peak to the descending side of the peak (B in the figure below) divided by the distance from the centerline of the peak to the ascending side of the peak (A in the figure below), where both distances are measured at 10% of the peak height. Some pharmacopoeia may use other figures – please check to be sure of the requirements in your country.

    AS > 1 means a peak has tailing, and AS < 1 equates to peak fronting. Optimum chromatography is achieved with peak asymmetries as close as possible to 1. As a general rule, column performance is considered in decline when the asymmetry is AS > 2 or AS < 0.5. Depending on the requirements of the application, measures have to be taken in this case in order to improve symmetry and to enable better integration.
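The definition translates directly into code. The sketch below builds a hypothetical tailing peak from two half-Gaussians and measures A and B at 10 % of the peak height.

```python
import numpy as np

# Hypothetical tailing peak: narrow ascending side, broad descending side.
t = np.linspace(0, 10, 2001)
peak = np.where(t < 5,
                np.exp(-0.5 * ((t - 5) / 0.4) ** 2),   # ascending half
                np.exp(-0.5 * ((t - 5) / 0.6) ** 2))   # descending half

apex = t[np.argmax(peak)]
above = t[peak >= 0.1 * peak.max()]  # region above 10 % of peak height
A = apex - above.min()               # ascending-side distance
B = above.max() - apex               # descending-side distance

asymmetry = B / A
print(f"AS = {asymmetry:.2f}")  # AS > 1: tailing
```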

    The reason for high asymmetry values may be down to the ion chromatograph – due to dead volume, for example. If this is not the case, it is important to find out whether the asymmetry is caused by problems with the guard column or with the separation column. If the guard column causes the asymmetry, it should be replaced. If it is the separation column, it should first be regenerated in accordance with the column leaflet to remove any organic or inorganic contamination. If this doesn’t help, you should consider replacing the column. If a trend toward higher asymmetry values can be observed, replacement is unavoidable.

    In summary, there are many ways in which you can estimate the performance of the column and track concrete figures over its lifetime. Proper maintenance, along with always using a guard column for extra protection, can extend the lifetime of the separation column.

    Need help choosing the right column for your application?

    Look no further!

    Try the Metrohm Column Finder here:

    For further guidance about IC column maintenance, you can watch our tutorial videos here:

    Post written by Dr. Alyson Lanciki, Scientific Editor at Metrohm International Headquarters, Herisau, Switzerland. Primary research and content contribution done by Stephanie Kappes.


    Benefits of NIR spectroscopy: Part 3

    This blog post is part of the series “NIR spectroscopy: helping you save time and money”. 

    How to implement NIRS in your laboratory workflow

    This is the third installment in our series about NIR spectroscopy. In our previous installments of this series, we explained how this analytical technique works from a sample measurement point of view and outlined the difference between NIR and IR spectroscopy.

    Here, we describe how to implement a NIR method in your laboratory, exemplified by a real case. Let’s begin by making a few assumptions:

    • your business produces polymeric material and the laboratory has invested in a NIR analyzer for rapid moisture measurements (as an alternative to Karl Fischer Titration) and rapid intrinsic viscosity measurements (as an alternative to measurements with a viscometer)
    • your new NIRS DS2500 Analyzer has just been received in your laboratory

    The workflow is described in Figure 1.

    Figure 1. Workflow for NIR spectroscopy method implementation (click to enlarge).

    Step 1: Create calibration set

    NIR spectroscopy is a secondary method, meaning it requires «training» with a set of spectra corresponding to parameter values sourced from a primary method (such as titration). In the upcoming example for analyzing moisture and intrinsic viscosity, the values from the primary analyses are known. These calibration set samples must cover the complete expected concentration range of the parameters tested for the method to be robust. This reflects other techniques (e.g. HPLC) where the calibration standard curve needs to span the complete expected concentration range. Therefore, if you expect the moisture content of a substance to be between 0.35% and 1.5%, then the training/calibration set must cover this range as well.

    After measuring the samples on the NIRS DS2500 Analyzer, you need to link the values obtained from the primary methods (Karl Fischer Titration and viscometry) on the same samples to the NIR spectra. Simply enter the moisture and viscosity values using the Metrohm Vision Air Complete software package (Figure 2). Subsequently, this data set (the calibration set) is used for prediction model development.

    Figure 2. Display of 10 NIR measurements linked with intrinsic viscosity and moisture reference values obtained with KF titration and viscometry (click to enlarge).

    Step 2: Create and validate prediction models

    Now that the calibration set has been measured across the range of expected values, a prediction model must be created. Do not worry – all of the procedures are fully developed and implemented in the Metrohm Vision Air Complete software package.

    First, visually inspect the spectra to identify regions that change with varying concentration. Often, applying a mathematical adjustment, such as the first or second derivative, enhances the visibility of the spectral differences (Figure 3).

      Figure 3. Example of how a mathematical pretreatment intensifies spectral information: a) without any mathematical optimization and b) with a second derivative applied, highlighting the spectral difference at 1920 nm and intensifying the peaks near 2010 nm (click to enlarge).
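The effect of a derivative pretreatment is easy to reproduce on synthetic data. The sketch below uses a plain numerical second derivative (commercial packages typically use smoothed Savitzky-Golay derivatives to suppress noise) on an invented spectrum with a sloping baseline and two overlapping bands:

```python
import numpy as np

# Synthetic NIR-like spectrum (hypothetical): a sloping baseline
# plus two overlapping absorption bands at 1920 nm and 2010 nm.
wavelengths = np.linspace(1800, 2100, 301)            # nm, 1 nm steps
spectrum = (0.001 * wavelengths                        # sloping baseline
            + np.exp(-((wavelengths - 1920) / 25) ** 2)
            + 0.5 * np.exp(-((wavelengths - 2010) / 20) ** 2))

# Numerical second derivative: the linear baseline vanishes, and each
# band shows up as a sharp negative lobe at its center wavelength.
second_deriv = np.gradient(np.gradient(spectrum, wavelengths), wavelengths)

print(wavelengths[np.argmin(second_deriv)])  # strongest lobe, near 1920 nm
```

This is why the bands at 1920 nm and 2010 nm stand out so clearly in the derivative trace of Figure 3b even though they sit on a broad background in Figure 3a.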
      Univariate vs. Multivariate data analysis

      Once these spectral regions have been visually identified, the software attempts to correlate them with the values from the primary method. The result is a correlation diagram with the respective figures of merit: the Standard Error of Calibration (SEC, a measure of precision) and the correlation coefficient (R2), shown for the moisture example in Figure 4. The same procedure is carried out for the other parameters (in this case, intrinsic viscosity).

      This process is again similar to general working procedures with HPLC. When creating a calibration curve with HPLC, typically the peak height or peak area is linked with a known standard concentration. Because only one variable is used (height or area), this procedure is known as «univariate data analysis».

      NIR spectroscopy, on the other hand, is a «multivariate data analysis» technique. NIRS utilizes a spectral range (e.g., 1900–2000 nm for water), so multiple absorbance values are used to create the correlation.
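To make the univariate/multivariate distinction concrete, here is a minimal multivariate calibration sketch on synthetic data, regressing moisture on several absorbance channels at once with ordinary least squares. All data here are invented, and commercial NIR software typically uses more sophisticated methods such as partial least squares:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 10 samples, 5 absorbance channels in the water region.
# Each spectrum responds linearly to moisture, plus a little noise.
true_sensitivities = np.array([0.2, 0.5, 0.9, 0.5, 0.2])
moisture = np.linspace(0.35, 1.5, 10)                  # % water (reference)
absorbances = (np.outer(moisture, true_sensitivities)
               + 0.001 * rng.standard_normal((10, 5)))

# Multivariate calibration: fit moisture from all channels simultaneously.
X = np.column_stack([absorbances, np.ones(10)])        # add intercept term
coeffs, *_ = np.linalg.lstsq(X, moisture, rcond=None)

predicted = X @ coeffs
r2 = 1 - np.sum((moisture - predicted) ** 2) / np.sum((moisture - moisture.mean()) ** 2)
print(round(r2, 3))  # close to 1 for this nearly noise-free example
```

A univariate HPLC calibration would use a single column of this matrix (one peak property); the multivariate fit exploits all channels at once, which is what makes NIRS tolerant of overlapping bands.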

      Figure 4. Correlation plot and Figures of Merit (FOM) for the prediction of water in polymer samples using NIR spectroscopy. The «split set» function in the Metrohm Vision Air Complete software package allows the generation of a validation data set, which is used to validate the prediction model (click to enlarge).
      How many spectra are needed?

      The ideal number of spectra in a calibration set depends on the variation among the samples (particle size, chemical distribution, etc.). In this example, we used 10 polymer samples, which is a good starting point for checking the feasibility of the application. However, to build a robust model that covers all sample variations, more sample spectra are required. As a rule, approximately 40–50 sample spectra will provide a suitable prediction model in most cases.

      This data set of 40–50 spectra is also used to validate the prediction model. This can be done using the Metrohm Vision Air Complete software package, which splits the data set into two groups of samples:

      1. Calibration set 75%
      2. Validation set 25%

      As before, a prediction model is created using the calibration set, but the predictions will now be validated using the validation set. Results for these polymer samples are shown above in Figure 4.
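The split-and-validate step can be sketched as follows. The helper functions are hypothetical stand-ins, not the Vision Air implementation; the standard error is essentially the root-mean-square difference between predicted and reference values:

```python
import numpy as np

def split_set(n_samples, calibration_fraction=0.75, seed=0):
    """Randomly split sample indices into calibration and validation sets."""
    rng = np.random.default_rng(seed)
    indices = rng.permutation(n_samples)
    n_cal = int(round(calibration_fraction * n_samples))
    return indices[:n_cal], indices[n_cal:]

def standard_error(reference, predicted):
    """Root-mean-square deviation between reference and predicted values
    (the form behind figures of merit such as SEC)."""
    reference, predicted = np.asarray(reference), np.asarray(predicted)
    return float(np.sqrt(np.mean((reference - predicted) ** 2)))

cal_idx, val_idx = split_set(48)          # e.g., 48 collected spectra
print(len(cal_idx), len(val_idx))         # 36 calibration, 12 validation
```

The model is then fitted on the calibration indices only, and the standard error recomputed on the held-out validation indices gives an honest estimate of prediction performance.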

      Users who are inexperienced with NIR model creation, or who do not yet feel confident with it, can rely on Metrohm support, known for its high-quality service. They will assist you with the prediction model creation and validation.

      Step 3: Routine Analysis

      The beauty of the NIRS technique comes into focus now that the prediction model has been created and validated.

      Polymer samples with unknown moisture content and unknown intrinsic viscosity can now be analyzed at the push of a button. The NIRS DS2500 Analyzer will display results for those parameters in less than a minute. Typically, the spectrum itself is not shown during this step, just the result, which may be highlighted by a yellow or red box to indicate a warning or error, as shown in Figure 5.

      Figure 5. Overview of a selection of NIR predicted results, with clear pass (no box) and fail (red box) indications (click to enlarge).
      Display possibilities

      Of course, the option also exists to display the spectra, but for most users (especially shift workers), the spectra themselves carry no useful information. In these situations, only the numeric values matter, along with a clear pass/fail indication.

      Another display possibility is the trend chart, which allows for the proactive adjustment of production processes. Warning and action limits are highlighted here as well (Figure 6).
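The pass/warning/action logic behind such displays can be sketched as a simple threshold check. The limit values below are hypothetical and would be set per product specification:

```python
def classify_result(value, warning_limits=(0.5, 1.2), action_limits=(0.35, 1.5)):
    """Classify a predicted value against warning and action limits.

    Returns 'pass' (no box), 'warning' (yellow box), or 'action' (red box).
    """
    low_a, high_a = action_limits
    low_w, high_w = warning_limits
    if value < low_a or value > high_a:
        return "action"                 # outside action limits: red
    if value < low_w or value > high_w:
        return "warning"                # outside warning limits: yellow
    return "pass"

for moisture in (0.8, 1.3, 1.6):
    print(moisture, classify_result(moisture))
```

Plotting a running series of such classified results against the two limit pairs is exactly what the trend chart in Figure 6 visualizes.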

      Figure 6. Trend chart of NIR moisture content analysis results. The parallel lines indicate defined warning (yellow) and action (red) limits (click to enlarge).

      Summary

      The majority of the effort needed to implement NIR in the laboratory comes at the beginning of the workflow, during the collection and measurement of samples that span the complete concentration range. The prediction model creation and validation, as well as the implementation in routine analysis, are done with the help of the Metrohm Vision Air Complete software package and can be completed within a short period. Additionally, our Metrohm NIRS specialists will happily support you with the prediction model creation if you require assistance.

      At this point, note that there are cases where NIR spectroscopy can be implemented directly without any prediction model development, using Metrohm pre-calibrations. These are robust, ready-to-use operating procedures for certain applications (e.g. viscosity of PET) based on real product spectra.

      We will present and discuss their characteristics and advantages in the next installment of the series. Stay tuned, and don’t forget to subscribe!

      For more information

      If you want to learn more about selected NIR applications in the polymer industry, visit our website!

      We offer NIRS analyzers suitable for laboratory work as well as for harsh industrial process conditions.

      Post written by Dr. Dave van Staveren, Head of Competence Center Spectroscopy at Metrohm International Headquarters, Herisau, Switzerland.