What to consider during back-titration

Titrations can be classified in various ways: by their chemical reaction (e.g., acid-base titration or redox titration), the indication method (e.g., potentiometric titration or photometric titration), and last but not least by their titration principle (direct titration or indirect titration). In this article, I want to elaborate on a specific titration principle – the back-titration – which is also called «residual titration». Learn more about when it is used and how you should calculate results when using the back-titration principle.

What is a back-titration?

In contrast to direct titrations, where analyte A directly reacts with titrant T, back-titrations are a subcategory of indirect titrations. Indirect titrations are used when, for example, no suitable sensor is available or the reaction is too slow for a practical direct titration.

During a back-titration, an exact volume of reagent B is added to the analyte A. Reagent B is usually a common titrant itself. The amount of reagent B is chosen in such a way that an excess remains after its interaction with analyte A. This excess is then titrated with titrant T. The amount of analyte A can then be determined from the difference between the added amount of reagent B and the remaining excess of reagent B.
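Expressed in amounts of substance n, and taking sAB as the moles of reagent B that react per mole of analyte A (the reading assumed here, matching the stoichiometric factor used in the formulas later in this article):

$$ n_A = \frac{n_{B,\,added} - n_{B,\,excess}}{s_{AB}} $$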

As with any titration, both involved reactions must be quantitative, and stoichiometric factors involved for both reactions must be known.

Figure 1. Reaction principle of a back-titration: Reagent B is added in excess to analyte A. After a defined waiting period which allows for the reaction between A and B, the excess of reagent B is titrated with titrant T.

When are back-titrations used?

Back-titrations are mainly used in the following cases:

  • if the analyte is volatile (e.g., NH3) or an insoluble salt (e.g., Li2CO3)
  • if the reaction between analyte A and titrant T is too slow for a practical direct titration
  • if weak acid – weak base reactions are involved
  • when no suitable indication method is available for a direct titration

A typical example is the complexometric titration of aluminum with EDTA. The direct titration is only feasible at elevated temperatures. However, adding EDTA in excess to the aluminum solution and back-titrating the residual EDTA with copper sulfate allows titration at room temperature. This is true not only for aluminum, but for other metals as well.

Learn which metals can be titrated directly, and for which a back-titration is more feasible in our free monograph on complexometric titration.

Other examples include the saponification value and iodine value of edible fats and oils. For the saponification value, ethanolic KOH is added in excess to the fat or oil. After a defined refluxing time to saponify the fat or oil, the remaining excess is back-titrated with hydrochloric acid. The process is similar for the iodine value, where the remaining excess of iodine monochloride (Wijs solution) is back-titrated with sodium thiosulfate.

For more information on the analysis of edible fats and oils, take a look at our corresponding free Application Bulletin AB-141.

How is a back-titration performed?

A back-titration is performed according to the following general principle:

  1. Add reagent B in excess to analyte A.
  2. Allow reagent B to react with analyte A. This might require a certain waiting time or even refluxing (e.g., saponification value).
  3. Titrate the remaining excess of reagent B with titrant T.

For the first step, it is important to add the volume of reagent B precisely, which is why a buret should be used for this addition (Fig. 2).

Figure 2. Example of a Titrator equipped with an additional buret for the addition of reagent B.

Furthermore, it is important that the exact molar amount of reagent B is known. This can be achieved in two ways. The first is to carry out a blank determination in the same manner as the back-titration of the sample, but omitting the sample. The second, possible if reagent B is a common titrant (e.g., EDTA), is to standardize reagent B before the back-titration.

In either case, a standardization of titrant T is required. This gives us the following two general analysis procedures:

Back-titration with blank
  1. Titer determination of titrant T
  2. Blank determination (back-titration omitting sample)
  3. Back-titration of sample
Back-titration with standardizations
  1. Titer determination of titrant T
  2. Titer determination of reagent B
  3. Back-titration of sample

Be aware: since you are performing a back-titration, the blank volume will be larger than the equivalence point (EP) volume of the sample, unlike a blank in a direct titration. This is why the EP volume must be subtracted from the blank volume or from the added volume of reagent B, respectively.

For more information on titrant standardization, please have a look at our blog entry on this topic.

How to calculate the result for a back-titration?

As with direct titrations, calculating the result of a back-titration requires knowledge of the stoichiometry of the reactions involved, in addition to the exact concentrations and volumes. Depending on which of the analysis procedures described above is used, the calculation differs slightly.

For a back-titration with a blank, use the following formula to obtain a result in mass-%:
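Written out with the variables defined below, the calculation reads:

$$ \text{Result (mass-\%)} = \frac{(V_{Blank} - V_{EP}) \cdot c_{Titrant} \cdot f_{Titrant} \cdot r \cdot M_A \cdot 100}{m_{Sample}} $$

The units confirm this form: mL × mol/L gives mmol, which multiplied by MA in g/mol yields mg, canceling against mSample in mg.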

VBlank: Volume of the equivalence point from the blank determination in mL
VEP: Volume at the equivalence point in mL
cTitrant: Nominal titrant concentration in mol/L
fTitrant: Titer factor of the titrant (unitless)
r: Stoichiometric ratio (unitless)
MA: Molecular weight of analyte A in g/mol
mSample: Weight of sample in mg
100: Conversion factor to obtain the result in %

The stoichiometric ratio r considers both reactions: analyte A with reagent B, and reagent B with titrant T. If the stoichiometric factors of both reactions are 1, such as for complexometric back-titrations or the saponification value, then the reaction ratio is also 1. However, if the stoichiometric factor of either reaction is not equal to 1, then the reaction ratio must be determined in the following manner:

 

  1. Write the reaction equation between A and B.
  2. Write the reaction equation between B and T.
  3. Multiply the two stoichiometric quotients.

Below is an actual example of lithium carbonate, which can be determined by back-titration using sulfuric acid and sodium hydroxide.

The lithium carbonate reacts in a 1:1 ratio with sulfuric acid. To determine the excess sulfuric acid, two moles of sodium hydroxide are required per mole of sulfuric acid, resulting in a 1:2 ratio. This gives a stoichiometric ratio r of 0.5 for this titration.
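As reaction equations, with the stoichiometric quotient of each step noted:

Li2CO3 + H2SO4 → Li2SO4 + H2O + CO2   (1 : 1)
H2SO4 + 2 NaOH → Na2SO4 + 2 H2O   (1 : 2)

$$ r = \frac{1}{1} \cdot \frac{1}{2} = 0.5 $$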

 For a back-titration with a standardization of reagent B, use the following formula to obtain a result in mass-%:
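Taking sBT as the moles of titrant T per mole of reagent B, and sAB as the moles of reagent B per mole of analyte A (the reading assumed here), the calculation reads:

$$ \text{Result (mass-\%)} = \left( V_B \cdot c_B \cdot f_B - \frac{V_{EP} \cdot c_T \cdot f_T}{s_{BT}} \right) \cdot \frac{M_A \cdot 100}{s_{AB} \cdot m_{Sample}} $$

For the lithium carbonate example above, sAB = 1 and sBT = 2.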

VB: Added volume of reagent B in mL
cB: Nominal concentration of reagent B in mol/L
fB: Titer factor of reagent B (unitless)
VEP: Volume at the equivalence point in mL
cT: Nominal concentration of titrant T in mol/L
fT: Titer factor of titrant T (unitless)
sBT: Stoichiometric factor between reagent B and titrant T
sAB: Stoichiometric factor between analyte A and reagent B
MA: Molecular weight of analyte A in g/mol
mSample: Weight of sample in mg
100: Conversion factor to obtain the result in %

Modern titrators are capable of automatically calculating the results of back-titrations. All information concerning the variables used (e.g., the blank value) is stored together with the result for full traceability.

To summarize:

Back-titrations are not so different from regular titrations, and the same general principles apply. The following points are necessary for a back-titration: 

  • Know the stoichiometric reactions between your analyte and reagent B, as well as between reagent B and titrant T.
  • Know the exact concentration of your titrant T.
  • Know the exact concentration of your reagent B, or carry out a blank determination.
  • Use appropriate titration parameters depending on your analysis.

If you want to learn more about how you can improve your titration, have a look at our blog entry “How to transfer manual titration to autotitration”, where you can find practical tips about how to improve your titrations.

If you are unsure how to determine the exact concentration of your titrant T or reagent B by standardization, then take a look at our blog entry “What to consider when standardizing titrant”.

Post written by Lucia Meier, Product Specialist Titration at Metrohm International Headquarters, Herisau, Switzerland.

Titer determination in Karl Fischer Titration

In a recent post, we have discussed the importance of titer determinations for potentiometric titrations.

Without a titer determination, you will not obtain correct results. The same applies for volumetric Karl Fischer (KF) titrations. In this blog post, I will cover why titer determinations are necessary, how often they should be performed, what equipment is needed, and how to carry them out.

Why should I do titer determinations?

Why is a titer determination necessary? Well, the answer is quite simple. Without knowing the titer of a KF titrant, the water content of the sample cannot be calculated correctly. In Karl Fischer titration, the titer states how many mg of water can be titrated with one mL of titrant. Therefore, the KF titer has the unit «mg/mL».
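Expressed as a formula, with mH2O the mass of water added in mg and VEP the volume of titrant consumed up to the endpoint in mL:

$$ \text{Titer (mg/mL)} = \frac{m_{H_2O}}{V_{EP}} $$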

You might say: “OK, let’s determine the titer. That isn’t too much work, and afterwards I’ll know the titer value and won’t need to repeat the titer determination.”

I agree this would be very nice. However, reality is somewhat different: you must carry out titer determinations on a regular basis. In closed bottles, KF titrants are very stable and the titer does not change appreciably. Once you open the bottle, however, the KF titrant starts to change significantly. Air will enter the bottle, and considering that 1 L of air contains several milligrams of water, you can imagine that this moisture has an influence on the titer. To prevent moist air from getting into the titrant, the bottle must either be tightly closed after use with the original cap, or be protected with an absorber tube filled with a molecular sieve (0.3 nm).

Please be aware that temperature changes also have an influence on the titer. A temperature increase of the titrant by 1 °C leads to a titer decrease of approximately 0.1% due to volume expansion. Consider this, in case the temperature in your laboratory fluctuates during the working day.
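As a linear rule of thumb based on this figure, with f the titer and T0 the temperature at which it was determined:

$$ f(T) \approx f(T_0) \cdot \bigl(1 - 0.001 \cdot (T - T_0)\bigr) $$

A titrant that warms from 20 °C to 25 °C, for example, loses roughly 0.5% of its titer.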

Do not forget: if your titration system sits idle overnight, the reagent in the tubing and in the cylinder is affected, and its titer no longer matches that of the titrant in the bottle. Therefore, I recommend first running a preparation step to flush all tubing before the first titration.

How often should I perform titer determinations?

This question is asked frequently, and unfortunately has no simple answer. In other words, I cannot recommend a single fixed interval for titer determinations. The frequency depends on various factors:

  • the type of reagent (two-component titrants are more stable than single-component titrants)
  • the tightness of the seals between the titration vessel and the titrant bottle
  • how accurately the water content of the sample must be determined

In the beginning, I would recommend performing a titer determination on a daily basis. After a few days, it will become apparent whether the titer remains stable or decreases. Then you can decide to adjust the interval between successive titer determinations.

What equipment do I need for a titer determination?

You need a fully equipped titrator for volumetric KF titration, as well as the KF reagents (titrant and solvent). Another prerequisite for accurate titer determinations is an analytical balance with a minimal resolution of 0.1 mg. Last but not least, you need a standard containing a known amount of water and some tools to add the standard to the titration vessel. These tools are discussed in the next section.

How to carry out a titer determination

Three different types of water standards can be used for titer determinations. Liquid and solid standards are available from various reagent suppliers; the third possibility is available in every laboratory: distilled water. Below, we will take a closer look at the individual handling of these three standards. For determination of appropriate sample sizes, you can download our free Application Bulletin AB-424, Titer determination in volumetric Karl Fischer titration.

1. Liquid water standard

For the addition of a liquid water standard, you need a syringe and a needle.

There are two possibilities to add the liquid standard. One is to inject it with the tip of the needle placed above the reagent level. In this case, aspirate the last drop back into the syringe; otherwise, it will drop off at the septum. The droplet would then be included in the sample weight, but its water content would not be determined, leading to false results.

If the needle is long enough, you can immerse the tip in the reagent during the standard addition. In this case, there is no last droplet to consider, and you can pull the needle out of the titration vessel without any additional aspiration step.

Step-by-step – how to carry out a titer determination:

  1. Open the ampoule containing the standard as recommended by the manufacturer.
  2. Aspirate approximately 1 mL of the standard into the syringe.
  3. Remove the tip of the needle from the liquid and pull the plunger back to the maximum volume. Swirl the syringe to rinse its inside with the standard, then eject the 1 mL of standard into the waste.
  4. Aspirate the remaining content of the ampoule into the syringe.
  5. Remove any excess liquid from the outside of the needle with a paper tissue.
  6. Place the syringe with the needle on a balance, and tare the balance.
  7. Start the determination and inject a suitable amount of standard through the septum into the titration vessel. Take care that the standard is injected into the reagent and not onto the electrode or the wall of the titration vessel, as this leads to unreproducible results.
  8. After injecting the standard, place the syringe on the balance again.
  9. Enter the sample weight in the software.
2. Solid water standard

It is not possible to add a solid water standard with a syringe; different tools are required. Examples include a weighing boat and the Metrohm OMNIS spoon for paste.

Place the weighing boat on the balance, then tare the balance. Weigh in an appropriate amount of the solid standard, and tare the balance again. Start the titration, quickly remove the stopper with septum, add the solid standard and quickly replace the stopper. When adding the standard, take care that no standard sticks to the electrode or the walls of the titration vessel. In case that happens, gently swirl the titration vessel to wash down the standard. After the addition of the standard, place the weighing boat on the balance again and enter the sample weight in the software.

3. Pure water

Pure water can be added to the titration vessel either by weight or by volume.

For a titer determination with pure water, only a few drops are required. Such small volumes can be difficult to add precisely, and results strongly depend on the user. Moreover, addition by weight requires a balance capable of weighing a few milligrams. I personally prefer using water standards, and suggest that you use them as well.

By weight

Fill a small syringe (~1 mL) with water. Due to the very small amounts of pure water added for the titer determination, I recommend using a very thin needle to more accurately add small volumes. After filling the syringe, place it on a balance and tare the balance. Then start the titration, and inject an appropriate amount of water through the septum into the titration vessel. Aspirate the last droplet back into the syringe. Remove the needle, place the syringe on the balance again, and enter the sample weight in the software.

By volume

Fill a microliter syringe with an appropriate volume of water. Make sure there are no air bubbles in the syringe, as they will falsify the result. Begin the titration and inject the syringe contents through the septum into the titration vessel. Enter the added sample size in the software.

Acceptable results

During training courses, I am often asked whether an obtained result is acceptable. I recommend carrying out a threefold titer determination. Ideally, the relative standard deviation of these three determinations is smaller than 0.3%.

How long can the reagent be used?

As long as you carry out regular titer determinations, the titer change will be considered in the calculation, and the results will be correct. Just keep in mind: the lower the titer, the larger the volume needed for the determination.

I hope I was able to convince you that titer determination is essential to obtain correct results in volumetric Karl Fischer titration, and that it is not that difficult to perform.

In case you still have unanswered questions, please download Metrohm Application Bulletin AB-424 to get additional information, tips, and tricks on performing titer determination.

Still have questions?

Check out our Application Bulletin: Titer determination in volumetric Karl Fischer titration.

Post written by Michael Margreth, Sr. Product Specialist Titration (Karl Fischer Titration) at Metrohm International Headquarters, Herisau, Switzerland.

FAQ: All about pH calibration

In a recent blog post, we discussed how to avoid the most common mistakes in pH measurement.

Here, I want to discuss in a bit more detail how you can correctly calibrate your pH electrode and what you have to consider to obtain the best measurement results afterwards by answering some of your most frequently asked questions.

Let’s get right into it!

When do I have to calibrate my pH electrode?

Performing regular calibration of your pH electrode is important to obtain accurate results. The pH electrode can change its properties (e.g., by contamination of the reference electrolyte), which then leads to deviating calibration results. If you do not calibrate your electrode freshly, you will obtain precise but inaccurate pH measurements. Therefore, the more accurate the results need to be, the more often you have to calibrate.

Depending on the number of measurements and the sample matrix, I recommend calibrating at least weekly. If the sensor is used often, or if the sample matrix contaminates the sensor, then you should calibrate daily or even more frequently. If the pH electrode is not used often, then always calibrate it prior to a new set of measurements. Also make sure that you always calibrate your sensor if you have received a new one, or after maintenance.

How do I select the correct buffers?

Any time you perform a calibration, it is essential that appropriate buffers are used.

First, you have to select the pH values that you would like to use for calibration. Use at least two different buffers, though it is even better to perform a multi-point calibration. Furthermore, make sure that the pH of your sample lies within the calibration range! For example, if you want to measure a sample at pH 9, your calibration should not span only pH 4 to 7, but reach at least pH 10. In the graph, you can see that errors become large especially outside of the calibrated range.

 

In addition, the quality of your buffer solutions is essential, as your calibration can only be as good as the buffers used. Never use expired calibration buffers! If the buffer solutions are meant for single use only, do not reuse them: microbial growth in the buffer can quickly alter its properties. Always mark your buffer solution bottle with the opening date, and in particular make sure that alkaline buffers above pH 9 are not used for too long (< 1 month), as CO2 uptake will slowly change their pH value. Moreover, never pour standards back into the bottle, as they might have been contaminated!

How should I set up my instrument?

Not only is the right choice of calibration buffers essential, it is also very important to set up your instrument correctly. It is not only the pH measurement that is sensitive to temperature: pH buffers are as well, and their pH values change with temperature. This temperature dependency of the pH buffers is usually described with buffer tables.

Most instruments already include buffer table templates from various buffer manufacturers. Several tables are available that contain the information about the exact pH value at various temperatures for a certain buffer. These tables are unique for each manufacturer.

The instrument will then select the correct pH value according to the measured temperature. If no table is available for your buffer, make sure you enter the correct pH value manually or store the information in a custom buffer table. As seen here, a temperature change of only 5 °C can have an influence of > 0.04 pH units.

Therefore, selecting the manufacturer of your buffer solutions within the calibration parameters is important to obtain an accurate calibration.

Why do I have to measure the temperature?

You might wonder why you should always measure the temperature when you perform pH measurements. Most pH electrodes used for pH measurement have a temperature sensor directly included. This is because the pH value is temperature-dependent. Let me digress for a moment:

In 1889, the Nernst equation was established, describing the potential of an electrochemical cell as a function of the concentrations of the ions taking part in the reaction. The relationship between potential and pH [-log(H+)] is given by the formula:

$$ U = U_0 - \frac{2.303 \, R \, T}{n \, F} \cdot \mathrm{pH} $$

where U is the measured potential, U0 the standard electrode potential, R the universal gas constant, T the absolute temperature, n the charge (here, +1), and F the Faraday constant. The central term

$$ \frac{2.303 \, R \, T}{n \, F} $$

is called the Nernst slope and gives the mV change per pH unit; at 25 °C it amounts to 59.16 mV. As you can see, this term includes the absolute temperature, meaning the slope of your calibration is temperature-dependent. The higher the temperature, the steeper the slope.

Modern pH meters will correct the slope for this temperature variation when the calibration and measurement are not performed at the same temperature.

However, there is an effect that cannot be corrected by the instrument: samples do not have the same pH value at different temperatures! This can already be seen when looking at the example buffer table above. This temperature dependence is different for each sample. Therefore: Always measure your samples at the same temperature if you want to compare their pH values. Also be sure to carry out the pH calibration at the same temperature at which you are measuring your samples. This will greatly reduce the error of your pH measurement.

How do I perform my calibration?

First, prepare your electrode for calibration: open the refilling plug to ensure proper electrolyte outflow, rinse the electrode well with deionized water, and place the sensor into the buffer solution. An important note: both glass membrane and diaphragm must be covered with the buffer solution.

Additionally, ensure that you position the electrode reproducibly in the beaker, especially when stirring. Never place the sensor haphazardly in the beaker such that the glass membrane touches the beaker wall; this can cause scratches on the glass membrane, leading to erroneous results.

Do you even have to stir at all? No, you do not! However, as the measured potential can depend on the stirring speed, make sure that you always use the same stirring speed for all buffer solutions as well as for the calibration and subsequent measurements. Also, make sure that you do not stir so strongly that a vortex forms, and avoid any splashing of the solution.

Now, you can start your calibration. Most instruments decide autonomously when the reading is stable by monitoring the drift (mV change per minute). Sometimes it is also possible to stop the buffer measurement after a fixed time interval. However, this requires enough time for the electrode to reach a stable potential as otherwise the calibration will be biased.

Between buffer solutions, rinse the electrode with deionized water. Never dry the electrode afterwards with a tissue, paper towel, or cloth! This can lead to electrostatic charges on the electrode or even scratches on the glass membrane. Both will lead to longer response times, and the latter to irreversible damage.

What do «slope» and «offset» mean?

Once you’ve finished the calibration, the instrument will display the calibration result, which usually consists of a slope and an offset value. In this section, I want to explain their meaning.

The slope is normally given in % and is calculated by dividing the measured slope of the calibration by the theoretical slope (Nernst slope), which equals 59.16 mV per pH unit at 25 °C:

$$ \text{slope}\,(\%) = \frac{\text{measured slope}}{\text{theoretical slope}} \cdot 100 $$

This is done in order to be able to correct the slope for temperature differences between calibration and measurement.

The second parameter that is evaluated is the pH(0), which is the pH value measured at 0 mV. In an ideal case, 0 mV corresponds to a pH value of 7. However, reality usually does not stick to the ideal case. Sometimes, the offset potential (Uoff) is also given, which corresponds to the potential at pH 7.

After calibration, always check the slope and the pH(0). The slope should fall between 95 and 103% and the pH(0) should lie between pH 6.8 and 7.2 (Uoff within ± 15 mV).

If you would like to get more information about your pH electrode, you can either perform a pH electrode test, which is implemented in some instruments from Metrohm, or a test according to Metrohm application bulletin AB-188.

If the pH(0) is outside the recommended range, this can be caused by a contaminated electrolyte or your probe may require a general cleaning.

If the slope is lower than 95%, this can be related to expired or contaminated buffer solutions, so always use fresh buffers to rule this cause out. However, old and sluggish electrodes can also exhibit slopes outside of the limits.

If the slope is still too low even with fresh buffer, or the pH(0) is outside the recommended range after cleaning and subsequent reconditioning, it is time to replace the electrode.

To summarize

  • Select the calibration frequency and buffer types according to your samples.
  • Make sure that you always use fresh, high quality, and certified buffers as your calibration can only be as good as your buffers.
  • Set up your instrument correctly and use a fixed electrode position for the best reproducibility.
  • Measure the temperature for calibration and subsequent measurement. Moreover, only compare pH values of samples measured at the same temperature.
  • After calibration, check that your data for slope and pH(0) are within the optimal limits.

Would you like to learn even more about pH measurement? Come to our website and check out our informative webinars!

Metrohm webinars

Available below for pH measurement

Post written by Dr. Sabrina Gschwind, Jr. PM Titration (Sensors) at Metrohm International Headquarters, Herisau, Switzerland.

Moisture Analysis – Karl Fischer Titration, NIRS, or both?

In addition to the analysis of the pH value, weighing, and acid-base titration, measurement of water content is one of the most common determinations in laboratories worldwide. Moisture determination is important for nearly every industry, e.g., for lubricants, food and feed, and pharmaceuticals.

Figure 1. Water drops in a spider web

For lubricants, the water concentration is important to know because excess moisture accelerates wear and tear of the machinery. For food and feed, the moisture content must be within a narrow range so that the food neither tastes dry or stale nor provides a breeding ground for bacteria and fungi, resulting in spoilage. For pharmaceuticals, the water content in solid dosage forms (tablets) and lyophilized products is monitored closely. For the latter, regulations state that the moisture content must be below 2%.

Karl Fischer Titration

Karl Fischer (KF) titration for water determination was introduced back in the 1930s, and to this day it remains one of the most tried and trusted methods. It is fast and highly selective, which means that water, and only water, is determined. KF titration is based on the following two reactions.
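In their commonly cited form, with RN standing for the base (e.g., imidazole):

CH3OH + SO2 + RN → [RNH]SO3CH3
[RNH]SO3CH3 + I2 + H2O + 2 RN → [RNH]SO4CH3 + 2 [RNH]I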

In the first reaction, methanol and sulfur dioxide react to form the respective ester. Upon addition of iodine, the ester is oxidized to the sulfate species in a water-consuming reaction. The reaction finishes when no water is left.

Figure 2. Manual sample injection for volumetric KF Titration

KF titration can be used for the determination of the water content in all sample types: liquids, solids, slurries, or even gases. For concentrations between 0.1% and 100%, volumetric KF titration is the method of choice, whereas for lower moisture content between 0.001% and 1%, coulometric KF titration is recommended.

Depending on the sample type, its water content, and its solubility in the KF reagents, the sample can either be added directly to the titration vessel, or would first need to be dissolved in a suitable solvent. Suitable solvents are those which do not react with the KF reagents — therefore aldehydes and ketones are ruled out. In case the sample is dissolved in a solvent, a blank correction with the pure solvent also needs to be performed. For the measurement, the sample is injected directly into the titration vessel using a syringe and needle (Fig. 2). The endpoint is detected by a polarized double Pt pin electrode, and from this the water concentration is directly calculated.

Insoluble or hygroscopic samples can be analyzed using the gas extraction technique with a KF oven. Here, the sample is sealed in a small vial, and the water is evaporated by heating and subsequently carried to the titration cell.

Figure 3. Fully automated KF Titration with the Metrohm 874 KF Oven Sample Processor

For more information, download our free Application Bulletins: AB-077 for volumetric Karl Fischer titration and AB-137 for coulometric Karl Fischer analysis.

If you would like some deeper insight, download our free monograph: “Water determination by Karl Fischer Titration”. 

Near-infrared spectroscopy

Near-infrared spectroscopy (NIRS) is a technique that has been used for myriad applications in the areas of food and feed, polymers, and textiles since the 1980s. A decade later, other segments began using this technique as well, such as pharmaceuticals, personal care, and petroleum products.

NIRS detects overtones and combination bands of molecular vibrations. Among the typical vibrations in organic molecules for functional groups such as -CH, -NH, -SH, and -OH, it is the -OH moiety which is an especially strong near infrared absorber. That is also the reason why moisture quantification is one of the key applications of NIR spectroscopy.

For a further explanation, read our previous blog entry on this subject: Benefits of NIR spectroscopy: Part 2.

NIR spectroscopy is used for the quantification of water in solids, liquids, and slurries. The detection limit for moisture in solids is about 0.1%, whereas for liquids it is in the range of 0.02% (200 mg/L). However, in special cases (e.g., water in THF), moisture detection limits of 40–50 mg/L have been achieved.

This technique does not require any sample preparation, which means that samples can be used as-is. Solid samples are measured in high quality disposable sample vials, whereas liquids are measured in high quality disposable cuvettes. Figure 4 displays how the different samples are positioned on the analyzer for a measurement.

Detailed information about the NIRS technique has been described in our previous blog article: Benefits of NIR spectroscopy: Part 1.

Figure 4. Solid (left) and liquid (right) sample positioning for NIR measurements

NIRS is a secondary technique, meaning it can only be used for routine analysis for moisture quantification after a prediction model has been developed. This can be understood by an analogy to HPLC, for which measuring standards to create a calibration curve is among the initial steps. The same applies to NIRS: first, spectra with known moisture content must be measured and then a prediction model is created.

The development of prediction models has been described in detail in our previous blog article: Benefits of NIR spectroscopy: Part 3.

The schematic outline is shown in Figure 5.

Figure 5. Workflow for NIR Method implementation for moisture analysis

For creation of the calibration set, around 30–50 samples need to be measured with both NIRS and KF titration, and the values obtained from KF titration must be linked to the NIR spectra. The next steps are model development and validation (steps 2 and 3 in Figure 5), which are quite straightforward for moisture analysis. Water is a strong NIR absorber, and its peaks are always around 1900–2000 nm (combination band) and 1400–1550 nm (first overtone). This is shown in Figure 6 below.

Figure 6. NIR Spectra of moisturizing creams, showing the absorptions related to H2O at 1400–1550 nm and 1900–2000 nm

After creation and validation of the prediction model, near-infrared spectroscopy can be used for routine moisture determination of that substance. The results for moisture content will be obtained within 1 minute, without any sample preparation or use of chemicals. Also, the analyst does not need to be a chemist, as all they need to do is place a sample on the instrument and press start.

You can find even more information about moisture determination by near-infrared spectroscopy in polyamides, caprolactam, lyophilized products, fertilizers, lubricants, and ethanol/hydrocarbon blends below by downloading our free Application Notes.

Your choice for moisture measurements: KF Titration, NIRS, or both!

As summarized in Table 1, KF Titration and NIR Spectroscopy each have their advantages. KF Titration is a versatile method with a low level of detection. Its major advantage is that it will always work, no matter if you have a sample type that you measure regularly or whether it is a sample type that you encounter for the first time.

Table 1. Overview of characteristics of moisture determination via titration and NIR spectroscopy

NIR spectroscopy requires a method development process, meaning it is not suitable for sample types that always vary (e.g., different types of tablets, different types of oil). NIRS however is a very good method for sample types that are always identical, for example for moisture content in lyophilized products or for moisture content in chemicals, such as fertilizers.

For the implementation of a NIR moisture method, it is required that samples are measured with KF titration as the primary method for the model development. In addition, during the routine use of a NIR method, it is important to confirm once in a while (e.g., every 50th or every 100th sample) with KF Titration that the NIR model is still robust, and to ensure that the error has not increased. If a change is noticed, extra samples need to be added to the prediction model to cover the observed sample variation.

In conclusion, both KF Titration and NIR spectroscopy are powerful techniques for measuring moisture in an array of samples. Which technique to use depends on the application and the individual preference of the user.

For more information

Download our free whitepaper:

Karl Fischer titration and near-infrared spectroscopy in perfect synergy

Post written by Dr. Dave van Staveren (Head of Competence Center Spectroscopy), Dr. Christian Haider (Head of Competence Center Titration), and Iris Kalkman (Product Specialist Titration) at Metrohm International Headquarters, Herisau, Switzerland.

What to consider when standardizing titrant

If you perform titrations on a regular basis, then you’ve certainly heard about standardization of the titrant. When carrying out a standardization you determine the titer, which is a correction factor for your titrant concentration, as it is normally not exactly the value written on the reagent bottle label. In this blog entry, I want to give you some valuable information about why standardization is important and how to determine the titer.

Please note this blog entry will not deal with the standardization of Karl Fischer titrants.

What is the titer factor?

Titration is an absolute (or primary) method, meaning the result depends directly on the titrant concentration. It is therefore of utmost importance to know the exact concentration of the titrant you are using for your results to be accurate and reproducible by other analysts. This is why you need to carry out a standardization.

Usually the difference between the nominal concentration (e.g., 0.1 mol/L) and the absolute concentration (e.g., 0.0998 mol/L) is given by a dimensionless factor (e.g., 0.998). The absolute concentration is obtained by multiplying the nominal concentration by this factor, which is usually called «titer». In some cases, it is the absolute concentration itself which is called «titer».
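In formula form, with cabs the absolute concentration, cnom the nominal concentration, and f the titer factor:

$$ c_{abs} = f \cdot c_{nom}, \qquad 0.0998\ \mathrm{mol/L} = 0.998 \times 0.1\ \mathrm{mol/L} $$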

Over the following sections, I will present the essentials of standardization, regardless of whether you use the word «titer» for the correction factor or for the absolute concentration.

Why should you standardize your titrant?

Knowing the exact titrant concentration is important for correct titration results. This is especially true for self-made titrants, but this is also an important step for commercially available titrants. Titrants can age over time, and thus their concentrations will change.

For example, alkaline titrants such as NaOH or KOH will absorb CO2 from ambient air, and iodine solutions will lose iodine through volatilization. Standardization therefore gives you more confidence in obtaining correct results for your titrations.

What can I do to prevent changes to the titer factor?

This depends on which titrant you use for the analysis. The easiest thing to consider is the bottle in which you plan to store your titrant. Some titrants are light-sensitive and should be stored in dark brown or opaque glass bottles. Others may react with glass and are best stored in plastic bottles.

Titrants best stored in brown glass bottles:

  • Iodine (I2)
  • Potassium permanganate (KMnO4)
  • Silver nitrate (AgNO3)

Titrants best stored in plastic bottles:

  • Aqueous bases (e.g., NaOH, KOH)
  • Non-aqueous bases (e.g., TBAOH)

Another preventive measure is the use of absorber or adsorber material filled into a tube which is connected to the air inlet of your buret. This is especially important for titrants which react with CO2 or water from the air.

Use soda lime to absorb CO2 and a molecular sieve for moisture. Even if your titrant is not sensitive, it is still recommended to fill the tube with cotton, which will prevent the entry of dust into the bottle.

The image shown here depicts an example of an absorber tube filled with soda lime attached to a buret for NaOH. This prevents the solution from losing strength due to carbon dioxide in the ambient air.

Titrants for which soda lime for CO2 absorption should be used:

  • Aqueous and non-aqueous bases (e.g., NaOH, KOH, TBAOH)
  • Sodium thiosulfate (Na2S2O3)

Titrants for which molecular sieve for moisture adsorption should be used:

  • Perchloric acid (HClO4) in glacial acetic acid

How often should I standardize my titrant?

This question cannot be answered with a general number. Frequency of titrant standardization depends on multiple factors, such as titrant stability, the number of titrations per day/week/month, and the required accuracy for your results.

You should always carry out a standardization when you open a titrant bottle for the first time.

The following overview is a guideline to help you select the frequency for standardizing your titrants. If you are unsure about the stability of your titrant, carry out frequent standardizations (e.g., daily) over a longer period of time until you can establish a standardization frequency based on the obtained titer data. This data will show you how much your titer changes over time, and you can then select a suitable determination frequency. Newer software offers the possibility of monitoring your titer over time, which will also help with this task.

Stable titrants:

  • Aqueous acids (e.g., HCl, H2SO4)
  • EDTA
  • Silver nitrate (AgNO3)
  • Sodium thiosulfate (Na2S2O3)
  • Cationic and anionic surfactants

Unstable titrants:

  • Aqueous and non-aqueous bases (e.g., NaOH, KOH, TBAOH)
  • Non-aqueous acids (e.g., HClO4)
  • Iodine (I2)
  • Potassium permanganate (KMnO4)

How to determine the titer

The titer is determined using a primary standard or an already standardized titrant. In either case, be sure to carry out the standardization at the same temperature as the sample titration, as the temperature influences the density of the titrant. Titrants expand at higher temperatures, and thus their titer factor decreases.

Describing the titer determination for every titrant would be beyond the scope of this blog. I will therefore only describe the titer determination procedure here for both cases – using a primary standard or an already standardized titrant – in a general way. If you want to know more about which primary standard is recommended for which titrant, then check out our corresponding Application Bulletin. 

Download the free Metrohm Application Bulletin here:

If you are using a primary standard, dry it at a suitable temperature for a few hours. Allow it to cool down in a desiccator until the substance reaches room temperature, then weigh out an appropriate amount of dried standard for the titration. The weight of the standard depends on the titrant concentration and on the buret volume. I recommend a standard weight which leads to an equivalence point at approximately 50% of the buret volume. If this weight is less than 100 mg, I recommend preparing a standard solution from your primary standard, as otherwise the weighing error becomes too large.

After you have weighed out your standard or pipetted your standard solution into a beaker, add enough diluent (solvent or water) to immerse the measuring and reference part of the sensor, and start the titration.

If you are using an already standardized titrant, the procedure is a bit simpler. Don’t forget, this titrant should be freshly standardized with a primary standard. Accurately pipette an appropriate amount of standardized titrant into a titration beaker. Add enough diluent (solvent or water) to immerse the measuring and reference part of the sensor, and start the titration.

Shifting gears: What are primary standards?

Primary standards fulfill several criteria which make them ideal for the standardization of titrants:

  • High purity and stability
  • Low hygroscopicity (to minimize weight changes)
  • High molecular weight (to minimize weighing errors)

Additionally, they are traceable to standard reference materials (e.g., NIST traceable).

How to calculate the titer factor

After you’ve finished the titrations for the standardization, now it’s time to calculate the titer factor. Again, the formula for the calculation differs slightly depending on whether you have used a solid, dry primary standard or a standard solution / standardized titrant.

 

For a solid, dry primary standard use the following formula:
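Taking s as the moles of titrant consumed per mole of standard (the reading assumed here), a formula consistent with the units listed below is:

$$ f = \frac{s \cdot m_{STD}}{M_{STD} \cdot V_{EP} \cdot c_{Titrant}} $$

The units check out: mSTD in mg divided by MSTD in g/mol gives mmol, matching VEP · cTitrant in mmol, so f is dimensionless.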

mSTD: Weight of primary standard in mg
MSTD: Molecular weight of primary standard in g/mol
VEP: Volume at the equivalence point in mL
cTitrant: Nominal titrant concentration in mol/L
s: Stoichiometric factor (unitless)

For a standard solution / standardized titrant use the following formula:
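With the same convention for s (moles of titrant per mole of standardized titrant, as assumed above):

$$ f = \frac{s \cdot V_{STD} \cdot c_{STD}}{V_{EP} \cdot c_{Titrant}} $$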

VSTD: Volume of standard solution / standardized titrant in mL
cSTD: Absolute concentration of standard solution / standardized titrant in mol/L
VEP: Volume at the equivalence point in mL
cTitrant: Nominal titrant concentration in mol/L
s: Stoichiometric factor (unitless)

Modern titrators are capable of automatically calculating the titer factor and saving the result together with other relevant titrant data such as concentration and sample name, further improving the data security in your lab.

To summarize:

Standardization of the titrant is not so difficult, just keep in mind to:

  • Carry out standardization regularly, even for ready-made titrants, to improve the accuracy of your results.
  • Use dry primary standards or freshly standardized titrants.
  • Carry out the standardization at the same temperature as the sample titration.

If you want to learn more about how you can improve your titration, have a look at our blog entry “How to transfer manual titration to autotitration”, where you can find practical tips about how to improve your titrations.

Want to learn more?

Download our free monograph:

Practical aspects of modern titration

Post written by Lucia Meier, Product Specialist Titration at Metrohm International Headquarters, Herisau, Switzerland.

Improving your conductivity measurements

Have you ever performed a conductivity measurement and obtained incorrect results? There are several possible reasons for this. In this post, I want to show you how you may overcome some of these issues.

By itself, conductivity measurement is performed quite easily: one takes a conductivity cell and a suitable measuring device, inserts the conductivity cell into the sample solution, and reads the value given. However, there are some challenges, such as choosing the right sensor, the temperature dependency of conductivity, or CO2 uptake, which can falsify your results.


So many measuring cells – which one to use?

The first and most important question about conductivity measurement is: which sensor is the most suitable for your application? The measuring range is dependent on the cell constant of your conductivity cell, and therefore this choice requires a few considerations:

  • What is the expected conductivity of my sample?
  • Do I have a broad range of conductivities within my samples?
  • What is the amount of sample I have available for measurement?

There are different types of conductivity measuring cells available on the market. Two-electrode cells have the advantage that they can be constructed with a smaller geometry and are more accurate at low conductivities. Other types of measuring cells (e.g., four-electrode cells) are not influenced by polarization, have a larger linear range, and are less sensitive towards contamination.

Figure 1 below shows you the wide application range of sensors with different cell constants. As a general rule: sensors with a low cell constant are used for samples with low conductivity, and sensors with a high cell constant for samples with high conductivity.

Figure 1. Illustration of the range of applications for different conductometric measuring cells offered by Metrohm (click to enlarge).

To get more information, check out our Electrode finder and select «conductivity measurement».

Determination of the cell constant

Each conductivity cell has its own cell constant, which therefore needs to be determined regularly. The nominal cell constant depends on the effective area of the platinum contacts and the distance between the two surfaces:

$$ K = \frac{d_{electrodes}}{A_{eff}} $$

K: Cell constant in cm-1
Aeff: Effective area of the electrodes in cm2
delectrodes: Distance between the electrodes in cm

However, no sensor is perfect, and the effective cell constant does not exactly agree with the nominal one. Thus, the effective cell constant is determined experimentally by measuring a suitable standard: the measured conductance is compared to the standard's theoretical conductivity:

$$ K = \frac{\gamma_{theor.}}{G_{meas}} $$

K: Cell constant in cm-1
γtheor.: Theoretical conductivity of the standard at the reference temperature in S/cm
Gmeas: Measured conductance in S

As the measuring cell ages with use, its properties can change, and with them its cell constant. Therefore, it is necessary to check the cell constant with a standard from time to time and to redetermine it if necessary.

Temperature dependency of the conductivity

Have you ever asked yourself why conductivity values in the literature normally refer to 20 °C or 25 °C? The reason is that conductivity itself is very temperature-dependent. It is difficult to compare conductivity values measured at different temperatures, as the deviation is approximately 2%/°C. Therefore, make sure you either measure in a thermostated vessel or use a temperature compensation coefficient.

What is a temperature compensation coefficient anyway?

The temperature compensation coefficient is a correction factor which converts your measured value at a certain temperature to the defined reference temperature. The factor itself depends on the sample matrix and is different for each sample.

For example, if you measure a value of 10 mS/cm at 24 °C, then the device will correct your value with a linear correction of 2%/°C to 10.2 mS/cm to the reference temperature of 25 °C. This feature of linear temperature compensation is very common and is implemented in most devices.
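One common way to express this linear compensation, with α the temperature coefficient in %/°C and Tref the reference temperature:

$$ \gamma(T_{ref}) = \frac{\gamma(T)}{1 + \dfrac{\alpha}{100} \cdot (T - T_{ref})} $$

For the example above: γ(25 °C) = 10 / (1 + 0.02 · (24 − 25)) = 10 / 0.98 ≈ 10.2 mS/cm.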

However, the temperature compensation is not linear for every sample. If the linear temperature compensation is not accurate enough, you can instead record a temperature compensation function: measure the conductivity of your sample at different temperatures, and afterwards fit a polynomial function through the measured points. This polynomial function will then be used for future temperature corrections, yielding more accurate results.

And… what about the conductivity standard?

Figure 2. The blue curve shows the actual conductivity (mS/cm) and the orange line is a linear temperature compensation. The temperature compensation here varies from 2.39–4.04 %/°C.

Which standard do I have to choose?

In contrast to pH calibration, the conductivity cell only requires a one-point calibration. For this purpose, you need to choose a suitable standard, which has a conductivity value in the same range as your sample and is inert towards external influences.

As an example, consider a sample of deionized water, which has an expected conductivity of approximately 1 µS/cm. If you calibrate the conductivity cell with a higher conductivity standard around 12.88 mS/cm, this will lead to an enormous error in your measured sample value.

Most conductivity cells will not be suitable for both ranges. For such low conductivities (1 µS/cm), it is better to use a 100 µS/cm conductivity standard. While lower conductivity standards are available, their proper handling becomes more difficult, as the influence of CO2 increases at such low conductivities.

Last but not least: To stir or not to stir?

This is a controversial question, as stirring has both advantages and disadvantages. Stirring enables your sample solution to be homogeneous, but it might also enhance the carbon dioxide uptake from ambient air.

Either way, whether you choose to stir or not, just make sure that the same procedure is applied each time, both for the determination of the cell constant and for the measurement of your sample's conductivity. Personally, I recommend stirring slightly, because a stable value is reached faster and the effect of carbon dioxide uptake is almost negligible.

To summarize: it is quite easy to perform conductometric measurements, but some important points should be considered before starting the analysis, such as the temperature dependency, the choice of a suitable conductometric measuring cell, and the choice of calibration standard. Otherwise, false results may be obtained.

Curious about conductivity measurements?

Read through our free comprehensive monograph:

Conductometry – Conductivity measurement

Additionally, you can download our free two-part Application Bulletin AB-102 – Conductometry below:   

Post written by Iris Kalkman, Product Specialist Titration at Metrohm International Headquarters, Herisau, Switzerland.