
Photometric complexometric titration

…and how to choose the right wavelength for indication

Complexometric titration dates back to 1945, when Gerold Schwarzenbach observed that aminocarboxylic acids form stable complexes with metal ions, and that the course of such complexation reactions can be followed by a color change upon addition of an indicator. From the 1950s on, this technique gained popularity for the determination of water hardness. It soon became clear that aside from magnesium and calcium, other metal ions could also be titrated in this way. The use of masking agents and new indicators made it possible not only to determine the total amount of metal ions present in solution, but also to separate and analyze them individually. A new titration type was born: complexometric titration.

Dear readers, have you ever performed a complexometric titration? I assume quite a lot of you will answer “yes”, as it is one of the most frequently used types of titration. I also assume that many of you have struggled with the detection of the endpoint and with the titration itself. In contrast to other types of titration, boundary conditions such as pH and reaction time play an even bigger role in complexometry, since the complex formation constant is strongly pH-dependent and the reaction might be slow. This article presents the most common challenges of complexometric titrations and how to overcome them.

For a complexometric titration analysis, it is very important to know the qualitative composition of your sample. This determines the indicator, the complexing agent, and the masking agent you need to use.

Complexometric titration and the complex formation constant

Complexometric reactions always involve a metal ion which reacts with a ligand to form a metal complex. Figure 1 shows an example of such a reaction between a metal ion Mn+ and ethylenediaminetetraacetic acid (EDTA). EDTA is the most commonly used titrant for complexometric titrations and reacts in a stoichiometric ratio of 1:1. As shown on the right side of Figure 1, EDTA can form six coordination bonds; in other words, EDTA has a denticity of six. The more coordination bonds a ligand can form, the more stable the resulting complex.

Figure 1. Example complexation reaction of a metal M with charge n+ with EDTA.

As with most chemical reactions, this reaction is an equilibrium. Depending on the metal ion, the equilibrium can lie more to the left (reactants) or to the right (products) of the equation. For a titration, it is mandatory that the equilibrium lies far to the right (complex formation). The equilibrium constant is defined as shown in Equation 1.

Equation 1. Equilibrium constant, where c = concentration of the individual substances.

Equation 1 also illustrates why it is so important to keep the pH value constant. If the titrant is, for example, H2Na2EDTA, two protons are released per complex formed, so the hydronium ion concentration enters the equilibrium expression as a squared term. Changing the pH of the solution therefore changes the effective complex formation constant, which influences your titration.
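
Since Equation 1 appears only as a figure, here is a sketch of the expressions it most likely contains, written for a generic metal ion Mn+; the second form assumes the titrant H2Na2EDTA mentioned above, which releases two protons per complex formed.

```latex
% Complexation with the fully deprotonated EDTA anion Y^4-:
%   M^(n+) + Y^(4-)  <=>  MY^(n-4)
\[
K_c = \frac{c(\mathrm{MY}^{n-4})}{c(\mathrm{M}^{n+})\,c(\mathrm{Y}^{4-})}
\]

% With the titrant H2Na2EDTA (i.e., H2Y^2-), two protons are released,
% so the hydronium ion concentration enters as a squared term:
%   M^(n+) + H2Y^(2-)  <=>  MY^(n-4) + 2 H+
\[
K_c' = \frac{c(\mathrm{MY}^{n-4})\,c(\mathrm{H^{+}})^{2}}{c(\mathrm{M}^{n+})\,c(\mathrm{H_2Y}^{2-})}
\]
```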

Generally, the higher the concentration of the complex in comparison to the free metal and ligand concentrations, the higher the Kc and log(Kc) values. Some log(Kc) values are shown later in Table 2 and can give you a hint as to which titrant is most suitable for your titration.

Complexometric reactions are often conducted as a photometric titration. This means an indicator is added to the solution so that a color change at the endpoint can be observed. 

Color Indicator

As in acid–base titration, the color indicator is a molecule which indicates, by a change in the solution color, when the end of the titration (the endpoint, or EP) is reached. In acid–base titration, the color change is induced by a change in pH, whereas in complexometric titration it is induced by the presence or absence of free metal ions. Table 1 gives you an overview of different color indicators and the metals which can be determined with them.

Table 1. List of color indicators for different kinds of metal ions.

It is very important to choose the right indicator, especially when analyzing metal mixtures. By choosing an appropriate indicator, a separation of the metal ions can already take place.

As an example, consider a mixture of Zn2+ and Mg2+ which is titrated with EDTA. The log(Kc) value with EDTA is 16.5 for the zinc ion and 8.8 for the magnesium ion. If we titrate this sample with PAN indicator, the indicator binds selectively to zinc, but not to magnesium. Because zinc also forms the much stronger complex with EDTA, the zinc ions react first with the titrant; at the endpoint, EDTA displaces the indicator from zinc, which produces the color change. In such a case, a separation of the two ions is possible. If such a separation cannot be achieved with the indicator alone, choosing a more suitable complexing agent might help you to separate the metal ions.
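
To put numbers on this example, the short Python sketch below converts the two log(Kc) values quoted above into a selectivity ratio; it is an order-of-magnitude illustration, not a full speciation model.

```python
# Selectivity estimate for the EDTA titration of a Zn2+ / Mg2+ mixture.
# log(Kc) values as quoted in the text above.
log_K_Zn_EDTA = 16.5
log_K_Mg_EDTA = 8.8

selectivity = 10 ** (log_K_Zn_EDTA - log_K_Mg_EDTA)

print(f"Zn2+ is bound about {selectivity:.1e} times more strongly than Mg2+")
# -> roughly 5e7, which is why zinc is complexed quantitatively
#    before magnesium starts to react with the titrant
```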

Complexing agent

At the beginning of your titration, the metal ions are freely accessible. By adding the complexing agent (your titrant), the metal ions become bound. The prerequisite for this is that the complex formation constant of the metal with the complexing agent is higher than with the indicator. In most cases, this does not pose a problem. Some complexing agents are listed in Table 2. In general, ions with higher charges have higher complex formation constants.

However, what can you do if you are still not able to separate your metal ions sufficiently and determine them individually? The answer to that is: use a masking agent to make the second metal ion “invisible” to the titrant.

Table 2. Complex formation constants log(Kc) of different complexing agents with various metal ions. The higher the number in the table, the higher the binding strength between metal ion and ligand. As an example: aluminum binds more strongly to DCTA than to EDTA.

Masking agents

In general, masking agents are substances which form a stronger complex with a given metal ion than the complexing agent (titrant) does. Metal ions which react with the masking agent can no longer be titrated, and therefore the metal ion of interest (which does not react with the masking agent) can be determined separately in the mixture using the complexing agent. Table 3 shows a small selection of common masking agents; many more are available for the separation of metal ions.

Table 3. A selection of different masking agents.

Complexometric titration is still often carried out manually, as the color change is easily visible. However, this leads to several problems. My previous post “Why your titration results aren’t reproducible: The main error sources in manual titration” explains the many challenges of manual titration.

Subjective color perception and inconsistent volume readings lead to systematic errors. These can be prevented by choosing a proper electrode or by using an optical sensor that reliably indicates the color change. Such an optical sensor changes its signal depending on the amount of light reaching its photodetector. It is usually the easiest choice when switching from manual to automated titration, because it normally does not require any changes to your SOP.

Which wavelength is optimal for indication?

Figure 2. The Optrode from Metrohm can detect changes in absorbance at 470, 502, 520, 574, 590, 610, 640, and 660 nm.

If you choose to automate your complexometric titration and indicate the color change with a proper sensor, you should use the Optrode. This sensor offers eight different wavelengths enabling its use with many different indicators.

Perhaps you’re asking yourself, “why do I need eight different wavelengths?” The answer is simple. The sensor monitors the absorbance of the solution at one selected wavelength. A color change is detected best at a wavelength that is strongly absorbed by the solution either before or after the endpoint. For example, for a color change from blue to yellow, it is recommended to select 574 nm (yellow light), because yellow is the complementary color of blue and is therefore strongly absorbed by the blue solution but hardly at all by the yellow one. For even more accuracy, the optimal wavelength can be determined from the UV/VIS spectra of the indicator before and after complexation.

Figure 3. Left: spectra of complexed (purple) and uncomplexed (blue) Eriochrome Black T are shown. Right: the difference in absorption of the two spectra is shown.

On the left side of Figure 3 is a graph with the spectra of complexed and uncomplexed Eriochrome Black T. The uncomplexed solution has a blue tint, whereas the complexed one is more violet. On the right, another graph shows the difference of both spectra. According to this graph, the maximum difference in absorption is obtained at a wavelength of 660 nm. Therefore, it is recommended to use this wavelength for the detection of the color change.
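
If you export the indicator spectra before and after complexation, a few lines of code can pick the most sensitive Optrode wavelength for you. The Python sketch below is illustrative only: the spectra are made-up Gaussian curves roughly mimicking Eriochrome Black T, and only the list of available wavelengths is taken from Figure 2.

```python
import numpy as np

# Wavelengths offered by the Optrode (see Figure 2), in nm
OPTRODE_WAVELENGTHS = [470, 502, 520, 574, 590, 610, 640, 660]

def best_wavelength(wl, abs_uncomplexed, abs_complexed):
    """Return the Optrode wavelength with the largest absorbance change.

    wl              : wavelengths (nm) of the recorded spectra
    abs_uncomplexed : absorbance of the indicator before complexation
    abs_complexed   : absorbance of the indicator after complexation
    """
    diff = np.abs(np.asarray(abs_complexed) - np.asarray(abs_uncomplexed))
    diff_at_optrode = np.interp(OPTRODE_WAVELENGTHS, wl, diff)
    return OPTRODE_WAVELENGTHS[int(np.argmax(diff_at_optrode))]

# Made-up Gaussian spectra roughly mimicking Eriochrome Black T
wl = np.arange(400, 701, 10)
uncomplexed = np.exp(-((wl - 650) / 60.0) ** 2)   # blue (free indicator)
complexed   = np.exp(-((wl - 545) / 60.0) ** 2)   # violet (metal complex)

print(best_wavelength(wl, uncomplexed, complexed))
# -> 660 nm for this toy data, in line with the recommendation above
```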

For more examples of indicators and their spectra, check out our free monograph “Complexometric (Chelometric) Titrations”.

Challenges when performing complexometric titrations

As mentioned in the introduction, complexometric titrations are a bit more demanding compared to other types of titration.

First, the indicators used are normally also pH indicators, and most complexation reactions are pH-dependent as well. For example, the titration of iron(III) is performed under acidic conditions, while the complexation of calcium can only take place under alkaline conditions. The pH therefore has to be kept constant while performing complexometric titrations. Otherwise, the color change might not be visible or might be indicated at the wrong point, or the complexation might not take place at all.

Second, complexation reactions do not always occur instantaneously and might take some time, a behavior also known from, e.g., precipitation reactions. As an example, the complexation of aluminum with EDTA can take up to ten minutes to complete. It is therefore important to keep the reaction time in mind.

Perhaps a back-titration needs to be performed in such a case to increase accuracy and precision. Please have a look at our blog post “What to consider during back-titration” for more information about this topic.

Summary

Complexometric titrations are easy to perform as long as some important points are kept in mind:

  • If more than one type of metal is present in your sample, you might need to consider a masking agent or a more suitable pH range.
  • The complexation reaction might be slow. In this case, a back-titration or a titration at elevated temperature might be a better option.
  • Make sure that you maintain a stable pH during your titration. This can be achieved by addition of an adequate buffer solution.
  • Switching from manual to automated titration will increase accuracy and prevent common systematic errors. When using an optical sensor, make sure that you choose the right wavelength for the detection of the endpoint.

For a general overview of complexometric titration, have a look at Metrohm Application Bulletin AB-101 – Complexometric titrations with the Cu ISE.

For more detailed information

Download our free monograph:

Complexometric (Chelatometric) Titrations

Post written by Iris Kalkman, Product Specialist Titration at Metrohm International Headquarters, Herisau, Switzerland.

Moisture Analysis – Karl Fischer Titration, NIRS, or both?

Along with pH measurement, weighing, and acid-base titration, the measurement of water content is one of the most common determinations in laboratories worldwide. Moisture determination is important for nearly every industry, e.g., for lubricants, food and feed, and pharmaceuticals.

Figure 1. Water drops in a spider web

For lubricants, the water concentration is very important to know because excess moisture accelerates wear and tear of the machinery. For food and feed, the moisture content must lie within a narrow range so that the food neither tastes dry or stale nor provides a breeding ground for bacteria and fungi, which would result in spoilage. For pharmaceuticals, the water content in solid dosage forms (tablets) and lyophilized products is monitored closely. For the latter, the regulations state that the moisture content needs to be below 2%.

Karl Fischer Titration

Karl Fischer (KF) Titration for water determination was introduced back in the 1930s, and to this day it remains one of the most tried and trusted methods. It is a fast and highly selective method, which means that water, and only water, is determined. KF titration is based on the following two redox reactions.
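
The reaction scheme itself appears as an image in the original post; sketched below is one common way of writing it, where RN stands for the base (e.g., imidazole) contained in the commercial reagent. Exact formulations vary by supplier, so treat this as a generic scheme rather than the composition of any specific reagent.

```latex
% Step 1: methanol and sulfur dioxide form a methyl sulfite ester,
%         buffered by the base RN
\[
\mathrm{CH_3OH + SO_2 + RN \;\rightleftharpoons\; [RNH]^{+}[SO_3CH_3]^{-}}
\]

% Step 2: iodine oxidizes the methyl sulfite to methyl sulfate;
%         exactly one mole of water is consumed per mole of iodine
\[
\mathrm{[RNH]^{+}[SO_3CH_3]^{-} + I_2 + H_2O + 2\,RN \;\rightarrow\;
        [RNH]^{+}[SO_4CH_3]^{-} + 2\,[RNH]^{+}I^{-}}
\]
```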

In the first reaction, methanol and sulfur dioxide react to form the respective ester. Upon addition of iodine, the ester is oxidized to the sulfate species in a water-consuming reaction. The reaction finishes when no water is left.

Figure 2. Manual sample injection for volumetric KF Titration

KF titration can be used for the determination of the water content in all sample types: liquids, solids, slurries, or even gases. For concentrations between 0.1% and 100%, volumetric KF titration is the method of choice, whereas for lower moisture content between 0.001% and 1%, coulometric KF titration is recommended.

Depending on the sample type, its water content, and its solubility in the KF reagents, the sample can either be added directly to the titration vessel or must first be dissolved in a suitable solvent. Suitable solvents are those which do not react with the KF reagents; aldehydes and ketones are therefore ruled out. If the sample is dissolved in a solvent, a blank correction with the pure solvent also needs to be performed. For the measurement, the sample is injected directly into the titration vessel using a syringe and needle (Fig. 2). The endpoint is detected by a polarized double Pt-pin electrode, and the water concentration is calculated from the consumed titrant.
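
To illustrate how the result is obtained from the titration data, here is a small Python sketch of the underlying calculation. The function name and the example numbers (titer, blank, sample weight) are assumptions for illustration only; use the titer and blank value determined on your own system.

```python
def kf_water_content_percent(v_titrant_ml, titer_mg_per_ml,
                             sample_mass_g, v_blank_ml=0.0):
    """Water content (% w/w) from a volumetric KF titration.

    v_titrant_ml    : titrant consumed for the sample (mL)
    titer_mg_per_ml : water equivalent (titer) of the titrant (mg H2O per mL)
    sample_mass_g   : sample weight (g)
    v_blank_ml      : titrant consumed for the pure solvent blank (mL)
    """
    water_mg = (v_titrant_ml - v_blank_ml) * titer_mg_per_ml
    return water_mg / (sample_mass_g * 1000.0) * 100.0

# Example: 2.35 mL of titrant with a titer of 5.02 mg/mL for a 1.50 g sample,
# corrected by a 0.05 mL solvent blank -> about 0.77 % water
print(round(kf_water_content_percent(2.35, 5.02, 1.50, 0.05), 2))
```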

Insoluble or hygroscopic samples can be analyzed using the gas extraction technique with a KF Oven. Here, the sample is sealed in a small vial, and the water is evaporated by heating and subsequently carried to the titration cell.

Figure 3. Fully automated KF Titration with the Metrohm 874 KF Oven Sample Processor

For more information, download our free Application Bulletins: AB-077 for volumetric Karl Fischer titration and AB-137 for coulometric Karl Fischer analysis.

If you would like some deeper insight, download our free monograph: “Water determination by Karl Fischer Titration”. 

Near-infrared spectroscopy

Near-infrared spectroscopy (NIRS) is a technique that has been used for myriad applications in the areas of food and feed, polymers, and textiles since the 1980’s. A decade later, other segments began using this technique, such as for pharmaceutical, personal care, and petroleum products.

NIRS detects overtones and combination bands of molecular vibrations. Among the typical vibrations in organic molecules for functional groups such as -CH, -NH, -SH, and -OH, it is the -OH moiety which is an especially strong near infrared absorber. That is also the reason why moisture quantification is one of the key applications of NIR spectroscopy.

For a further explanation, read our previous blog entry on this subject: Benefits of NIR spectroscopy: Part 2.

NIR spectroscopy is used for the quantification of water in solids, liquids, and slurries. The detection limit for moisture in solids is about 0.1%, whereas for liquids it is in the range of 0.02% (200 mg/L). However, in special cases (e.g., water in THF), moisture detection limits of 40–50 mg/L have been achieved.

This technique does not require any sample preparation, which means that samples can be used as-is. Solid samples are measured in high quality disposable sample vials, whereas liquids are measured in high quality disposable cuvettes. Figure 4 displays how the different samples are positioned on the analyzer for a measurement.

Detailed information about the NIRS technique has been described in our previous blog article: Benefits of NIR spectroscopy: Part 1.

Figure 4. Solid (left) and liquid (right) sample positioning for NIR measurements

NIRS is a secondary technique, meaning it can only be used for routine moisture quantification after a prediction model has been developed. This can be understood by an analogy to HPLC, where measuring standards to create a calibration curve is among the first steps. The same applies to NIRS: first, spectra of samples with known moisture content must be measured, and then a prediction model is created from them.

The development of prediction models has been described in detail in our previous blog article: Benefits of NIR spectroscopy: Part 3.

The schematic outline is shown in Figure 5.

Figure 5. Workflow for NIR Method implementation for moisture analysis

For creation of the calibration set, around 30–50 samples need to be measured with both NIRS and KF titration, and the values obtained from KF titration must be linked to the NIR spectra. The next steps are model development and validation (steps 2 and 3 in Figure 5), which are quite straightforward for moisture analysis. Water is a strong NIR absorber, and its peaks are always around 1900–2000 nm (combination band) and 1400–1550 nm (first overtone). This is shown in Figure 6 below.

Figure 6. NIR Spectra of moisturizing creams, showing the absorptions related to H2O at 1400–1550 nm and 1900–2000 nm
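
Such a prediction model is typically built with a chemometric regression method such as partial least squares (PLS). The Python sketch below uses scikit-learn with randomly generated placeholder data; in practice, X would hold your exported NIR spectra, y the corresponding KF reference values, and the number of PLS components would be optimized during validation.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# Placeholder calibration data: in practice, X holds the exported NIR spectra
# (one row per sample) and y the corresponding KF moisture values in % w/w.
rng = np.random.default_rng(0)
X = rng.random((40, 700))              # 40 samples x 700 wavelength points
y = rng.uniform(0.5, 3.0, size=40)     # reference values from KF titration

model = PLSRegression(n_components=10) # number of components to be optimized
model.fit(X, y)

# Cross-validation gives a first estimate of the prediction error (RMSECV)
y_cv = cross_val_predict(model, X, y, cv=10)
rmsecv = float(np.sqrt(np.mean((np.ravel(y) - np.ravel(y_cv)) ** 2)))
print(f"RMSECV: {rmsecv:.3f} % moisture")

# Routine use: predict the moisture of a new sample from its spectrum
new_spectrum = rng.random((1, 700))
prediction = float(np.ravel(model.predict(new_spectrum))[0])
print(f"Predicted moisture: {prediction:.2f} %")
```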

After creation and validation of the prediction model, near-infrared spectroscopy can be used for routine moisture determination of that substance. The results for moisture content will be obtained within 1 minute, without any sample preparation or use of chemicals. Also, the analyst does not need to be a chemist, as all they need to do is place a sample on the instrument and press start.

You can find even more information about moisture determination by near-infrared spectroscopy in polyamides, caprolactam, lyophilized products, fertilizers, lubricants, and ethanol/hydrocarbon blends below by downloading our free Application Notes.

Your choice for moisture measurements: KF Titration, NIRS, or both!

As summarized in Table 1, KF Titration and NIR spectroscopy each have their advantages. KF Titration is a versatile method with low detection limits. Its major advantage is that it will always work, no matter whether it is a sample type you measure regularly or one you encounter for the first time.

Table 1. Overview of characteristics of moisture determination via titration and NIR spectroscopy

NIR spectroscopy requires a method development process, meaning it is not suitable for sample types that always vary (e.g., different types of tablets, different types of oil). NIRS however is a very good method for sample types that are always identical, for example for moisture content in lyophilized products or for moisture content in chemicals, such as fertilizers.

For the implementation of a NIR moisture method, it is required that samples are measured with KF titration as the primary method for the model development. In addition, during the routine use of a NIR method, it is important to confirm once in a while (e.g., every 50th or every 100th sample) with KF Titration that the NIR model is still robust, and to ensure that the error has not increased. If a change is noticed, extra samples need to be added to the prediction model to cover the observed sample variation.
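
One simple way to formalize this periodic check is sketched in the Python snippet below: compare the NIR prediction of each check sample with its KF reference value and flag the model if the deviation exceeds a tolerance you define. The 0.2% tolerance and the example values are placeholders, not recommendations.

```python
def nir_model_still_valid(nir_predictions, kf_references, tolerance=0.2):
    """Return True if every check sample agrees within `tolerance`
    (absolute difference in % moisture) between NIR prediction and KF value."""
    deviations = [abs(nir - kf) for nir, kf in zip(nir_predictions, kf_references)]
    return max(deviations) <= tolerance

# Example check samples (e.g., every 50th routine sample), values in % moisture
nir_values = [1.82, 2.05, 1.11]
kf_values  = [1.79, 2.31, 1.08]

if not nir_model_still_valid(nir_values, kf_values):
    print("Deviation too large: add these samples to the calibration set "
          "and update the prediction model.")
```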

In conclusion, both KF Titration and NIR spectroscopy are powerful techniques for measuring moisture in an array of samples. Which technique to use depends on the application and the individual preference of the user.

For more information

Download our free whitepaper:

Karl Fischer titration and near-infrared spectroscopy in perfect synergy

Post written by Dr. Dave van Staveren (Head of Competence Center Spectroscopy), Dr. Christian Haider (Head of Competence Center Titration), and Iris Kalkman (Product Specialist Titration) at Metrohm International Headquarters, Herisau, Switzerland.

Improving your conductivity measurements

Have you ever performed a conductivity measurement and obtained incorrect results? There are several possible reasons for this. In this post, I want to show you how you may overcome some of these issues.

By itself, a conductivity measurement is performed quite easily: one takes a conductivity cell and a suitable measuring device, inserts the cell into the sample solution, and reads the displayed value. However, there are some challenges, such as choosing the right sensor, the temperature dependence of conductivity, or CO2 uptake, which can falsify your results.

So many measuring cells – which one to use?

The first and most important question about conductivity measurement is: which sensor is the most suitable for your application? The measuring range is dependent on the cell constant of your conductivity cell, and therefore this choice requires a few considerations:

  • What is the expected conductivity of my sample?
  • Do I have a broad range of conductivities within my samples?
  • What is the amount of sample I have available for measurement?

There are different types of conductivity measuring cells available on the market. Two-electrode cells have the advantage that they can be built with a smaller geometry and are more accurate at low conductivities. Other cell types, such as four-electrode cells, are not affected by polarization, have a larger linear range, and are less sensitive to contamination.

Figure 1 below shows the wide application range of sensors with different cell constants. As a general rule, sensors with a low cell constant are used for samples with low conductivity, and sensors with a high cell constant for samples with high conductivity.

Figure 1. Illustration of the range of applications for different conductometric measuring cells offered by Metrohm (click to enlarge).

To get more information, check out our Electrode finder and select «conductivity measurement».

Determination of the cell constant

Each conductivity cell has its own cell constant, which therefore needs to be determined regularly. The nominal cell constant depends on the effective area of the platinum contacts and the distance between the two surfaces:

K = d_electrodes / A_eff

K :  Cell constant in cm⁻¹
A_eff :  Effective area of the electrodes in cm²
d_electrodes :  Distance between the electrodes in cm

However, no sensor is perfect and the effective cell constant does not exactly agree with the ideal cell constant. Thus, the effective cell constant is determined experimentally by measuring a suitable standard. Its measured conductivity is compared to the theoretical value:

K = γ_theor. / G_meas

K :  Cell constant in cm⁻¹
γ_theor. :  Theoretical conductivity of the standard at the reference temperature in S/cm
G_meas :  Measured conductance in S
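
In practice this is a single division. The Python sketch below (the standard value and conductance readings are example numbers only) also shows how the determined cell constant is then used to convert a measured conductance into a conductivity.

```python
def cell_constant(gamma_standard_s_per_cm, conductance_s):
    """Effective cell constant K (1/cm) from a measurement in a standard.

    gamma_standard_s_per_cm : theoretical conductivity of the standard
                              at the reference temperature (S/cm)
    conductance_s           : conductance measured in the standard (S)
    """
    return gamma_standard_s_per_cm / conductance_s

def conductivity(conductance_s, k_cell_per_cm):
    """Conductivity (S/cm) of a sample from its measured conductance."""
    return conductance_s * k_cell_per_cm

# Example: a 12.88 mS/cm KCl standard gives a conductance reading of 15.2 mS
K = cell_constant(12.88e-3, 15.2e-3)
print(f"Cell constant: {K:.3f} 1/cm")          # -> about 0.847 1/cm

# The same cell gives a conductance of 1.1 mS in the sample
print(f"Sample conductivity: {conductivity(1.1e-3, K) * 1000:.2f} mS/cm")
```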

With increasing use over its lifetime, the properties of the measuring cell might change, and with them its cell constant. It is therefore necessary to check the cell constant with a standard from time to time and to redetermine it if necessary.

Temperature dependency of the conductivity

Have you ever asked yourself why conductivity values in the literature normally refer to 20 °C or 25 °C? The reason is that conductivity is strongly temperature-dependent: values measured at different temperatures are difficult to compare, as the deviation is approximately 2% per °C. Therefore, make sure you either measure in a thermostated vessel or use a temperature compensation coefficient.

What is a temperature compensation coefficient anyway?

The temperature compensation coefficient is a correction factor, which will correct your measured value at a certain temperature to the defined reference temperature. The factor itself depends on the sample matrix and is different for each sample.

For example, if you measure a value of 10 mS/cm at 24 °C, the device will apply a linear correction of 2%/°C and report 10.2 mS/cm at the reference temperature of 25 °C. This linear temperature compensation is very common and is implemented in most devices.

However, the temperature behavior is not linear for every sample. If the linear temperature compensation is not accurate enough, you can instead record a temperature compensation function: measure the conductivity of your sample at several temperatures and fit a polynomial function through the measured points. This polynomial is then used for future temperature corrections, giving more accurate results.
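
The Python sketch below illustrates both approaches under simple assumptions: a linear correction with a 2%/°C coefficient, and a polynomial compensation function fitted with numpy to made-up data points.

```python
import numpy as np

T_REF = 25.0  # reference temperature in °C

def linear_compensation(kappa_measured, t_measured, alpha=0.02):
    """Convert a conductivity measured at t_measured to the value at T_REF
    using a linear temperature coefficient alpha (default 2 %/°C)."""
    return kappa_measured / (1.0 + alpha * (t_measured - T_REF))

# Example from the text: 10 mS/cm measured at 24 °C -> about 10.2 mS/cm at 25 °C
print(round(linear_compensation(10.0, 24.0), 2))

# Polynomial compensation: measure the sample at several temperatures,
# fit a polynomial, and use it for future corrections (made-up data points)
temps  = np.array([15.0, 20.0, 25.0, 30.0, 35.0])   # °C
kappas = np.array([8.1, 9.0, 10.0, 11.1, 12.3])     # mS/cm

coeffs = np.polyfit(temps, kappas, deg=2)

def polynomial_compensation(kappa_measured, t_measured):
    """Scale the measured value by the fitted ratio kappa(T_REF) / kappa(T)."""
    return kappa_measured * np.polyval(coeffs, T_REF) / np.polyval(coeffs, t_measured)

print(round(polynomial_compensation(9.0, 20.0), 2))  # -> close to 10.0 mS/cm
```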

And… what about the conductivity standard?

Figure 2. The blue curve shows the actual conductivity (mS/cm) and the orange line a linear temperature compensation. The temperature coefficient here varies from 2.39 to 4.04%/°C.

Which standard do I have to choose?

In contrast to pH calibration, the conductivity cell only requires a one-point calibration. For this purpose, you need to choose a suitable standard, which has a conductivity value in the same range as your sample and is inert towards external influences.

As an example, consider a sample of deionized water, which has an expected conductivity of approximately 1 µS/cm. If you calibrate the conductivity cell with a higher conductivity standard around 12.88 mS/cm, this will lead to an enormous error in your measured sample value.

Most conductivity cells will not be suitable for both ranges. For such low conductivities (1 µS/cm), it is better to use a 100 µS/cm conductivity standard. Even lower conductivity standards are available, but their proper handling becomes more difficult, because at such low conductivities the influence of CO2 uptake increases.

Last but not least: To stir or not to stir?

This is a controversial question, as stirring has both advantages and disadvantages. Stirring enables your sample solution to be homogeneous, but it might also enhance the carbon dioxide uptake from ambient air.

Either way, whether you choose to stir or not, just make sure that the same procedure is applied each time, both for the determination of the cell constant and for the measurement of your sample. Personally, I recommend stirring gently, because a stable value is reached faster and the effect of carbon dioxide uptake is almost negligible.

To summarize, it is quite easy to perform conductometric measurements, but some important points should be considered before starting the analysis, such as the temperature dependence, the choice of a suitable measuring cell, and the choice of the calibration standard. Otherwise, false results may be obtained.

Curious about conductivity measurements?

Read through our free comprehensive monograph:

Conductometry – Conductivity measurement

Additionally, you can download our free two-part Application Bulletin AB-102 – Conductometry below.

Post written by Iris Kalkman, Product Specialist Titration at Metrohm International Headquarters, Herisau, Switzerland.

Why your titration results aren’t reproducible: The main error sources in manual titration

In the practical course in Analytical Chemistry during my first semester at university, I had to titrate a lot. Thinking back on it, I remember carefully dosing titrant with the glass buret, the cumbersome process of refilling the buret, and the constant suspicion that I hadn’t correctly chosen the endpoint.

Everyone in class kept getting different results—but we were never quite sure why. At the time, I wasn’t as experienced as I am now. Today, after 10 years of experience in titration, I’ve learned that the results of manual titration depend quite a lot on the person carrying it out. Here are the top error sources in manual titration and how you can avoid them.  

Choosing the right indicator

I’m sure you’ve learned at some point that the pH value at the titration endpoint depends on the dissociation constants of the acid and base that are used. If a strong base is titrated with a strong acid, the pH value at the endpoint is around 7. The titration of a strong base with a weak acid shifts the endpoint towards the alkaline range, and the titration of a strong acid with a weak base results in an endpoint in the acidic range. This explains why several different indicators are used in acid-base titrations. But which is the right one to choose?

The chart above shows some of the most frequently used pH indicators. You can probably imagine that you won’t get correct results when the pH of your endpoint is around 7, but you use crystal violet or methyl orange as the indicator. Luckily, most standards and SOPs specify an indicator. Follow the instructions, and you’re on the safe side!

Endpoint recognition is subjective

The problems really start when you try to recognize the endpoint. Have you ever thought about the nuances of the color change?

Above, you see five stages of an acid-base titration of c(HCl) = 1 mol/L with c(NaOH) = 1 mol/L. The only difference between each image and its predecessor is one additional drop of titrant. Where would you choose the endpoint in this case?

Is the endpoint reached in picture 1, where only a faint pink is visible? Or is it reached in picture 3 where the color becomes more intense? Or even in picture 5, at which point the pink color is most vibrant? Between picture 1 and picture 5, just four drops of titrant were added. With the pharmaceutical definition of a drop as a volume of 50 µL, this corresponds to 200 µL of titrant or about 7.3 mg of hydrochloric acid—an enormous error.
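
The arithmetic behind this number is easy to check yourself; the short Python sketch below reproduces it, using the drop volume and concentration given above and a molar mass of roughly 36.46 g/mol for HCl.

```python
# Error caused by an uncertainty of four drops at the endpoint
DROP_VOLUME_UL = 50.0   # pharmaceutical definition of one drop in µL
N_DROPS = 4
C_TITRANT = 1.0         # mol/L NaOH, reacting 1:1 with HCl
M_HCL = 36.46           # g/mol

excess_volume_ul = N_DROPS * DROP_VOLUME_UL        # 200 µL
excess_mol = C_TITRANT * excess_volume_ul * 1e-6   # µL -> L, then mol
excess_mass_mg = excess_mol * M_HCL * 1000.0       # g -> mg

print(f"{excess_volume_ul:.0f} µL of titrant correspond to "
      f"{excess_mass_mg:.1f} mg of HCl")           # -> 200 µL, ~7.3 mg
```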

Reading the buret volume

Do you remember how to correctly read the buret? You have to stand on a footstool and make sure that you read the meniscus value horizontally. Do you know why?

The volume reading depends upon the angle from which you view the buret. In the case shown here, the readings vary up to 0.2 mL (200 µL) from the actual value, depending on the reading angle. The more your line of sight deviates from the horizontal, the more inaccurate the reading—and the result. You can assume an average error of 200 µL. This is a lot for a titration, as I showed in the previous example!

Improving objectivity and accuracy

How can you eliminate these errors? The easiest one to overcome is the reading error. The solution is to use an electronic buret: all you need to do is fill it with titrant and press a button. The device measures the dosed volume automatically and gives you a digital readout, which already ensures a high level of objectivity for your results.

It also improves the accuracy of your results. I don’t have to tell you how important accuracy is in analytical chemistry, but I’ll give an example: imagine you determined the purity of gold to be 90% when in reality it is 99%. You would lose a lot of money selling your gold based on that result!

Earlier, I showed that visual endpoint recognition using a color indicator can result in errors of up to 200 µL. An inaccurate buret reading can lead to an additional 200 µL error. While using an electronic buret doesn’t help you achieve a more objective endpoint recognition, it does reduce the minimum volume addition per drop: it’s no longer 50 µL, but can be as small as 0.25 µL depending on the cylinder volume you use. This substantially lowers the error resulting from endpoint recognition. The following minimum volume additions are common:

The next step: Automated titration

If you want to overcome all of the error sources described in this post, you’ll have to switch to automated titration, or autotitration. In this case, a sensor measures the pH change in the sample and a mathematical algorithm detects the endpoint, so an indicator isn’t required anymore. Additionally, you have the same precision as with an electronic buret.

Want to learn more?

Download our free White Paper:

Manual vs. Automated Titration: Benefits and Advantages to Switching

Post written by Iris Kalkman, Product Specialist Titration at Metrohm International Headquarters, Herisau, Switzerland.