INTRODUCTION
Quantitative estimation is the determination of how much of each constituent is present in a sample1-2. Estimation of a given drug in its dosage forms requires quantitative analysis of that drug in the formulation. The first quantitative analyses were gravimetric, made possible by the invention of a precise balance. It was soon found that carefully calibrated glassware offered a considerable saving of time through the volumetric measurement of gravimetrically standardized solutions.
In contrast to these two classical techniques, gravimetric and volumetric analysis, drug analysis today relies largely on sophisticated instruments that exploit simple physico-chemical properties.
In instrumental analysis, a physical property of a drug is utilized to determine its chemical composition. A study of the physical properties of drug molecules is a prerequisite for product formulation and often leads to a better understanding of the inter-relationship between molecular structure and drug action. These properties may be thought of as either additive or constitutive; mass, for example, is an additive property. Many physical properties are constitutive and yet have some measure of additivity, such as specific gravity, surface tension and viscosity. Molar refraction of a compound is the sum of the refractions of the atoms and groups making up the compound. The arrangement of atoms in each group differs from molecule to molecule, however, so the refractive indices of two molecules will differ; that is, the individual groups in two different molecules contribute different amounts to the overall refraction of the molecules. In instrumental analysis, some physical properties of molecules, such as absorption of radiation, scattering of radiation, the Raman effect, emission of radiation, rotation of the plane of polarized light and diffraction phenomena, involve interaction with radiant energy. Other physical properties describe specific relations between the molecules and well-defined forms of energy, e.g. half-cell potential, current-voltage behaviour, electric conductivity, dielectric constant, heat of reaction and thermal conductivity. By carefully associating specific physical properties with the chemical nature of closely related molecules, conclusions can be drawn that (1) describe the spatial arrangement of drug molecules, (2) provide evidence for the relative chemical or physical behaviour of a molecule and (3) suggest methods for quantitative and qualitative analysis of a particular pharmaceutical agent3,4.
Analytical methods, in a broad sense, can be classified into chemical methods and instrumental methods. Chemical methods are those that depend on chemical operations in combination with the manipulation of simple apparatus. In general, the measurement of mass (gravimetric analysis) and of volume (volumetric analysis) falls in this class. An instrumental method, by contrast, employs more sophisticated instrumentation in which a physical property of the analyte is measured.
Although spectrophotometric methods have been used extensively in recent years, it would be wrong to conclude that instrumental methods have totally replaced chemical methods. In fact, chemical steps are often an integral part of an instrumental method: sampling, dissolution, change in oxidation state, removal of excess reagent, pH adjustment, addition of complexing agent, precipitation, concentration and the removal of interferences. In recent years HPLC5 (High Performance Liquid Chromatography) has been used extensively, because HPLC is not limited by sample volatility or thermal stability. HPLC is able to separate macromolecules and ionic species, labile natural products, polymeric materials and a wide variety of other high molecular weight, poly-functional compounds. Because of the relatively high pressure necessary to perform this type of chromatography, a more elaborate experimental setup is required.
Because of the high cost of the instrument and costly analytical process, small-scale industries cannot afford to procure and use HPLC. So, in spite of other advantages of HPLC, we often select the spectrophotometric method of analysis. The variation of the color of a system with change in concentration of some component forms the basis of what the chemists commonly term as colorimetric analysis. The color is usually due to the formation of a colored compound by the addition of an appropriate reagent, or it may be inherent in the constituent itself. Colorimetry is concerned with the determination of the concentration of a substance by measurement of the relative absorption of light with respect to a known concentration of the substance. Colorimetric determinations are usually made with a simple instrument termed a colorimeter.
In spectrophotometric analysis a source of radiation is used that extends into the ultraviolet region of the spectrum. From this, definite wavelengths of radiation are chosen possessing a bandwidth of less than 1 nm; this necessitates a more complicated, and consequently more expensive, instrument. All atoms and molecules are capable of absorbing energy in accordance with certain restrictions that depend upon the structure of the substance. Energy may be furnished in the form of electromagnetic radiation (light). The kind and amount of radiation absorbed by a molecule depend upon its structure; the amount of radiation absorbed also depends upon the number of molecules interacting with the radiation. The study of these dependencies is called absorption spectroscopy.
THEORY OF SPECTROPHOTOMETRY AND COLORIMETRY
Wavelength and Energy:
Absorption and emission of radiant energy by molecules and atoms is the basis for optical spectroscopy. By interpretation of these data, both qualitative and quantitative information can be obtained. Qualitatively, the positions of the absorption and emission lines or bands in the electromagnetic spectrum indicate the presence of a specific substance. Quantitatively, the intensities of the same absorption and emission lines or bands are measured for the unknown and for standards, and the concentration of the unknown is then determined from these data6,7.
The absorption and the emission of energy in the electromagnetic spectrum occur in discrete packets, or photons. The relation between the energy of a photon and the frequency appropriate for the description of its propagation is
E = hν
Where E = energy in ergs
ν = frequency in cycles per second
h = Planck's constant (6.6256 x 10^-27 erg sec)
The data obtained from a spectroscopic measurement take the form of a plot of the radiant energy absorbed or emitted as a function of position in the electromagnetic spectrum. This is known as a spectrum, and the position of absorption or emission is measured in units of energy, wavelength or frequency8.
Beer-Lambert’s law:
Colorimetry is the determination of the light-absorbing capacity of a system. A quantitative determination is therefore carried out by subjecting a colored solution to those wavelengths of visible energy which are absorbed by that solution. UV and visible absorption bands, in the region of 200 nm to 780 nm, are due to electronic transitions. In organic molecules, these transitions can be ascribed to promotion of a σ, π or n electron from the ground state to an excited state (σ*, π* or n*). Four types of absorption bands occur as a result of the electronic transitions of a molecule9,10:
R-bands: n → π*, in compounds with C=O or NO2 groups
K-bands: π → π*, in conjugated systems
B-bands (benzenoid bands): due to aromatic and heteroaromatic systems
E-bands (ethylenic bands): in aromatic systems
When light (monochromatic or heterogeneous) falls upon a homogeneous medium, a portion of the incident light is reflected, a portion is absorbed within the medium and the remainder is transmitted. If the intensity of the incident light is expressed by I, that of the absorbed light by Ia, that of the transmitted light by It, and that of the reflected light by Ir, then:
I = Ia + It + Ir .......................... (1)
Credit for investigating the change of absorption of light with the thickness of the medium is frequently given to Lambert; Beer later applied similar experiments to solutions of different concentrations and published his results. The two separate laws governing absorption are usually known as Lambert's law and Beer's law; in their combined form they are referred to as the Beer-Lambert law. Mathematically, the radiation-concentration and radiation-path-length relation can be expressed by11
I = I0 10^(-εcl) ...................... (2)
The more familiar equation used in spectrometry is
log (I0/I) = εcl .........................(3)
Where I0 is the intensity of the incident energy
I is the intensity of the emergent energy
c is the concentration
l is the thickness of the absorber (in cm)
and ε is the molar absorptivity for concentration in moles/L
The specific absorbance A(1%, 1 cm), which is encountered less frequently in the literature, refers to a concentration of 1% w/v and a 1 cm cell thickness and is used primarily in the investigation of substances of unknown or undetermined molecular weight. A typical UV absorption spectrum, shown in fig. 1, is obtained by plotting absorptivity against wavelength; the wavelength of maximum absorptivity (εmax) is denoted by λmax.
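As a rough numerical illustration of equation (3), the following Python sketch (all values are invented for illustration and are not taken from any monograph) converts a measured absorbance into a concentration using either the molar absorptivity or the A(1%, 1 cm) value.

# Minimal Beer-Lambert sketch; all numerical values are illustrative assumptions.

def conc_from_absorbance(A, molar_absorptivity, path_cm=1.0):
    """Return molar concentration from A = epsilon * c * l."""
    return A / (molar_absorptivity * path_cm)

def conc_from_specific_absorbance(A, A_1pct_1cm, path_cm=1.0):
    """Return concentration in g per 100 mL using the A(1%, 1 cm) value."""
    return A / (A_1pct_1cm * path_cm)

A = 0.450                      # measured absorbance (hypothetical)
epsilon = 1.2e4                # L mol^-1 cm^-1 (hypothetical)
print(conc_from_absorbance(A, epsilon))          # mol/L
print(conc_from_specific_absorbance(A, 450.0))   # g per 100 mL, assuming A(1%, 1 cm) = 450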
PRINCIPLE OF QUANTITATIVE SPECTROPHOTOMETRIC ASSAY OF MEDICINAL SUBSTANCES:
The assay of an absorbing substance may be quickly carried out by preparing a solution in a transparent solvent and measuring its absorbance at a suitable wavelength. The concentration of the absorbing substance is then calculated from the measured absorbance using one of three principal procedures.
Use of a standard absorptivity value:
This procedure is adopted by official compendia, e.g. the British Pharmacopoeia, for substances such as methyltestosterone that have reasonably broad absorption bands, so that the measured absorbance is relatively insensitive to small variations in instrumental parameters, e.g. slit width and scan speed.
Use of a calibration graph:
In this procedure the absorbances of a number (typically 4-6) of standard solutions of the reference substance at concentrations encompassing the sample concentrations are measured and a calibration graph is constructed. The concentration of the analyte in the sample solution is read from the graph as the concentration corresponding to the absorbance of the solution.
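A minimal sketch of this calibration-graph procedure, assuming numpy is available; the standard concentrations, absorbances and sample reading below are hypothetical.

import numpy as np

# Hypothetical standards bracketing the expected sample concentration (ug/mL).
conc = np.array([4.0, 8.0, 12.0, 16.0, 20.0])
absorbance = np.array([0.110, 0.221, 0.329, 0.442, 0.551])

slope, intercept = np.polyfit(conc, absorbance, 1)   # A = slope*c + intercept

sample_A = 0.385                                      # measured sample absorbance
sample_conc = (sample_A - intercept) / slope
print(f"slope={slope:.4f}, intercept={intercept:.4f}, sample={sample_conc:.2f} ug/mL")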
Single or double point standardization:
The single-point procedure involves the measurement of the absorbance of a sample solution and of a standard solution of the reference substance. The standard and sample solutions are prepared in a similar manner and, ideally, the concentration of the standard solution should be close to that of the sample solution. Where this cannot be assured, a 'two-point bracketing' standardization may be used to determine the concentration of the sample solution: the concentration of one standard solution is greater than that of the sample while the other standard solution has a lower concentration than the sample.
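The single-point and two-point bracketing calculations reduce to simple proportionality and linear interpolation; a small Python sketch with invented numbers follows.

def single_point(conc_std, A_sample, A_std):
    """Single-point standardization: assumes response is linear through the origin."""
    return conc_std * A_sample / A_std

def two_point_bracket(c_low, A_low, c_high, A_high, A_sample):
    """Two-point bracketing: linear interpolation between two standards."""
    slope = (A_high - A_low) / (c_high - c_low)
    return c_low + (A_sample - A_low) / slope

# Illustrative numbers only.
print(single_point(10.0, 0.48, 0.50))
print(two_point_bracket(8.0, 0.40, 12.0, 0.60, 0.48))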
DIFFERENT SPECTROPHOTOMETRIC SIMULTANEOUS ESTIMATION METHODS FOR MULTICOMPONENT SAMPLES13
The spectrophotometric assay of drugs rarely involves the measurement of absorbance of sample containing only one absorbing component. The pharmaceutical analyst frequently encounters the situation where the concentration of one or more substances is required in samples known to contain other absorbing substances, which potentially interfere in the assay.
The basis of all the spectrophotometric techniques for multi-component samples is that, at all wavelengths:
(a) the absorbance of a solution is the sum of the absorbances of the individual components; or
(b) the measured absorbance is the difference between the total absorbance of the solution in the sample cell and that of the solution in the reference (blank) cell.
In multi-component formulations the concentration of the absorbing substance is calculated from the measured absorbance using one of the following procedures:
(a) Assay as a single-component sample: The concentration of a component in a sample which contains other absorbing substances may be determined by a simple spectrophotometric measurement of absorbance, provided that the other components have a sufficiently small absorbance at the wavelength of measurement.
(b) Assay using absorbance corrected for interference: If the identity, concentration and absorptivity of the absorbing interferents are known, it is possible to calculate their contribution to the total absorbance of a mixture.
(c) Simultaneous equation method: If a sample contains two absorbing drugs (X and Y), each of which absorbs at the λmax of the other, it may be possible to determine both drugs by the technique of simultaneous equations (Vierordt's method). For a 1 cm path length, the absorbances of the diluted sample at the two wavelengths are
A1 = ax1cx + ay1cy and A2 = ax2cx + ay2cy
Where:
a) ax1 and ax2 are the absorptivities of X at λ1 and λ2 respectively;
b) ay1 and ay2 are the absorptivities of Y at λ1 and λ2 respectively;
c) A1 and A2 are the absorbances of the diluted sample at λ1 and λ2 respectively;
and cx and cy are the concentrations of X and Y.
Criteria for obtaining maximum precision have been suggested, based upon absorbance ratios, that place limits on the relative concentrations of the components of the mixture. The criteria are that these ratios should lie outside the range 0.1-2.0 for the precise determination of Y and X respectively. These criteria are satisfied only when the λmax values of the two components are reasonably dissimilar. An additional criterion is that the two components do not interact chemically.
To reduce random errors during measurement, the analysis of two components is sometimes carried out at three or four wavelengths instead of two. The equations then no longer have a unique solution, but the best solution can be found by the least-squares criterion, as in the sketch below.
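A minimal Python/numpy sketch of the simultaneous-equation (Vierordt) calculation and of its over-determined, least-squares extension; the absorptivity values and absorbances are assumptions chosen for illustration only.

import numpy as np

# Absorptivity matrix: rows are wavelengths, columns are components X and Y.
# All values are illustrative assumptions (units consistent with a 1 cm path).
E = np.array([[0.070, 0.020],    # ax1, ay1 at lambda-1
              [0.015, 0.060]])   # ax2, ay2 at lambda-2
A = np.array([0.520, 0.430])     # absorbances of the diluted sample

cx, cy = np.linalg.solve(E, A)   # exact solution of the two Vierordt equations
print(cx, cy)

# With three or more wavelengths the system is over-determined; the least-squares
# solution (equivalent to multiplying by the transpose of E) is obtained with lstsq.
E3 = np.array([[0.070, 0.020],
               [0.015, 0.060],
               [0.040, 0.045]])
A3 = np.array([0.520, 0.430, 0.510])
c_ls, *_ = np.linalg.lstsq(E3, A3, rcond=None)
print(c_ls)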
(d) Absorbance ratio method14: The absorbance ratio method (Q-analysis) is a modification of the simultaneous equations procedure. It depends on the property that, for a substance which obeys Beer's law at all wavelengths, the ratio of two absorbances determined on the same solution at two different wavelengths is a constant, independent of concentration and path length; for example, two different dilutions of the same substance give the same absorbance ratio A1/A2. This constant has been termed the 'Hufner's quotient' or Q value, and in the USP this ratio is referred to as a Q value. Q-analysis is based on the relationship between the absorbance ratio of a binary mixture and the relative concentrations of its components. In the quantitative assay of two components in admixture by the absorbance ratio method, absorbances are measured at two wavelengths, one being the λmax of one of the components (λ2) and the other being a wavelength of equal absorptivity of the two components (λ1), an iso-absorptive point.
Cx = [(Qm - Qy) / (Qx - Qy)] × (A1 / ax1)
This equation gives the concentration of X in terms of the absorbance ratios, the absorbance of the mixture and the absorptivity of the compounds at the iso-absorptive wavelength. Accurate dilution of the sample solution and of the standard solutions of X and Y is necessary for the accurate measurement of A1 and A2.
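A small Python sketch of the Q-value calculation above; the function name and all numerical values are hypothetical.

def q_ratio_conc_x(A1, A2, ax1, qx, qy):
    """
    Q-analysis sketch: Cx = ((Qm - Qy) / (Qx - Qy)) * A1 / ax1
    A1: absorbance of the mixture at the iso-absorptive wavelength (lambda-1)
    A2: absorbance of the mixture at the lambda-max of one component (lambda-2)
    ax1: absorptivity of X at the iso-absorptive wavelength
    qx, qy: A2/A1 ratios of pure X and pure Y respectively
    """
    qm = A2 / A1
    return (qm - qy) / (qx - qy) * A1 / ax1

# Illustrative values only.
print(q_ratio_conc_x(A1=0.40, A2=0.55, ax1=0.050, qx=1.80, qy=0.60))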
(e) Geometric correction method: A number of mathematical correction procedures have been developed which reduce or eliminate the background (irrelevant) absorption that may be present in samples of biological origin. The simplest of these is the three-point geometric procedure, which may be applied if the irrelevant absorption is linear over the three wavelengths selected. If the wavelengths λ1, λ2 and λ3 are selected so that the background absorbances B1, B2 and B3 are linear, then the corrected absorbance D of the drug may be calculated from the three absorbances A1, A2 and A3 of the sample solution at λ1, λ2 and λ3 respectively, as follows.
Let vD and wD be the absorbances of the drug alone in the sample solution at λ1 and λ3 respectively, i.e. v and w are the absorbance ratios vD/D and wD/D. Then
B1 = A1 - vD, B2 = A2 - D and B3 = A3 - wD
Let y and z be the wavelength intervals (λ2 - λ1) and (λ3 - λ2) respectively. Then
D = [y(A2 - A3) + z(A2 - A1)] / [y(1 - w) + z(1 - v)]
This is a general equation which may be applied in any situation where A1, A2 and A3 of the sample, the wavelength intervals y and z, and the absorbance ratios v and w are known.
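The three-point geometric correction translates directly into code; the following Python sketch uses invented wavelengths, absorbances and absorbance ratios.

def three_point_corrected_D(A1, A2, A3, lam1, lam2, lam3, v, w):
    """
    Geometric (three-point) correction sketch, assuming the irrelevant background
    absorption is linear over lambda-1..lambda-3.
    v, w: absorbance ratios of the pure drug at lambda-1 and lambda-3 relative to lambda-2.
    Returns D, the corrected absorbance of the drug at lambda-2.
    """
    y = lam2 - lam1
    z = lam3 - lam2
    return (y * (A2 - A3) + z * (A2 - A1)) / (y * (1 - w) + z * (1 - v))

# Illustrative numbers only.
print(three_point_corrected_D(0.62, 0.80, 0.50, 240.0, 257.0, 274.0, v=0.55, w=0.35))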
(f) Orthogonal polynomial method15: The technique of orthogonal polynomials is another mathematical correction procedure, which involves more complex calculations than the three-point correction procedure. The basis of the method is that an absorption spectrum may be represented in terms of orthogonal functions as follows
A(λ) = p0P0(λ) + p1P1(λ) + p2P2(λ) + ... + pnPn(λ)
Where A(λ) denotes the absorbance at wavelength λ, belonging to a set of n+1 equally spaced wavelengths at which the orthogonal polynomials P0(λ), P1(λ), P2(λ), ..., Pn(λ) are each defined.
The accuracy of the orthogonal functions procedure depends on the correct choice of the polynomial order and of the set of wavelengths. Usually, quadratic or cubic polynomials are selected, depending on the shapes of the absorption spectra of the drug and of the irrelevant absorption. The set of wavelengths is defined by the number of wavelengths, the interval and the mean wavelength of the set (λm). Approximately linear irrelevant absorption is normally eliminated using six to eight wavelengths, although many more, up to 20, may be required if the irrelevant absorption contains high-frequency components. The wavelength interval and λm are best obtained from a convoluted absorption curve. This is a plot of the polynomial coefficient, for a specified order of polynomial, number of wavelengths and wavelength interval, against the λm of the set of wavelengths. The optimum set of wavelengths corresponds with a maximum or minimum in the convoluted curve of the analyte and with a coefficient of zero in the convoluted curve of the irrelevant absorption. In favourable circumstances the concentration of an absorbing drug in admixture with another may be calculated if the correct choice of polynomial parameters is made, thereby eliminating the contribution of the interfering absorption from the polynomial coefficient of the mixture.
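As a hedged illustration of the orthogonal-polynomial idea, the sketch below evaluates the quadratic coefficient p2 over six equally spaced wavelengths using the standard tabulated orthogonal coefficients, and converts it to a concentration by comparison with a standard measured the same way; all absorbance values and the 10.0 standard concentration are invented.

import numpy as np

# Quadratic orthogonal-polynomial coefficients for six equally spaced wavelengths
# (standard tabulated values); any absorbance that is linear in wavelength
# contributes zero to p2, so a linear background is eliminated.
P2 = np.array([5, -1, -4, -4, -1, 5], dtype=float)

def quadratic_coefficient(absorbances):
    """p2 = sum(P2 * A) / sum(P2**2) over the chosen wavelength set."""
    A = np.asarray(absorbances, dtype=float)
    return float(P2 @ A / (P2 @ P2))

# Absorbances of the sample at the six chosen wavelengths (hypothetical).
A_sample = [0.31, 0.45, 0.58, 0.57, 0.44, 0.30]

# Coefficient of a standard of known concentration, measured in the same way (hypothetical).
p2_std = quadratic_coefficient([0.28, 0.43, 0.57, 0.56, 0.42, 0.27])
c_std = 10.0

p2 = quadratic_coefficient(A_sample)
print("estimated concentration:", c_std * p2 / p2_std)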
(g)Difference spectrophotometry16-20: Difference spectroscopy provides a sensitive method for detecting small changes in the environment of a chromophore or it can be used to demonstrate ionization of a chromophore leading to identification and quantitation of various components in a mixture. The selectivity and accuracy of spectrophotometric analysis of samples containing absorbing interferents may be markedly improved by the technique of difference spectrophotometry. The essential feature of a difference spectrophotometric assay is that the measured value is the difference absorbance (Δ A) between two equimolar solutions of the analyte in different forms which exhibit different spectral characteristics.
The criteria for applying difference spectrophotometry to the assay of a substance in the presence of other absorbing substances are that:
A) Reproducible changes may be induced in the spectrum of the analyte by the addition of one or more reagents.
B) The absorbance of the interfering substances is not altered by the reagents.
The simplest and most commonly employed technique for altering the spectral properties of the analyte is adjustment of the pH by means of aqueous solutions of acid, alkali or buffers. The ultraviolet-visible absorption spectra of many substances containing ionisable functional groups, e.g. phenols, aromatic carboxylic acids and amines, depend on the state of ionization of the functional groups and consequently on the pH of the solution.
If the individual absorbances Aalk and Aacid are proportional to the concentration of the analyte and to the path length, then ΔA also obeys the Beer-Lambert law and a modified equation may be written
ΔA = Δa·b·c
Where Δa is the difference absorptivity of the substance at the wavelength of measurement, b the path length and c the concentration.
If another absorbing substance is present in the sample and contributes the same absorbance Ax at the analytical wavelength in both the alkaline and the acidic solution, its interference in the spectrophotometric measurement is eliminated:
ΔA = (Aalk + Ax) - (Aacid + Ax) = Aalk - Aacid
The selectivity of the ΔA procedure depends on the correct choice of the pH values needed to induce the spectral change of the analyte without altering the absorbance of the interfering components of the sample. The use of 0.1 M sodium hydroxide and 0.1 M hydrochloric acid to induce the ΔA of the analyte is convenient and satisfactory when the irrelevant absorption arises from pH-insensitive substances. Unwanted absorption from pH-sensitive components of the sample may also be eliminated if the pKa values of the analyte and the interferents differ by more than 4.
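A minimal sketch of the ΔA calculation, assuming a known difference absorptivity; the values and units below are illustrative only.

def delta_a_concentration(A_alk, A_acid, delta_a, path_cm=1.0):
    """
    Difference spectrophotometry sketch: Delta-A = A(alkaline) - A(acidic) obeys
    Delta-A = delta_a * b * c, so c = Delta-A / (delta_a * b).
    Interference from substances whose absorbance is unchanged by the pH switch cancels.
    """
    return (A_alk - A_acid) / (delta_a * path_cm)

# Illustrative values: delta_a is a hypothetical difference absorptivity in L g^-1 cm^-1.
print(delta_a_concentration(A_alk=0.74, A_acid=0.31, delta_a=0.043), "g/L (approx.)")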
(h) Derivative spectrophotometry: Direct spectrophotometric determination of a multicomponent formulation is often complicated by interference from the formulation matrix and by spectral overlap. Such interferences can be treated in several ways, for example by solving simultaneous equations or by using absorbance ratios at certain wavelengths, but these approaches may still give erroneous results21. Other approaches include pH-induced differential least squares22 and orthogonal function methods23. The compensation technique can also be used to detect and eliminate unwanted or irrelevant absorption. Derivative spectrophotometry is a useful means of resolving two overlapping spectra and of eliminating matrix interferences or interferences due to an indistinct shoulder on the side of an absorption band24. It involves the conversion of a normal spectrum to its first, second or higher derivative spectrum. In this context, the normal absorption spectrum is referred to as the fundamental, zeroth order or D spectrum. The absorbance of a sample is differentiated with respect to the wavelength λ to generate the first, second or higher order derivatives:
A = f(λ): zero order
dA/dλ = f(λ): first order
d²A/dλ² = f(λ): second order
The first derivative spectrum of an absorption band is characterized by a maximum, a minimum, and a cross-over point at the λ max of the absorption band. The second derivative spectrum is characterized by two satellite maxima and an inverted band of which the minimum corresponds to the λ max of the fundamental band.
The spectral transformation confers two principal advantages on derivative spectrophotometry: firstly, an even-order derivative spectrum has a narrower spectral bandwidth than its fundamental spectrum; secondly, derivative spectrophotometry discriminates in favour of substances of narrow spectral bandwidth over broad-band substances, because the derivative amplitude increases as the bandwidth of the fundamental band decreases.
The enhanced resolution and bandwidth discrimination increases with increasing derivative order. However it is also found that the concomitant increase in electronic noise inherent in the generation of the higher order spectra, and the consequent reduction of the signal to noise ratio, place serious practical limitations on the higher order spectra.
The important features of the derivative technique include enhanced information content, discrimination against background noise and greater selectivity in quantitative analysis. It can be used for the detection and determination of impurities in drugs and chemicals, and also in food additives and industrial wastes25.
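A small Python sketch (scipy assumed) that generates smoothed first- and second-derivative spectra from a synthetic zero-order spectrum with a Savitzky-Golay filter; the band position, bandwidth, background slope and filter settings are all assumptions chosen for illustration.

import numpy as np
from scipy.signal import savgol_filter

# Synthetic zero-order spectrum: a Gaussian band on a sloping background (illustrative).
wl = np.arange(220.0, 321.0, 1.0)                 # nm, 1 nm interval
spectrum = np.exp(-((wl - 270.0) / 12.0) ** 2) + 0.002 * (wl - 220.0)

# Smoothed first and second derivatives (Savitzky-Golay); window and order are assumptions.
d1 = savgol_filter(spectrum, window_length=11, polyorder=3, deriv=1, delta=1.0)
d2 = savgol_filter(spectrum, window_length=11, polyorder=3, deriv=2, delta=1.0)

# The first derivative passes through zero close to the band maximum; the second
# derivative shows its main minimum there (plus two satellite maxima).
print("first-derivative zero-crossing near:", wl[np.argmin(np.abs(d1[30:70])) + 30], "nm")
print("second-derivative minimum near:", wl[np.argmin(d2)], "nm")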
(i) Least squares approximation26-28: Occasionally one must admit that experimental measurements are not as accurate as might be desired, but are subject to random errors. An answer of higher probable accuracy can be obtained if excess experimental information is used. Instead of carrying out the analysis of two components at two wavelengths, it is carried out at three or four wavelengths. At three wavelengths the problem becomes the solution of three equations in two unknowns, which in general has no exact solution. The best solution is given by the least-squares criterion, found by multiplying by the transpose of the absorptivity matrix. This gives two equations in two unknowns, whose solution is also the optimum solution to the three original equations. The method yields a higher precision of determination for systems whose absorption spectra are very similar. With increasing diversity of the absorption curves, the efficiency of taking measurements at a large number of wavelengths, i.e. the method of an over-determined system of linear equations, decreases, and for systems with highly diversified curves it may even degrade the precision of the determination.
All the foregoing methods of calculating the content of individual component in a multicomponent analysis fail to use the entire information capacity of the spectrophotometric method of analysis. Only the method that stores the whole spectra of standard substance in the computer memory and uses the algorithm matching the absorption spectrum of the sample with the spectrum obtained mathematically by adding up the individual spectra of components makes a full use of the information load of the spectrophotometric method. This is also the operating principle of advanced design uv-vis spectrophotometers equipped with multicomponent analysis program.
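A hedged sketch of this whole-spectrum approach: the mixture spectrum is modelled as a linear combination of stored standard spectra and the coefficients are found by linear least squares (numpy assumed; the spectra below are synthetic, for illustration only).

import numpy as np

wl = np.linspace(230.0, 330.0, 101)
std_x = np.exp(-((wl - 260.0) / 10.0) ** 2)        # unit-concentration spectrum of X (synthetic)
std_y = np.exp(-((wl - 290.0) / 14.0) ** 2)        # unit-concentration spectrum of Y (synthetic)

true_cx, true_cy = 0.8, 1.5
mixture = true_cx * std_x + true_cy * std_y + np.random.normal(0, 0.002, wl.size)

S = np.column_stack([std_x, std_y])                # matrix of stored standard spectra
coeffs, *_ = np.linalg.lstsq(S, mixture, rcond=None)
print("recovered concentrations (relative to the standards):", coeffs)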
SELECTION PARAMETERS OF AN ANALYTICAL METHOD
Once the problem is defined, the following important factors are considered in choosing the analytical method: concentration range, required accuracy and sensitivity, selectivity, time requirements and cost of analysis.
Concentration Range:
The ability to match the method to the optimum sample size is usually gained through experience and awareness of the different methods.
Sensitivity, as applied to an analytical method, corresponds to the minimum or lowest concentration of a substance that is detectable with a specified reliability. It is often expressed numerically as a detection limit. Different analytical methods provide different sensitivities, and the one chosen will depend on the sensitivity required to solve a particular problem. Accuracy refers to the correctness of the result achieved by the analytical method.
Selectivity:
Selectivity is an indication of the preference that a particular method shows for one substance over another.
Time and Cost:
Time and cost often go hand in hand and usually reflect the equipment, personnel and space required to complete a determination.
ANALYTICAL METHOD VALIDATION
Regulatory perspective: In the US, there was no mention of the word validation in the cGMPs of 1971, and precision and accuracy were stated under laboratory controls. It was only in the cGMP guidelines of March 28, 1979, that the need for validation was implied. This was done in two sections:
i) Section 211.165, in which the word validation was used, and
ii) Section 211.194, in which proof of suitability, accuracy and reliability was made compulsory for regulatory submission.
Subsequently a guideline was issued on 1st February 1987 for submitting samples and analytical data for method validation. The World Health Organization (WHO) published a guideline under the title 'Validation of analytical procedures used in the examination of pharmaceutical materials'. It appeared in the 32nd report of the WHO Expert Committee on Specifications for Pharmaceutical Preparations, published in 1992.
The International Conference on Harmonization (ICH) has been at the forefront of developing the harmonized tripartite guidelines 'Text on validation of analytical procedures (Q2A)' and 'Validation of analytical procedures: methodology (Q2B)'. On 1st March 1999, the FDA published a final guideline on the validation of analytical procedures; its contents were prepared under the auspices of the ICH. According to section 501 of the Federal Food, Drug, and Cosmetic Act, assays and specifications in monographs of the USP and the NF constitute legal standards. As a result, every analytical method should be validated according to the current pharmacopoeial standards.
The ability to provide timely, accurate, and reliable data is central to the role of analytical chemists and is especially true in the discovery, development, and manufacture of pharmaceuticals. Analytical data are used to screen potential drug candidates, aid in the development of drug syntheses, support formulation studies, monitor the stability of bulk pharmaceuticals and formulated products, and test final products for release. The quality of analytical data is a key factor in the success of a drug development program. The process of method development and validation has a direct impact on the quality of these data.
Although a thorough validation cannot rule out all potential problems, the process of method development and validation should address the most common ones. Examples of typical problems that can be minimized or avoided are synthesis impurities that co-elute with the analyte peak in an HPLC assay; a particular type of column that no longer produces the separation needed because the supplier of the column has changed the manufacturing process; an assay method that is transferred to a second laboratory that is unable to achieve the same detection limit; and a quality assurance audit of a validation report that finds no documentation on how the method was performed during the validation.
Problems increase as additional people, laboratories, and equipment are used to perform the method. When the method is used in the developer's laboratory, a small adjustment can usually be made to make the method work, but the flexibility to change it is lost once the method is transferred to other laboratories or used for official product testing. This is especially true in the pharmaceutical industry, where methods are submitted to regulatory agencies and changes may require formal approval before they can be implemented for official testing. The best way to minimize method problems is to perform adequate validation experiments during development.
Method Validation
Method validation is the process of proving that an analytical method is acceptable for its intended purpose. For pharmaceutical methods, guidelines from the United States Pharmacopeia (USP) 29, International Conference on Harmonization (ICH) 30, and the Food and Drug Administration (FDA) 31,32 provide a framework for performing such validations. In general, methods for regulatory submission must include studies on specificity, linearity, accuracy, precision, range, detection limit, quantitation limit, and robustness. Although there is general agreement about what type of studies should be done, there is great diversity in how they are performed 33. The literature contains diverse approaches to performing validations 34-35. Validation requirements are continually changing and vary widely, depending on the type of drug being tested, the stage of drug development, and the regulatory group that will review the drug application. In the early stages of drug development, it is usually not necessary to perform all of the various validation studies. Many researchers focus on specificity, linearity, accuracy, and precision studies for drugs in the preclinical through Phase II (preliminary efficacy) stages. The remaining studies are performed when the drug reaches the Phase III (efficacy) stage of development and has a higher probability of becoming a marketed product. The process of validating a method cannot be separated from the actual development of the method conditions, because the developer will not know whether the method conditions are acceptable until validation studies are performed. The development and validation of a new analytical method may therefore be an iterative process. Results of validation studies may indicate that a change in the procedure is necessary, which may then require revalidation. During each validation study, key method parameters are determined and then used for all subsequent validation steps. To minimize repetitious studies and ensure that the validation data are generated under conditions equivalent to the final procedure, we recommend the following sequence of studies.
Establish minimum criteria:
The first step in the method development and validation cycle should be to set minimum requirements, which are essentially acceptance specifications for the method. A complete list of criteria should be agreed on by the developer and the end users before the method is developed so that expectations are clear. For example, is it critical that method precision (RSD) be 2%? Does the method need to be accurate to within 2% of the target concentration? During the actual studies and in the final validation report, these criteria will allow clear judgment about the acceptability of the analytical method. The statistics generated for making comparisons are similar to what analysts will generate later in the routine use of the method and therefore can serve as a tool for evaluating later questionable data. More rigorous statistical evaluation techniques are available and should be used in some instances, but these may not allow as direct a comparison for method troubleshooting during routine use.
Specificity:
Specificity is the ability of the method to accurately measure the analyte response in the presence of all potential sample components. The response of the analyte in test mixtures containing the analyte and all potential sample components (placebo formulation, synthesis intermediates, excipients, degradation products, process impurities, etc.) is compared with the response of a solution containing only the analyte. If an analytical procedure is able to separate and resolve the various components of a mixture and detect the analyte quantitatively, the method is called selective; if it determines the compound of interest quantitatively in the sample matrix without separation, it is said to be specific. Measuring a method's specificity is extremely important during the validation of non-chromatographic methods, because they do not contain a separation step that ensures non-interference from excipients; they rely on intrinsic differences in chemical or physical properties to ensure their ability to accurately determine the concentration of the analyte in a complex sample mixture.
Linearity:
A linearity study verifies that the sample solutions are in a concentration range where analyte response is linearly proportional to concentration. For assay methods, this study is generally performed by preparing standard solutions at five concentration levels, from 50 to 150% of the target analyte concentration; at least six replicates per concentration must be used. The 50 to 150% range for this study is wider than what is required by the FDA guidelines. In the final method procedure, a tighter range of three standards is generally used, such as 80, 100, and 120% of target; and in some instances, a single standard concentration is used. Validating over a wider range provides confidence that the routine standard levels are well removed from nonlinear response concentrations, that the method covers a wide enough range to incorporate the limits of content uniformity testing, and that it allows quantitation of crude samples in support of process development. For impurity methods, linearity is determined by preparing standard solutions at five concentration levels over a range such as 0.05-2.5 wt%. Acceptability of linearity data is often judged by examining the correlation coefficient and y-intercept of the linear regression line for the response versus concentration plot. A correlation coefficient of > 0.999 is generally considered as evidence of acceptable fit of the data to the regression line. The y-intercept should be less than a few percent of the response obtained for the analyte at the target level. Although these are very practical ways of evaluating linearity data, they are not true measures of linearity 36-37. These parameters, by themselves, can be misleading and should not be used without a visual examination of the response versus concentration plot.
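A minimal sketch of how such linearity data might be evaluated, assuming numpy; the five standard levels and responses are invented, and the y-intercept is expressed as a percentage of the target-level response.

import numpy as np

conc = np.array([50.0, 75.0, 100.0, 125.0, 150.0])           # % of target concentration
resp = np.array([0.251, 0.374, 0.502, 0.627, 0.748])          # absorbance or peak area (hypothetical)

slope, intercept = np.polyfit(conc, resp, 1)                  # least-squares regression line
r = np.corrcoef(conc, resp)[0, 1]                             # correlation coefficient
target_resp = resp[conc == 100.0][0]
print(f"r = {r:.5f}, slope = {slope:.5f}, "
      f"intercept = {100 * intercept / target_resp:.2f}% of target response")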
Accuracy:
The accuracy of a method is the closeness of the measured value to the true value for the sample. Accuracy is usually determined in one of four ways. First, accuracy can be assessed by analyzing a sample of known concentration and comparing the measured value to the true value. National Institute of Standards and Technology (NIST) reference standards are often used; however, such a well-characterized sample is usually not available for new drug-related analytes. The second approach is to compare test results from the new method with results from an existing alternate method that is known to be accurate; again, for pharmaceutical studies, such an alternate method is usually not available. The third and fourth approaches are based on the recovery of known amounts of analyte spiked into the sample matrix. The third approach, the most widely used recovery study, is performed by spiking analyte into blank matrices. For assay methods, spiked samples are prepared in triplicate at three levels over a range of 50-150% of the target concentration. If potential impurities have been isolated, they should be added to the matrix to mimic impure samples. For impurity methods, spiked samples are prepared in triplicate at three levels over a range that covers the expected impurity content of the sample, such as 0.1-2.5 wt%. The analyte levels in the spiked samples should be determined using the same quantitation procedure as will be used in the final method (i.e., the same number and levels of standards, the same number of sample and standard injections, etc.), and the percent recovery should then be calculated. The fourth approach is the technique of standard additions, which can also be used to determine the recovery of spiked analyte. This approach is used when it is not possible to prepare a blank sample matrix without the presence of the analyte; this can occur, for example, with lyophilized material, in which the speciation is significantly different when the analyte is absent. An example of an accuracy criterion for an assay method is that the mean recovery will be 100 ± 2% at each concentration over the range of 80-120% of the target concentration. For an impurity method, the mean recovery will be within 0.1% absolute of the theoretical concentration or 10% relative, whichever is greater, for impurities in the range of 0.1-2.5 wt%.
Determine the range:
The range of an analytical method is the concentration interval over which acceptable accuracy, linearity, and precision are obtained. In practice, the range is determined using data from the linearity and accuracy studies.
The following minimum specified ranges should be considered:
1) For the assay of an active substance or a finished product: normally from 80-120% of the test concentration.
2) For the determination of an impurity: from the reporting level of the impurity to 120% of the specification.
3) For content uniformity: covering a minimum range of 70-130% of the test concentration, unless a wider range is justified by the nature of the dosage form.
4) For dissolution testing: ±20% over the specified range.
5) If assay and purity are performed together as one test and only a 100% standard is used, linearity should cover the range from the reporting level of the impurities to 120% of the assay specification.
Determine precision:
The precision of an analytical method is the amount of scatter in the results obtained from multiple analyses of a homogeneous sample. To be meaningful, the precision study must be performed using the exact sample and standard preparation procedures that will be used in the final method.
The first type of precision study is instrument precision, or repeatability: the variability observed when the method is performed by the same analyst, using the same test method, under the same laboratory conditions, within a short interval of time. It is also known as intra-assay precision.
The second type of precision is reproducibility: the variability of the test method when carried out by different analysts, in different laboratories, using different equipment, reagents and laboratory settings, on different days. It is assessed by means of an inter-laboratory crossover study. Reproducibility should be considered in the case of the standardization of an analytical procedure, for instance for the inclusion of procedures in pharmacopoeias. A simple RSD calculation for such studies is sketched below.
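A hedged sketch of the relative standard deviation calculation used to express repeatability (or any other level of precision); the replicate results are invented.

import numpy as np

def rsd_percent(values):
    """Relative standard deviation (%) of replicate determinations (sample std, n-1)."""
    values = np.asarray(values, dtype=float)
    return 100.0 * values.std(ddof=1) / values.mean()

# Six replicate assay results from one analyst on one day (illustrative numbers, % of label claim).
repeatability = [99.2, 100.1, 99.7, 100.4, 99.9, 100.2]
print(f"repeatability RSD = {rsd_percent(repeatability):.2f}%")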
Scope:
Once these validation studies are complete, the method developers should be confident in the ability of the method to provide good quantitation in their own laboratories. This result may be sufficient for many methods, especially in the early phases of drug development. The remaining studies should provide greater assurance that the method will work well in other laboratories, where different operators, instruments, and reagents are involved and where it will be used over much longer periods of time. This is a good time to begin accumulating data for two or more system suitability criteria, which are required prior to routine use of the method to ensure that it is performing appropriately.
Detection limit:
The detection limit of a method is the lowest analyte concentration that produces a response detectable above the noise level of the system, typically taken as three times the noise level. It is a limit test: concentrations below this level may not be detected, whereas concentrations above it are reliably detected. The detection limit should be estimated early in the method development-validation process and should be re-determined using the specific wording of the final procedure if any changes have been made. It is important to test the method detection limit on different instruments, such as those used in the different laboratories to which the method will be transferred. An example of a detection limit criterion is that, at the 0.05% level, an impurity will have S/N = 3.
Quantitation limit:
The quantitation limit is the lowest level of analyte that can be accurately and precisely measured. This limit is required only for impurity methods and is determined by reducing the analyte concentration until a level is reached at which the precision of the method is unacceptable. If not determined experimentally, the quantitation limit is often calculated as the analyte concentration that gives S/N = 10. An example of a quantitation limit criterion is that the limit will be defined as the lowest concentration level for which an RSD ≤ 20% is obtained in an intra-assay precision study.
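A minimal sketch of signal-to-noise based estimates of the detection and quantitation limits, assuming a known baseline noise and calibration slope; both values below are hypothetical.

def limit_from_noise(noise, slope, factor):
    """
    Signal-to-noise based estimate: the concentration whose signal equals
    `factor` times the baseline noise (factor = 3 for DL, 10 for QL).
    `slope` is the calibration slope (signal per unit concentration).
    """
    return factor * noise / slope

noise, slope = 0.0008, 0.05          # illustrative baseline noise and calibration slope
print("detection limit   ~", limit_from_noise(noise, slope, 3))
print("quantitation limit ~", limit_from_noise(noise, slope, 10))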
Stability:
During the earlier validation studies, the method developer gains some information on the stability of reagents, standards, and sample solutions. For routine testing in which many samples are prepared and analyzed each day, it is often essential that solutions be stable enough to allow for delays such as instrument breakdowns or overnight analyses. At this point, the limits of stability should be tested. Samples and standards should be tested over at least a 48-h period, and quantitation of components should be determined by comparison to freshly prepared standards. If the solutions are not stable over 48 h, storage conditions or additives that improve stability should be identified. An example of a stability criterion for assay methods is that sample and standard solutions and the mobile phase will be stable for 48 h under defined storage conditions; acceptable stability is a change of no more than 2% in standard or sample response relative to freshly prepared standards. The mobile phase is considered to have acceptable stability if aged mobile phase produces equivalent chromatography (capacity factors, resolution, or tailing factor) and assay results within 2% of the values obtained with fresh mobile phase. In the case of UV measurements, the absorbance of aged solutions must remain essentially unchanged.
Robustness:
The robustness of a method is its ability to remain unaffected by small changes in parameters such as percent organic content, pH of the solvent, buffer concentration and temperature. These method parameters may be evaluated one factor at a time or simultaneously as part of a factorial experiment 38.
The evaluation of the robustness should be considered during the development phase. Often such testing is not performed as a part of the official method validation during the transfer of the method to another laboratory.
CONCLUSION
We can conclude that simultaneous spectrophotometric methods for the quantitative estimation of pharmaceuticals are fast, reproducible and highly sensitive; even microgram quantities of a compound can be measured. Performing a thorough method validation can be a tedious process, but the quality of the data generated with the method is directly linked to the quality of this process. Time constraints often do not allow for sufficient method validation. Many researchers have experienced the consequences of invalid methods and realized that the time and resources required to solve problems discovered later exceed what would have been expended initially had the validation studies been performed properly.
Parts per million Conversions
PPM conversion values and serial dilutions : How to dilute and calculate ppm concentrations
and volumes, and how to convert ppm to molarity and percentage amounts.
ppm = parts per million
ppm is a term used in chemistry to denote a very, very low concentration of a solution. One gram in 1000 ml is 1000 ppm and one thousandth of a gram (0.001g) in 1000 ml is one ppm.
One thousandth of a gram is one milligram and 1000 ml is one liter, so that 1 ppm = 1 mg per liter = mg/L.
ppm is derived from the fact that the density of water is taken as 1kg/L = 1,000,000 mg/L, and 1mg/L is 1mg/1,000,000mg or one part in one million.
OBSERVE THE FOLLOWING UNITS
1 ppm = 1 mg/L = 1 ug/mL = 1000 ug/L
ppm = ug/g = ug/mL = ng/mg = pg/ug = 10^-6
ppm = mg per litre of water
1 gram of pure element dissolved in 1000 mL = 1000 ppm
PPB = parts per billion = ug/L = ng/g = ng/mL = pg/mg = 10^-9
Making up 1000 ppm solutions
1. From the pure metal: accurately weigh out 1.000 g of metal, dissolve it in 1:1 concentrated nitric or hydrochloric acid, and make up to the mark with deionised water in a 1 liter volumetric flask.
2. From a salt of the metal :
e.g. Make a 1000 ppm standard of Na using the salt NaCl.
FW of salt = 58.44g.
At. wt. of Na = 23
Mass of salt containing 1 g Na = 58.44 / 23 = 2.541 g.
Hence, weigh out 2.541 g NaCl and dissolve in a 1 liter volume to make a 1000 ppm Na standard.
3. From an acidic radical of the salt :
e.g. Make a 1000 ppm phosphate standard using the salt KH2PO4
FW of salt = 136.09
FW of radical PO4 = 95
Mass of salt containing 1 g PO4 = 136.09 / 95 = 1.432 g.
Hence, weigh out 1.432 g KH2PO4 and dissolve in a 1 liter volume to make a 1000 ppm PO4 standard.
Dilution formula: M1V1 = M2V2
No. of mL of stock needed for the required volume = (required ppm x required volume in mL) / stock ppm
e.g. Make up 50 mls vol of 25 ppm from 100 ppm
25 x 50 / 100 = 12.5 mls. i.e. 12.5 mls of 100 ppm in 50 ml volume will give a 25 ppm solution
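These standard-preparation and dilution calculations are easily scripted; a small Python sketch reproducing the NaCl example and the 25 ppm dilution above (function names are ours, chosen for illustration):

def salt_mass_for_1000ppm(fw_salt, fw_analyte, volume_l=1.0):
    """Mass of salt (g) giving 1000 ppm (1 g/L) of the analyte ion in `volume_l` litres."""
    return 1.0 * fw_salt / fw_analyte * volume_l

def stock_volume_needed(req_ppm, req_vol_ml, stock_ppm):
    """Dilution formula M1V1 = M2V2: volume of stock (mL) for the required solution."""
    return req_ppm * req_vol_ml / stock_ppm

print(salt_mass_for_1000ppm(58.44, 23.0))        # NaCl for a 1000 ppm Na standard (~2.541 g)
print(stock_volume_needed(25.0, 50.0, 100.0))    # 12.5 mL of 100 ppm stock for 50 mL of 25 ppm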
Serial dilutions
Making up 10^-1 M to 10^-5 M solutions from a 1 M stock solution.
Pipette 10 ml of the 1 M stock into a 100 ml volumetric flask and make up to the mark to give a 10^-1 M soln.
Now, pipette 10 ml of this 10^-1 M soln. into another 100 ml flask and make up to the mark to give a 10^-2 M soln.
Pipette again 10 ml of this 10^-2 M soln. into yet another 100 ml flask and make up to the mark to give a 10^-3 M soln.
Pipette 10 ml of this 10^-3 M soln. into another 100 ml flask and make up to the mark to give a 10^-4 M soln.
And from this 10^-4 M soln. pipette 10 ml into a 100 ml flask and make up to the mark to give a final 10^-5 M solution.
Molarity to ppm
Molarity = (conc. in mg/L) / (gram molecular weight of solute x 1000)
Example : What is the Molarity of Ca in a 400 ppm solution of CaCO3.
Solute = 1 gram mole Ca = 40 (At. Wt.) = 40g/liter = 40 x 1000 = 40,000 mg/liter = 40,000 ppm.
Solution = 400ppm (given)
Hence Molarity = 400 divided by 40000 = 0.01M
And ppm is 0.01 x 40 x 1000 = 400 ppm.(cross multiply)
The FW of an ionic species is equal to its concentration in ppm at 10^-3 M. Fluoride has a FW of 19, hence a 10^-3 M concentration is equal to 19 ppm, 1 M is equal to 19,000 ppm and 1 ppm is equal to 5.3 x 10^-5 M.
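A minimal sketch of the ppm/molarity conversions described above; the examples reproduce the Ca and fluoride figures.

def ppm_to_molarity(ppm, formula_weight):
    """ppm (mg/L) divided by the formula weight (g/mol) and by 1000 gives mol/L."""
    return ppm / (formula_weight * 1000.0)

def molarity_to_ppm(molarity, formula_weight):
    """mol/L multiplied by the formula weight (g/mol) and by 1000 gives mg/L (ppm)."""
    return molarity * formula_weight * 1000.0

print(ppm_to_molarity(400.0, 40.0))     # 400 ppm Ca   -> 0.01 M
print(molarity_to_ppm(1e-3, 19.0))      # 10^-3 M F-   -> 19 ppm
print(ppm_to_molarity(1.0, 19.0))       # 1 ppm F-     -> ~5.3 x 10^-5 M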
ISE molarity/ppm conversions
Ppm (parts per million) to % (parts per hundred)
Example:
1 ppm = 1/1,000,000 = 0.000001 = 0.0001%
10 ppm = 10/1,000,000 = 0.00001 = 0.001%
100 ppm = 100/1,000,000 = 0.0001 = 0.01%
200 ppm = 200/1,000,000 = 0.0002 = 0.02%
5000 ppm = 5000/1,000,000 = 0.005 = 0.5%
10,000 ppm = 10000/1,000,000 = 0.01 = 1.0%
20,000 ppm = 20000/1,000,000 = 0.02 = 2.0%
(Parts per hundred) % to ppm
Example:
0.01% = 0.0001
0.0001 x 1,000,000 = 100 ppm
GAS CHROMATOGRAPHY
The gas chromatography technique was first carried out in Austria, and the first exploitation of the method was made by Archer J P Martin and Anthony T James in 1952, when they reported the gas chromatography of organic acids and amines. The "support" was coated with a non-volatile liquid and placed into a heated glass tube. Mixtures injected into the tube and carried through by compressed gas resulted in well defined zones.
This development was a great asset to petroleum chemists who recognized it as a simple and rapid method of analysis of the complex hydrocarbon mixtures encountered in petroleum products. A major advance came with the elimination of the support material and the coating of the liquid onto the wall of a long capillary tube; the advent of capillary columns. This development made it possible to carry out separations of many different components in a single chromatographic analysis.
The discovery of the structure of insulin, for example, was made possible when the British biochemist, Frederick Sanger, rationally and methodically applied the technique to the fragments of the ruptured insulin molecule, for which he received the 1958 Nobel prize for chemistry.
THE PRINCIPLES OF GAS CHROMATOGRAPHY
Chromatography is the separation of a mixture of compounds (solutes) into its individual components. By separating the sample into individual components, it is easier to identify (qualitate) and measure the amount of (quantitate) the various sample components. There are numerous chromatographic techniques and corresponding instruments; gas chromatography (GC) is one of these. It is estimated that 10-20% of known compounds can be analyzed by GC. To be suitable for GC analysis, a compound must have sufficient volatility and thermal stability: if all, or some, of a compound's molecules are in the gas or vapor phase at 400-450°C and they do not decompose at these temperatures, the compound can probably be analyzed by GC.
One or more high purity gases are supplied to the GC. One of the gases (called the carrier gas) flows into the injector, through the column and then into the detector. A sample is introduced into the injector usually with a syringe or an exterior sampling device. The injector is usually heated to 150-250°C which causes the volatile sample solutes to vaporize. The vaporized solutes are transported into the column by the carrier gas. The column is maintained in a temperature controlled oven.
The solutes travel through the column at a rate primarily determined by their physical properties, and the temperature and composition of the column. The various solutes travel through the column at different rates. The fastest moving solute exits (elutes) the column first then is followed by the remaining solutes in corresponding order. As each solute elutes from the column, it enters the heated detector. An electronic signal is generated upon interaction of the solute with the detector. The size of the signal is recorded by a data system and is plotted against elapsed time to produce a chromatogram.
The ideal chromatogram has closely spaced peaks with no overlap between them; peaks that overlap are said to coelute. The time and size of a peak are important because they are used to identify and measure the amount of the compound in the sample. The size of a peak corresponds to the amount of the compound in the sample: a larger peak is obtained as the concentration of the corresponding compound increases. If the column and all of the operating conditions are kept the same, a given compound always travels through the column at the same rate. Thus, a compound can be identified by the time required for it to travel through the column (called the retention time).
The identity of a compound cannot be determined solely by its retention time. A known amount of an authentic, pure sample of the compound has to be analyzed and its retention time and peak size determined. This value can be compared to the results from an unknown sample to determine whether the target compound is present (by comparing retention times) and its amount (by comparing peak sizes).
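As a rough illustration of this comparison (not part of the original method description), the short Python sketch below matches a peak in an unknown run to a reference standard by retention time and estimates its amount from the ratio of peak areas. All names, retention times, areas and the tolerance window are assumed values.

    # Hedged sketch: identify a peak by retention time and quantify it against a
    # single external standard. All numbers are assumed, for illustration only.
    standard = {"name": "compound A", "rt_min": 4.82, "area": 152000, "amount_ng": 50.0}

    unknown_peaks = [      # (retention time in minutes, peak area) from an unknown run
        (2.10, 34000),
        (4.84, 76500),
        (6.33, 12100),
    ]

    RT_TOLERANCE = 0.05    # assumed acceptable retention-time window, in minutes

    for rt, area in unknown_peaks:
        if abs(rt - standard["rt_min"]) <= RT_TOLERANCE:
            # Peak size is assumed proportional to amount (single-point calibration).
            amount = standard["amount_ng"] * area / standard["area"]
            print(f"{standard['name']} found at {rt:.2f} min, about {amount:.1f} ng")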
If any of the peaks overlap, accurate measurement of these peaks is not possible. If two peaks have the same retention time, accurate identification is not possible. Thus, it is desirable to have no peak overlap or co-elution.
RETENTION TIME (tR)
Retention time (tR) is the time it takes a solute to travel through the column. The retention time is assigned to the corresponding solute peak. The retention time is a measure of the amount of time a solute spends in a column. It is the sum of the time spent in the stationary phase and the mobile phase.
COLUMN BLEED:
Column bleed is the background generated by all columns. It is the continuous elution of the compounds produced from normal degradation of the stationary phase. Column bleed increases at higher temperatures.
COLUMN TEMPERATURE LIMITS:
Columns have lower and upper temperature limits. If a column is used below its lower temperature limit, rounded and wide peaks are obtained (i.e., loss of efficiency).
No column damage has occurred; however, the column does not function properly.
Using the column at or above its lower limit maintains good peak shapes.
Upper temperature limits are often stated as two numbers. The lower one is the isothermal temperature limit. The column can be used indefinitely at this temperature and reasonable column bleed and lifetime are realized.
The upper number is the temperature program limit. A column can be maintained at this temperature for 10-15 minutes without severely shortening column lifetime or experiencing excessively high column bleed.
Exposing the column to higher temperatures or for longer time periods results in higher column bleed and shorter column lifetimes. Exceeding the upper temperature limits may damage the stationary phase and the inertness of the fused silica tubing.
COLUMN CAPACITY:
Column capacity is the maximum amount of a solute that can be introduced into a column before significant peak distortion occurs.
Overloaded peaks are asymmetric with a leading edge. Overloaded peaks are often described as "shark fin" shaped. Tailing peaks are obtained if a PLOT column is overloaded. No damage occurs if a column is overloaded.
STATIONARY PHASES
POLYSILOXANES:
Polysiloxanes are the most common stationary phases. They are available in the greatest variety and are the most stable, robust and versatile.
The most basic polysiloxane is the 100% methyl substituted. When other groups are present, the amount is indicated as the percent of the total number of groups. For example, a 5% diphenyl-95% dimethyl polysiloxane contains 5% phenyl groups and 95% methyl groups. The "di-" prefix indicates that each silicon atom contains two of that particular group. Sometimes this prefix is omitted even though two identical groups are present.
If the methyl percentage is not stated, it is understood to be present in the amount necessary to make 100% (e.g., 50% phenyl-methyl polysiloxane contains 50% methyl substitution).
Cyanopropylphenyl percent values can be misleading. A 14% cyanopropylphenyl-dimethyl polysiloxane contains 7% cyanopropyl and 7% phenyl (along with 86% methyl). The cyanopropyl and phenyl groups are on the same silicon atom, thus their amounts are summed.
POLYETHYLENE GLYCOLS:
Polyethylene glycols (PEG) are widely used as stationary phases. Stationary phases with "wax" or "FFAP" in their name are some type of polyethylene glycol. Polyethylene glycols stationary phases are not substituted, thus the polymer is 100% of the stated material. They are less stable, less robust and have lower temperature limits than most polysiloxanes.
With typical use, they exhibit shorter lifetimes and are more susceptible to damage upon overheating or exposure to oxygen.
The unique separation properties of polyethylene glycols make these liabilities tolerable. Polyethylene glycol stationary phases must be liquids under GC temperature conditions.
GAS - SOLID:(PLOT Columns)
Gas-solid stationary phases are composed of a thin layer (usually <10 µm) of small particles adhered to the surface of the tubing.
These are porous layer open tubular (PLOT) columns. The sample compounds undergo a gas-solid adsorption/desorption process with the stationary phase. The particles are porous, thus size exclusion and shape selectivity processes also occur.
Various derivatives of styrene, aluminum oxides and molecular sieves are the most common PLOT column stationary phases.
PLOT columns are very retentive. They are used to obtain separations that are impossible with conventional stationary phases. Also, many separations requiring subambient temperatures with polysiloxanes or polyethylene glycols can be easily accomplished above ambient temperatures with PLOT columns.
Hydrocarbon and sulfur gases, noble and permanent gases, and low boiling point solvents are some of the more common compounds separated with PLOT columns.
Some PLOT columns may occasionally lose particles of the stationary phase. For this reason, using PLOT columns that may lose particles with detectors negatively affected by particulate matter is not recommended. Mass spectrometers are particularly susceptible to this problem due to the presence of a strong vacuum at the exit of the column.
BONDED AND CROSS-LINKED STATIONARY PHASES:
Cross-linked stationary phases have the individual polymer chains linked via covalent bonds.
Bonded stationary phases are covalently bonded to the surface of the tubing.
Both techniques impart enhanced thermal and solvent stability to the stationary phase. Also, columns with bonded and cross-linked stationary phases can be solvent rinsed to remove contaminants.
Most polysiloxanes and polyethylene glycol stationary phases are bonded and cross-linked.
A few stationary phases are available in a nonbonded version; some stationary phases are not available in bonded and cross-linked versions. Use a bonded and cross-linked stationary phase if one is available.
COLUMN DEGRADATION
CAUSES OF COLUMN PERFORMANCE DEGRADATION
COLUMN BREAKAGE:
Fused silica columns break wherever there is a weak point in the polyimide coating. The polyimide coating protects the fragile fused silica tubing. The continuous heating and cooling of the oven, vibrations caused by the oven fan and being wound on a circular cage all place stress on the tubing. Eventually breakage occurs at a weak point. Weak spots are created when the polyimide coating is scratched or abraded. This usually occurs when a sharp point or edge is dragged over the tubing. Column hangers and tags, metal edges in the GC oven, column cutters and miscellaneous items on the lab bench are just some of the common sources of sharp edges or points.
It is rare for a column to spontaneously break. Column manufacturing practices tend to expose any weak tubing and eliminate it from use in finished columns. Larger diameter columns are more prone to breakage. This means that greater care and prevention against breakage must be taken with 0.45-0.53 mm I.D. tubing than with 0.18-0.32 mm I.D. tubing.
A broken column is not always fatal. If a broken column was maintained at a high temperature, either continuously or with multiple temperature program runs, damage to the column is very likely. The back half of the broken column has been exposed to oxygen at elevated temperatures, which rapidly damages the stationary phase. The front half is fine since carrier gas flowed through this length of column. If a broken column has not been heated, or was exposed to high temperatures or oxygen only for a very short time, the back half has probably not suffered any significant damage.
A union can be installed to repair a broken column. Any suitable union will work to rejoin the column. No more than 2-3 unions should be installed on any one column. Problems with dead volume (peak tailing) may occur with multiple unions.
THERMAL DAMAGE:
Exceeding a column upper temperature limit results in accelerated degradation of the stationary phase and tubing surface. This results in the premature onset of excessive column bleed, peak tailing for active compounds and/or loss of efficiency (resolution). Fortunately, thermal damage is a slower process, thus prolonged times above the temperature limit are required before significant damage occurs. Thermal damage is greatly accelerated in the presence of oxygen. Overheating a column with a leak or high oxygen levels in the carrier gas results in rapid and permanent column damage.
Setting the maximum oven temperature at or a few degrees above the column temperature limit is the best method to prevent thermal damage. This prevents the accidental overheating of the column. If a column is thermally damaged, it may still be functional. Remove the column from the detector. Heat the column for 8-16 hours at its isothermal temperature limit. Remove 10-15 cm from the detector end of the column. Reinstall the column and condition as usual. The column usually does not return to its original performance; however, it is often still functional. The life of the column will be reduced after thermal damage.
OXYGEN DAMAGE:
Oxygen is an enemy to most capillary GC columns. While no column damage occurs at or near ambient temperatures, severe damage occurs as the column temperature increases. In general, the temperature and oxygen concentration at which significant damage occurs is lower for polar stationary phases. It is constant exposure to oxygen that is the problem. Momentary exposure, such as an injection of air or a very brief septum nut removal, is not a problem.
A leak in the carrier gas flow path (e.g., gas lines, fittings, injector) is the most common source of oxygen exposure. As the column is heated, very rapid degradation of the stationary phase occurs. This results in the premature onset of excessive column bleed, peak tailing for active compounds and/or loss of efficiency (resolution). These are the same symptoms as for thermal damage. Unfortunately, by the time oxygen damage is discovered, significant column damage has already occurred. In less severe cases, the column may still be functional but at a reduced performance level. In more severe cases, the column is irreversibly damaged.
Maintaining an oxygen and leak free system is the best prevention against oxygen damage. Good GC system maintenance includes periodic leak checks of the gas lines and regulators, regular septa changes, using high quality carrier gases, installing and changing oxygen traps, and changing gas cylinders before they are completely empty.
CHEMICAL DAMAGE:
There are relatively few compounds that damage stationary phases. Introducing non-volatile compounds (high molecular weight or high boiling point) into a column often degrades performance, but damage to the stationary phase does not occur. These residues can often be removed, and performance restored, by solvent rinsing the column.
Inorganic or mineral bases and acids are the primary compounds to avoid introducing in a column. The acids include hydrochloric (HCl), sulfuric (H2SO4), nitric (HNO3), phosphoric (H3PO4) and chromic (CrO3). The bases include potassium hydroxide (KOH), sodium hydroxide (NaOH) and ammonium hydroxide (NH4OH). Most of these acids and bases are not very volatile and accumulate at the front of the column. If allowed to remain, the acids or bases damage the stationary phase. This results in the premature onset of excessive column bleed, peak tailing for active compounds and/or loss of efficiency (resolution). The symptoms are very similar to thermal and oxygen damage.
Hydrochloric acid and ammonium hydroxide are the least harmful of the group. Both tend to follow any water that is present in the sample. If the water is not retained, or is only poorly retained, by the column, the residence time of HCl and NH4OH in the column is short. This tends to eliminate or minimize any damage by these compounds. Thus, if HCl or NH4OH is present in a sample, using conditions or a column with no water retention will render these compounds relatively harmless to the column.
The only organic compounds that have been reported to damage stationary phases are perfluoroacids. Examples include trifluoroacetic, pentafluoropropanoic and heptafluorobutyric acid. They need to be present at high levels (e.g., 1% or higher). Most of the problems are experienced with splitless or Megabore direct injections where large volumes of the sample are deposited at the front of the column.
Since chemical damage is usually limited to the front of the column, trimming or cutting 1/2-1 meter from the front of the column often eliminates any chromatographic problems. In more severe cases, 5 or more meters may need to be removed. The use of a guard column or retention gap will minimize the amount of column damage; however, frequent trimming of the guard column may be necessary. The acid or base often damages the surface of the deactivated fused silica tubing which leads to peak shape problems for active compounds.
COLUMN CONTAMINATION
Column contamination is one of the most common problems encountered in capillary GC. Unfortunately, it mimics a very wide variety of problems and is often misdiagnosed as another problem. A contaminated column is usually not damaged, but it may be rendered unusable.
There are two basic types of contaminants: nonvolatile and semi-volatile. Nonvolatile contaminants or residues do not elute and accumulate in the column. The column becomes coated with these residues which interfere with the proper partitioning of solutes in and out of the stationary phase. Also, the residues may interact with active solutes resulting in peak adsorption problems (evident as peak tailing or loss of peak size). Active solutes are those containing a hydroxyl (-OH) or amine (-NH) group, and some thiols (-SH) and aldehydes.
Semivolatile contaminants or residues accumulate in the column, but eventually elute. Hours to days may elapse before they completely leave the column. Like nonvolatile residues, they may cause peak shape and size problems and, in addition, are usually responsible for many baseline problems (instability, wander, drift, ghost peaks, etc.).
Contaminants originate from a number of sources, with injected samples being the most common. Extracted samples are among the worst types. Biological fluids and tissues, soils, waste and ground water, and similar matrices contain high amounts of semivolatile and nonvolatile materials. Even with careful and thorough extraction procedures, small amounts of these materials are present in the injected sample. Several to hundreds of injections may be necessary before the accumulated residues cause problems. Injection techniques such as on-column, splitless and Megabore direct place a large amount of sample into the column, thus column contamination is more common with these injection techniques.
Occasionally contaminants originate from materials in gas lines and traps, ferrule and septa particles, or anything coming in contact with the sample (vials, solvents, syringes, pipettes, etc.). These types of contaminants are probably responsible when a contamination problem suddenly develops and similar samples in previous months or years did not cause any problems.
Minimizing the amount of semivolatiles and nonvolatile sample residues is the best method to reduce contamination problems. Unfortunately, the presence and identity of potential contaminants are often unknown. Rigorous and thorough sample cleanup is the best protection against contamination problems. The use of a guard column or retention gap often reduces the severity or delays the onset of column contamination induced problems. If a column becomes contaminated, it is best to solvent rinse the column to remove the contaminants.
Maintaining a contaminated column at high temperatures for long periods of time (often called baking out a column) is not recommended. Baking out a column may convert some of the contaminating residues into insoluble materials that cannot be solvent rinsed from the column. If this occurs, the column cannot be salvaged in most cases.
Sometimes the column can be cut in half and the back half may still be useable. Baking out a column should be limited to 1-2 hours at the isothermal temperature limit of the column.
PROBLEMS IN GAS CHROMATOGRAPHY
TROUBLESHOOTING:
EVALUATING THE PROBLEM:
The first step in any troubleshooting effort is to step back and evaluate the situation. Rushing to solve the problem often results in a critical piece of important information being overlooked or neglected. In addition to the problem, look for any other changes or differences in the chromatogram. Many problems are accompanied by other symptoms. Retention time shifts, altered baseline noise or drift, or peak shape changes are only a few of the other clues that often point to or narrow the list of possible causes. Finally, make note of any changes or differences involving the sample. Solvents, vials, pipettes, storage conditions, sample age, extraction or preparation techniques, or any other factor influencing the sample environment can be responsible.
SIMPLE CHECKS AND OBSERVATIONS:
A surprising number of problems involve fairly simple and often overlooked components of the GC system or analysis. Many of these items are transparent in the daily operation of the GC and are often taken for granted (set it and forget it). The areas and items to check include:
1. Gases - pressures, carrier gas average linear velocity, and flow rates (detector, split vent, septum purge).
2. Temperatures - column, injector, detector and transfer lines.
3. System parameters - purge activation times, detector attenuation and range, mass ranges, etc.
4. Gas lines and traps - cleanliness, leaks, expiration.
5. Injector consumables - septa, liners, O-rings and ferrules.
6. Sample integrity - concentration, degradation, solvent, storage.
7. Syringes - handling technique, leaks, needle sharpness, cleanliness.
8. Data system - settings and connections.
GHOST PEAKS AND CARRYOVER:
System contamination is responsible for most ghost peaks or carryover problems. If the extra ghost peaks are similar in width to the sample peaks (with similar retention times), the contaminants were most likely introduced into the column at the same time as the sample. The extra compounds may be present in the injector (i.e., contamination) or in the sample itself. Impurities in solvents, vials, caps and syringes are only some of the possible sources. Injecting sample and solvent blanks may help to find possible sources of the contaminants. If the ghost peaks are much broader than the sample peaks, the contaminants were most likely already in the column when the injection was made. These compounds were still in the column when a previous GC run was terminated. They elute during a later run and are often very broad. Sometimes numerous ghost peaks from multiple injections overlap and elute as a hump or blob. This often takes on the appearance of baseline drift or wander.
Increasing the final temperature or time in the temperature program is one method to minimize or eliminate a ghost peak problem. Alternatively, a short bake-out after each run or series of runs may remove the highly retained compounds from the column before they cause a problem. Performing a condensation test is a good method to determine whether a contaminated injector is the source of the carryover or ghost peaks.
Determination of Nitrogen - Kjeldahl Analyser
Introduction
Nitrogen content of a sample may be required for effluent treatment purposes, to determine the protein content of food, or to find the ammonium content of a fertilizer. The determination of the nitrogen content of a sample using the Kjeldahl procedure involves the destruction of the sample matrix and the conversion of nitrogenous matter to ammonium salts. This digestion is carried out with concentrated sulphuric acid at temperatures above its boiling point. (CARE! Sulphuric acid, particularly when hot, is very corrosive and must be handled with care.)
The ammonium salt is then converted to ammonia by reaction with sodium hydroxide; the ammonia is steam-distilled off and trapped in a boric acid solution, and its value is expressed in the desired form, including NH3-N, NO3-N, crude protein, etc., using the appropriate calculation.
Procedure
Sample digestion
1. Place a weighed quantity of sample into the Kjeldahl tubes provided. The mass to be used will depend on the nitrogen content of the sample. Each sample will be analysed in triplicate.
2. Into each tube, place one catalyst tablet (note type and composition), followed by the required volume of sulphuric acid. Rack and leave in fume hood to predigest overnight.
3. After predigestion, place the tubes into the heating block of the Kjeldahl digester apparatus, put the vapor trap into place, and turn on the water vacuum pump to remove acid fumes from the sample tubes.
4. Switch on the heater set at 400°C and allow the samples to digest completely, such that all the sample is dissolved and the extract is clear blue in color. Note the time taken for this process.
Automated Ammonia determination
1. At the end of the digestion period, lift the tubes clear of the heating block and allow them to cool to near ambient temperature. Place one tube at a time into the sample port of the steam distillation unit and ensure that it is firmly held in place.
2. Follow the instructions provided for the operation of the distillation and titration unit, and note all the operating parameters.
3. Start the distillation and titration process and record the nitrogen value obtained.
4. Ensure that sample blanks, certified reference material, and samples are analysed. Correct all sample values using the blank value.
5. Calculate the mean % recovery ± s.d. of nitrogen from Certified Reference Material.
6. Express the nitrogen content of the sample as weight % Crude Protein (a worked calculation sketch is given below).
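The calculations referred to in the steps above can be sketched as follows. This is only a minimal illustration using the standard Kjeldahl relationship and the conventional 6.25 nitrogen-to-protein factor; the titre volumes, acid normality and sample mass are assumed values and should be replaced with your own data.

    # Hedged sketch of the Kjeldahl calculations described above.
    # All numerical values are assumed, for illustration only.

    def percent_nitrogen(v_sample_ml, v_blank_ml, acid_normality, sample_mass_g):
        """Blank-corrected %N (w/w) from the acid titre of the distilled ammonia."""
        mg_nitrogen = (v_sample_ml - v_blank_ml) * acid_normality * 14.007
        return mg_nitrogen / (sample_mass_g * 1000.0) * 100.0

    PROTEIN_FACTOR = 6.25   # general-purpose nitrogen-to-protein conversion factor

    n_percent = percent_nitrogen(v_sample_ml=12.40, v_blank_ml=0.15,
                                 acid_normality=0.1, sample_mass_g=0.500)
    crude_protein = n_percent * PROTEIN_FACTOR

    print(f"Nitrogen: {n_percent:.2f} %   Crude protein: {crude_protein:.1f} %")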
Effective HPLC method development - A Detailed view
1. Introduction
Optimization of HPLC method development has been discussed extensively in many standard textbooks. However, most of the discussion has focused on the optimization of HPLC conditions. This article will look at the topic from other perspectives. All critical steps in method development will be summarized, and a sequence of events required to develop the method efficiently will be proposed. The steps will be discussed in the same order as they would be investigated during the method development process. The rationale will be illustrated by focusing on the development of a stability-indicating HPLC-UV method for related substances (impurities). The principles, however, will be applicable to most other HPLC methods.
In order to have an efficient method development process, the following three questions must be
answered:
1.1. What are the critical components for a HPLC method?
The 3 critical components for a HPLC method are: sample preparation, HPLC analysis
and standardization (calculations). During the preliminary method development stage,
all individual components should be investigated before the final method optimization.
This gives the scientist a chance to critically evaluate the method performance in each
component and streamline the final method optimization.
1.2. What should be the percentage of time spent on different steps of the method development?
The rest of the article will discuss the recommended sequence of events, and the
percentage of time that should be spent on each step in order to meet the method
development timeline. One common mistake is that most scientists focus too much
on the HPLC chromatographic conditions and neglect the other 2 components of
the method (i.e., sample preparation, standardization). The recommended timeline
would help scientists investigate different aspects of the method development and
allocate appropriate time in all steps.
1.3. How should a method development experiment be designed?
A properly designed method development experiment should consider the following
important questions:
What sample should be used at each stage?
What should the scientists look for in these experiments?
What are the acceptance criteria?
We will see these questions in the following discussions.
2. Method Development Timeline
The following is a suggested method development timeline for a typical HPLC-UV
related substance method. The percentage of time spent on each stage is proposed to
ensure the scientist will allocate sufficient time to different steps. In this approach, the
three critical components for a HPLC method (sample preparation, HPLC analysis and
standardization) will first be investigated individually. Each of these steps will be
discussed in more detail in the following paragraphs.
Step 1: Define method objectives and understand the chemistry (10%)
Determine the goals for method development (e.g., what is the intended use of the method?), and
understand the chemistry of the analytes and the drug product.
Step 2: Initial HPLC conditions (20%)
Develop preliminary HPLC conditions to achieve minimally acceptable separations. These
HPLC conditions will be used for all subsequent method development experiments.
Step 3: Sample preparation procedure (10%)
Develop a suitable sample preparation scheme for the drug product.
Step 4: Standardization (10%)
Determine the appropriate standardization method and the use of relative response factors in
calculations.
Step 5: Final method optimization/robustness (20%)
Identify the “weaknesses” of the method and optimize the method through experimental design.
Understand the method performance with different conditions, different instrument set ups and
different samples.
Step 6: Method validation (30%)
Complete method validation according to ICH guidelines.
3. Define Method Objectives
There is no absolute end to the method development process. The question is what is the
“acceptable method performance”? The acceptable method performance is determined by the
objectives set in this step. This is one of the most important considerations often overlooked by
scientists. In this section, the different end points (i.e., expectations) will be discussed in
descending order of significance.
3.1 Analytes:
For a related substance method, determining the “significant and relevant” related substances is
very critical. With limited experience with the drug product, a good way to determine the
significant related substances is to look at the degradation products observed during stress
testing. Significant degradation products observed during stress testing should be investigated in
the method development.
Based on the current ICH guidelines on specifications, the related substances method for active
pharmaceutical ingredients (API) should focus on both the API degradation products and
synthetic impurities, while the same method for drug products should focus only on the
degradation products. In general practice, unless there are any special toxicology concerns,
related substances below the limit of quantitation (LOQ) should not be reported and therefore
should not be investigated.
In this stage, relevant related substances should be separated into 2 groups:
3.1.1. Significant related substances: Linearity, accuracy and response factors should be
established for the significant related substances during the method validation. To limit the
workload during method development, usually 3 or fewer significant related substances should
be selected in a method.
3.1.2 Other related substances: These are potential degradation products that are not
significant in amount. The developed HPLC conditions only need to provide good
resolution for these related substances to show that they do not exist in significant levels.
3.2 Resolution (Rs)
A stability indicating method must resolve all significant degradation products from each other.
Typically the minimum requirement for baseline resolution is 1.5. This limit is valid only for 2
Gaussian-shaped peaks of equal size. In actual method development, Rs = 2.0 should be used as a
minimum to account for day to day variability, non-ideal peak shapes and differences in peak
sizes.
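For reference, the resolution criterion above corresponds to the usual baseline-width formula Rs = 2(t2 - t1)/(w1 + w2). The sketch below, with assumed retention times and baseline peak widths in minutes, checks two adjacent peaks against the Rs = 2.0 working target.

    # Hedged sketch: baseline resolution between two adjacent peaks.
    # Retention times and baseline peak widths are assumed values, in minutes.

    def resolution(t1, w1, t2, w2):
        """Rs = 2 * (t2 - t1) / (w1 + w2), using baseline peak widths."""
        return 2.0 * (t2 - t1) / (w1 + w2)

    rs = resolution(t1=6.10, w1=0.30, t2=6.85, w2=0.34)
    print(f"Rs = {rs:.2f}",
          "(meets the >= 2.0 working target)" if rs >= 2.0 else "(below target)")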
3.3 Limit of Quantitation (LOQ)
The desired method LOQ is related to the ICH reporting limits. If the corresponding ICH
reporting limit is 0.1%, the method LOQ should be 0.05% or less to ensure the results are
accurate up to one decimal place. However, it is of little value to develop a method with an LOQ
much below this level in standard practice because when the method is too sensitive, method
precision and accuracy are compromised.
3.4 Precision, Accuracy
Expectations for precision and accuracy should be determined on a case by case basis. For a
typical related substance method, the RSD of 6 replicates should be less than 10%. Accuracy
should be within 70% to 130% of theory at the LOQ level.
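As a simple illustration of these acceptance criteria, the sketch below computes the RSD of six assumed replicate results at the LOQ level and the recovery against an assumed theoretical value; the numbers are illustrative only.

    # Hedged sketch of the precision and accuracy checks mentioned above,
    # using six assumed replicate results (% w/w) at the LOQ level.

    from statistics import mean, stdev

    replicates = [0.051, 0.048, 0.053, 0.049, 0.052, 0.047]   # assumed results
    theoretical = 0.050                                        # assumed spiked level

    rsd = stdev(replicates) / mean(replicates) * 100.0         # relative standard deviation
    recovery = mean(replicates) / theoretical * 100.0          # accuracy as % of theory

    print(f"RSD = {rsd:.1f} %  (target < 10 %)")
    print(f"Recovery = {recovery:.0f} % of theory (target 70-130 % at LOQ)")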
3.5 Analysis time
A run time of about 5-10 minutes per injection is sufficient in most routine related substance
analyses. Unless the method is intended to support a high-volume assay, shortening the run time
further is not recommended as it may compromise the method performance in other aspects (e.g.,
specificity, precision and accuracy).
3.6 Adaptability for Automation
For methods that are likely to be used in a high sample volume application, it is very important
for the method to be “automatable”. The manual sample preparation procedure should be easy to
perform. This will ensure the sample preparation can be automated in common sample
preparation workstations.
4. Understand the Chemistry
Similar to any other research project, a comprehensive literature search of the chemical and
physical properties of the analytes (and other structurally related compounds) is essential to
ensure the success of the project.
4.1 Chemical Properties
Most sample preparations involve the use of organic-aqueous and acid-base extraction
techniques. Therefore it is very helpful to understand the solubility and pKa of the analytes.
Solubility in different organic or aqueous solvents determines the best composition of the sample
solvent. pKa determines the pH in which the analyte will exist as a neutral or ionic species. This
information will facilitate an efficient sample extraction scheme and determine the optimum pH
in the mobile phase to achieve good separations.
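To make the role of pKa concrete, the following sketch uses the Henderson-Hasselbalch relationship to estimate the ionized fraction of a hypothetical monoprotic acidic analyte (assumed pKa of 4.5) at a few mobile-phase pH values; the analogous, inverted reasoning applies to basic analytes.

    # Hedged sketch: fraction of a monoprotic acidic analyte that is ionized at a
    # given mobile-phase pH (Henderson-Hasselbalch); pKa and pH values are assumed.

    def fraction_ionized_acid(ph, pka):
        """Fraction of an acid present as the anion at the stated pH."""
        return 1.0 / (1.0 + 10.0 ** (pka - ph))

    for ph in (2.5, 4.5, 6.8):
        print(f"pH {ph}: {fraction_ionized_acid(ph, pka=4.5) * 100:.0f} % ionized")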
4.2 Potential Degradation Products
Subjecting the API or drug product to common stress conditions provides insight into the
stability of the analytes under different conditions. The common stress conditions include acidic
pH, basic pH, neutral pH, different temperature and humidity conditions, oxidation, reduction
and photo-degradation. These studies help to determine the significant related substances to
be used in method development, and to determine the sample solvent that gives the best sample
solution stability.
In addition, the structures of the analytes will indicate the potential active sites for degradation.
Knowledge from basic organic chemistry will help to predict the reactivity of the functional
groups. For example, some excipients are known to contain trace levels of peroxide impurities.
If the analyte is susceptible to oxidation, these peroxide impurities could possibly produce
significant degradation products.
4.3 Sample Matrix
Physical (e.g., solubility) and chemical (e.g., UV activity, stability, pH effect) properties of the
sample matrix will help to design an appropriate sample preparation scheme. For example,
Hydroxypropyl Methylcellulose (HPMC) is known to absorb water to form a very viscous
solution, so it is essential to use mostly organic solvents in sample preparation.
5. Initial Method Conditions
The objective at this stage is to quickly develop HPLC conditions for subsequent method
development experiments. A common mistake is that scientists spend too much time at this
stage trying to get a perfect separation.
5.1 Preliminary HPLC Conditions
In order to develop preliminary HPLC conditions in a timely fashion, scientists should use
artificial mixtures of active pharmaceutical ingredients and related substances at relatively high
concentrations (e.g., 1-2% of related substance relative to API) to develop the preliminary HPLC
conditions. The concentration ratio between API and the related substances should be maintained
to ensure the chromatography represents that of a real sample. Alternatively, a highly stressed
sample (e.g., 5% degradation) can also be used at this stage. With the known composition and
high levels of degradation products in the sample, one can evaluate the chromatography to
determine whether there are adequate separations for all analytes. The high concentrations of
related substances are used to ensure all peaks will be detected.
Computer-assisted method development can be very helpful in developing the preliminary HPLC
conditions quickly. Since the objective at this stage is to quickly develop HPLC conditions for
subsequent method development experiments, scientists should focus on the separation of the
significant related substances (section 3.1.1) instead of trying to achieve good resolution for all
related substances. These significant related substances should be baseline resolved from each
other with Rs > 2.0. After the preliminary method development, the HPLC conditions can be
further fine-tuned at a later stage to achieve the required specificity for the other related substances.
5.2 Aged HPLC Column
An aged HPLC column should be used to develop the initial HPLC conditions. Usually it is
more difficult to achieve the required resolution with an aged column (e.g., column with about
200 injections). This will reflect the worst-case scenario likely to be encountered in actual
method use and helps ensure long-term method robustness.
In general, develop all methods with HPLC columns from the same vendor. The preferred brand
of HPLC column should be selected primarily based on long-term stability and lot-to-lot
reproducibility.
6. Sample Preparation
6.1 Selection of Sample Solvent
This stage focuses on the selection of the sample solvent (for extraction) and the proper sample
preparation procedures. Investigate the effect of sample solvents with different % organic and pH,
as well as different extraction volumes and extraction procedures, on accuracy, precision, sensitivity
(LOQ) and changes in the chromatography (e.g., peak shape, resolution). Whenever possible, use the mobile
phase in the sample preparation. This will ensure that there will not be any compatibility issues
between the sample solution and the HPLC conditions.
6.1.1 Accuracy:
To investigate the accuracy in sample preparation (i.e., extraction efficiency),
prepare a spiked solution by adding known amounts of related substances into a sample
matrix. Compare the responses of the spiked solutions and the neat standard solutions to assess the
recovery from the sample preparation. In this stage, since only one particular step is being
investigated (i.e., sample preparation), close to theoretical recovery should be observed at this
point (e.g., 90-110%).
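The recovery check described above is simply a ratio of responses; a minimal sketch with hypothetical peak areas:

    # Hypothetical peak areas for a related substance spiked into matrix
    # versus the same concentration prepared neat (no matrix).
    area_spiked_in_matrix = 15230
    area_neat_standard = 15890

    recovery = area_spiked_in_matrix / area_neat_standard * 100
    print(f"Extraction recovery = {recovery:.1f}% (target: 90-110%)")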
6.1.2 Precision:
Use the stressed sample to represent the worst case scenario and perform
replicate sample preparations from the same sample composite. Investigate the consistency of
the related substance profile (i.e., any missing peaks?) and the repeatability results from these
preparations.
6.2 Sample Concentration
Another objective is to determine the sample concentration that gives an acceptable LOQ
(signal-to-noise ratio, S/N) at low-level spike concentrations. The sample concentration should
be low enough to maintain linearity and precision, but high enough to achieve the desired LOQ.
For example, if the ICH reporting limit for this drug product is 0.1%, the LOQ of the method
should be less than 0.05% (i.e., desired LOQ, in %). By using spiked sample solutions at very
dilute concentrations of the significant related substances, estimate the concentrations that give
an S/N of about 10 for the significant related substances. This estimated concentration is the
approximate LOQ concentration (i.e., estimated LOQ concentration, in µg/mL).
The following equation can be used to estimate the target sample concentration for the method:
Target sample concentration (µg/mL) =
estimated LOQ concentration (µg/mL) x 1/desired LOQ (%) x 100%
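For example, with a hypothetical estimated LOQ concentration of 0.05 µg/mL and a desired LOQ of 0.05%, the equation gives a target sample concentration of 100 µg/mL:

    # Hypothetical values
    estimated_loq_conc_ug_per_ml = 0.05   # concentration giving S/N ~ 10 for the significant impurities
    desired_loq_percent = 0.05            # desired LOQ relative to the API (half of a 0.1% reporting limit)

    target_sample_conc = estimated_loq_conc_ug_per_ml * (1 / desired_loq_percent) * 100
    print(f"Target sample concentration = {target_sample_conc:.0f} ug/mL")  # -> 100 ug/mL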
7. Standardization
7.1 Area % method
If the response of the active pharmaceutical ingredient is linear from LOQ to the nominal sample
concentration, use the % area approach where the related substance is reported as % area. This is
the most straightforward approach, and doesn’t require the preparation of standard solutions. It
also has the highest precision since preparation to preparation variation will not affect the results.
However, in order to ensure the response is linear within this range, the sample
concentration is usually limited and this will reduce the method sensitivity (i.e., increase the LOQ).
In general, use this approach as long as the desired LOQ can be achieved.
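As a simple illustration, the area % calculation expresses each related substance peak as a percentage of the total integrated peak area; a short sketch with hypothetical areas:

    # Hypothetical integrated peak areas from one injection.
    areas = {"API": 1_250_000, "RRT 0.85": 1_400, "RRT 1.20": 900}

    total = sum(areas.values())
    for name, area in areas.items():
        if name != "API":
            print(f"{name}: {area / total * 100:.2f}% area")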
7.2 External Standard method
Use the external standard method if the response of the active pharmaceutical ingredient is not
linear throughout the whole range, or the desired LOQ cannot be achieved by the area %
method. The concentration of the standard solution should be high enough that it can be prepared
accurately and precisely on a routine basis, yet low enough to approximate the concentration of
related substances in the sample solution. In general, the standard concentration should
correspond to a related substance level of about 5%.
7.3 Wavelength Selection and Relative Response Factor
Generate linearity plots of the API and related substances at different wavelengths. At this point,
a photodiode array (PDA) detector can be used to investigate the linearity of the active pharmaceutical
ingredient and related substances in the proposed concentration range. By comparing the
linearity slopes of the active pharmaceutical ingredient and the related substances, one can
estimate the relative response factors of the related substances at different wavelengths.
Regardless of whether the area % or external standard approach is used, if the relative response
factors of some significant related substances are far from unity, a response factor correction
must be applied.
The optimum detection wavelength is the wavelength that gives the highest sensitivity (λmax)
for the significant related substances and minimizes the difference in response factors between
the active pharmaceutical ingredient and the related substances.
After the optimum wavelength is determined, use a highly stressed sample (e.g., 5% degradation)
to verify that the selected wavelength will give the highest % related substance results.
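A minimal sketch of estimating a relative response factor from linearity slopes and applying the correction to an area % result (slopes and areas are hypothetical):

    # Hypothetical linearity slopes (area per ug/mL) at the chosen wavelength.
    slope_api = 52_000
    slope_impurity = 39_000

    rrf = slope_impurity / slope_api          # relative response factor vs. the API
    print(f"RRF = {rrf:.2f}")

    # Response factor correction for an area %-based result.
    uncorrected_area_percent = 0.12
    corrected_percent = uncorrected_area_percent / rrf
    print(f"Corrected related substance = {corrected_percent:.2f}%")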
7.4 Overall accuracy
A final check of the method performance is to determine the overall accuracy of the method.
Unlike the accuracy from sample preparation (section 6.1.1), which simply compares the
response of the analyte with and without spiking with matrix, the overall accuracy compares the
% related substances calculated from an accuracy solution with the theoretical value.
The accuracy solutions are the solutions spiked with known concentrations of related substances
and matrix. Since the extraction efficiency, choice of wavelength and the bias in standardization
influence the calculated related substance result, this is the best way to investigate the accuracy
of the method. Overall accuracy reflects the true accuracy of the method.
8. Method Optimization/ Robustness
After the individual components of the method are optimized, perform the final optimization of
the method to improve the accuracy, precision and LOQ. Use an experimental design approach
to determine the experimental factors that have significant impact on the method. This is very
important in determining what factors need to be investigated in the robustness testing during the
method validation (see section 9). To streamline the method optimization process, use a Plackett-
Burman design (or a similar approach) to simultaneously determine the main effects of many
experimental factors (a minimal sketch follows the factor and response lists below).
Some of the typical experimental factors that need to be investigated are:
HPLC conditions: % organic, pH, flow rate, temperature, wavelength, column age.
Sample preparation: % organic, pH, shaking/sonication, sample size, sample age.
Calculation/standardization: integration, wavelength, standard concentration, response
factor correction.
Typical responses that need to be investigated are:
Results: precision (%RSD), % related substance of significant related substances, total related
substances.
Chromatography: resolution, tailing factor, separation of all related substances (section 3.1.1 and
3.1.2).
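The following minimal sketch (hypothetical factors and responses; the 8-run design is built from a commonly cited Plackett-Burman generator row) illustrates how the main effects of up to 7 two-level factors can be estimated from only 8 experiments:

    import numpy as np

    # 8-run Plackett-Burman design for up to 7 two-level factors.
    # Rows 1-7 are cyclic shifts of the generator row; the last row is all -1.
    generator = [1, 1, 1, -1, 1, -1, -1]
    rows = [np.roll(generator, i) for i in range(7)]
    rows.append(np.full(7, -1))
    design = np.array(rows)                      # shape (8, 7), entries +1 / -1

    # Hypothetical factors (low/high levels coded as -1/+1).
    factors = ["% organic", "pH", "flow rate", "temperature",
               "wavelength", "column age", "sonication time"]

    # Hypothetical responses, e.g. Rs of the critical pair for each of the 8 runs.
    y = np.array([2.3, 2.1, 2.4, 1.9, 2.2, 2.0, 2.5, 1.8])

    # Main effect of each factor = mean response at +1 minus mean response at -1.
    effects = design.T @ y / (len(y) / 2)
    for name, effect in sorted(zip(factors, effects), key=lambda p: -abs(p[1])):
        print(f"{name:>15}: {effect:+.3f}")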
9. Method validation
9.1 Robustness
Method validation should be treated as a “final verification” of the method performance and
should not be used as part of the method development. Some of the typical method validation
parameters should be studied thoroughly in the previous steps. In some cases, robustness can be
completed in the final method optimization before method validation. At this point, the
robustness experiments should be limited to only the most significant factors (usually fewer than 4
factors). In addition, unlike the final method optimization (see section 8), the experimental
factors should be varied within a narrow range to reflect normal day-to-day variation. During the
method validation, the purpose is to demonstrate that the method performance will not be
significantly impacted by slight variations of the method conditions.
9.2 Linearity, Accuracy, Response Factor
Linearity, accuracy and response factors should be established for the significant related
substances (section 3.1.1) during the method validation. In order to limit the workload of method
development, usually 3 or fewer significant related substances should be selected in a method.
Therefore, the other related substances (section 3.1.2) should not be included in these
experiments.
9.3 System suitability criteria
It is advisable to run system suitability tests in these robustness experiments. During the
robustness testing of the method validation, critical method parameters such as mobile phase
composition and column temperature are varied to mimic the day-to-day variability. Therefore,
the system suitability results from these robustness experiments should reflect the expected
range. Consequently, the limits for system suitability tests can be estimated from these
experiments.
10. Conclusion
All of the critical steps in method development have been summarized and prioritized. The steps
for method development are discussed in the same order as they would be investigated in the
actual method development process. These steps will ensure all critical method parameters are
optimized before the method validation.
In order to develop an HPLC method effectively, most of the effort should be spent on method
development and optimization as this will improve the final method performance. The method
validation, however, should be treated as an exercise to summarize or document the overall
method performance for its intended purpose.
If you have any doubts, write to me at prabhusankarshasun@gmail.com.
If you have any suggestions, please send them to the same address.
With Regards,
R.Prabhusankar