Saturday, December 5, 2009

UV-Vis Spectrometric analysis

INTRODUCTION
Quantitative estimation determines how much of each constituent is present in a sample1,2. Estimation of a given drug or medicine in a dosage form therefore requires quantitative analysis of that drug in the formulation. The first quantitative analyses were gravimetric, made possible by the invention of a precise balance. It was soon found that carefully calibrated glassware allowed a considerable saving of time through the volumetric measurement of gravimetrically standardized solutions.
In contrast to these two classical techniques, gravimetric and volumetric analysis, drug analysis today relies largely on sophisticated instruments that exploit simple physico-chemical properties.
In instrumental analysis, a physical property of a drug is used to determine its chemical composition. A study of the physical properties of drug molecules is a prerequisite for product formulation and often leads to a better understanding of the inter-relationship between molecular structure and drug action. These properties may be additive or constitutive: mass, for example, is an additive property, while many physical properties such as specific gravity, surface tension and viscosity are constitutive yet retain some measure of additivity. The molar refraction of a compound is the sum of the refractions of the atoms and groups making up the compound; because the arrangement of atoms in each group differs, the refractive indices of two molecules differ, that is, the individual groups in two different molecules contribute different amounts to the overall refraction.
Some physical properties exploited in instrumental analysis, such as absorption of radiation, scattering of radiation, the Raman effect, emission of radiation, rotation of the plane of polarized light and diffraction phenomena, involve interaction with radiant energy. Others reflect specific relations between the molecules and well-defined forms of energy, e.g. half-cell potential, current, voltage, electrical conductivity, dielectric constant, heat of reaction and thermal conductivity. By carefully associating specific physical properties with the chemical nature of closely related molecules, conclusions can be drawn that (1) describe the spatial arrangement of drug molecules, (2) provide evidence for the relative chemical or physical behavior of a molecule and (3) suggest methods for the qualitative and quantitative analysis of a particular pharmaceutical agent3,4.
Analytical methods, in a broad sense, can be classified into chemical methods and instrumental methods. Chemical methods are those that depend on chemical operations in combination with the manipulation of simple apparatus; in general, the measurement of mass (gravimetric analysis) and of volume (volumetric analysis) falls in this class. An instrumental method, by contrast, relies on more complicated instrumentation to measure a physical property of the analyte.
Although spectrophotometric methods are now extensively used, it would be wrong to conclude that instrumental methods have totally replaced chemical methods. In fact, chemical steps are often an integral part of an instrumental method: sampling, dissolution, change of oxidation state, removal of excess reagent, pH adjustment, addition of a complexing agent, precipitation, concentration and the removal of interferences are all chemical steps that may form part of an instrumental method. HPLC5 (High Performance Liquid Chromatography) is now widely used because it is not limited by sample volatility or thermal stability; it can separate macromolecules, ionic species, labile natural products, polymeric materials and a wide variety of other high-molecular-weight, poly-functional compounds. However, because of the relatively high pressure necessary to perform this type of chromatography, a more elaborate experimental setup is required.
Because of the high cost of the instrument and of the analytical process itself, small-scale industries cannot afford to procure and use HPLC. So, in spite of the other advantages of HPLC, the spectrophotometric method of analysis is often selected. The variation of the color of a system with change in concentration of some component forms the basis of what chemists commonly term colorimetric analysis. The color is usually due to the formation of a colored compound on addition of an appropriate reagent, or it may be inherent in the constituent itself. Colorimetry is concerned with the determination of the concentration of a substance by measurement of the relative absorption of light with respect to a known concentration of that substance. Colorimetric determinations are usually made with a simple instrument termed a colorimeter.
In spectrophotometric analysis, a source of radiation is used that extends into the ultraviolet region of the spectrum. From this, definite wavelengths of radiation are selected, possessing a bandwidth of less than 1 nm; this necessitates a more complicated and consequently more expensive instrument. All atoms and molecules are capable of absorbing energy in accordance with certain restrictions that depend upon the structure of the substance. Energy may be furnished in the form of electromagnetic radiation (light). The kind and amount of radiation absorbed by a molecule depend upon the structure of the molecule; the amount of radiation absorbed also depends upon the number of molecules interacting with the radiation. The study of these dependencies is called absorption spectroscopy.

THEORY OF SPECTROPHOTOMETRY AND COLORIMETRY

Wavelength and Energy:
Absorption and emission of radiant energy by molecules and atoms is the basis of optical spectroscopy. By interpretation of these data, both qualitative and quantitative information can be obtained. Qualitatively, the positions of the absorption and emission lines or bands in the electromagnetic spectrum indicate the presence of a specific substance. Quantitatively, the intensities of the absorption or emission lines or bands for the unknown and for standards are measured, and the concentration of the unknown is then determined from these data6,7.
The absorption and emission of energy in the electromagnetic spectrum occur in discrete packets, or photons. The relation between the energy of a photon and the frequency appropriate for the description of its propagation is
E = hν
Where E = energy in ergs
ν = frequency in cycles per second
h = Planck's constant (6.6256 × 10⁻²⁷ erg-sec)
The data obtained from a spectroscopic measurement take the form of a plot of radiant energy absorbed or emitted as a function of position in the electromagnetic spectrum. This is known as a spectrum, and the position of absorption or emission is measured in units of energy, wavelength or frequency8.
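As a simple numerical illustration of the relation E = hν (or, equivalently, E = hc/λ), the short Python sketch below converts a wavelength in nanometres into the energy of a single photon; the 260 nm value is an arbitrary example and the constants are given in SI units rather than the erg-based units quoted above.

```python
# Photon energy from wavelength, illustrating E = h*nu = h*c/lambda.
H = 6.626e-34      # Planck's constant, J*s
C = 2.998e8        # speed of light, m/s

def photon_energy_joules(wavelength_nm: float) -> float:
    """Return the energy (J) of a single photon of the given wavelength (nm)."""
    wavelength_m = wavelength_nm * 1e-9
    return H * C / wavelength_m

# Example: a 260 nm UV photon carries roughly 7.6e-19 J.
print(photon_energy_joules(260.0))
```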

Beer-Lambert’s law:
Colorimetry is the determination of the light-absorbing capacity of a system. A quantitative determination is therefore carried out by subjecting a colored solution to those wavelengths of visible energy that are absorbed by the solution. UV and visible absorption bands, in the region of 200 nm to 780 nm, are due to electronic transitions. In organic molecules, these transitions can be ascribed to excitation of a σ, π or n electron from the ground state to an excited state (σ*, π* or n*). Four types of absorption band arise from the electronic transitions of a molecule9,10:
R-bands: n → π*, in compounds with C=O or NO2 groups.
K-bands: π → π*, in conjugated systems.
B-bands (benzenoid bands): due to aromatic and heteroaromatic systems.
E-bands (ethylenic bands): in aromatic systems.
When light (monochromatic or heterogeneous) falls upon a homogeneous medium, a portion of the incident light is reflected, a portion is absorbed within the medium and the remainder is transmitted. If the intensity of the incident light is expressed by I, that of the absorbed light by Ia, that of the transmitted light by It, and that of the reflected light by Ir, then:
I = Ia + It + Ir .......................... (1)
Credit for investigating the change of absorption of light with the thickness of the medium is frequently given to Lambert; Beer later applied similar experiments to solutions of different concentrations and published his results. The two separate laws governing absorption are usually known as Lambert's law and Beer's law; in their combined form they are referred to as the Beer-Lambert law. Mathematically, the radiation-concentration and radiation-path-length relationship can be expressed as11
I = I0 × 10^(−εcl) ...................... (2)
The more familiar equation used in spectrophotometry is
log (I0/I) = εcl .........................(3)
Where I0 is the intensity of the incident energy
I is the intensity of the emergent energy
c is the concentration
l is the thickness of the absorber (in cm)
and ε is the molar absorptivity for concentration in moles/L
The specific absorbance, A(1%, 1 cm), which is encountered less frequently in the literature, refers to a concentration of 1% w/v and a 1 cm cell thickness and is used primarily in the investigation of substances of unknown or undetermined molecular weight. A typical UV absorption spectrum, shown in Fig. 1, is the result of plotting absorptivity against wavelength; the wavelength of maximum absorptivity (εmax) is denoted by λmax.
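The following minimal Python sketch shows how equation (3) is applied in practice: the absorbance is computed from the incident and transmitted intensities, and the concentration then follows from the molar absorptivity and path length. The numerical values are assumed purely for illustration.

```python
import math

def absorbance_from_intensities(i0: float, i: float) -> float:
    """A = log10(I0 / I), from the Beer-Lambert law."""
    return math.log10(i0 / i)

def concentration_from_absorbance(a: float, epsilon: float, path_cm: float = 1.0) -> float:
    """c = A / (epsilon * l); epsilon in L mol^-1 cm^-1, path in cm, c in mol/L."""
    return a / (epsilon * path_cm)

# Illustrative numbers: 100 units incident, 30.2 transmitted, 1 cm cell,
# molar absorptivity 13000 L mol^-1 cm^-1.
a = absorbance_from_intensities(100.0, 30.2)          # ~0.52
print(concentration_from_absorbance(a, epsilon=13000.0))   # ~4.0e-5 mol/L
```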


PRINCIPLE OF QUANTITATIVE SPECTROPHOTOMETRIC ASSAY OF MEDICINAL SUBSTANCES:

The assay of an absorbing substance may be quickly carried out by preparing a solution in a transparent solvent and measuring its absorbance at a suitable wavelength. The concentration of the absorbing substance is then calculated from the measured absorbance using one of three principal procedures.

Use of a standard absorptivity value:
This procedure is adopted by official compendia, e.g. the British Pharmacopoeia, for substances such as methyltestosterone that have a reasonably broad absorption band, so that the measured absorbance is relatively insensitive to variation of instrumental parameters, e.g. slit width and scan speed.

Use of a calibration graph:
In this procedure the absorbances of a number (typically 4-6) of standard solutions of the reference substance at concentrations encompassing the sample concentrations are measured and a calibration graph is constructed. The concentration of the analyte in the sample solution is read from the graph as the concentration corresponding to the absorbance of the solution.
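A minimal sketch of the calibration-graph procedure is given below: a straight line is fitted to hypothetical standard absorbances and the sample concentration is read back from the fitted line. All concentrations and absorbances are invented for illustration.

```python
import numpy as np

# Hypothetical standard concentrations (ug/mL) and measured absorbances.
conc = np.array([4.0, 6.0, 8.0, 10.0, 12.0])
absorb = np.array([0.21, 0.31, 0.42, 0.52, 0.63])

# Fit A = slope*c + intercept (the calibration graph).
slope, intercept = np.polyfit(conc, absorb, 1)

def conc_from_graph(a_sample: float) -> float:
    """Read the sample concentration off the fitted calibration line."""
    return (a_sample - intercept) / slope

print(conc_from_graph(0.47))   # sample concentration, ug/mL
```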

Single or double point standardization:
The single-point procedure involves the measurement of the absorbance of a sample solution and of a standard solution of the reference substance, prepared in a similar manner. Ideally, the concentration of the standard solution should be close to that of the sample solution; where this cannot be ensured, a 'two-point bracketing' standardization is required to determine the concentration of the sample solution. In this case the concentration of one standard solution is greater than that of the sample while the other standard solution has a lower concentration than the sample.
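The single-point and two-point bracketing calculations can be summarized in a few lines of Python; the formulae below are the usual proportionality and linear-interpolation expressions, and the numerical values are hypothetical.

```python
def single_point(a_sample: float, a_std: float, c_std: float) -> float:
    """c_sample = c_std * A_sample / A_std (assumes Beer's law and zero intercept)."""
    return c_std * a_sample / a_std

def two_point_bracket(a_sample, a_low, c_low, a_high, c_high):
    """Interpolate linearly between a lower and a higher bracketing standard."""
    return c_low + (a_sample - a_low) * (c_high - c_low) / (a_high - a_low)

# Illustrative values: standards at 8 and 12 ug/mL bracketing the sample.
print(single_point(0.50, 0.52, 10.0))
print(two_point_bracket(0.50, a_low=0.41, c_low=8.0, a_high=0.62, c_high=12.0))
```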

DIFFERENT SPECTROPHOTOMETRIC SIMULTANEOUS ESTIMATION METHODS FOR MULTICOMPONENT SAMPLES13
The spectrophotometric assay of drugs rarely involves the measurement of the absorbance of a sample containing only one absorbing component. The pharmaceutical analyst frequently encounters situations where the concentration of one or more substances is required in samples known to contain other absorbing substances, which potentially interfere in the assay.
The basis of all spectrophotometric techniques for multi-component samples is the property that, at all wavelengths:
(a) the absorbance of a solution is the sum of the absorbances of the individual components; or
(b) the measured absorbance is the difference between the total absorbance of the solution in the sample cell and that of the solution in the reference (blank) cell.
In multi-component formulations the concentration of the absorbing substance is calculated from the measured absorbance using one of the following procedures:
(a) Assay as a single-component sample: The concentration of a component in a sample which contains other absorbing substances may be determined by a simple spectrophotometric measurement of absorbance, provided that the other components have a sufficiently small absorbance at the wavelength of measurement.
(b) Assay using absorbance corrected for interference: If the identity, concentration and absorbtivity of the absorbing interferents are known, it is possible to calculate their contribution to the total absorbance of a mixture.
(c) Simultaneous equation method: If a sample contains two absorbing drugs (X and Y), each of which absorbs at the λmax of the other, it may be possible to determine both drugs by the technique of simultaneous equations (Vierordt's method). Absorbances are measured at two wavelengths, λ1 and λ2, and two equations are set up (for a 1 cm path length):

A1 = ax1cx + ay1cy
A2 = ax2cx + ay2cy

Where:
a) ax1 and ax2 are the absorptivities of X at λ1 and λ2 respectively;
b) ay1 and ay2 are the absorptivities of Y at λ1 and λ2 respectively;
c) A1 and A2 are the absorbances of the diluted sample at λ1 and λ2 respectively.
Criteria for obtaining maximum precision, based upon absorbance ratios, have been suggested; these place limits on the relative concentrations of the components of the mixture. The criteria are that the ratios

(A2/A1) / (ax2/ax1) and (ay2/ay1) / (A2/A1)

should lie outside the range 0.1-2.0 for the precise determination of Y and X respectively. These criteria are satisfied only when the λmax values of the two components are reasonably dissimilar. An additional criterion is that the two components do not interact chemically.
To reduce random errors during measurement, the analysis of two components is sometimes carried out at three or four wavelengths instead of two. The equations then no longer have a unique solution, but the best solution can be found by the least-squares criterion.
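For the two-wavelength case, Vierordt's simultaneous equations reduce to a 2 × 2 linear system that can be solved directly; the sketch below does this with numpy, using invented absorptivities and mixture absorbances.

```python
import numpy as np

def vierordt(a1, a2, ax1, ax2, ay1, ay2):
    """Solve A1 = ax1*cx + ay1*cy and A2 = ax2*cx + ay2*cy for cx, cy
    (1 cm path length assumed)."""
    coeffs = np.array([[ax1, ay1],
                       [ax2, ay2]])
    absorbances = np.array([a1, a2])
    cx, cy = np.linalg.solve(coeffs, absorbances)
    return cx, cy

# Illustrative absorptivities (per ug/mL) and mixture absorbances at lambda1, lambda2.
print(vierordt(a1=0.60, a2=0.45, ax1=0.050, ax2=0.015, ay1=0.020, ay2=0.040))
```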
(d) Absorbance ratio method14: The absorbance ratio method is a modification of the simultaneous equations procedure. It depends on the property that, for a substance which obeys Beer's law at all wavelengths, the ratio of absorbances determined on the same solution at two different wavelengths is constant. Q-analysis is based on the relationship between the absorbance ratio of a binary mixture and the relative concentrations of its components. The constant ratio was termed 'Hufner's quotient' or Q-value; it is independent of concentration and solution thickness, e.g. two different dilutions of the same substance give the same absorbance ratio A1/A2. In the USP this ratio is referred to as a Q value. In the quantitative assay of two components in admixture by the absorbance ratio method, absorbances are measured at two wavelengths, one being the λmax of one of the components (λ2) and the other being a wavelength of equal absorptivity of the two components (λ1), an iso-absorptive point.
Cx = [(Qm − Qy) / (Qx − Qy)] × (A1 / ax1), where Qm = A2/A1, Qx = ax2/ax1 and Qy = ay2/ay1.
This equation gives the concentration of X in terms of absorbance ratios, the absorbance of the mixture and the absorptivity of the compounds at the iso-absorptive wavelength. Accurate dilutions of the sample solution and of the standard solutions of X and Y are necessary for the accurate measurement of A1 and A2 respectively.
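A short sketch of the Q-analysis calculation is given below; it simply evaluates the equation above, with Qm, Qx and Qy computed from assumed absorbance and absorptivity values, and it assumes ax1 = ay1 at the iso-absorptive point.

```python
def q_analysis_cx(a1, a2, ax1, ax2, ay1, ay2):
    """Absorbance-ratio (Q-analysis) estimate of component X.
    lambda1 is the iso-absorptive wavelength, lambda2 the lambda-max of one component."""
    qm = a2 / a1          # absorbance ratio of the mixture
    qx = ax2 / ax1        # absorptivity ratio of X
    qy = ay2 / ay1        # absorptivity ratio of Y
    return (qm - qy) / (qx - qy) * a1 / ax1

# Illustrative values; ax1 == ay1 at the iso-absorptive point.
print(q_analysis_cx(a1=0.48, a2=0.55, ax1=0.030, ax2=0.050, ay1=0.030, ay2=0.020))
```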
(e) Geometric correction method: A number of mathematical correction procedures have been developed which reduce or eliminate the background 'irrelevant' absorption that may be present in samples of biological origin. The simplest of these is the three-point geometric procedure, which may be applied if the irrelevant absorption is linear across the three wavelengths selected. If the wavelengths λ1, λ2 and λ3 are selected so that the background absorbances B1, B2 and B3 lie on a straight line, then the corrected absorbance D of the drug may be calculated from the three absorbances A1, A2 and A3 of the sample solution at λ1, λ2 and λ3 respectively, as follows.
Let vD and wD be the absorbances of the drug alone in the sample solution at λ1 and λ3 respectively, i.e. v and w are the absorbance ratios vD/D and wD/D respectively. Then
B1 = A1 − vD, B2 = A2 − D and B3 = A3 − wD
Let y and z be the wavelength intervals (λ2 − λ1) and (λ3 − λ2) respectively; then
D = [y(A2 − A3) + z(A2 − A1)] / [y(1 − w) + z(1 − v)]
This is a general equation which may be applied in any situation where A1, A2 and A3 of the sample, the wavelength intervals y and z, and the absorbance ratios v and w are known.
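The three-point correction is easily coded once A1, A2, A3, the wavelength intervals y and z, and the ratios v and w are known; the sketch below evaluates the general equation directly, with all numerical inputs invented for illustration.

```python
def three_point_corrected_absorbance(a1, a2, a3, y, z, v, w):
    """Three-point geometric correction:
    D = [y*(A2 - A3) + z*(A2 - A1)] / [y*(1 - w) + z*(1 - v)]
    y = lambda2 - lambda1, z = lambda3 - lambda2,
    v, w = absorbance ratios of the pure drug at lambda1 and lambda3 relative to lambda2."""
    return (y * (a2 - a3) + z * (a2 - a1)) / (y * (1 - w) + z * (1 - v))

# Illustrative numbers only.
print(three_point_corrected_absorbance(a1=0.35, a2=0.60, a3=0.30,
                                        y=15.0, z=15.0, v=0.40, w=0.35))
```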
(f) Orthogonal polynomial method15: The technique of orthogonal polynomials is another mathematical correction procedure, which involves more complex calculations than the three-point correction procedure. The basis of the method is that an absorption spectrum may be represented in terms of orthogonal functions as follows
A(λ) = p0P0(λ) + p1P1(λ) + p2P2(λ) + … + pnPn(λ)
Where A(λ) denotes the absorbance at wavelength λ, belonging to a set of n+1 equally spaced wavelengths at which the orthogonal polynomials P0(λ), P1(λ), P2(λ), …, Pn(λ) are each defined.
The accuracy of the orthogonal functions procedure depends on the correct choice of the polynomial order and of the set of wavelengths. Usually, quadratic or cubic polynomials are selected, depending on the shapes of the absorption spectra of the drug and of the irrelevant absorption. The set of wavelengths is defined by the number of wavelengths, the interval and the mean wavelength of the set (λm). Approximately linear irrelevant absorption is normally eliminated using six to eight wavelengths, although many more, up to 20, may be required if the irrelevant absorption contains high-frequency components. The wavelength interval and λm are best obtained from a convoluted absorption curve. This is a plot of the polynomial coefficient of the absorptivity, for a specified order of polynomial, a specified number of wavelengths and a specified wavelength interval, against the λm of the set of wavelengths. The optimum set of wavelengths corresponds with a maximum or minimum in the convoluted curve of the analyte and with a coefficient of zero in the convoluted curve of the irrelevant absorption. In favorable circumstances the concentration of an absorbing drug in admixture with another may be calculated if the correct choice of polynomial parameters is made, thereby eliminating the contribution of the interfering absorption to the polynomial coefficient of the mixture.
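As an illustrative sketch only: rather than using tabulated orthogonal-polynomial coefficients, the Python fragment below generates a discrete orthogonal basis over equally spaced wavelengths by QR factorisation of a Vandermonde matrix and projects a synthetic six-point spectrum onto it to obtain the coefficients p0, p1, p2, ...

```python
import numpy as np

def orthogonal_poly_coefficients(absorbances, max_order=3):
    """Project a spectrum measured at equally spaced wavelengths onto discrete
    orthogonal polynomials (generated here by QR factorisation of a Vandermonde
    matrix; the coefficients are defined up to sign and normalisation)."""
    n = len(absorbances)
    x = np.linspace(-1.0, 1.0, n)                     # equally spaced, centred
    vander = np.vander(x, max_order + 1, increasing=True)
    q, _ = np.linalg.qr(vander)                       # columns of q are orthonormal
    return q.T @ np.asarray(absorbances)              # p0, p1, p2, ...

# Illustrative six-point spectrum; the quadratic coefficient p2 is the value that
# would be used for quantitation once the wavelength set is optimised.
spectrum = [0.20, 0.35, 0.52, 0.55, 0.40, 0.22]
print(orthogonal_poly_coefficients(spectrum))
```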
(g)Difference spectrophotometry16-20: Difference spectroscopy provides a sensitive method for detecting small changes in the environment of a chromophore or it can be used to demonstrate ionization of a chromophore leading to identification and quantitation of various components in a mixture. The selectivity and accuracy of spectrophotometric analysis of samples containing absorbing interferents may be markedly improved by the technique of difference spectrophotometry. The essential feature of a difference spectrophotometric assay is that the measured value is the difference absorbance (Δ A) between two equimolar solutions of the analyte in different forms which exhibit different spectral characteristics.
The criteria for applying difference spectrophotometry to the assay of a substance in the presence of other absorbing substances are that:
A) Reproducible changes may be induced in the spectrum of the analyte by the addition of one or more reagents.
B) The absorbance of the interfering substances is not altered by the reagents.
The simplest and most commonly employed technique for altering the spectral properties of the analyte is adjustment of the pH by means of aqueous solutions of acid, alkali or buffers. The ultraviolet-visible absorption spectra of many substances containing ionisable functional groups, e.g. phenols, aromatic carboxylic acids and amines, depend on the state of ionization of the functional groups and consequently on the pH of the solution.
If the individual absorbances, Aalk and Aacid, are each proportional to the concentration of the analyte and to the path length, then ΔA also obeys the Beer-Lambert law and a modified equation may be derived:
ΔA = Δa·b·c
Where Δa is the difference absorptivity of the substance at the wavelength of measurement, b the path length and c the concentration.
If another absorbing substance present in the sample exhibits the same absorbance Ax in both the alkaline and the acidic solutions, its interference in the spectrophotometric measurement is eliminated:
ΔA = (Aalk + Ax) − (Aacid + Ax) = Aalk − Aacid
The selectivity of the ΔA procedure depends on the correct choice of pH values to induce the spectral change of the analyte without altering the absorbance of the interfering components of the sample. The use of 0.1 M sodium hydroxide and 0.1 M hydrochloric acid to induce the ΔA of the analyte is convenient and satisfactory when the irrelevant absorption arises from pH-insensitive substances. Unwanted absorption from pH-sensitive components of the sample may also be eliminated if the pKa values of the analyte and the interferents differ by more than 4.
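The ΔA calculation itself is simple; the sketch below subtracts the acidic reading from the alkaline reading of equimolar solutions and converts ΔA to concentration using an assumed difference absorptivity. All numbers are illustrative.

```python
def delta_a(a_alkaline: float, a_acidic: float) -> float:
    """Difference absorbance between equimolar alkaline and acidic solutions;
    absorbance from pH-insensitive interferents cancels in the subtraction."""
    return a_alkaline - a_acidic

def conc_from_delta_a(d_a: float, delta_absorptivity: float, path_cm: float = 1.0) -> float:
    """c = dA / (da * b), from the modified relation dA = da*b*c."""
    return d_a / (delta_absorptivity * path_cm)

# Illustrative readings of the same sample in 0.1 M NaOH and 0.1 M HCl.
print(conc_from_delta_a(delta_a(0.72, 0.41), delta_absorptivity=0.031))  # e.g. ug/mL
```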
(h) Derivative spectrophotometry: Direct spectrophotometric determination of a multicomponent formulation is often complicated by interference from the formulation matrix and by spectral overlap; such interferences can be treated in many ways, for example by solving simultaneous equations or by using absorbance ratios at certain wavelengths, but these may still give erroneous results21. Other approaches include pH-induced difference spectrophotometry, least squares22 and orthogonal function methods23. The compensation technique can also be used to detect and eliminate unwanted or irrelevant absorption. Derivative spectrophotometry is a useful means of resolving two overlapping spectra and eliminating matrix interferences, or interferences due to an indistinct shoulder on the side of an absorption band24. It involves the conversion of a normal spectrum to its first, second or higher derivative spectrum. In this context, the normal absorption spectrum is referred to as the fundamental, zeroth order or D spectrum. The absorbance of a sample is differentiated with respect to wavelength λ to generate the first, second or higher order derivatives:
A = f(λ): zero order
dA/dλ = f(λ): first order
d²A/dλ² = f(λ): second order
The first derivative spectrum of an absorption band is characterized by a maximum, a minimum, and a cross-over point at the λ max of the absorption band. The second derivative spectrum is characterized by two satellite maxima and an inverted band of which the minimum corresponds to the λ max of the fundamental band.
These spectral transformations confer two principal advantages on derivative spectrophotometry: firstly, an even-order derivative spectrum has a narrower spectral bandwidth than its fundamental spectrum; secondly, derivative spectrophotometry discriminates in favor of substances of narrow spectral bandwidth against broad-bandwidth substances, because, for a given derivative order, the derivative amplitude is inversely related to the spectral bandwidth of the original band.
The enhanced resolution and bandwidth discrimination increase with increasing derivative order. However, the concomitant increase in electronic noise inherent in the generation of higher-order spectra, and the consequent reduction of the signal-to-noise ratio, place serious practical limitations on the use of higher-order spectra.
The important features of the derivative technique include enhanced information content, discrimination against background noise and greater selectivity in quantitative analysis. It can be used for the detection and determination of impurities in drugs and chemicals, and also in food additives and industrial wastes25.
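Derivative spectra are usually generated numerically with smoothing differentiation; the sketch below uses Savitzky-Golay filtering (scipy.signal.savgol_filter) on a synthetic Gaussian band to obtain first- and second-derivative spectra, and confirms that the second-derivative minimum falls at the λmax of the fundamental band. The band parameters are invented.

```python
import numpy as np
from scipy.signal import savgol_filter

# A synthetic Gaussian absorption band sampled every 1 nm (illustrative only).
wavelengths = np.arange(220.0, 321.0, 1.0)
absorbance = 0.8 * np.exp(-((wavelengths - 270.0) / 12.0) ** 2)

# Smoothed first and second derivative spectra (Savitzky-Golay differentiation).
d1 = savgol_filter(absorbance, window_length=11, polyorder=3, deriv=1, delta=1.0)
d2 = savgol_filter(absorbance, window_length=11, polyorder=3, deriv=2, delta=1.0)

# d1 passes through zero at lambda-max; the second-derivative minimum also
# locates the lambda-max of the fundamental band.
print(wavelengths[np.argmin(d2)])   # ~270 nm
```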
(i) Least-squares approximation26-28: Occasionally one must admit that experimental measurements are not as accurate as might be desired but are subject to random errors. An answer of higher probable accuracy can be obtained if excess experimental information is used: instead of carrying out the analysis of two components at two wavelengths, it is carried out at three or four wavelengths. At three wavelengths the problem becomes the solution of three equations in two unknowns, which in general cannot be satisfied exactly. The best solution is given by the least-squares criterion, found by multiplying by the transpose of the absorptivity matrix; this gives two equations in two unknowns, whose solution is also the optimum solution to the three original equations. The method yields a higher precision of determination for systems whose absorption spectra are very similar. With increasing diversity of the absorption curves, the efficiency of the method of measurements taken at a large number of wavelengths, i.e. the method of an over-determined system of linear equations, decreases, and for systems with highly diversified curves it may even degrade the precision of the determination.
All the foregoing methods of calculating the content of individual components in a multicomponent analysis fail to use the entire information capacity of the spectrophotometric method. Only a method that stores the whole spectra of the standard substances in computer memory, and uses an algorithm matching the absorption spectrum of the sample with the spectrum obtained mathematically by adding up the individual spectra of the components, makes full use of the information content of the spectrophotometric method. This is also the operating principle of advanced UV-Vis spectrophotometers equipped with a multicomponent analysis program.
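A compact way to express both the over-determined least-squares approach and the whole-spectrum matching described above is to model the mixture spectrum as a linear combination of unit-concentration standard spectra and solve by least squares; the sketch below does this with numpy.linalg.lstsq using invented three-wavelength data, but the same call applies unchanged to full spectra.

```python
import numpy as np

def multicomponent_fit(mixture_spectrum, standard_spectra):
    """Least-squares estimate of component concentrations: the mixture spectrum is
    modelled as a linear combination of unit-concentration standard spectra measured
    at the same wavelengths (an over-determined system solved by numpy.linalg.lstsq)."""
    k = np.column_stack(standard_spectra)             # wavelengths x components
    conc, residuals, rank, _ = np.linalg.lstsq(k, np.asarray(mixture_spectrum), rcond=None)
    return conc

# Illustrative three-wavelength, two-component case (full spectra work the same way).
std_x = [0.050, 0.030, 0.015]      # absorptivities of X at the chosen wavelengths
std_y = [0.020, 0.035, 0.040]      # absorptivities of Y
mixture = [0.60, 0.52, 0.45]
print(multicomponent_fit(mixture, [std_x, std_y]))
```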

SELECTION PARAMETERS OF AN ANALYTICAL METHOD

Once the problem is defined, the following important factors are considered in choosing the analytical method: concentration range, required accuracy and sensitivity, selectivity, time requirements and cost of analysis.

Concentration Range:
The ability to match the method to the optimum sample size is usually gained through experience and awareness of the different methods.
Sensitivity, as it applies to an analytical method, corresponds to the minimum or lowest concentration of a substance that is detectable with a specified reliability. It is often expressed numerically as a detection limit. Different analytical methods provide different sensitivities, and the one chosen will depend on the sensitivity required to solve a particular problem. Accuracy refers to the correctness of the result achieved by the analytical method.

Selectivity:
Selectivity is an indication of the preference that a particular method shows for one substance over another.

Time and Cost:
Time and cost often go hand in hand and are usually a reflection of the equipment, personnel and space required to complete a determination.

ANALYTICAL METHOD VALIDATION
Regulatory perspective: In the US, there was no mention of the word validation in the cGMPs of 1971; precision and accuracy were stated only as laboratory controls. It was only in the cGMP guidelines of March 28, 1979, that the need for validation was implied. This was done in two sections:
i) Section 211.165, in which the word validation was used, and
ii) Section 211.194, in which proof of the suitability, accuracy and reliability of test methods was made compulsory for regulatory submission.
Subsequently a guideline was issued on 1st February 1987 for submitting samples and analytical data for method validation. The World Health Organization (WHO) published a guideline under the title 'Validation of analytical procedures used in the examination of pharmaceutical materials'; it appeared in the 32nd report of the WHO Expert Committee on Specifications for Pharmaceutical Preparations, published in 1992.
The International Conference on Harmonisation (ICH) has been at the forefront of developing the harmonized tripartite guidelines under the titles 'Text on validation of analytical procedures (Q2A)' and 'Validation of analytical procedures: methodology (Q2B)'. On 1st March 1999, the FDA published a final guideline on the validation of analytical procedures, the contents of which were prepared under the auspices of the ICH technical requirements for the registration of pharmaceuticals for human use. According to Section 501 of the Federal Food, Drug and Cosmetic Act, assays and specifications in monographs of the USP and the NF constitute legal standards. As a result, every analytical method should be validated according to current pharmacopoeial standards.
The ability to provide timely, accurate, and reliable data is central to the role of analytical chemists and is especially true in the discovery, development, and manufacture of pharmaceuticals. Analytical data are used to screen potential drug candidates, aid in the development of drug syntheses, support formulation studies, monitor the stability of bulk pharmaceuticals and formulated products, and test final products for release. The quality of analytical data is a key factor in the success of a drug development program. The process of method development and validation has a direct impact on the quality of these data.
Although a thorough validation cannot rule out all potential problems, the process of method development and validation should address the most common ones. Examples of typical problems that can be minimized or avoided are: synthesis impurities that co-elute with the analyte peak in an HPLC assay; a particular type of column that no longer produces the needed separation because the supplier has changed the manufacturing process; an assay method transferred to a second laboratory that is unable to achieve the same detection limit; and a quality assurance audit of a validation report that finds no documentation of how the method was performed during the validation.
Problems increase as additional people, laboratories, and equipment are used to perform the method. When the method is used in the developer's laboratory, a small adjustment can usually be made to make the method work, but the flexibility to change it is lost once the method is transferred to other laboratories or used for official product testing. This is especially true in the pharmaceutical industry, where methods are submitted to regulatory agencies and changes may require formal approval before they can be implemented for official testing. The best way to minimize method problems is to perform adequate validation experiments during development.

Method Validation
Method validation is the process of proving that an analytical method is acceptable for its intended purpose. For pharmaceutical methods, guidelines from the United States Pharmacopeia (USP) 29, International Conference on Harmonization (ICH) 30, and the Food and Drug Administration (FDA) 31,32 provide a framework for performing such validations. In general, methods for regulatory submission must include studies on specificity, linearity, accuracy, precision, range, detection limit, quantitation limit, and robustness. Although there is general agreement about what type of studies should be done, there is great diversity in how they are performed 33. The literature contains diverse approaches to performing validations 34-35. Validation requirements are continually changing and vary widely, depending on the type of drug being tested, the stage of drug development, and the regulatory group that will review the drug application. In the early stages of drug development, it is usually not necessary to perform all of the various validation studies. Many researchers focus on specificity, linearity, accuracy, and precision studies for drugs in the preclinical through Phase II (preliminary efficacy) stages. The remaining studies are performed when the drug reaches the Phase III (efficacy) stage of development and has a higher probability of becoming a marketed product. The process of validating a method cannot be separated from the actual development of the method conditions, because the developer will not know whether the method conditions are acceptable until validation studies are performed. The development and validation of a new analytical method may therefore be an iterative process. Results of validation studies may indicate that a change in the procedure is necessary, which may then require revalidation. During each validation study, key method parameters are determined and then used for all subsequent validation steps. To minimize repetitious studies and ensure that the validation data are generated under conditions equivalent to the final procedure, we recommend the following sequence of studies.

Establish minimum criteria:
The first step in the method development and validation cycle should be to set minimum requirements, which are essentially acceptance specifications for the method. A complete list of criteria should be agreed on by the developer and the end users before the method is developed so that expectations are clear. For example, is it critical that the method precision (RSD) be no more than 2%? Does the method need to be accurate to within 2% of the target concentration? During the actual studies and in the final validation report, these criteria allow a clear judgment about the acceptability of the analytical method. The statistics generated for making comparisons are similar to those analysts will generate later in the routine use of the method and can therefore serve as a tool for evaluating later questionable data. More rigorous statistical evaluation techniques are available and should be used in some instances, but these may not allow as direct a comparison for method troubleshooting during routine use.

Specificity:
Specificity is the ability of the method to accurately measure the analyte response in the presence of all potential sample components. The response of the analyte in test mixtures containing the analyte and all potential sample components (placebo formulation, synthesis intermediates, excipients, degradation products, process impurities, etc.) is compared with the response of a solution containing only the analyte. If an analytical procedure is able to separate and resolve the various components of a mixture and detect the analyte quantitatively, the method is called selective; if it determines or measures the compound of interest quantitatively in the sample matrix without separation, it is said to be specific. Measuring a method's specificity is extremely important during the validation of non-chromatographic methods because they contain no separation step to ensure non-interference from excipients; they rely on intrinsic differences in chemical or physical properties to determine the concentration of the analyte accurately in a complex sample mixture.

Linearity:
A linearity study verifies that the sample solutions are in a concentration range where analyte response is linearly proportional to concentration. For assay methods, this study is generally performed by preparing standard solutions at five concentration levels, from 50 to 150% of the target analyte concentration; at least six replicates per concentration must be used. The 50 to 150% range for this study is wider than what is required by the FDA guidelines. In the final method procedure, a tighter range of three standards is generally used, such as 80, 100, and 120% of target; and in some instances, a single standard concentration is used. Validating over a wider range provides confidence that the routine standard levels are well removed from nonlinear response concentrations, that the method covers a wide enough range to incorporate the limits of content uniformity testing, and that it allows quantitation of crude samples in support of process development. For impurity methods, linearity is determined by preparing standard solutions at five concentration levels over a range such as 0.05-2.5 wt%. Acceptability of linearity data is often judged by examining the correlation coefficient and y-intercept of the linear regression line for the response versus concentration plot. A correlation coefficient of > 0.999 is generally considered as evidence of acceptable fit of the data to the regression line. The y-intercept should be less than a few percent of the response obtained for the analyte at the target level. Although these are very practical ways of evaluating linearity data, they are not true measures of linearity 36-37. These parameters, by themselves, can be misleading and should not be used without a visual examination of the response versus concentration plot.

Accuracy:
The accuracy of a method is the closeness of the measured value to the true value for the sample. Accuracy is usually determined in one of four ways. First, accuracy can be assessed by analyzing a sample of known concentration and comparing the measured value to the true value; National Institute of Standards and Technology (NIST) reference standards are often used, but such a well-characterized sample is usually not available for new drug-related analytes. The second approach is to compare test results from the new method with results from an existing alternative method that is known to be accurate; again, for pharmaceutical studies, such an alternative method is usually not available. The third and fourth approaches are based on the recovery of known amounts of analyte spiked into the sample matrix. The third approach, the most widely used recovery study, is performed by spiking analyte into blank matrices. For assay methods, spiked samples are prepared in triplicate at three levels over a range of 50-150% of the target concentration. If potential impurities have been isolated, they should be added to the matrix to mimic impure samples. For impurity methods, spiked samples are prepared in triplicate at three levels over a range that covers the expected impurity content of the sample, such as 0.1-2.5 wt%. The analyte levels in the spiked samples should be determined using the same quantitation procedure as will be used in the final method (i.e., the same number and levels of standards, the same number of sample and standard injections, etc.), and the percent recovery should then be calculated. The fourth approach is the technique of standard additions, which can also be used to determine the recovery of spiked analyte; it is used when it is not possible to prepare a blank sample matrix without the presence of the analyte. This can occur, for example, with lyophilized material, in which the speciation is significantly different when the analyte is absent. An example of an accuracy criterion for an assay method is that the mean recovery will be 100 ± 2% at each concentration over the range of 80-120% of the target concentration. For an impurity method, the mean recovery will be within 0.1% absolute of the theoretical concentration or 10% relative, whichever is greater, for impurities in the range of 0.1-2.5 wt%.
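Percent recovery from spiked samples is calculated as shown in the brief sketch below; the spiked and found values are hypothetical.

```python
def percent_recovery(measured: float, added: float) -> float:
    """Recovery of analyte spiked into blank matrix, expressed as a percentage."""
    return 100.0 * measured / added

# Illustrative spiked-placebo results at 80, 100 and 120% of target (ug/mL added vs found).
for added, found in [(8.0, 7.92), (10.0, 10.11), (12.0, 11.88)]:
    print(f"{added:5.1f}  {percent_recovery(found, added):6.2f}%")
```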

Determine the range:
The range of an analytical method is the concentration interval over which acceptable accuracy, linearity, and precision are obtained. In practice, the range is determined using data from the linearity and accuracy studies.

The following minimum specified ranges should be considered:
1) For the assay of an active substance or a finished product: normally from 80-120% of the test concentration.
2) For the determination of an impurity: from the reporting level of the impurity to 120% of the specification.
3) For content uniformity: a minimum range of 70-130% of the test concentration, unless a wider range is justified by the nature of the dosage form.
4) For dissolution testing: ±20% over the specified range.
5) If assay and purity are performed together as one test and only a 100% standard is used: linearity should cover the range from the reporting level of the impurities to 120% of the assay specification.

Determine precision:
The precision of an analytical method is the amount of scatter in the results obtained from multiple analyses of a homogeneous sample. To be meaningful, the precision study must be performed using the exact sample and standard preparation procedures that will be used in the final method.
The first type of precision study is instrument precision, or repeatability: the variability of results when the method is performed by the same analyst, with the same test method, under the same laboratory conditions and within a short interval of time; this is also known as intra-assay precision.
The second type of precision is reproducibility: the variability of the test method when it is carried out by different analysts, in different laboratories, using different equipment, reagents and laboratory settings, and on different days. It is assessed by means of an inter-laboratory crossover study. Reproducibility should be considered in the case of standardization of an analytical procedure, for instance for the inclusion of procedures in pharmacopoeias.
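Repeatability is normally reported as the relative standard deviation of replicate determinations; a minimal sketch with invented replicate assay results follows.

```python
import statistics

def rsd_percent(values) -> float:
    """Relative standard deviation (sample SD / mean * 100), the usual precision metric."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Six replicate assay results (illustrative), e.g. from an intra-assay precision study.
replicates = [99.2, 100.4, 99.8, 100.9, 99.5, 100.1]
print(f"RSD = {rsd_percent(replicates):.2f}%")
```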

Scope:
Once these validation studies are complete, the method developers should be confident in the ability of the method to provide good quantitation in their own laboratories. This result may be sufficient for many methods, especially in the early phases of drug development. The remaining studies should provide greater assurance that the method will work well in other laboratories, where different operators, instruments, and reagents are involved and where it will be used over much longer periods of time. This is a good time to begin accumulating data for two or more system suitability criteria, which are required prior to routine use of the method to ensure that it is performing appropriately.

Detection limit:
The detection limit of a method is the lowest analyte concentration that produces a response detectable above the noise level of the system, typically three times the noise level. It is a limit test: concentrations below this level may not be detected, while concentrations above it will certainly be detected. The detection limit should be estimated early in the method development and validation process and should be redetermined using the specific wording of the final procedure if any changes have been made. It is important to test the method detection limit on different instruments, such as those used in the different laboratories to which the method will be transferred. An example of a detection limit criterion is that, at the 0.05% level, an impurity will have S/N = 3.

Quantitation limit:
The quantitation limit is the lowest level of analyte that can be accurately and precisely measured. This limit is required only for impurity methods and is determined by reducing the analyte concentration until a level is reached where the precision of the method is unacceptable. If not determined experimentally, the quantitation limit is often calculated as the analyte concentration that gives S/N = 10. An example of a quantitation limit criterion is that the limit will be defined as the lowest concentration level for which an RSD ≤ 20% is obtained when an intra-assay precision study is performed.
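Checking the S/N-based conventions quoted above (S/N = 3 for the detection limit, S/N = 10 for the quantitation limit) amounts to a simple ratio test; the sketch below uses assumed response and noise values.

```python
def signal_to_noise(peak_height: float, noise_level: float) -> float:
    """Simple S/N estimate from a measured response and the baseline noise."""
    return peak_height / noise_level

def meets_detection_limit(peak_height, noise_level) -> bool:
    """Detection limit convention used above: response at least 3x the noise."""
    return signal_to_noise(peak_height, noise_level) >= 3.0

def meets_quantitation_limit(peak_height, noise_level) -> bool:
    """Quantitation limit convention used above: response at least 10x the noise."""
    return signal_to_noise(peak_height, noise_level) >= 10.0

# Illustrative values for a 0.05% impurity response against baseline noise.
print(meets_detection_limit(0.0031, 0.0009), meets_quantitation_limit(0.0031, 0.0009))
```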

Stability:
During the earlier validation studies, the method developer gained some information on the stability of reagents, standards, and sample solutions. For routine testing in which many samples are prepared and analyzed each day, it is often essential that solutions be stable enough to allow for delays such as instrument breakdowns or overnight analyses. At this point, the limits of stability should be tested. Samples and standards should be tested over at least a 48 h period, and quantitation of components should be determined by comparison to freshly prepared standards. If the solutions are not stable over 48 h, storage conditions or additives should be identified that can improve stability. An example of a stability criterion for assay methods is that sample and standard solutions and the mobile phase will be stable for 48 h under defined storage conditions; acceptable stability is a change of no more than 2% in standard or sample response relative to freshly prepared standards. The mobile phase is considered to have acceptable stability if aged mobile phase produces equivalent chromatography (capacity factors, resolution, or tailing factor) and assay results within 2% of the value obtained with fresh mobile phase. In the case of UV measurements, the absorbance must remain the same.

Robustness:
The robustness of a method is its ability to remain unaffected by small changes in parameters such as percent organic content, pH of the solvent, buffer concentration and temperature. These method parameters may be evaluated one factor at a time or simultaneously as part of a factorial experiment 38.
The evaluation of the robustness should be considered during the development phase. Often such testing is not performed as a part of the official method validation during the transfer of the method to another laboratory.

CONCLUSION
We can conclude that simultaneous spectrophotometric methods for the quantitative estimation of pharmaceuticals are fast, reproducible and highly sensitive; even microgram quantities of a compound can be measured. Performing a thorough method validation can be a tedious process, but the quality of the data generated with the method is directly linked to the quality of this process. Time constraints often do not allow for sufficient method validation. Many researchers have experienced the consequences of invalid methods and have realized that the time and resources required to solve problems discovered later exceed what would have been expended initially had the validation studies been performed properly.