Depending on your industry, any of an alphabet soup of standards and regulatory bodies (ICH, ASTM, USP, FDA, and others) will guide validation of your organization’s analytical methods. The 2015 U.S. Food and Drug Administration guidance, Analytical Procedures and Methods Validation for Drugs and Biologics, is representative, defining method validation as “the process of demonstrating that an analytical procedure is suitable for its intended purpose.” The objective of validation, according to Agilent, is “to obtain consistent, reliable, and accurate data.”

FDA goes on to advise that properly validated methods should be described “in sufficient detail to allow a competent analyst to reproduce the necessary conditions and obtain results within the proposed acceptance criteria,” while noting “aspects of the analytical procedures that require special attention.”

The element of risk plays into how closely an analytical scientist adheres to these guidelines. Drugs meant for human consumption pose arguably the highest risk, and therefore demand the highest level of assurance that the method’s scope, apparatus, operating parameters, reagents, standards, sample preparation, calculations, data reporting, and overall procedures are reliable and reproducible. Further, a validated method will have defined accuracy, precision, specificity, detection and quantitation limits, linearity, range, and robustness.
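Detection and quantitation limits, for instance, are often estimated from the calibration curve using the common ICH Q2(R1) convention of LOD ≈ 3.3σ/S and LOQ ≈ 10σ/S, where σ is the residual standard deviation of the regression and S its slope. The sketch below illustrates that calculation with made-up calibration data; the concentrations and responses are placeholders, not results from any real method.

```python
# Minimal sketch: estimating LOD/LOQ from a calibration curve using the
# common ICH Q2(R1) convention (LOD ~ 3.3*sigma/S, LOQ ~ 10*sigma/S).
# Concentrations and responses below are illustrative placeholders only.
import numpy as np

conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0])        # analyte concentration (ug/mL)
resp = np.array([10.2, 20.5, 41.3, 80.9, 162.1])  # detector response (peak area)

# Least-squares fit of response vs. concentration
slope, intercept = np.polyfit(conc, resp, 1)

# Residual standard deviation of the regression (one common choice for sigma)
residuals = resp - (slope * conc + intercept)
sigma = np.sqrt(np.sum(residuals**2) / (len(conc) - 2))

lod = 3.3 * sigma / slope
loq = 10 * sigma / slope
print(f"LOD ~ {lod:.3f} ug/mL, LOQ ~ {loq:.3f} ug/mL")
```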

The notion of “intended purpose” is ever-present in validation and is closely tied to the application. The acceptable analytical precision or limits of quantitation for one administered drug may be radically different from those of another medicine, while the acceptable variation in the concentrations of binders or fillers will be tighter for drugs than for fertilizers.

Inadequate planning is the bane of method validation. Depending on the assay, a missed validation target could result in the inability to process samples, or may even delay product release. Lack of time, analyst expertise, or statistical confidence can negatively affect validation, as can blind adherence to protocol. It is possible, particularly at small to mid-sized organizations, to over- or under-validate.

Under-validation frequently occurs when laboratories pay insufficient attention to processes leading up to or “surrounding” the analysis to be validated, for example, sample preparation, establishment of suitable standards, and sourcing of reagents and solvents.

One vendor’s role

Phenomenex specializes in chromatography consumables (including GC and LC columns) and sample-preparation products. Since method validation involves many of these products, Phenomenex supplies customers with supporting documentation, including batch certificates of analysis and test results for GC and HPLC columns.

Phenomenex batch records provide, for example, batch-specific data on the columns’ silica or bonded phase that may affect the results of validation studies. Column characteristics can vary from batch to batch within established specification limits for each material, “as they would in any other manufacturing situation,” says global industry manager Phil Koerner, Ph.D., and therefore could affect retention times for specific analytes in customer methods.

Batch-to-batch variability can matter both when two columns give significantly different results and, less obviously, when they give the same results. “One could ask, for example, how many different columns were tested to arrive at a particular validation result,” Koerner adds, and whether they came from the same production batch. “Part of good validation practice is to test your analytical testing conditions across several columns from several different batches or lots (typically at least two, but preferably three or more), as this will ensure that you have developed a robust test method. As a column manufacturer we want to show customers that we are controlling critical variables that are under our control, to ensure minimal or controllable variability.”
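As an illustration, a cross-lot robustness check of this sort often amounts to comparing retention times (or other system-suitability parameters) for the same analyte across columns from different production lots. The sketch below assumes hypothetical retention-time replicates from three column lots; the lot names, values, and any acceptance limit would come from the laboratory’s own validation protocol.

```python
# Minimal sketch: comparing retention-time reproducibility across column lots.
# Lot names and retention times are hypothetical placeholders.
import statistics

retention_times = {                     # minutes, same analyte and conditions
    "lot_A": [6.42, 6.44, 6.43],
    "lot_B": [6.47, 6.45, 6.46],
    "lot_C": [6.40, 6.41, 6.43],
}

all_runs = [t for runs in retention_times.values() for t in runs]
mean_rt = statistics.mean(all_runs)
rsd_pct = 100 * statistics.stdev(all_runs) / mean_rt   # percent relative std. dev.

print(f"Mean retention time: {mean_rt:.2f} min, %RSD across lots: {rsd_pct:.2f}%")
# An acceptance criterion (e.g., %RSD below a predefined limit) would be set
# in the validation protocol, not hard-coded here.
```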

Customers may then determine that their method is particularly sensitive to certain parameters and requires tighter control of operating conditions (for example, mobile-phase pH, mobile-phase composition, and column temperature). “At least they’ll have that information, and be able to tell what’s different between lots.”

Watch your data

Data integrity has emerged as a fundamental issue in method validation, as illustrated by the U.S. Food and Drug Administration’s 2016 guidance Data Integrity and Compliance With CGMP. In this document, FDA defines data integrity as “the completeness, consistency, and accuracy of data,” which for full compliance must be “complete, consistent, accurate, attributable, legible, contemporaneously recorded, original or a true copy.”

Tim Rhines, Ph.D., senior director of quality control at PharMEDium Services, observes that data integrity must “ensure the validity of the data beyond reproach. If your data isn’t sound, everything else you use it for is suspect. Not just the validation part, but all laboratory data, all quality control work: Every analysis you run with that method would be suspect.”

Standards differ somewhat for paper and electronic data, but in both cases data systems should be set up so that unauthorized manipulation of files or data does not occur, and, if it does, is easily detectable.
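One technical control that supports such detectability, alongside access controls and audit trails, is recording cryptographic checksums of raw data files at acquisition time so that any later alteration shows up on verification. The sketch below is a minimal illustration of that idea; the file paths, manifest format, and function names are hypothetical and not drawn from any particular data system.

```python
# Minimal sketch: detecting alteration of raw data files via SHA-256 checksums.
# File paths and the manifest format are hypothetical; a regulated system would
# pair this with access controls and a proper audit trail.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def build_manifest(data_dir: Path, manifest: Path) -> None:
    """Record a checksum for every raw data file at acquisition time."""
    digests = {p.name: sha256_of(p) for p in sorted(data_dir.glob("*.raw"))}
    manifest.write_text(json.dumps(digests, indent=2))

def verify_manifest(data_dir: Path, manifest: Path) -> list[str]:
    """Return the names of files whose contents no longer match the manifest."""
    recorded = json.loads(manifest.read_text())
    return [name for name, digest in recorded.items()
            if sha256_of(data_dir / name) != digest]
```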

Breaches occur mainly due to lack of training or misunderstanding the core concept of data integrity. “It’s very rare that someone says, ‘I don’t like that result,’ and alters raw or supporting data to make results more desirable,” Rhines says.

Even systems that comply with 21 CFR Part 11 can be less than secure if they are not set up to check and audit data regularly. Part 11 compliance is a step in the right direction, but it is by no means fail-safe insurance against integrity breaches. Despite widespread adherence to this regulation, particularly by suppliers of analytical systems and software, FDA has increasingly observed data integrity violations during CGMP inspections. Data integrity-related CGMP violations have led to numerous regulatory actions, including warning letters, import alerts, and consent decrees.

FDA guidances, for example, require “exact and complete” backup data that are “secure from alteration, inadvertent erasures, or loss,” storage of data “to prevent deterioration or loss,” that certain activities be “documented at the time of performance,” that laboratories employ scientifically sound controls, that records “be retained in some type of original form,” and that analysts maintain a “complete record of all data” and of “all tests performed.”

Is there a way, in effect, to outsource data integrity to vendors of data systems? “Reputable vendors will be on top of the data integrity issue, but you can’t always rely solely on them,” Rhines explains. “If you’re not cognizant of the dangers, even a well-established data-management system can be set up with faulty audit trails. It may be fully Part 11 compliant in that you can’t delete files, and there’s traceability, but there could still be problems with data integrity. Integrity breaches should be easy to detect, but if the audit trail isn’t set up properly detecting a breach will be difficult.”