Conference Report

The 10th GCC Closed Forum: rejected data, GCP in bioanalysis, extract stability, BAV, processed batch acceptance, matrix stability, critical reagents, ELN and data integrity and counteracting fraud

    Corey Nehls, PPD Laboratories, Madison, WI, USA
    Patrick Bennett, PPD Laboratories, Madison, WI, USA
    James Hulse, Eurofins Pharma Bioanalytics Services, Saint Charles, MO, USA
    Chris Beaver, inVentiv Health, Princeton, NJ, USA
    Masood Khan, Alliance Pharma, Malvern, PA, USA
    Shane Karnik, Pyxant Labs, Colorado Springs, CO, USA
    Adriana Iordachescu, 3S – Pharmacological Consultation, Bucharest, Romania
    Luigi Silvestro, 3S – Pharmacological Consultation, Bucharest, Romania
    Rabab Tayyem, ACDIMA Center for Bioequivalence & Pharmaceutical Studies, Amman, Jordan
    Ron Shoup, AIT Bioscience, Indianapolis, IN, USA
    Stephanie Mowery, previously with AIT Bioscience, Indianapolis, IN, USA
    Anahita Keyhani, Algorithme Pharma, Laval, Quebec, Canada
    Andrea Wakefield, previously with Algorithme Pharma, Laval, Quebec, Canada
    Yinghe Li, Alliance Pharma, Malvern, PA, USA
    Jennifer Zimmer, Alturas Analytics, Moscow, ID, USA
    Javier Torres, Anapharm Europe, Barcelona, Spain
    Nicola Hughes, Bioanalytical Laboratory Services, Toronto, Ontario, Canada (currently BioPharma Services)
    Saadya Fatmi, Biotrial Bioanalytical Services Inc., Laval, Quebec, Canada
    Christina Satterwhite, Charles River Laboratories, Reno, NV, USA
    Mathilde Yu, CIRION Biopharma Research, Laval, Quebec, Canada
    Jenny Lin, CMIC, Hoffman Estates, IL, USA
    John Kamerud, Eurofins Pharma Bioanalytics Services, Saint Charles, MO, USA
    Nadine Boudreau, inVentiv Health, Quebec City, Quebec, Canada
    Clark Williard, inVentiv Health, Princeton, NJ, USA
    Yansheng Liu, KCAS Bioanalytical & Biomarker Services, Shawnee Mission, KS, USA
    Dominic Warrino, KCAS Bioanalytical & Biomarker Services, Shawnee Mission, KS, USA
    Prashant Kale, Lambda Therapeutic Research, Ahmedabad, India
    Radha Shekar, Lotus Labs Pvt. Ltd., Bangalore, India
    Edward O'Connor, Lovelace Respiratory Research Institute, Albuquerque, NM, USA
    Roger Hayes, MPI Research, Mattawan, MI, USA
    Mohammed Bouhajib, Pharma Medica Research, Inc., Mississauga, Ontario, Canada
    Simona Rizea Savu, Pharma Serv International, Bucharest, Romania
    Bruce Stouffer, PPD Laboratories, Richmond, VA, USA
    Edward Tabler, PPD Laboratories, Richmond, VA, USA
    Jing Tu, PPD Laboratories, Richmond, VA, USA
    Chad Briscoe, PRA Health Sciences, Lenexa, KS, USA
    Barry van der Strate, PRA Health Sciences, Assen, The Netherlands
    Phyllis Conliffe, Smithers Avanza, Gaithersburg, MD, USA
    Ira DuBey, Smithers Avanza, Gaithersburg, MD, USA
    Elizabeth Groeber, WIL Research, Ashland, OH, USA (currently Charles River)
    Jenifer Vija, WIL Research, Skokie, IL, USA (currently Charles River)
    Michele Malone, Worldwide Clinical Trials, Cedar Park, TX, USA
    Published online: https://doi.org/10.4155/bio-2017-5000

    Abstract

    The 10th Global CRO Council (GCC) Closed Forum was held in Orlando, FL, USA on 18 April 2016. In attendance were decision makers from international CRO member companies offering bioanalytical services. The objective of this meeting was for GCC members to meet and discuss scientific and regulatory issues specific to bioanalysis. The issues discussed at this closed forum included reporting data from failed method validation runs, GCP for clinical sample bioanalysis, extracted sample stability, biomarker assay validation, processed batch acceptance criteria, electronic laboratory notebooks and data integrity, Health Canada's Notice regarding replicates in matrix stability evaluations, critical reagents and regulatory approaches to counteract fraud. In order to obtain the pharma perspectives on some of these topics, the first joint CRO–Pharma Scientific Interchange Meeting was held on 12 November 2016, in Denver, Colorado, USA. The five topics discussed at this Interchange meeting were reporting data from failed method validation runs, GCP for clinical sample bioanalysis, extracted sample stability, processed batch acceptance criteria and electronic laboratory notebooks and data integrity. The conclusions from the discussions of these topics at both meetings are included in this report.

    Figure 1. Best practice procedure for evaluating extract stability.

    CAL: Calibrant/standard; HQC: High quality control sample; LQC: Low quality control sample.

    First draft submitted: 26 January 2017; Accepted for publication: 3 February 2017; Published online: 22 March 2017

    The Global CRO Council in Bioanalysis (GCC), founded in 2010 to allow CRO scientific decision makers to share experiences and opinions related to scientific and regulatory issues in bioanalysis [1], held their 10th Closed Forum on 18 April 2016 in Orlando, FL, USA. Attendees of these Closed Forum meetings discuss several interesting subjects related to bioanalytical CROs. In order to advance industry thinking on the topics discussed, conference reports [2–7] and white papers [8–13] are available summarizing the opinions and unique challenges CROs face as well as recommendations for several approaches to a wide range of topics across the community.

    Steve Lowes chaired the 10th GCC Closed Forum with the official admonition statement [1], which instructs participants on the guidelines to follow during the discussions. Stephanie Cape, Rafiq Islam, Corey Nehls, John Allison, Afshin Safavi, Patrick Bennett, James Hulse, Chris Beaver, Masood Khan, Shane Karnik and Maria Cruz Caturla each led or co-led the discussions of topics on the agenda, and Wei Garofolo presented the financial update from 2013 to 2015.

    The topics discussed at the 10th GCC Closed Forum were as follows:

    • Reporting data from failed method validation runs;

    • GCP for clinical sample bioanalysis;

    • Extracted sample stability;

    • Biomarker assay validation;

    • Processed batch acceptance criteria;

    • Electronic laboratory notebooks (ELNs) and data integrity;

    • Health Canada's Notice regarding replicates in matrix stability evaluations;

    • Critical reagents;

    • Regulatory approaches to counteract fraud.

    Following this meeting, interest was expressed by pharmaceutical companies to further discuss several of these topics in an effort to understand industry experience across all types of bioanalytical laboratories. Therefore, the first joint CRO–Pharma Scientific Interchange Meeting was held on 12 November 2016, in Denver, Colorado, USA. Five surveys, which had previously been sent only to CROs, were submitted to pharmaceutical companies, and the results were compared and discussed. The topics at this joint meeting were as follows:

    • Reporting data from failed method validation runs;

    • GCP for clinical sample bioanalysis;

    • Extracted sample stability;

    • Processed batch acceptance criteria;

    • Electronic laboratory notebooks and data integrity.

    This report summarizes survey results and discussions from both meetings.

    Discussion topics

    Reporting data from failed method validation runs

    In order to ensure accuracy, reliability and transparency of data submitted to regulatory agencies, guidance has been provided outlining regulatory reporting expectations. The 2001 US FDA bioanalytical method validation (BMV) guidance [14] does not reference the need to provide or discuss rejected data. However, the Crystal City III conference report [15], a resource for industry providing additional information from regulators and industry representatives on BMV, expands on the topic and states that data from failed runs should be included in the validation report as part of a summary table of all validation runs analyzed, along with the reason for failure. Furthermore, “quality control (QC) data from validation runs that only failed to meet QC acceptance criteria with no assignable cause for failure should be included in the precision and accuracy estimation” and “although drug concentration data from the rejected runs need not be included in the final report, a brief description of the reasons and a tabular listing of rejected runs should be provided.” Since this conference report was published, it has become industry standard to list the rejected validation batches within the final validation report, along with the reason for rejection. Typically, however, rejected evaluation data are not reported. This approach was supported by the EMA in its BMV guideline [16], which, consistent with the above, only refers to providing tabulated results of acceptable runs. The 2013 FDA draft BMV guidance [17] expands on this approach, stating that tabulated data should be available for “all validation experiments with analysis dates, whether the experiments passed or failed and the reason for the failure.”
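
    For reference, the precision and accuracy estimation referred to in the Crystal City III quotation above is conventionally computed as the percent coefficient of variation (%CV) and the mean result as a percent of nominal across pooled QC replicates. The following is a minimal sketch of that arithmetic; the function names and example values are illustrative, not taken from the report:

    ```python
    from statistics import mean, stdev

    def precision_cv(values):
        """Inter-run precision as percent coefficient of variation (%CV)."""
        return stdev(values) / mean(values) * 100

    def accuracy_pct_nominal(values, nominal):
        """Accuracy as the mean measured concentration, expressed as % of nominal."""
        return mean(values) / nominal * 100

    # Illustrative LQC results (ng/mL) pooled across validation runs, including,
    # per Crystal City III, QCs from runs that failed with no assignable cause.
    lqc = [2.91, 3.12, 2.88, 3.05, 2.97, 3.20]  # nominal 3.00 ng/mL
    print(f"LQC precision: {precision_cv(lqc):.1f}% CV")
    print(f"LQC accuracy: {accuracy_pct_nominal(lqc, 3.00):.1f}% of nominal")
    ```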

    In order to discern how companies in the bioanalytical industry report rejected data, a survey was distributed to the current GCC membership and bioanalytical pharma representatives. Fifty-two CRO responses were received, along with 26 responses from pharma companies. Of the respondents, 50% perform chromatographic assays only, 20% perform ligand binding assays only, and 30% perform both chromatographic and ligand binding assays in their laboratories. In almost all cases, the majority response from the CRO and pharma respondents was very similar.

    Results indicated that the FDA has begun requesting, and expecting, tabulated rejected data within validation reports (nearly 30% of respondents). This expectation has been communicated during bioanalytical audits and in deficiency letters, and in some cases the agency has gone so far as to issue Form 483s for failure to report all data from method validation runs. Other agencies, such as the EMA, Germany's BfArM and France's ANSM, have also occasionally requested this information.

    The survey responses indicated that reporting of rejected validation data for precision and accuracy is inconsistent and follows one of three approaches, in equal measure: first, precision and accuracy are always tabled, regardless of acceptance; second, precision and accuracy are tabled if standards meet acceptance criteria; and third, precision and accuracy are tabled only if standards and QC samples meet acceptance criteria.

    For validation evaluations other than precision and accuracy, the majority (60%) always table and report results which fail to meet acceptance criteria but are analyzed in a passing run, 20% never do, and another 20% sometimes do, depending on the reasons for failure or results of investigations.

    A majority of respondents (nearly 65%) only table and report standards data from acceptable calibration curves, while far fewer (∼25%) include standards from all batches, regardless of acceptability. The remaining approaches are varied, including separately tabulating rejected runs, reporting precision and accuracy of standards only (without the curve parameters), or simply not tabulating rejected data at all.

    Tabulation of QC data was equally divided between only tabulating and reporting QC samples from acceptable runs, and tabulating and reporting QC samples from all batches, regardless of acceptability (40% in either case). The remaining respondents use the same alternative approaches used for standards.

    Finally, when asked if all data within a validation should be tabled and reported, 25% each of CRO and pharma respondents agreed with the statement, regardless of standard or QC performance. Another 15% each of CRO and pharma respondents favor reporting of the data when the calibration curve is acceptable. However, 45% of CRO versus 25% of pharma respondents did not favor reporting data from failed runs.

    Discussions of these survey results indicated that this new approach to tabulating and reporting rejected data may be intended to facilitate remote review of data by regulatory agencies. Additionally, including rejected results may clarify anomalous data seen during sample analysis. Furthermore, the value of rejected data for antidrug antibody (ADA) methods is not clear, since cut-points can be miscalculated, resulting in increased false negatives.

    In conclusion, consensus was not reached on tabling of data from failing runs or on which runs to include in standard and QC tables. The GCC recommends that, when tabling and reporting method validation precision and accuracy data, the recommendation described in the Crystal City V report [18] of only tabling data where standards meet acceptance criteria should be followed. Additionally, all validation results from runs with acceptable standards and QC samples should be tabled and reported, even if the individual evaluation (e.g., stability) fails.

    GCP for clinical sample bioanalysis

    All aspects of clinical trials should comply with GCP. In the USA, the FDA has built the GCPs into its Code of Federal Regulations [19]. Other countries follow the ICH Guidance for Industry E6 Good Clinical Practice [20]. Both are important to assure that studies are reliable and accurate and, more importantly, that the safety of study subjects is not compromised. The bioanalysis of clinical study samples is one part of a clinical trial that is not clearly addressed within the GCPs. Unlike the GLPs [21,22], which are designed for laboratory environments, the GCPs provide no details on how their principles are to be applied to the bioanalysis of clinical study samples. To fill this gap, the EMA released a reflection paper in 2012 [23] that outlined the agency's current thinking on this topic. In order to determine if and how this reflection paper is being adopted by the industry, and if there are differences between European and US respondents, a survey was sent to GCC membership as well as to pharmaceutical companies. There were 45 CRO respondents, the majority of which (62%) were located in the USA; 29% were located in Europe. Of the pharma respondents (22 total), 64% were from the USA, and only 9% were European. The remaining respondents (four CROs and six pharma) came from Canada or Asia.

    All European respondents indicated that their laboratories have formal GCP training and standard operating procedures (SOPs) in place, compared with only 60–70% of non-European respondents. Comments from those not training under GCP indicate that they apply the principles of GLP to the analysis of GCP samples.

    Informed consent is one of the core principles of GCP. When asked if the laboratory has a process for ensuring informed consent is in place prior to analyzing samples and if the process also manages withdrawal of consent, significant differences were noted between CRO and pharma companies, as well as between locations. In Europe, only about 47% of pharma companies ensure that informed consent is in place compared with almost 70% of CROs. The opposite was true in the USA, where almost 80% of pharma companies ensure that informed consent is in place versus only 35% of CROs. Outside of these locations, 70–80% of both pharma companies and CROs have a process in place. The difficulty of creating a process for a procedure over which the laboratory has no control (informed consent is typically obtained by an external clinical investigator site) was discussed. Contract language ensuring that the sponsor has fulfilled its responsibility and that informed consent is in place prior to analysis was suggested. Furthermore, in cases of informed consent withdrawal, it needs to be made clear to what extent the consent has been withdrawn (e.g., all consent for use of data, consent to continue analyzing collected samples or the subject has simply terminated further involvement in the trial). Finally, the expiration of consent in the context of sample storage was discussed, without consensus. The majority of respondents, both CRO and pharma, regardless of location, indicated that they do not take into consideration the expiration of consent when deciding how long to store study samples. Many believed that consent never expires, while others believed that informed consent expires upon finalization of the clinical trial report.

    The majority of all respondents, regardless of location, indicated that they had different sample handling processes for clinical versus nonclinical samples. Furthermore, the majority of respondents did not require all sample identification discrepancies to be resolved prior to analyzing the samples. Those that did resolve these issues noted that unclear identification made it impossible to confirm consent and created reporting problems if the sample identification was not reconciled. Most respondents in Europe (70% CRO/95% pharma) had a policy on the protection of subject identifiers, but significantly fewer did in non-European countries (45% pharma/70% CRO).

    Study blinding and subject randomization were discussed. The majority of all respondents had a process in place for the handling of randomization codes. There was no consistency on whether the bioanalytical laboratory remained blinded to the treatment group; however, if staff are blinded, procedures need to be in place for unblinding those who must perform investigations on anomalous results.

    The final topics covered were reporting breaches and clinically significant results. The majority (>70%, depending on the location) of respondents had a process in place for managing GCP breaches. The exceptions were pharma respondents located in non-European or non-US countries. In those cases, only 45% of respondents had a formal process for managing GCP breaches.

    In Europe, the immediate reporting of clinically significant results is very well understood, with over 80% of all European respondents stating that they have these processes clearly outlined within the study documentation. This is contrasted by non-European countries, where less than 50% of all respondents include directives for immediate reporting of clinically significant results within the study documentation.

    Half of all respondents have been audited by a regulatory agency against the expectations outlined in the EMA GCP reflection paper [23]. Interestingly, despite the reflection paper being released by a European authority, in 11% of cases, the auditing agency was the FDA. Compliance expectations also appeared to be sponsor driven. Many European sponsors expected compliance whereas it was less of a focus for US sponsors. Therefore, in the interest of global submissions, it was recommended that industry should continue to evolve toward the adoption of the recommendations detailed in the reflection paper. Specifically, focus should be placed on the continuous advancement of processes and training regarding informed consent, reporting of GCP breaches, management and redaction of personally identifiable information and the protection of blinding of study teams.

    Extracted sample stability

    Maintaining stability of study samples at different points in the bioanalytical process is paramount for ensuring reliable and accurate results in chromatographic assays. Both the FDA and EMA guidance documents [14,16] state an expectation that the stability of processed samples at postextraction storage conditions, including during injection, should be evaluated. However, the approach that industry has taken over the years to meet these expectations has varied. During the Crystal City V conference [18], regulators clarified that stability samples were expected to be assayed against freshly prepared standards. Many in industry disagree with the need for this evaluation, arguing that validation experiments should reflect how study samples will be treated and that the concern may be addressed using different validation evaluations [24]. Only 5% of respondents routinely assay stored study sample extracts against a freshly prepared calibration curve. Because the large majority of laboratories require that study samples and standards be extracted and analyzed together, it has been recommended that assessing stability samples against stored extracts of standards is sufficient [25].

    Based on regulator feedback [26], three types of experiments are inconsistently performed. The first is reinjection reproducibility, which demonstrates the reproducibility of results obtained when previously injected extracts are reinjected (stored vs stored). The second is extract viability, which demonstrates the constancy of measured results over time relative to a calibration curve that is extracted and stored together with the sample extracts (stored vs stored). The final type is extract stability, which demonstrates absolute stability of the analyte when exposed to the processed sample extract matrix under defined time and temperature conditions (stored vs fresh).

    A survey was sent to GCC membership (41 respondents) as well as to pharma companies (28 respondents). When performing a method validation, the majority of respondents perform both reinjection reproducibility and extract stability. The selection of the type of experiment is not impacted by whether or not an analog or stable-isotope labeled (SIL) internal standard (IS) is used.

    For those performing reinjection reproducibility, the majority reinject a complete set of precision and accuracy QCs as opposed to just low and high QCs (LQC and HQC).

    Extract stability was overwhelmingly determined to be insufficient to also demonstrate reinjection reproducibility. For those who do evaluate extract stability, the majority (50%) do so only because it is a regulatory expectation. Twenty percent stated that they believed it was necessary for the scientific understanding of the method and should be performed during method development. Another 20% stated that they believe the experiment is necessary to ensure the validity of the data and should be performed during validation. Furthermore, it was agreed that assessing extract stability using only response values (as sometimes requested by regulators) as opposed to response ratios (where the IS is included to normalize any variation not due to stability) has very little added value.

    In conclusion, discussions during these forums indicated that laboratories have adjusted to regulatory expectations and have begun evaluating extract stability against fresh curves. However, it is recommended that the evaluation be assessed using LQC and HQC samples, in order to mimic the design of all the other matrix stability experiments. Furthermore, in order to obtain meaningful data, analyte/IS ratios should be compared to a priori acceptance criteria defined in an SOP. The majority of respondents indicated that the procedure in Figure 1 was the best practice for evaluating extract stability.
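
    As an illustration of the recommended comparison, the sketch below evaluates the mean analyte/IS response ratio of stored extracts against that of freshly prepared extracts at the LQC and HQC levels. The ±15% criterion is a hypothetical placeholder for whatever a priori limit the laboratory's SOP defines, and all values are invented:

    ```python
    from statistics import mean

    def extract_stability_ok(stored_ratios, fresh_ratios, criterion_pct=15.0):
        """Compare mean analyte/IS response ratios of stored extracts against
        freshly prepared extracts at one QC level (stored vs fresh)."""
        pct_diff = (mean(stored_ratios) - mean(fresh_ratios)) / mean(fresh_ratios) * 100
        return abs(pct_diff) <= criterion_pct, pct_diff

    # Illustrative analyte/IS peak-area ratios at LQC and HQC (n = 3 each).
    for level, stored, fresh in [
        ("LQC", [0.152, 0.149, 0.155], [0.150, 0.153, 0.151]),
        ("HQC", [4.87, 4.92, 4.79], [4.95, 5.01, 4.90]),
    ]:
        ok, diff = extract_stability_ok(stored, fresh)
        print(f"{level}: {diff:+.1f}% vs fresh -> {'stable' if ok else 'investigate'}")
    ```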

    Additionally, attendees have indicated that they will also continue to perform evaluations that mimic the processes utilized for sample analysis. For many, these include reinjection reproducibility or extract viability experiments.

    Biomarker assay validation

    Biomarker assays have gained increasing importance for drug marketing approvals. The endogenous presence of the analyte in matrix, often poorly characterized reference material and the need to detect appropriate upregulation or downregulation of biomarker concentrations exemplify the challenges that distinguish biomarker work from pharmacokinetic-supporting bioanalysis. The current FDA and EMA BMV guidance documents [14,16], although used as a starting point for validating biomarker assays, do not include biomarkers in their scope. The draft FDA BMV guideline [17] references biomarkers; however, this document is undergoing additional changes prior to finalization, and care should be taken when applying its recommendations to industry processes. Based upon a prior GCC Closed Forum in 2012, some recommendations regarding the validation of biomarkers were made [27]. However, considering the extensive dialogue around biomarker bioanalysis that has happened since, it was decided to revisit those recommendations with the GCC membership; an extensive survey of 35 questions was sent.

    Respondents were asked what percentage of biomarker assays fell into one of four categories: absolute/definitive quantitative, relative quantitative, quasiquantitative and qualitative.

    Results were far less clear-cut than in the previous survey, indicating that there may still be misunderstanding of the vocabulary used. They show that 48% of respondents predominantly perform absolute/definitive quantitative assays, followed by 33% who perform mostly relative quantitative assays. The majority of respondents (70%) use a tiered approach to biomarker assays, but there remains inconsistency in the terminology used to describe the approach taken to demonstrate the robustness and reliability of a method. The term ‘validation’ is used 80% of the time, which is higher than in the last survey, when only a third of respondents used this word. In 37% of cases, ‘qualification’ is used; however, some laboratories avoid this term because it also applies to the clinical qualification of biomarkers, and would prefer a different vocabulary choice (e.g., scientific validation). ‘Fit-for-purpose’ was also used 43% of the time.

    The sourcing of properly characterized reference materials, particularly for large molecule biomarkers, is one of the primary reasons why method validation is challenging. Only 38% of respondents stated that they can obtain reference materials with a certificate of analysis more than 75% of the time, often using international/WHO standards. The materials are not sourced from any specific type of vendor, but are obtained from a variety of reagent or kit manufacturers, clients and reference standard material companies.

    Most respondents (68%) apply formal lot-to-lot bridging protocols when lots or suppliers of large molecule reference materials change. This is in line with the consensus achieved at the Crystal City VI meeting on biomarker validation [28]. Almost unanimously, respondents use QC samples for bridging experiments. Approximately a third also use incurred samples or calibration standards. Results of bridging studies are interpreted in a variety of ways, with only 30% using an a priori established standardized statistical approach. Other approaches included parallelism, percent difference or in-house approaches.
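
    As one illustration of an a priori standardized approach, the percent difference between QC means obtained with the current and candidate reagent lots can be tested against a predefined limit. In this sketch the ±20% limit and the data are assumptions for illustration, not a GCC recommendation:

    ```python
    from statistics import mean

    def bridge_lots(old_lot_results, new_lot_results, limit_pct=20.0):
        """Lot-to-lot bridging: percent difference of new-lot QC mean vs old-lot."""
        diff = (mean(new_lot_results) - mean(old_lot_results)) / mean(old_lot_results) * 100
        return abs(diff) <= limit_pct, diff

    old = [98.2, 101.5, 99.8]   # QC recovery (% of nominal) with the current lot
    new = [94.6, 97.1, 95.9]    # same QCs assayed with the candidate lot
    passed, diff = bridge_lots(old, new)
    print(f"Lot difference: {diff:+.1f}% -> {'bridge accepted' if passed else 'reject lot'}")
    ```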

    Demonstrating the absolute stability of an endogenous biomarker is also a challenge. Almost all respondents (94%) use the biological matrix to perform stability; however, stability in surrogate matrix is also performed by 37%. When performing the evaluation, 83% use both spiked and endogenous QCs, and 71% apply conventional method validation criteria.

    Reporting biomarker assay results is dependent on the purpose of the assay. For category 1 assays, typically used for exploratory, internal decision making purposes, most use a spreadsheet or similar. The remaining use a full or abbreviated report, but without a quality assurance (QA) review. Category 2 assays, used to obtain data for primary or secondary end points, are typically reported using a QA audited bioanalytical report.

    Single replicate (singlicate) analysis is used for biomarker assays by 49% of respondents. Of these, an unexpected 52% use singlicate analysis for ELISA assays. Another surprise is that the actual biological matrix was used for standards and QC preparation whenever possible, whereas the last time this survey was conducted, surrogate matrix was frequently used. Typically if the endogenous level is below the LOQ of the assay, the actual matrix is used to prepare standards and QCs. However, when the endogenous levels are elevated, surrogate matrix is preferred for the standards, whereas QCs are prepared in either surrogate or biological matrix.

    Multiplex assays generally apply the same acceptance criteria used for single analyte assays. When one analyte fails, there is a wide variety of approaches used, including repeating only the failed assay, using wider acceptance criteria for the failed assay or excluding the analyte altogether.

    Finally, when asked if the concentrations of the biomarker in the normal population are studied, 64% of respondents stated that they do this over half of the time.

    In conclusion, biomarker method validation has evolved since the last GCC survey; however, there is still work to be done in harmonizing the standards and terminology across the industry.

    Processed batch acceptance criteria

    In 2015, the FDA contacted industry representatives to open a discussion on processed batch acceptance criteria. The regulatory concern is that when multiple 96-well plates are being used per run, an isolated error within one subset of samples, hereafter known as a batch, can be overlooked when using overall run acceptance criteria. This concern raises several discussion points.

    The first point that must be clarified is the definition of a batch. The assumption based on the concerns of regulators is that a batch represents one 96-well plate. However, samples within a complete run can be separated during processing by more than just the plate. The very process of preparing a 96-well plate involves a multitude of batch processes, including test tube racks, mixing equipment and pipetting steps. Additional batch processes are common in the bioanalytical laboratory, such as SPE manifolds, evaporator racks, centrifuges and liquid handler setups, all of which create subsets of samples if interpreted in accordance with the 96-well plate scenario.

    In order to clarify how the industry is approaching this topic, a survey was sent to CRO members and pharma companies. Answers from a total of 73 respondents, 49 CRO and 24 pharma, are available, with 78% coming from North America and 15% from Europe.

    As required by the FDA and EMA BMV guidelines [14,16], almost all respondents include a QC count of at least 5% of the total number of study samples in a run. When asked if they also apply this on a per batch basis, results varied between pharma and CROs. Pharma respondents applied this rule to batches in equal measure: 42% did and 42% did not. This contrasts with the CRO results, where 57% stated they add QC samples representing at least 5% of the amount of study samples per batch, and 27% do not.

    The majority of all respondents base run acceptance on a minimum of three QC concentration levels assayed in duplicate. There was no consensus, however, on whether a minimum of singlet LQC, medium QC and HQC samples must be present in each 96-well batch and, if so, what the acceptance criteria should be, although a minority requires that two-thirds of batch QCs be acceptable.

    For those with three QC levels on each plate, either in singlet or in duplicate, respondents were not harmonized on whether, when QCs in one batch fail, calibration standards on the same plate should be rejected from the run's regression analysis. Only 41% said yes; the remaining either did not reject the standards (23%) or used some other approach (36%).

    Therefore, there is as yet no consensus on the application of batch criteria in addition to overall run criteria. Most attendees at least run singlet QCs at each level on each 96-well plate; however, whether formal acceptance criteria are applied to these QCs is not harmonized. When asked whether they felt it was possible to implement a full set of QC samples in each batch of a multibatch run, 88% stated that it was feasible; however, 22% of those found it to be impractical.
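
    To make the run-versus-batch distinction concrete, the sketch below applies a 4-6-15-style acceptance rule (at least two-thirds of all QCs, and at least half at each concentration level, within ±15% of nominal) first to a whole run and then to each of its batches. Treating each batch as a mini-run in this way is only one of the approaches reported in the survey, not a settled recommendation, and the data are invented:

    ```python
    def qc_ok(measured, nominal, tol_pct=15.0):
        """One QC passes if it is within tol_pct of its nominal concentration."""
        return abs(measured - nominal) / nominal * 100 <= tol_pct

    def accept(qcs, tol_pct=15.0):
        """4-6-15-style acceptance: >=2/3 of all QCs pass and >=1/2 pass per level."""
        results = [(lvl, qc_ok(m, n, tol_pct)) for lvl, m, n in qcs]
        overall = sum(ok for _, ok in results) >= 2 / 3 * len(results)
        per_level = all(
            sum(ok for l, ok in results if l == lvl)
            >= 0.5 * sum(1 for l, _ in results if l == lvl)
            for lvl in {lvl for lvl, _ in results}
        )
        return overall and per_level

    # (level, measured, nominal) tuples; each batch is a subset of the run.
    batch1 = [("L", 3.1, 3.0), ("M", 148, 150), ("H", 410, 400)]
    batch2 = [("L", 2.4, 3.0), ("M", 151, 150), ("H", 395, 400)]
    run = batch1 + batch2
    print("run accepted:", accept(run))  # an isolated batch failure can pass here
    for i, batch in enumerate([batch1, batch2], 1):
        print(f"batch {i} accepted:", accept(batch))
    ```

    In this example the run as a whole passes while batch 2 fails on its low QC, which is exactly the isolated-error scenario that motivates the regulatory concern described above.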

    Additional questions were asked regarding rejection criteria based on matrix blank samples. Seventy-one percent of all respondents indicated that they had run acceptance criteria based on matrix blank samples, and 57% indicated they had batch criteria as well. The criteria varied widely: some reject runs if blanks have responses greater than 20% of the LLOQ, others if responses exceed 100% of the LLOQ, and others if 50% of blank samples do not meet the designated criteria. Further discussions are clearly needed around the multibatch topic in order to harmonize criteria across the industry.

    ELNs & data integrity

    Although technologies for ELNs have been around for some time, the implementation of such technology in the bioanalytical laboratory has not been as quick as anticipated [29]. The survey sent to CRO membership and pharma companies was designed to gauge experience with their use after several years of discussion within this forum [5,7].

    At this time, respondents indicate that only about 22% use ELNs. Improved quality and compliance was the overwhelming reason to use a bioanalytically suitable ELN (77%), followed by improved efficiencies (27%). Cost savings was only a minor reason (10%).

    The main reason for not moving toward these technologies is cost (51%), especially for pharma companies, which prefer to outsource their assays rather than invest in their own bioanalytical laboratories. The second most important reason is a lack of off-the-shelf ELN solutions (35%). Finally, and surprisingly considering continued regulatory support for ELN use [26,29,30], 18% still do not use an ELN due to regulatory uncertainties.

    When an ELN is used, most defined the raw data as the electronic data (44%), although many believed that it was still the paper documents (33%). Almost all systems met 21 US Code of Federal Regulations Part 11 requirements [31]. However, only between 23% and 39% met the spirit of the EMA reflection papers on expectations for electronic source data and data transcribed to electronic data collection tools in clinical trials [32] and on GCP compliance in relation to trial master files (paper and/or electronic) for management, audit and inspection of clinical trials [33], or of the UK Medicines and Healthcare products Regulatory Agency (MHRA) GMP Data Integrity Definitions and Guidance for Industry [34].

    Most respondents (67%) indicate that only a portion of the data is captured directly into the ELN; their processes still require transcription of paper documents. In 58% of cases, study-specific audit trails are easily generated, and 75% of respondents require audit trails to be reviewed by operations, QA or both. This audit trail review is a regulatory expectation [30]; however, discussion at these forums indicated that clarity was needed on what the expectations for the review were. At this time, users indicated that their reviewers focus on changes to entries or exception reports. FDA guidelines [35] indicate that audit trail review should be performed prior to approval of the record and scheduled based on the complexity of the system and its intended use. The UK MHRA guidance [34] agrees with the FDA approach and adds that evidence should be available to confirm that the audit trail review took place.
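
    By way of illustration, the kind of information an audit trail reviewer examines (who changed what, when and why, with the original value preserved) can be pictured as an append-only record. The field names in this sketch are assumptions chosen for illustration, not a 21 CFR Part 11 specification:

    ```python
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass(frozen=True)
    class AuditEntry:
        """One immutable audit trail record: the original value is never overwritten."""
        record_id: str
        field_name: str
        old_value: str
        new_value: str
        reason: str          # changes to entries need a documented reason
        user: str
        timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    trail: list[AuditEntry] = []
    trail.append(AuditEntry("S123-QC-07", "dilution_factor", "10", "20",
                            reason="transcription error corrected", user="analyst_jd"))

    # A reviewer-facing exception report: all changes made to existing entries.
    for e in trail:
        print(f"{e.timestamp:%Y-%m-%d %H:%M} {e.user}: {e.record_id}.{e.field_name} "
              f"'{e.old_value}' -> '{e.new_value}' ({e.reason})")
    ```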

    Both pharma and CRO respondents indicated that about 55% of the time, completed studies are ‘archived’ by converting the electronic data to read-only access within the system. It was noted that some attendees had received a regulatory request to make access to these files permission-based for a limited time following a formal request, in a similar manner to paper archive data, instead of maintaining them in a continuously available, read-only format for all. No benefit was identified in limiting this access, since data could not be changed either way, and it was supposed that this approach was more of a legacy issue. As an alternative approach, CROs transfer data to an archive server more often than pharma (39 vs 8%), but pharma companies download completed studies using an eArchiving-compliant tool more often than CROs (33 vs 8%). The remaining few respondents print out data for archiving.

    Health Canada's Notice regarding replicates in matrix stability evaluations

    Following the release of the Health Canada Notice on 8 October 2015 [36] and the addendum from March 2016 [37], the CRO bioanalytical industry came together to collate actual data and case studies on how bioanalytical laboratories perform matrix-based stability evaluations. Although stability evaluations have always been a cornerstone of BMV, the Notice made it apparent that procedures that industry and regulators have taken for granted are perhaps not as straightforward as they appear.

    In an effort to determine how CROs perform these evaluations, a survey was sent to GCC membership. The survey received a high response rate from the GCC member companies (∼40 international CROs). Results demonstrate that although there does not appear to be complete harmonization across all companies on every question, the data do show clear positions on some aspects.

    Ninety-five percent of all respondents prepare bulk QC samples at multiple concentration levels, which are then subdivided into separate tubes for use in validation evaluations. However, there is no clear consensus on the volumes aliquoted into each tube, and therefore how many replicates are available in each.

    The crux of the issue is how many tubes of matrix need to undergo stress conditions for the assessment of analyte stability. About 70% of respondents exposed a single tube to the stability conditions and then assayed multiple aliquots from that tube in order to obtain the required number of replicates. Because the basis of this approach is that matrix pools are homogeneous, there is no scientific expectation that a significant difference between the results of the replicates can be attributed to stability variations between tubes. This was confirmed by 97% of the respondents, who have never seen any significant differences in results between different tubes. Comments were also made that for rare cases of apparent differences in results between tubes, it was hypothesized that this was likely due to lack of homogeneity rather than stability.

    The scientific opinion of 90% of the respondents aligns with the above: they conclude that potential stability-related tube-to-tube variability during stability testing is either nonexistent, too small to measure or not significant enough to measure with n = 3.
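
    To illustrate what a multitube design can resolve, the sketch below computes per-tube mean recoveries and the between-tube %CV for a hypothetical three-tube stability evaluation; all values are invented for illustration:

    ```python
    from statistics import mean, stdev

    def between_tube_cv(tubes):
        """Between-tube %CV of per-tube mean recoveries (% of nominal)."""
        tube_means = [mean(t) for t in tubes]
        return stdev(tube_means) / mean(tube_means) * 100, tube_means

    # Three stressed tubes from one bulk QC pool, each assayed in triplicate
    # (results expressed as % of nominal).
    tubes = [[96.1, 98.4, 97.2], [97.8, 95.9, 98.0], [96.5, 97.1, 98.8]]
    cv, means = between_tube_cv(tubes)
    print("per-tube means:", [f"{m:.1f}%" for m in means])
    print(f"between-tube CV: {cv:.1f}%")  # a small CV supports the homogeneity argument
    ```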

    It was recommended that, to ensure regulatory compliance going forward, three tubes should be used for matrix stability evaluations. However, if sufficient support can be obtained from its membership, the GCC is considering consolidating actual stability data from a variety of compounds to support the opinion expressed by these results, with the hope that existing validation data may be sufficient to support marketing applications to Health Canada and other regulatory agencies.

    Critical reagents

    Some of the most important components of a ligand binding assay are the analyte capture, detection and blocking reagents and buffer solutions. These are often referred to as the critical reagents of the assay but, as will be discussed below, other aspects of an assay may also have an influence. Because of their significance to the success of the assay, there is a need to develop strategies for managing changes in critical reagents across lengthy studies. A case study was presented prior to discussing survey results. The assay discussed was a sandwich-based ELISA method using plates coated with a monoclonal capture antibody. During method validation, standards were loaded into the wells of the first two columns of the 96-well plate, with three levels of duplicate QC samples loaded into the third and last columns. Results demonstrated unacceptably low recovery for the QCs in the last column. An investigation determined that the issue was caused not by assay drift, but by how the plates were coated by the manufacturer. This resulted in the need to test for plate uniformity in each batch of plates received prior to use. This example demonstrated that plates themselves could be considered a critical reagent.

    A survey of 13 questions was sent to CRO members and 41 responses were received. The survey focused primarily on ligand binding assays, although some respondents added comments regarding MS assays as well. Most respondents stated that reagents are defined as critical within the method, or both in the method and in an SOP. Fewer define the critical reagent only in an SOP, and still fewer either do not define the critical reagents or define them in a validation plan.

    It was no surprise to note that the main reagents considered critical for pharmacokinetic assays by over 85% of respondents were capture antibodies, coating reagents and conjugates. Specialty plates were identified as critical by 56% of respondents. Thirty-two percent of respondents also identified the matrix, IS or blocking buffer as critical reagents.

    Overwhelmingly, it was agreed that expiration/retest dates for critical reagents were set using a combination of associated certificates of analysis or spec sheets and an internal SOP defining default expiration times. Only a small number of respondents (7%) used internal testing or assay performance to define expiration/retest dates of critical reagents. It was noted during discussions that regulatory feedback indicated that expiry dates could not be assumed; substantiated data are needed to establish the stability of critical reagents. Finally, respondents were equally split on whether expiry dates of commercial reagents or kits could be extended past the date set by the manufacturer.

    Critical reagents are qualified using assay performance by 93% of respondents, and most (76%) qualify for each method in which the reagent is used. Only 12% used specialty instruments such as Biacore or NanoDrop, and even fewer (7% each) use HPLC or LC–MS. The criteria used to qualify critical reagents were equal to those of the assay performance (±20%) in 78% of the responses; more stringent criteria are used by only 12% of companies.

    Opinions regarding kit qualification were not harmonized. Fifty-one percent qualify kits when a kit lot number changes. Others (34%) only qualify when a lot number of a critical kit component changes. Still others (29%) qualify when the lot of reference material changes. Finally, 15% qualify only when there is a change of antibody. Discussion suggested that these results varied because some scientists take kits apart for the assay and treat all components individually, while others use and treat each kit as a single entity.

    Similar questions were posed for ADA assays. In this case, critical reagents were considered to be negative controls, coating reagents and conjugates by more than 85% of respondents, and specialty plates and disks were considered critical by 51%. Again, assay performance was the main technique used to qualify critical reagents; however, the criteria used varied. Sixty-six percent of respondents used the signal readout for the negative control; 63% used the signal readout for the positive control; and 54% used the signal readout ratio between the positive and negative controls. When bridging between lots of labeled reagents in an ADA assay, 61% of responses indicate that as much titration as is required should be used. Twelve percent use no titration at all.
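
    For the signal-ratio criterion, qualification of a new labeled-reagent lot might compare the positive-to-negative control readout ratio obtained with the candidate lot against that of the current lot. The ±20% window and the signal values in this sketch are hypothetical examples, not surveyed values:

    ```python
    def signal_ratio(pc_signal, nc_signal):
        """Positive control to negative control signal readout ratio."""
        return pc_signal / nc_signal

    def qualify_lot(current_ratio, new_ratio, window_pct=20.0):
        """Accept the new labeled-reagent lot if its PC/NC ratio is within a
        predefined window of the ratio obtained with the current lot."""
        return abs(new_ratio - current_ratio) / current_ratio * 100 <= window_pct

    current = signal_ratio(pc_signal=2.45, nc_signal=0.11)
    candidate = signal_ratio(pc_signal=2.31, nc_signal=0.12)
    print(f"current {current:.1f}, candidate {candidate:.1f} -> "
          f"{'qualified' if qualify_lot(current, candidate) else 'investigate'}")
    ```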

    When asked how the long-term stability of the surrogate positive control was evaluated, 46% indicated that they used the instrument response of stored samples versus freshly prepared positive control samples. Assay performance ranges were used by 37% of respondents, and 20% used either the comparison of titer control at a stability time point with that established during validation or they used literature sources.

    Regulatory approaches to counteract fraud

    A continued concern of regulatory bodies is fraud and data integrity. Several warning letters have been published by the FDA [38] describing fraudulent, or at least questionable, activities occurring within the industry. Therefore, measures being considered by authorities following recently identified fraudulent behaviors at CROs were presented.

    The recent draft revision of the ICH GCPs [39] places increased responsibility for quality on the sponsor, even if the sponsor outsources the bioanalytical work. In response, CROs are seeing an increased burden resulting from more audits and monitoring. Some EU agencies are considering supporting joint sponsor audits, publishing sponsor audit reports, requiring monitoring of each study, either remotely or on-site, or requiring audits by sponsors prior to signing a study contract.

    There is concern that joint sponsor audits and the publication of audit reports may breach confidentiality and incur conflicts of interest. In fact, 58% of CRO respondents to a survey agree that joint audits are a potential problem, and 80% agree that publishing audit reports is equally controversial.

    On-site or remote monitoring of the bioanalytical portion of each study was deemed challenging by roughly 60% of respondents and lacking utility in improving GCP compliance by about 77%. It was suggested that periodic audits may be a feasible and viable alternative, one already adopted by the majority of sponsors.

    Seventy-four percent of CROs indicated that they accept an audit by a sponsor prior to signing a study contract; however, this represents additional requirements and operating costs for CROs.

    More than half of CRO respondents do not believe that these additional measures will eliminate fraudulent behaviors; however, 70% suggest that a GCP certification system would be a good measure. This program would be targeted toward any CRO worldwide that submits bioequivalence dossiers in Europe. Fees could be paid by inspected CROs, resulting in a self-financed program. Employing a large team of inspectors from different countries would result in harmonized criteria and a fair and impartial system. Despite the positive survey results, attendees were not sure whether this type of accreditation could prevent fraud issues; further discussion of this topic is required.

    Conclusion

    The GCC will continue to provide recommendations on hot topics of global interest in small and large molecule bioanalyses, biomarkers and immunogenicity, and expand its membership by coordinating its activities with the regional and international meetings held by the pharmaceutical industry. Additionally, CRO–Pharma Joint Scientific Interchange Meetings will continue in order to facilitate communication between the two. Please contact the GCC [40] for the exact date and time of future meetings, and for all membership information.

    Acknowledgements

    The GCC would like to thank S Lowes (Q2 Solutions) for chairing the 10th GCC Closed Forum; S Cape (Covance) for designing the survey, collecting answers, preparing survey results and chairing the sessions on ‘FDA 483/Observation: Tabling Data from Failed Validation Runs’, ‘Good Clinical Practice (GCP) for Clinical Sample Bioanalysis’ and ‘Extract Stability’; R Islam (Celerion) for designing the survey, collecting answers, preparing survey results and chairing the session on ‘Electronic Data Management and Integrity’; C Nehls (PPD) for designing the survey, collecting answers, preparing survey results and chairing the session on ‘Processed Batch Acceptance’; J Allison (LGC), A Safavi (BioAgilytix), P Bennett (PPD) and J Hulse (Eurofins Pharma Bioanalytics Services, Inc.) for designing the survey, collecting answers, preparing survey results and chairing the sessions on ‘Global Harmonization of Biomarker Validations’; C Beaver (inVentiv Health), M Khan (Alliance Pharma) and R Islam (Celerion) for designing the survey, collecting answers, preparing survey results and chairing the session on ‘Critical Reagents’; C Nehls (PPD), S Karnik (Pyxant Labs) and S Cape (Covance) for designing the survey, collecting answers, preparing survey results and chairing the session on ‘Health Canada Oct. 8 Notice’; M Cruz Caturla (Anapharm Europe) for chairing the session on ‘Measures Being Considered by Authorities after Recent CRO Fraudulent Behaviors’; all the GCC member company and pharma company representatives who filled in the numerous surveys used to prepare the discussions of the 10th GCC Closed Forum; all the member company representatives who sent comments and suggestions to complete this commentary; all the pharma company representatives who attended the first joint CRO–Pharma Scientific Interchange Meeting: J Sydor (AbbVie), P Cao (Alkermes), D Desai-Krieger (Allergan), F Garofolo (Angelini), D Wilson (Ardea Bio), J Duggan (Boehringer Ingelheim), L Lohr (ChemoCentryx), J Pav (Daiichi Sankyo), B Dean (Genentech), T Mencken (GlaxoSmithKline), J Cao (Ignyta), E Wickremsinhe (Lilly), E Woolf (Merck), O Kavetska (Pfizer), A Musuku (Pharmascience), M Baluom (Rigel), S Ho (Sanofi) and YL Chen (Sunovion); N Savoie (GCC) for taking the minutes of the 10th GCC Closed Forum and drafting the first version of this commentary; and W Garofolo (GCC) for organizing the logistics of the meeting and coordinating the review of this commentary.

    Financial & competing interests disclosure

    The authors have no relevant affiliations or financial involvement with any organization or entity with a financial interest in or financial conflict with the subject matter or materials discussed in the manuscript. This includes employment, consultancies, honoraria, stock ownership or options, expert testimony, grants or patents received or pending, or royalties.

    No writing assistance was utilized in the production of this manuscript.

    References