White Paper

2015 White Paper on recent issues in bioanalysis: focus on new technologies and biomarkers (Part 2 – hybrid LBA/LCMS and input from regulatory agencies)

    Brad Ackermann (Eli Lilly & Company, Indianapolis, IN, USA)
    Nicola Hughes (Bioanalytical Laboratory Services, a Division of LifeLabs LP, Toronto, ON, Canada)
    Fabio Garofolo* (Angelini Pharma, Piazzale della Stazione, snc, 0040 S. Palomba Pomezia (RM), Italy)
    Lee Abberley (GlaxoSmithKline, King of Prussia, PA, USA)
    Stephen C Alley (Seattle Genetics, Bothell, WA, USA)
    Patricia Brown-Augsburger (Eli Lilly & Company, Indianapolis, IN, USA)
    Mark Bustard (Health Canada, Ottawa, ON, Canada)
    Lin-zhi Chen (Boehringer Ingelheim, Ridgefield, CT, USA)
    Julia Heinrich (Roche Innovation Center Penzberg, pRED Pharmaceutical Sciences, Germany)
    Surinder Kaur (Genentech, South San Francisco, CA, USA)
    Omar F Laterza (Merck & Co., Inc., Rahway, NJ, USA)
    Ann Lévesque (inVentiv Health Clinical, Quebec City, QC, Canada)
    Timothy Olah (Bristol-Myers Squibb, Princeton, NJ, USA)
    Susan Spitz (MedImmune, Gaithersburg, MD, USA)
    Matthew Szapacs (GlaxoSmithKline, King of Prussia, PA, USA)
    Jian Wang (Bristol-Myers Squibb, Princeton, NJ, USA)
    Jan Welink (Dutch MEB, Utrecht, The Netherlands)
    Jaap Wieling (Antaeus Biopharma, Fochteloo, The Netherlands)

    *Author for correspondence: f.garofolo@angelini.it
    Published Online: https://doi.org/10.4155/bio.15.214

    Abstract

    The 2015 9th Workshop on Recent Issues in Bioanalysis (9th WRIB) took place in Miami, Florida, with the participation of over 600 professionals from pharmaceutical and biopharmaceutical companies, biotechnology companies, contract research organizations and regulatory agencies worldwide. It was once again a 5-day, week-long event – a full immersion bioanalytical week – specifically designed to facilitate sharing, reviewing, discussing and agreeing on approaches to address the most current issues of interest in bioanalysis. The topics covered included both small and large molecules, and involved LCMS, hybrid LBA/LCMS and LBA approaches, with a focus on biomarkers and immunogenicity. This 2015 White Paper encompasses recommendations that emerged from the extensive discussions held during the workshop, and is aimed at providing the bioanalytical community with key information and practical solutions on the topics and issues addressed, in an effort to advance scientific excellence, improve quality and deliver better regulatory compliance. Due to its length, the 2015 edition of this comprehensive White Paper has been divided into three parts. Part 2 covers the recommendations for hybrid LBA/LCMS and regulatory agencies’ input. Part 1 (small molecule bioanalysis using LCMS) and Part 3 (large molecule bioanalysis using LBA, biomarkers and immunogenicity) will be published in volume 7 of Bioanalysis, issues 22 and 24, respectively.

    The 9th Workshop on Recent Issues in Bioanalysis (9th WRIB) was held in Miami, Florida, on 13–17 April 2015, with the participation of over 600 professionals from pharmaceutical and biopharmaceutical companies, biotechnology companies, contract research organizations, and regulatory agencies worldwide. The workshop included three sequential core workshop days and six specialized training sessions that together spanned an entire week in order to allow exhaustive and thorough coverage of major issues in bioanalysis, biomarkers and immunogenicity. Like the previous workshops, the 9th WRIB was specifically designed to facilitate sharing, reviewing, discussing and agreeing on approaches to address the most current issues of interest in both small and large molecule bioanalysis using LCMS, hybrid LBA/LCMS and LBA approaches. An in-depth focus was placed on biomarkers, immunogenicity and emerging technologies.

    The actively contributing chairs in the 2015 edition of the WRIB were Dr E Fluhler (Pfizer), Dr J Welink (EMA/Dutch MEB), Dr B Ackermann (Eli Lilly), Dr F Garofolo (Angelini Pharma), Dr A Song (Genentech), Dr T Thway (Amgen), Dr L Amaravadi (Biogen Idec) and Dr H Myler (Bristol-Myers Squibb).

    The numerous regulatory agency representatives who contributed to the 9th WRIB included Dr S Haidar (US FDA), Dr S Kirshner (US FDA), Dr B Booth (US FDA), Dr M Skelly (US FDA), Dr N Tampal (US FDA), Dr J Welink (EMA/Dutch MEB), Dr O Le Blaye (France ANSM), Ms E Whale (UK MHRA), Mr S Vinter (UK MHRA), Dr B Witte (German BfArM), Dr M Bustard (Health Canada), Mr G Mendes Lima Santos (Brazil ANVISA) and Dr N Katori (Japan MHLW-NIHS).

    Each of the three sequential core workshop days was designed to cover a wide range of bioanalytical topics suggested by members of the community, and included presentations from industry leaders and regulatory representatives, culminating in an open panel discussion between the presenters, regulators and attendees in order to determine the consensus items presented in this White Paper.

    As with prior WRIB editions [1–9], a significant number of topics were addressed during the workshop and condensed into a series of relevant recommendations. In the present White Paper, the exchanges, consensus and resulting recommendations on 34 recent issues (‘hot’ topics) in bioanalysis, biomarkers and immunogenicity are presented. These 34 topics are distributed within the following areas:

    • Small molecules by LCMS:

    • Innovations in small molecule bioanalysis (six topics);

    • Regulatory challenges in small molecule bioanalysis (six topics);

    • Hybrid LBA/LCMS (i.e., IA-LCMS):

    • Innovative method development for biotherapeutics, biomarkers and ADA (five topics);

    • Regulatory challenges (three topics);

    • Large molecules by LBA, biomarkers and immunogenicity:

    • LBA bioanalytical challenges (four topics);

    • Biomarkers (three topics);

    • Immunogenicity (seven topics);

    • Input from regulatory agencies:

    • Following the recommendations on the above topics, an additional section of this White Paper specifically focuses on several key inputs from regulatory agencies.

    Due to its length, the 2015 edition of this comprehensive White Paper has been divided into three parts for editorial reasons. This publication (Part 2) covers the recommendations for hybrid LBA/LCMS and regulatory agencies’ input. Part 1 (small molecule bioanalysis using LCMS) and Part 3 (large molecule bioanalysis using LBA, biomarkers and immunogenicity) will also be published in volume 7 of Bioanalysis, issues 22 and 24, respectively.

    Discussion topics

    Hybrid LBA/LCMS: innovative method development for biotherapeutics, biomarkers & ADA

    Quantification of cytokines, chemokines & growth factors by immunoaffinity (IA)-LCMS

    Are we capable of intact protein biomarker measurements by LCMS (i.e., without digestion)? What are the reagent challenges and opportunities for IA-LCMS for biomarkers? Are we taking advantage of the multiplexed quantification opportunities to obtain more information? How appropriate are alternative quantification methods for measurements without a standard curve by peak area ratios and reverse response curves? Can IA-LCMS measure cytokines, chemokines and growth factors with high specificity and sensitivity, thus avoiding cross-reactivity? Can LCMS techniques provide opportunities to characterize different sequence regions of the cytokine? Is LCMS able to analytically differentiate between closely related forms of the proteins (e.g., splice variants, pro form vs mature protein) and to understand the effect of soluble binding proteins?

    Immunogenicity by hybrid LBA/LCMS

    Does determination of the absolute amount of ADA provide any additional value in comparison to a traditional ADA immunoassay approach? How often are problems encountered with conventional ADA assays where false-positive or false-negative results render ADA data inconclusive? Do you consider using LCMS as an alternate platform for ADA analysis? Can LCMS be used to determine ADA in the presence of high circulating concentrations of drug to overcome the issue of drug tolerance? Can LCMS be used for immunoglobulin allotyping/isotyping by specific immunocapture: IgG, IgM and other immunoglobulin subclasses and/or by specific signature tryptic peptides as a surrogate for the measurement of intact ADA?

    Understanding the biotransformation of biotherapeutics by hybrid LBA/LCMS

    What types of large molecule biotransformation have you investigated using either LCMS or LBA? What types of large molecules have you investigated for biotransformation using either LCMS or LBA? Has the number of requests for the investigation of biotransformation of large molecules increased? What platforms (LCMS and/or LBA) have you used to investigate biotransformation?

    Using LBA knowledge of critical reagents to develop efficient IA enrichment for LCMS

    Given the increase in sensitivity of recent mass spectrometry methods, the quantification of low-abundance peptides in complex biological matrices is now possible. Should mass spectrometry methods become the standard for the quantification of biological peptides? What are the pros and cons (i.e., cost, mass spectrometry assays being too limiting and not detecting biologically active peptides)? Can approaches based on existing LBA workflows be used in LCMS? When? How can the capacity and dynamic range of LBA workflows used for LCMS enrichment be increased? Are the conversion and optimization of commercial LBA reagents for IA enrichment possible? How are binding interferences in IA enrichment due to circulating receptors/soluble targets, ligands, non-specific binding and ADA overcome?

    A new insight: does the use of the ‘Universal Peptide’ approach for the analysis of Fc-containing biotherapeutics in preclinical studies really work?

    Propose a risk/benefit discussion on the ‘Universal Surrogate Peptide’ strategy for the quantification of human antibody Fc region-containing therapeutic protein candidates in biological matrix samples from all nonclinical species. Does its cost–effectiveness in early candidate selection outweigh the higher quality analysis that is gained from a multiple peptide bioanalytical method? How will the data generated by the ‘Universal Peptide’ method be translated to clinical trials, which will need to be conducted using a new alternative method, since the ‘Universal Peptide’ would not be an appropriate method for the analysis of human samples?

    Hybrid LBA/LCMS: regulatory challenges

    Unresolved issues on validation (BMV) of hybrid LBA/LCMS methods for biotherapeutics from the 2014 White Paper in Bioanalysis

    There are no regulatory guidelines regarding hybrid LBA/LCMS methods, and consensus was not reached on several issues. Quantification of multiple signature peptides: Why, when and how should they be used? Validate all or just some? How will you use the results? Should ‘monitoring’ signature peptides be validated in the same way as the ‘primary’ signature peptide? If yes, how should the multiple data sets generated from the same molecule in validation, which yield multiple PK measurements, be handled? If both LBA and LCMS were used in a program to provide pivotal data, should cross-validation between LBA/LCMS always be performed for regulatory submissions? What is done if LCMS results do not agree with LBA results? What is done if LCMS and LBA are measuring different analyte species? How can it be ensured in validation that the LCMS assays are not impacted by competitive binding (circulating ligands or ADA) that may be present in the matrix at different levels and/or times during a study, including the impact of analyte species on binding? Should QCs be fortified with available potential binding molecules (ADA, shed/soluble target, and circulating ligand)? Should parallelism recommendations also be applied to LCMS assays? Are the 2014 White Paper Recommendations still valid after a year of progress? What is the regulators’ current experience with hybrid LBA/LCMS methods/studies?

    Using LCMS and/or LBA: ‘When? How? Why?’

    Do you need to cross validate between large molecule methods/platforms where there is ELISA versus straight digestion (i.e., with no immunocapture)? How? Which validation parameters should be evaluated as part of method development? When does method development just become validation and where are method development experiments being captured? Are these suitable for regulators? Do you select an assay platform by construct (i.e., are certain molecule types always done by LBA)? If not, how do you select what assay platform to use? Do you ever use both LBA and LCMS during method development? Do you ever use both LBA and LCMS during sample analysis?

    Common LBA community concerns on analysis of biotherapeutics by hybrid LBA/LCMS: ‘Let's discuss together!’

    Do(es) the signature peptide(s) accurately represent the biotherapeutic since LBA has potentially better prediction of bioactivity? Do specificity and selectivity for LBA versus LCMS have different meanings? How can we harmonize between the two groups? LCMS scientists believe that the sentence in the 2013 FDA Draft Guidance [10] discussing comparison of LBA methods to a validated reference method (such as LCMS) should remain in the FDA Guidance, whereas LBA scientists want to remove it. What are the pros and cons? Is LCMS really able to overcome LBA issues as a complementary technique (e.g., lack of specificity/cross-reactivity; matrix interference in diseased population)? Should LCMS be used as a complementary technique to LBA to evaluate the impact of ADA on PK assays? Are wide linear ranges using hybrid LBA/LCMS really an advantage over LBA (LBAs often have a three- to four-fold linear range which is able to accommodate the full PK profile)? IA faces the same challenges as LBA: interaction of binding reagents to the biotherapeutic. How are these challenges addressed in LCMS assays? How are reagents selected in IA for LCMS? Should LCMS assays demonstrate as much reagent characterization, selectivity and specificity testing as in LBAs? Should LCMS assays using IA enrichment consider critical reagent sourcing, resupply strategies, and long-term management? What would be a good example where a LCMS assay (including hybrid LBA/LCMS versions) would be advantageous over a LBA? Future perspective on LCMS: what sensitivity is reachable at the moment – where will the technology be two years from now? Which instruments are currently the best to use for hybrid LBA/LCMS assays for which sensitivity is key (and which have the potential for high throughput and outsourcing)? What other technologies (apart from LCMS and LBA) do we see emerging for large molecule bioanalysis?

    Discussions, consensus & conclusions

    Hybrid LBA/LCMS: innovative method development for biotherapeutics, biomarkers & ADA

    Quantification of cytokines, chemokines & growth factors by IA-LCMS

    Cytokines, chemokines, and growth factors represent an important class of proteins responsible for cell growth, cell proliferation, and differentiation/maturation. Many of these proteins are used as biomarkers and, importantly, can be targets for therapeutic intervention. Commercial LBA kits are often employed for measuring cytokines, chemokines, and growth factors in biological matrices. However, antibody cross-reactivity between cytokines and interfering factors in serum from patients with autoimmune and inflammatory conditions remain key challenges complicating LBA analysis.

    The first part of the discussion focused on an area insufficiently investigated to date, namely, the ability to analytically differentiate between closely related forms of the proteins (splice variants, pro form vs mature [11]). There is also the need to understand the effect of soluble binding proteins, such as shed receptors that can modulate the activity of a cytokine/growth factor, or affect the assay's ability to measure the protein with confidence. In response to this need, there was common agreement that IA-LCMS can measure endogenous protein biomarkers and targets with high specificity and sensitivity while avoiding cross-reactivity. Indeed, there was consensus that LCMS provides unique opportunities to characterize different sequence regions of the cytokine/growth factors and to understand the ‘microenvironment’ of the target protein using co-immunoprecipitation, i.e., the effect of binding partners on the measurement. It was agreed that although LCMS opens new doors for these types of analyses, there are still challenges to overcome. Assay sensitivity still remains a challenge, although examples with low pg/ml or sub-pg/ml for these analytes are being reported [12]. In general, large peptides and small proteins (<10 kDa) can be analyzed intact with low to mid pg/ml sensitivity, although large sample volumes (e.g., >500 μl) are still needed and these methods are still several fold less sensitive compared with digestion, even when using nano-LC with HRMS. In general for the measurement of protein biomarkers, it was recommended that the active molecular species of the intact target peptide be carefully selected with consideration given to the presence of post-translational modifications or analytical artifacts, such as oxidation of intact proteins (e.g., methionine), since these forms have different masses. It was agreed that although these challenges are present, they are worth addressing since the ability to assay intact protein has the potential to become the key method for peptides and small proteins owing to several advantages. First, sample preparation is more efficient since a digestion step is not required. Furthermore, avoiding digestion prevents a potential loss of information; for example, when active and inactive species of a protein biomarker differ only by one or a few amino acids and an appropriate digestion strategy to yield all peptide regions of interest may not be immediately available [13].

    It was highlighted that reagent needs in hybrid LBA/LCMS assays are different than those needed for typical LBAs. While high affinity (low Kd) capture reagents are needed for both methods, LCMS can make use of capture antibodies with lower specificity. There was consensus that there are multiple examples where ‘bad’ reagents for LBA become ‘good’ reagents for IA-LCMS methods. In many cases, IA-LCMS methods make use of this distinction to measure several protein isoforms in a single assay. Despite this difference, it was noted that there is a limitation to the specificity provided by MS detection and that the quality of the capture antibody is critical for assay sensitivity and for limiting interferences. Finally, it was mentioned that SIL protein IS are ‘nice to have’, but are not considered essential.

    There was agreement that more information about the protein analytes can be obtained by realizing multiplexed quantification opportunities, but it is more useful in early stages (discovery) rather than in later stages of drug discovery and development. In later stages, the additional costs associated with multiplexed assays are considerable and there is additional risk for assay failure. For protein biotherapeutics, the ability to monitor multiple domains is quite useful and in some cases necessary. While it is permissible (and often advisable) to have a single signature peptide for quantification, monitoring peptides for quantitative confirmation or providing qualitative information (e.g., biotransformation) are often used. As specified by agency representatives, the purpose of using each monitoring peptide should be clearly defined, as the intended use of the data will define the level of validation needed. The ability to follow multiple domains of a protein biomarker was a particular case of note, and it can be expected that, in some cases, quantitative data from different peptides may yield different results, each supplying biologically relevant information. In this case, the peptides should undergo similar validation depending on how the data are to be used.

    The application of alternative methods for quantification without the use of a standard curve, for example by peak area ratios to a SIL peptide and reverse response curves, was discussed. It was concluded that these techniques may not be suitable for biotherapeutics, but may be useful for biomarkers in cases where no appropriate standards are available. Moreover, biomarker quantification is often relative in nature (with the goal to determine the levels of relative increase or decrease) and there may be cases where alternative methods need to be considered. In LCMS, the use of single-point quantification/reverse response curves can be appropriate due to the linearity of the method and could also be used in fit-for-purpose and/or screening assays. Finally, it was concluded that more case studies and comparisons are needed as the IA-LCMS technology continues to impact biomarker investigations and it will be important to understand how these data are treated by the regulatory agencies.
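
    As a simple illustration of the peak area ratio approach mentioned above, the sketch below estimates a concentration from the analyte/SIL peak area ratio and the known SIL spike level. The function name and values are illustrative assumptions rather than workshop material, and the calculation presumes an approximately 1:1 response between the analyte and its SIL analog, which is one reason such approaches are generally reserved for fit-for-purpose or relative biomarker measurements.

        # Minimal sketch: single-point quantification by peak area ratio to a SIL peptide.
        # Names and numbers are illustrative assumptions, not values from the workshop.

        def single_point_concentration(analyte_area: float,
                                       sil_area: float,
                                       sil_spike_conc: float) -> float:
            """Estimate analyte concentration from the analyte/SIL peak area ratio,
            assuming an ~1:1 MS response between the analyte peptide and its
            stable-isotope-labeled (SIL) analog and a linear response through zero."""
            if sil_area <= 0:
                raise ValueError("SIL peak area must be positive")
            return (analyte_area / sil_area) * sil_spike_conc

        # Example: SIL peptide spiked at 10 ng/ml into every sample.
        print(single_point_concentration(analyte_area=5.2e5, sil_area=2.6e5,
                                         sil_spike_conc=10.0))  # ~20 ng/ml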

    Immunogenicity by hybrid LBA/LCMS

    ADA may result from exposure of animals or humans to a therapeutic agent, and can reduce or eliminate drug efficacy, even causing adverse reactions in some cases. Therefore, it is critical to monitor and analyze ADA in biotherapeutic development. However, analyte binding to ADA can also interfere with PK assays. Traditionally, ADA screening, confirmatory, titering and neutralizing assays are used to measure ADA responses. The discussion focused on the use of LCMS as an orthogonal tool for ADA determination. ADA or drug can be used as an immunocapture reagent for the LCMS assay. It was agreed that LCMS assays provide absolute amounts of ADAs; although they can be more quantitative than a titer approach, the determination is still relative because an ADA standard is not available. However, using a surrogate calibration standard such as IgG in buffer solution to prepare a calibration curve, LCMS assays can yield semi-quantitative estimates of ADA amounts, which can then be used to compare ADA levels between batches in the same study and between studies. Similar to traditional ADA assays, assay cut-points could be established for LCMS assays. Additional experience and discussions are needed to determine if the signal-to-noise ratio can be used instead of cut-points for these assays, or if separate screening and confirmation assays, similar to traditional ADA assays, are still needed. It was also discussed that in situations when an ADA assay based on LBA suffers from interference by circulating target or polyreactive proteins, a LCMS-based ADA assay can overcome this interference, resulting in confidence that an ADA signal is real. It was also mentioned that LCMS has been used in preclinical toxicology assessments and demonstrated an advantage for measuring ADA in the presence of high circulating concentrations of drug, thereby overcoming the issue of drug interference impacting LBAs. Cases where hybrid LBA/LCMS approaches have been used preclinically have been shown to be drug tolerant [14]. It was concluded that data show LCMS can be used for immunoglobulin allotyping/isotyping by specific immunocapture of IgG, IgM and other immunoglobulin subclasses and/or by specific signature tryptic peptides instead of intact ADA [15]. The significance of these findings awaits further study and additional examples will be useful to assess the utility of the LCMS tool for ADA measurement.
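
    Where an assay cut-point is pursued for an LCMS-based ADA screen, one convention commonly used for LBA screening assays, shown here purely as an illustrative assumption rather than a workshop recommendation, is a parametric cut-point derived from drug-naive samples. The sketch below uses made-up signal values, assumes approximately normal responses and omits outlier and batch-effect handling.

        # Minimal sketch of a parametric screening cut-point from drug-naive samples.
        # The 1.645*SD convention (~5% false-positive target) is a common industry
        # practice borrowed from LBA ADA assays, not a criterion set by the workshop.
        import statistics

        def screening_cut_point(naive_signals):
            """Mean + 1.645 * SD of drug-naive sample signals (normality assumed)."""
            return statistics.mean(naive_signals) + 1.645 * statistics.stdev(naive_signals)

        naive = [1020, 985, 1110, 950, 1075, 1005, 990, 1060]  # e.g., LCMS peak areas
        print(round(screening_cut_point(naive)))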

    Understanding the biotransformation of biotherapeutics by hybrid LBA/LCMS

    Biotherapeutic proteins have complex structures and the final drug product is often a mixture of various forms of a molecule due to cellular processing during drug manufacturing. In addition, biotransformation of the drug upon dosing can impact the activity of the molecule in vivo by altering the clearance, target binding or function of the molecule. In general, LBA techniques allow the quantification of one specific form of the drug, depending on the binding epitope, and different LBA assays are needed to measure different forms. Depending on the assay format and reagent binding epitopes, LBA may or may not measure the in vivo modified form(s) of the target proteins or peptides. The discussion focused on alternative approaches to LBA such as LCMS. It was agreed that LCMS provides a versatile platform offering a more complete picture of the various proteoforms of a given protein therapeutic or protein biomarker. This information can be used to provide a more complete understanding of the PK and catabolism of the various molecular variants (e.g., a parent compound, deamidated, oxidized and glycosylated forms) as well as an understanding of the protein biomarker isoforms present in patient populations [16,17]. Some examples of large molecule biotransformation, including in vivo de-PEGylation, deamidation, oxidation, glycation and proteolysis using either LCMS or LBA were discussed. Investigations of bi-specific Ab, ADC, fusion proteins, peptides and conjugated peptides were also possible. It was recommended that monitoring biotransformation in the CDRs of mAbs is critical since biotransformation in the CDRs can compromise target binding, which could impact drug efficacy as well as LBA PK assay measurements. LCMS investigation of in vivo biotransformation can be performed on intact molecules or on peptides following enzymatic digestion or chemical breakdown of the intact molecules. In either case, isolation and purification of analytes via immunopurification from biological matrices such as serum is likely needed. The unanimous consensus was that requests for investigation of the biotransformation of large molecules are increasing, especially when unexpected PK results are observed. Moreover, regulatory agencies appear to be requesting more characterization of biotherapeutic biotransformations. The increased demand may also be attributed to the trend toward novel protein constructs (e.g., bi-specific antibodies or fusion proteins). Both LCMS and LBA techniques are routinely used to understand biotherapeutic biotransformation in a complementary manner. It was concluded that the structural elucidation capability of LCMS is essential for defining protein biotransformation which can assist the understanding of activity.

    Using LBA knowledge of critical reagents to develop efficient IA enrichment for LCMS

    It was stated that, given the increase in sensitivity of recent LCMS methods, the quantification of low-abundance peptides in complex biological matrices is now possible. Since it is known that biological peptides may be difficult to measure reliably by LBA due to assay interferences (e.g., glucagon LBA methods are known to be nonspecific [18–21]), it was concluded that LCMS methods offer a unique opportunity in this field to become the standard for the quantification of biological peptides. However, caution should be exercised when comparing LBA and LCMS data for bioactive peptides as the two platforms may show differences. It was recommended that the appropriate assay, such as LBA, LCMS or a hybrid LBA/LCMS, should be selected based on the information sought. For example, if a potency assay is needed, then a LBA method may be preferred. LCMS may be selected, as discussed above, to investigate biotransformation. Hybrid LBA/LCMS can be selected to verify the performance of a peptide or protein LBA assay. Factors such as sensitivity, cost and sample volume should be considered when selecting the bioanalytical platform. It was agreed that each platform has pros and cons that should be carefully considered. During the discussion, attendees were very concerned with how agencies might review and consider new LCMS data on bioactive peptides that differ from historical data generated using a LBA. Although there is a growing list of published examples of the application of immunopurification LCMS for biologics [22,23], a review of recent FDA approvals showed that, at least through 2013, bioanalysis of the majority of biotherapeutic products in regulatory submissions to FDA was supported by non-LCMS-based assays. Therefore, regulatory experience with LCMS assays for biotherapeutics is emerging but is currently still in its infancy.

    It was agreed that existing LBA workflows can be utilized in hybrid LBA/LCMS method development. ELISA tools, expertise and pre-existing knowledge of the proteins and associated antibody reagents are all useful considerations in the development of robust IA-LCMS methods. LBA-based extractions with binding target or receptors may also provide potential approaches for affinity extraction. Commercial LBA reagents can often be used, as can plate-based immunocapture, depending on the required sample volume. In many cases, the same capture reagents used in LBA plate-based assays can be applied to bead-based or other affinity column workflows used with LCMS. All these are simply examples of how experience with LBA methods can aid in the development of hybrid LBA/LCMS assays. One possible disadvantage in the application of LBA methodology comes from the potential for binding interferences due to circulating receptors/soluble target molecules, ligands, nonspecific binding and ADA. However, in hybrid LBA/LCMS, aggressive sample treatment prior to IA (such as acidification, or extraction with organic, high-salt or detergent solutions) is used only for sample clean-up, and because LCMS measures unique peptide(s) following enzymatic digestion or chemical breakdown of both ‘free’ drug and any drug complexes, these interferences may no longer be present. In addition, as mentioned above, there is a greater degree of flexibility with IA-LCMS to avoid binding interferences using a more aggressive pre-treatment step beyond the traditional acid dissociation used with LBA. High salt or other buffer conditions can also reduce the interference from binding partners, although sometimes it is important to retain binding partners to study the associated biology. Overall, it was concluded that previous knowledge from LBA may be utilized to generate the optimum LCMS method.

    A new insight: does the use of the ‘universal peptide’ approach for the analysis of Fc-containing biotherapeutics in preclinical studies really work?

    The term ‘universal signature peptide’ was first coined in a 2012 publication [24,33] describing a tryptic peptide in the Fc region of most human antibody Fc-containing therapeutic protein candidates. Since this peptide was not found in many animal species, the proposed strategy was to develop a generic LCMS method based on detection of this peptide that was capable of supporting the bioanalysis of diverse human Fc region-containing therapeutic protein candidates in plasma/serum samples in preclinical studies. The notion was that a generic bioanalytical approach capable of quantifying these structural classes in commonly used animal species would be of great value, particularly during early drug discovery wherein multiple structural variants of a drug candidate may need to be evaluated in various animal species to enable prioritization for further development. The ‘universal peptide’ identified was VVSVLTVLHQDWLNGK. Other universal peptides were also identified, which could be more sensitive depending on the drug constructs. However, over the past few years, as the development of LCMS-based methods for protein quantification has rapidly evolved, the utility of, and enthusiasm for, a generic method using a single peptide for preclinical evaluation of human-derived therapeutic proteins have waned overall. Nevertheless, some companies still use this as their primary nonclinical assay approach. Method development strategies employing in silico protein characterization, coupled with LCMS software developed from proteomics for MS parameters, added to improvements in multiple component detection by LCMS, have provided bioanalysts with the tools to quickly develop highly sensitive and complex methods for quantitative analysis of therapeutic proteins using multiple peptides from different regions of the molecule. The characteristics of ‘universal signature peptides’ include commonality, selectivity and sensitivity. Sequences are generated by trypsin digestion of the human Fc region of mAbs or fusion proteins. These are favorable for LCMS ionization, fragmentation and chromatography. Conversely, ‘specific signature peptides’ have a unique sequence without interference from biological matrix, no chemically reactive residues, no unstable sequences and no sequences which generate incomplete digestion. Sequences typically between 6 and 15 amino acids in length can be identified in the CDR or variable regions of biotherapeutic IgGs.
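
    As a minimal illustration of the in silico screening described above, the sketch below performs a simple tryptic cleavage and applies typical signature-peptide heuristics (length, reactive residues, deamidation-prone motifs). The cleavage rule, filters and toy sequence are generic proteomics conventions chosen for illustration; they are not the specific software or selection criteria discussed at the workshop.

        # Minimal sketch of in silico signature peptide screening; rules and sequence
        # are illustrative proteomics heuristics, not the workshop's method.
        import re

        def tryptic_peptides(sequence: str):
            """Cleave after K or R, except when followed by P (standard trypsin rule)."""
            return [p for p in re.split(r'(?<=[KR])(?!P)', sequence) if p]

        def candidate_signature_peptides(sequence: str):
            """Keep tryptic peptides of 6-15 residues without oxidation-prone Met/Cys
            or deamidation-prone NG/DG motifs."""
            candidates = []
            for pep in tryptic_peptides(sequence):
                if not 6 <= len(pep) <= 15:
                    continue
                if 'M' in pep or 'C' in pep:      # chemically reactive residues
                    continue
                if 'NG' in pep or 'DG' in pep:    # deamidation-prone motifs
                    continue
                candidates.append(pep)
            return candidates

        # Toy sequence: only the 12-residue middle peptide passes the filters.
        print(candidate_signature_peptides("MKTAYR" + "GPSVFPLAPSSK" + "AEDTAVYYCAR"))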

    Recently, a SIL IgG has become commercially available which can be used as an IS for the ‘universal peptide’ assay. This IS can be added to test samples at the very beginning of sample preparation to minimize variability of digestion among samples. The IS has multiple labels in different positions along its amino acid sequence, and hence offers the flexibility of using different peptides as the quantification peptides.

    Additionally, it was agreed that the generic methods developed for preclinical evaluations using the ‘universal peptide’ approach could not be used in clinical studies due to the presence of endogenous human Fc peptide; therefore, a new alternative method must be developed for the clinical phase of a program using a more specific ‘signature peptide’. Consequently, the need to develop a strategy for bridging preclinical and clinical results in such cases was identified. However, the same is true for LBA; a generic format can be used for preclinical studies and a specific format for clinical studies. In some cases, it is possible to use framework peptides specific to the drug to bridge preclinical and clinical studies. Finally, it was acknowledged that the ‘universal peptide’ approach has financial advantages for outsourcing due to the commercial availability of both the peptide and its SIL analog. In addition, many molecules in nonclinical studies fail to enter clinical development, and thus assay costs can be limited by using a readily available ‘universal’ assay.

    The discussion focused on the fact that the use of these approaches is not mutually exclusive. Some companies have switched from general ELISA assays to ‘universal peptide’ methods for nonclinical studies due to their advantages, such as cost–effectiveness in early candidate selection. However, other companies prefer to use specific peptides to understand as much as possible about the drug candidate. ‘Universal peptide’ assays are a good tool to use together with the information obtained from multiple peptides. Access to both approaches was deemed important.

    Hybrid LBA/LCMS: regulatory challenges

    Unresolved issues on validation (BMV) of hybrid LBA/LCMS methods for biotherapeutics from the 2014 White Paper in Bioanalysis

    There is no regulatory guidance available for the validation of hybrid LBA/LCMS methods for biotherapeutics, and as mentioned in the previous paragraphs, regulatory experience is still limited in spite of the growing number of publications on this topic. When questioned, the regulators in attendance indicated that 10–12 ADC products have been submitted to FDA that utilize hybrid LBA/LCMS assays and they have not reviewed submissions of any other biotherapeutics using hybrid LBA/LCMS techniques. It seems that ADCs have adopted this technique faster than others. Efforts were made last year to discuss and provide recommendations to industry on issues using these types of methods. One year later, the recommendations provided in the 2014 White Paper in Bioanalysis Part 2 [8] are still valid; however, several issues remained unresolved. These were brought back to the 9th WRIB for further discussion after a year of additional experience aiming to reach consensus.

    Quantification of multiple signature peptides: Typically during method development, 3–4 signature peptides, with MRM transitions, may be monitored until one can be selected as the most appropriate for the molecule of interest. Some companies are monitoring more than one signature peptide during assay development, validation and sample analysis to verify that the digestion was properly executed and to obtain additional information about the integrity of the molecule. However, an appropriate IS molecule can serve the same function. An example of a validated nonclinical method was discussed, where it was considered appropriate to monitor both N- and C-terminal signature peptides, as both domains of the molecule were deemed necessary for biological function. It was also noted that for ADCs, multiple signature peptides may need to be monitored due to the ADC complexity. In the particular case of ADCs randomly conjugated at Lys, signature peptides not containing the Lys residue are preferred.

    Cross-validation of LBA and LCMS pivotal data: It was concluded that cross-validation would only be required when both LBA and LCMS platforms are used within a program. If the same assay platform is used throughout a program, no cross-validation is required. In case of a platform change, it is appropriate to evaluate the comparability of the methods. In some instances, it may indeed be demonstrated that the methods generate different results. In this case, the differences in quantification should be explained, and thus, quantitative differences do not necessarily stop transition to a more suitable bioanalytical platform. When cross-validation is appropriate and possible, an understanding of the analyte species being quantified by both platforms is essential for comparing the results. Comparability is more likely to be demonstrated if the same capture reagent is used; however, discrepancies can also arise from the detection step (secondary Ab or MS selectivity issues). It should be cautioned that the two assays may not measure exactly the same species. If completely different capture reagents or a direct digestion preparation (i.e., no immunocapture) is used, then cross-validation is likely to be more difficult because the same target analyte is not being measured. Moreover, the ability to perform a cross-validation between ELISA and direct digestion methods/platforms was discussed. It was concluded that since different target analytes are assayed in each method/platform, the demonstration of analytical comparability is somewhat unlikely.
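
    When a cross-validation is performed, one simple way to summarize paired incurred-sample results from the two platforms is the per-sample percent difference relative to the mean of the pair, as sketched below. The values and the summary are illustrative assumptions only; any acceptance criterion would come from the applicable SOP or guidance rather than from this White Paper.

        # Minimal sketch: per-sample percent difference between paired LBA and
        # hybrid LBA/LCMS results on the same incurred samples (illustrative values).

        def percent_differences(lba, lcms):
            """% difference of each paired result relative to the mean of the pair."""
            return [200.0 * (a - b) / (a + b) for a, b in zip(lba, lcms)]

        lba_results = [10.2, 25.7, 48.9, 103.0]   # ng/ml
        lcms_results = [9.5, 28.1, 44.2, 118.0]   # ng/ml, same incurred samples

        diffs = percent_differences(lba_results, lcms_results)
        mean_bias = sum(diffs) / len(diffs)
        print([round(d, 1) for d in diffs], round(mean_bias, 1))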

    Competitive binding: To ensure that the validation of a LCMS assay is not impacted by competitive binding of circulating species such as ligands, shed/soluble target or pre-existing ADA that may be present in the matrix at different concentrations, particularly in disease state, and/or in a temporal manner during the course of the study, an interference check should be performed. Interference checks should be performed for hybrid LBA/LCMS assays if/when the necessary reagents are available as an assay performance QC check. The influence of known binding partners should be assessed. Acid pre-treatment can also be used to estimate the magnitude of the effect.

    Fortification of QC samples with potential binding molecules: If there is a possibility that the assay may be impacted by the presence of potential binding molecules (ADA, shed/soluble target and circulating ligands), then such molecules should be included in the assay performance QC.

    Parallelism in LCMS: It was agreed that for PK assays, parallelism is not needed unless sample dilution is involved. Nonetheless, parallelism studies are essential and highly recommended for biomarker assays involving the use of a surrogate matrix.

    Using LCMS and/or LBA: ‘When? How? Why?’

    It was stated that characterizing the PK and pharmacodynamics of a biotherapeutic is required to gain an understanding of the safety profile and pharmacology of the drug during development. LBAs have traditionally been used to quantify large molecule biotherapeutics and biomarkers; however, improvements in LCMS instrumentation have led to the growth of this technique as a viable alternative to LBA methods. It was agreed that one of the distinct advantages of the LCMS methodology is that it is less susceptible to matrix interference than LBA. Another advantage is multiplexing to measure therapeutic target and ligands simultaneously. Method development was discussed at length, both the selection of the assay platform as well as the overlap and redundancies between method development and method validation. There was consensus that assay platform selection is dependent upon the question to be answered by the experimental design and subsequent assay data (i.e., free or total analyte). If using multiple platforms is appropriate, then selection should be based on practical considerations such as reagent availability and supply. It is essential from the scientific and regulatory perspective that the critical assay development steps, data and decisions leading to assay validation, parameters and criteria are adequately documented, allowing for full reconstruction of the assay. Good science should be used to identify the critical steps to include in validation, which might be beyond the basic industry recommendations. Potential impacts on assay performance can arise due to variability among different lots of reagents and different sample sources (patients vs healthy subjects, male vs female, elderly vs juvenile, Asian vs Caucasian, etc.), and should be considered in method validation.

    Common LBA community concerns on analysis of biotherapeutics by hybrid LBA/LCMS: ‘Let's discuss together!’

    Before starting the discussion, it was stated and agreed that the ‘bioanalytical question’ should drive the selection of the most appropriate technology. While LBAs are able to answer many biological questions, such as the integrity and biological activity of the analyte, conventional LCMS assays (without immunoextraction) can often be developed faster since these assays can be developed without the generation of specific biologic reagents. It was agreed that hybrid LBA/LCMS methods offer the possibility to combine the strengths of both technologies for the quantification of selected isoforms of a biologic. Although presently LBAs still remain the most common platform for large molecule PK assays, hybrid LBA/LCMS assays are becoming more widely used as a routine PK assay platform and to address specific questions in both nonclinical and clinical studies.

    Specificity and selectivity experiments are fundamental to bioanalytical method validation, and the two terms are often confused or used interchangeably by LBA and LCMS scientists. A way to harmonize the meanings of these validation parameters between LBA and LCMS is desired. However, the definitions differ between the two groups because the objectives of the assays are fundamentally different. Much less matrix interference and cross-reactivity are associated with the hybrid LBA/LCMS assay, and the impact of potential immunopurification and digestion variability among samples is less significant. It was recommended that the wider LBA acceptance criteria can be applied for accepting hybrid LBA/LCMS data because the assay can only be as good as its least precise step.

    One statement included in the 2013 FDA Draft Guidance [10] has arguably generated the most debate in the bioanalytical community. LCMS scientists believe that the sentence ‘When possible, the LBA should be compared with a validated reference method (such as LCMS) using incurred samples and predetermined criteria to assess the accuracy of the method’ should remain in the Guidance, whereas LBA scientists want to remove it. The difference in opinions arises from the interpretation that LCMS methods are the ‘gold standard’ to which LBA methods should aspire. Obviously, this is not the case and was likely not the intent of the statement. It was agreed and recommended that the sentence should be rephrased in order to clarify in which cases this would be possible, applicable and necessary. In fact, LCMS methods can typically be developed quickly, with fewer specificity, cross-reactivity or matrix interference issues in diseased populations. These issues, however, may affect LBA methods; therefore, bioanalysts often consider LCMS as an alternative to overcome them during method development. Neither methodology is free from bioanalytical issues; the most appropriate assay platform should be chosen. An example where hybrid LBA/LCMS was selected over LBA due to a lipemic serum issue was discussed. Indeed, lipids associating with plasma components cause abnormal morphology of the alpha-2-globulin in a dose-dependent manner, which significantly interferes with LBA by blocking the binding sites on antibodies [25].

    Due to the large number of LCMS instrument options, there was no consensus on the best instruments to use for hybrid LBA/LCMS assays. However, sensitivity, selectivity and throughput were identified as key attributes. For therapeutic antibodies, hybrid LBA/LCMS assays have shown sensitivity (LLOQ) of approximately 0.1–10 ng/ml, whereas for mid-sized proteins it was possible to reach 20 pg/ml, and lower LLOQs in the single-digit pg/ml range have also been demonstrated. With hybrid LBA/LCMS, greater amounts of capture reagent and magnetic beads are used, which makes this approach more costly than LCMS alone (i.e., without the IA step), especially for clinical programs. While assay execution costs are expected to decrease with emerging precedent, the added data confidence due to the assay’s high measurement specificity may warrant the additional cost. Other technologies (apart from LCMS and LBA) for large molecule bioanalysis, such as capillary electrophoresis and micellar electrokinetic chromatography, may provide the analytical scientist with even more options for assay design, although a sensitivity gap still exists for these techniques.

    LBA PK assays can be impacted by ADAs. Hence, LCMS assays can be complementary to LBA because non-IA-LCMS ‘reagent-free’ assays (i.e., LCMS methods which do not use a LBA cleanup/enrichment step) measure total drug, which can be compared with LBA results to determine if the clearance is ADA-mediated. On the other hand, in many cases, ADA does not impact IA-LCMS assays when an appropriate capture reagent is used. For example, pre-existing ADA that interferes with LBA may not interfere with IA-LCMS assays at all. Another characteristic of hybrid LBA/LCMS assays is their wider linear ranges. Again, immunocapture is used for sample cleanup, and a wide linear range can be achieved as long as sufficient amounts of capture reagent and beads are used. However, an IA-LCMS assay with a large linear range consumes more reagents and beads which, as mentioned above, can be expensive, and therefore the right balance between linear range and cost should be considered. IA may face the same challenges as LBA regarding the interaction of various binding partners with the biotherapeutic. However, as already mentioned in this White Paper, IA is used only as a cleanup/enrichment step for the hybrid LBA/LCMS method and thus provides more flexibility when dealing with such interferences. Extensive discussion occurred on how to select IA reagents for hybrid LBA/LCMS, and on whether the same level of reagent characterization, selectivity and specificity as for LBA is required. Based on this discussion, it was recommended that LCMS assays using IA enrichment should adopt strategies for critical reagent sourcing, resupply and long-term management similar to LBA assays.

    Key input from regulatory agencies

    The 9th WRIB was also the occasion for numerous regulatory agency representatives, including US FDA, Brazil ANVISA, German BfArM, Health Canada and the UK MHRA, to share their views on multiple topics of interest for the global bioanalytical community attending this event, in order to provide some clarification on unresolved issues or imprecise expectations. Also in 2015, the discussion focused on the main, overarching theme universally accepted by all regulators that the primary objective of any study, regardless of guidance available, should be the pursuit of good science and good documentation.

    An update was shared in relation to the reorganization of the FDA Center for Drug Evaluation and Research (CDER). Prior to January 2015, the Division of Bioequivalence and GLP Compliance was under the responsibility of the Office of Compliance. Following the reorganization, this Division fell under the Office of Translational Science and became the Office of Study Integrity and Surveillance, which was further subdivided into the Division of Generic Drug BE Evaluation and the Division of New Drug BE Evaluation. This structural difference represents the evolution of the inspectional procedures within the FDA, which has moved to mimic the GLP monitoring program already in place.

    For domestic and foreign bioequivalence testing sites, the goal is to ensure an inspection interval of 2 years. Site audits will be conducted using one or more studies in order to assess activities during the time since the last inspection. It is possible that only parts of studies may be audited in order to assess a range of activities at a site. This is to help minimize the risk of altered records. Provisions also require self-identification of in vivo BE clinical and bioanalytical sites and in vitro BE analytical sites. There is no fee for identifying BE sites; however, there are fees for manufacturing sites.

    The Crystal City V conference report [26] was discussed and it was clarified that it should not be treated as a guidance document, and any decisions regarding industry implementation of the discussions within should take that into account. Furthermore, industry SOPs should not rely on criteria in the draft BMV guidance until it is finalized.

    The issues surrounding post-extraction stability continue to be a hot topic for discussion, and regulators addressed this topic by reiterating that the objective of the evaluation is to determine the stability of the analyte(s) and the IS post extraction. Reinjection reproducibility tests achieve different objectives. The evaluation must account for temporary storage of samples either at room temperature or in the refrigerator, as well as the residence time on the autosampler. Various approaches that have been considered by regulators include comparison of peak areas versus analyte/IS ratios, and the use of freshly prepared standards and QC samples versus comparing stability results to nominal values. Two approaches were discussed. The first approach requires that freshly prepared stability QC samples and standards be prepared and injected, then stored under the appropriate testing conditions. Following the desired duration, the stability QC samples are reinjected along with freshly prepared standards. Analyte/IS ratios are then evaluated to determine if they meet acceptance criteria. The advantages of this method are that the use of ratios addresses possible differences in ionization, processing and other sources of variability, and that having the time-zero data of the initial injection provides information on possible degradation and nonspecific binding. The second approach also requires that stability QC samples and standards are extracted and then stored under the appropriate testing conditions; after storage, they are injected along with freshly prepared standards. QC sample acceptance criteria are based on analyte/IS ratios or peak areas. Some have suggested comparing stability QC sample results to nominal concentrations at time zero; however, regulators are not convinced that this approach answers the question of extract stability.
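
    For the first approach described above, the ratio comparison step can be summarized as in the sketch below, which compares the mean analyte/IS ratio of the stored extracts with the mean ratio obtained at the initial (time-zero) injection. The values are made up for illustration and no acceptance criterion is implied.

        # Minimal sketch: percent change in mean analyte/IS ratio of stored extracts
        # relative to the initial (time-zero) injection; values are illustrative only.

        def ratio_percent_change(stored_ratios, time_zero_ratios):
            """Compare mean analyte/IS ratios of stored extracts vs the time-zero injection."""
            stored = sum(stored_ratios) / len(stored_ratios)
            fresh = sum(time_zero_ratios) / len(time_zero_ratios)
            return 100.0 * (stored - fresh) / fresh

        time_zero = [0.512, 0.498, 0.505]   # analyte/IS ratios at initial injection
        after_48h = [0.487, 0.479, 0.495]   # same extracts after autosampler storage
        print(round(ratio_percent_change(after_48h, time_zero), 1))  # % change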

    Regarding ISS, FDA has not yet seen elevated rates of ISR failure attributed to analyte instability that would warrant additional ISS requirements. Similarly, for ANVISA, ISS (and ISR) is not a requirement for submission in Brazil. However, for Health Canada, ISS can be considered part of sound scientific practice and should be thoroughly addressed as part of method development and validation procedures. The evaluation is not mandatory as part of a submission to Health Canada or FDA; however, it may be requested if deemed important during the review of the data, using a risk-based approach relevant to the case at hand. Cases were cited where endogenous and dietary compounds failed ISS but passed ISR testing. It was thought that the approach discussed as part of the GCC White Paper on ISS [27] was an appropriate way to approach this issue.

    Some clarification regarding reserve samples for in vitro BE studies was provided. Examples of these include locally acting inhalation products, phosphate-binding agents and bile acid-binding products. Often, these studies are conducted at manufacturing sites, and the investigators may be unclear which regulations should apply regarding reserve samples. Either the reserve samples are not retained or they are retained per GMP regulations. From the FDA perspective, the use of the data and type of study determine which regulations apply. In vitro BE studies are subject to 21 CFR 320.63. More recently, the agency has exercised regulatory discretion with respect to quantity of reserve samples for certain drug products, for example, metered-dose inhalers.

    Extensive evaluation is required of the bioanalytical portion of generic submissions and of new drug submissions where the comparative bioavailability data are pivotal to the approval of the product, because the validity of the data, which is only as good as the analytical method, is directly linked to the study outcome. There are challenges with the analysis of endogenous compounds that can potentially affect the accuracy and sensitivity of an analytical method. For example, fluctuations in circadian rhythms and dietary intake can prevent the reliable estimation of background levels of analytes. Another challenge can be selecting appropriate surrogate matrices for the preparation of standards for quantitating an analyte that is already present endogenously in biological samples. BE of endogenous compounds is generally based on baseline-corrected data, where PK parameters should be calculated for the exogenous drug with subject- and period-specific baseline data subtracted. Any resulting negative concentrations should be adjusted to zero. Both baseline-adjusted and unadjusted concentrations should be submitted.
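
    A minimal sketch of the subject- and period-specific baseline correction described above is given below; the concentrations and the pre-dose baseline estimate are illustrative assumptions only.

        # Minimal sketch: subject/period-specific baseline correction of an endogenous
        # analyte, with negative results adjusted to zero (illustrative values).

        def baseline_correct(concentrations, baseline):
            """Subtract the subject/period-specific baseline and clamp negatives to zero."""
            return [max(c - baseline, 0.0) for c in concentrations]

        post_dose = [2.0, 4.8, 9.5, 6.1, 3.0]             # measured concentrations (ng/ml)
        print(baseline_correct(post_dose, baseline=2.1))  # approx. [0.0, 2.7, 7.4, 4.0, 0.9]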

    As specified in the 2013 draft FDA BMV guidance [10], the accuracy and precision of an assay should be validated from QC samples prepared in the same matrix as the study samples. QCs should be prepared at concentration ranges expected for subject samples. Nominal concentrations should be calculated using the endogenous level and adding any spiked concentration to obtain the final concentration of the standard or QC sample. The endogenous level can be measured using replicate analysis, either directly from a stripped or surrogate matrix or estimated from the intercept using the standard addition method. The latter method using the same matrix as study samples is advantageous because it rules out any potential matrix effects from the estimate. Whenever possible, standards should be prepared from stripped biological matrix matching the study samples, and QCs should be prepared in the same biological matrix as the study samples. Suitability of the stripped matrix should be demonstrated by proving that there is no measurable endogenous analyte present and no matrix effects or interference when compared with the biological matrix. Cases were cited where findings were given because the matrix used to validate the method was not the same as the matrix used for the study samples. Preparation of standards in surrogate matrix is not recommended unless no other option is available. If used, the selection of an appropriate surrogate is dependent on the analyte binding to components such as proteins and lipids as well as the solubility and extractability of the analyte, which should be demonstrated as being similar to the biological matrix. Matrix effects of the surrogate matrix should also be validated.
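
    For the standard addition option mentioned above, the endogenous level corresponds to the magnitude of the x-intercept of the regression of response against spiked concentration (intercept divided by slope), as sketched below with made-up spiking levels and responses.

        # Minimal sketch: estimating the endogenous level by standard addition.
        # Spiking levels and responses are made-up numbers for illustration only.
        import numpy as np

        def endogenous_level_by_standard_addition(spiked_conc, response):
            """Fit response vs spiked concentration and return intercept/slope,
            i.e., the magnitude of the x-intercept (the endogenous concentration)."""
            slope, intercept = np.polyfit(spiked_conc, response, 1)
            return intercept / slope

        spiked = [0.0, 5.0, 10.0, 20.0]   # ng/ml added to authentic matrix
        resp = [0.41, 0.60, 0.82, 1.21]   # instrument response (peak area ratio)
        print(endogenous_level_by_standard_addition(spiked, resp))  # ~10 ng/ml endogenous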

    The new ANVISA Technical Note [28] on validation requirements for methods in the presence of concomitant medications, effective since October 2014, was discussed. It covers studies in which subjects may have taken occasional medication, been exposed to possible interferents such as nicotine or caffeine, or received a coadministered compound. It is necessary to validate the matrix effect with and without the possible interfering substances, including in two lipemic and two hemolyzed lots of matrix. Selectivity should also be validated in these cases, taking into account the Cmax of the possible interfering substances as well as any potential metabolites. It was clarified that the Brazilian resolutions are considered law; however, the requirements included in the resolutions are minimum requirements. Technical notes, such as the one discussed above, represent the agency's current thinking on a topic and are more flexible in their approach.

    The German BfArM provided regulatory input on observations identified during their inspection program. Triggers for bioanalytical inspections included absent or negative inspection histories, abnormalities in the data and first applications for a specific group of generics. Five main deficiencies were discussed. The first concerned the rejection of calibration standards: cases were presented in which standards that met acceptance criteria were removed, as well as cases in which standards that did not meet acceptance criteria were retained. Furthermore, SOPs did not describe the order in which multiple standards should be rejected. It was reiterated that the criteria for including and rejecting standards must be defined in an SOP and applied consistently. The second deficiency involved QC sample results that were evaluated globally even though multiple extracted batches were included in the run; when the results were examined per extracted batch, the QC samples failed and the affected subject samples needed to be repeated. Lack of investigation into variations in IS response was also cited; SOPs need to describe a process for IS response review. Use of an appropriately validated matrix when it differs from the study sample matrix was discussed above. Finally, additional reporting issues were presented, such as results not reported or not submitted with the dossier, discrepancies between report and source data, and reasons for repeated or rejected runs not discussed in the reports.
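    The per-batch QC evaluation issue can be sketched as follows. The acceptance criterion used here (at least two-thirds of QC results within ±15% of nominal, and at least 50% at each concentration level) follows the commonly applied EMA approach and is used only as an illustration; batch composition and values are hypothetical.

```python
# Minimal sketch of evaluating QC results per extracted batch rather than
# globally; a global evaluation can mask a failing batch.
def qc_batch_passes(qcs, tolerance=0.15):
    """qcs: list of (nominal, measured) pairs for one extracted batch."""
    within = [abs(m - n) / n <= tolerance for n, m in qcs]
    by_level = {}
    for (n, _), ok in zip(qcs, within):
        by_level.setdefault(n, []).append(ok)
    overall_ok = sum(within) / len(within) >= 2 / 3           # >=67% of all QCs
    levels_ok = all(sum(oks) / len(oks) >= 0.5                 # >=50% per level
                    for oks in by_level.values())
    return overall_ok and levels_ok

# Two extracted batches analyzed in one run (nominal, measured) in ng/mL
batch1 = [(3.0, 3.1), (50.0, 51.2), (400.0, 392.0), (3.0, 2.9), (50.0, 49.0), (400.0, 410.0)]
batch2 = [(3.0, 4.1), (50.0, 63.0), (400.0, 388.0), (3.0, 2.4), (50.0, 58.5), (400.0, 405.0)]
print(qc_batch_passes(batch1), qc_batch_passes(batch2))   # True, False
print(qc_batch_passes(batch1 + batch2))                   # True: global view hides failure
```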

    The UK MHRA expectations for inspections of ELNs were discussed as a follow-up to the 2014 White Paper recommendations [8]. Expectations for ELN inspections included direct access to the system itself and to the system administrators. It is necessary to provide a detailed overview of how the system interacts with other systems and instruments. Because inspectors may not be fluent in the specific system they are inspecting, there is an expectation that reasonable training will be provided, and demonstration sessions by actual users are likely to be requested. As with all software systems, validation documentation and audit trails will also be inspected; indeed, there is an expectation that documented reviews by quality control or quality assurance groups are built into the process of using the ELN. Data integrity for ELN systems requires that data processing and transfers to other systems can be reconstructed back to the original source data collection. The archiving requirements for e-records are the same as those for paper records and should ensure that data can be extracted from the archive in the future, even after electronic data storage technology has evolved; the storage media used must therefore be given due consideration. Findings related to electronic systems included incomplete documentation of the system build, validation and operational use, raising doubts about the accuracy of the contents of the system and how it was used. Ensuring that metadata are included with any electronic data is crucial. For example, in cases where PDF files were uploaded to the electronic system, the scans of the documents were of poor quality, contained no metadata and were generated from a process that had not been validated; therefore, the source and accuracy of the records could not be verified. Incomplete audit trail data provided during an inspection were also an issue because, although changes to the study data were recorded, there was no supporting metadata. Three references offering guidance to industry on regulatory expectations for electronic data were provided [29–31].

    Health Canada's input focused on improving the quality of submissions. The case in which a substantial number of subject samples fall significantly above the ULOQ during sample analysis was discussed in particular. Although it was not defined how many samples constitute a substantial number or how far above the ULOQ is significant, one should consider the extent to which the drug concentration-time profiles are affected by these samples. The EMA guideline [32] outlines the regulatory expectations when the calibration curve range is extended: at a minimum, QC samples at new concentrations should be added or, if required, the method should be revalidated. Extended long-term, bench-top and freeze-thaw stability data may be requested. As outlined in the EMA guideline on bioanalytical method validation [32], a certificate of analysis for the IS is not required, and this approach was confirmed by the regulators. However, consistent performance of the IS must be demonstrated because of its criticality in generating reliable, reproducible data. For methods using MS detection, use of a SIL IS is recommended whenever possible; the SIL IS should have the highest possible isotopic purity, and no isotopic exchange reactions should occur. The guideline further states that inter-batch precision and accuracy should be calculated for each QC concentration in a sample analysis study, but intra-assay precision and accuracy are not mentioned. It was clarified that intra-assay precision and accuracy should be presented when three or more replicate QC samples are analyzed within a run.
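    For the intra-run statistics mentioned above, the following is a minimal sketch of how precision (%CV) and accuracy (%bias) might be computed for replicate QCs at one concentration level; the replicate values and nominal concentration are illustrative assumptions.

```python
# Minimal sketch of intra-run precision (%CV) and accuracy (%bias) for QC
# replicates at a single concentration; all values are illustrative.
from statistics import mean, stdev

def precision_accuracy(measured, nominal):
    """Return (%CV, %bias) for a set of QC replicates against a nominal value."""
    m = mean(measured)
    cv = 100.0 * stdev(measured) / m           # precision, as %CV
    bias = 100.0 * (m - nominal) / nominal     # accuracy, as %bias from nominal
    return cv, bias

# Three or more replicates within a run warrant reporting intra-assay statistics
qc_mid = [98.2, 101.5, 95.7, 103.1]            # measured (ng/mL), nominal 100 ng/mL
cv, bias = precision_accuracy(qc_mid, 100.0)
print(f"Intra-run %CV = {cv:.1f}%, %bias = {bias:+.1f}%")
```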

    Regulatory input on the treatment of aberrant subject sample values indicated an expectation that data be continuously monitored for such values, with investigations performed on a case-by-case basis using sound scientific judgment. Reanalysis of samples may be necessary; however, the results of the reanalysis should help identify possible reasons for the abnormality in order to prevent recurrence of similar problems. Regulatory review of the data will look for systemic issues and trends, so investigations should be designed to identify the root cause of the anomalous value and should be properly documented. Completion of a checklist alone was not considered acceptable documentation. Repeat results tables must include both the initial value, when available, and the repeat value of a sample, even if the original value is not considered valid for reasons predefined in an SOP (e.g., the original value was extrapolated). At a minimum, an explanation of how the value was judged invalid should be included, with reference to the applicable section of the SOP.

    Metabolite testing may be presented in either the method development report or the validation report. The extent to which testing results are presented in the latter is case dependent, but accurate reference to the location of any available data in the file improves the efficiency of review. Reporting the relative concentrations of metabolites and parent analytes measured in matrix following drug administration in humans is a critical step in demonstrating the selectivity of the analytical method. This information may be available in the relevant literature, but where it is not, there is an expectation that the clinical metabolite profiling data will be submitted. The selectivity of the analytical method used in a pivotal study is critical; therefore, data demonstrating a lack of interference among the analyte, its metabolites and the IS are needed.

    Manual reintegration of chromatograms is not acceptable; however, software-based reintegration following documented procedures in an SOP may be acceptable. Any reintegration should be reported in the analytical report along with a thorough explanation of the procedures that were followed and why the change was necessary, and the reintegration SOP should be provided. It was also recommended to provide both the original and modified chromatograms at the time of filing, since they are likely to be requested during the review.

    An in-depth discussion focused on the FDA's recent inclusion of biomarkers in the 2013 draft BMV guidance [10], and additional regulatory input was provided. Biomarkers are used for safety, efficacy, and patient selection and treatment; therefore, the data for these compounds are at least as important as, if not more important than, PK data for a new drug. The challenges of biomarker methods can affect data reliability, so method validation should be fit-for-purpose based on the objectives of the study. The level of risk for pre-Phase I studies differs from that during Phase II, and method validation requirements should be adjusted accordingly. When deciding how much validation is required, the analyte, the development platform and the purpose of the assay should be considered. In general, as drug development proceeds through the typical phases, the level of validation needed increases accordingly. Early phase studies, which carry lower data risk, may only need proof that the analyte being measured is the intended analyte; therefore, only selectivity and specificity data may be required. In the subsequent phase, the risk may be slightly higher, so it may be necessary to determine the variability of the method; precision and accuracy testing may be needed. A further elevated risk may require evaluation of the measurement limits, so testing to establish sensitivity and curve range may be needed. Where the risk to data integrity is high, it may be necessary to prove the stability of the biomarker over the sample handling conditions. Each case must be evaluated independently, and scientifically sound, fit-for-purpose testing selected. However, it was clear that highly reliable (i.e., validated) data are required when biomarkers are used for decision-making in studies supporting market approval or patient instructions. Biomarker assays used for exploratory purposes can tolerate less rigorous validation; in these cases, it is recommended to obtain regulatory feedback early on. Submitting inadequately validated NDA or BLA data limits the options for addressing any potential issues.

    Recommendations

    Below is a summary of the recommendations made during the hybrid LBA/LCMS discussions at the 9th WRIB.

    Hybrid LBA/LCMS: innovative method development for biotherapeutics, biomarkers & ADA

    • Intact peptide analysis should be targeted whenever it provides the needed measurement specificity and is technically feasible, because digestion introduces sample preparation inefficiencies and a potential loss of information. However, for large peptides and proteins, monitoring of peptide(s) resulting from digestion is often used for reasons of sensitivity and specificity. A SIL protein IS, although not essential, can be used for hybrid LBA/LCMS assays. Multiplexed quantification is more useful in early drug development than in later stages and can help identify the essential analytes for clinical development. Alternative quantification approaches using peak area ratios or reverse response curves may be applicable for biomarkers when no reference standards are available.

    • LCMS is best used as an orthogonal tool to assist when needed with ADA determination, for example, to overcome a lack of drug tolerance or other analytical interferences. LCMS can be used for immunoglobulin allotyping/isotyping by specific immunocapture of IgG, IgM and other immunoglobulin subclasses and/or by analysis of specific signature tryptic peptides reflecting immunoglobulin subclasses.

    • Both LCMS and LBA techniques should be used in a complementary manner to understand biotherapeutic biotransformation: LCMS and hybrid LBA/LCMS for defining protein biotransformation and cell-based assays for understanding activity.

    • Existing LBA workflows can be used in hybrid LBA/LCMS method development. Commercial LBA reagents can also be used. Binding interferences can often be removed by using an acid pretreatment step. High salt or other buffer conditions can also reduce the interference from binding partners. Capture with target, receptors and other reagents was proposed.

    • Universal peptide assays are a useful tool to have alongside the information obtained from multiple specific peptides, and access to both approaches was deemed important. It is necessary to develop a strategy to bridge preclinical data obtained using the ‘universal peptide’ method with the clinical data. In some cases, framework peptides specific to the drug can be used to translate between preclinical and clinical studies.

    Hybrid LBA/LCMS: regulatory challenges

    • Unresolved issues in hybrid LBA/LCMS validation from the 2014 White Paper in Bioanalysis [8]:

      • Cross-validation of LBA and LCMS pivotal data is only needed if platforms are switched during a program.

      • If cross-validation of LBA and LCMS data is required, then it is recommended to use the same capture reagent.

      • Perform an interference check to ensure that validation of an LCMS assay is not impacted by competitive binding.

      • Potential binding molecules should be evaluated as part of an assay performance QC if it is judged that the assay could be impacted by the presence of these molecules.

      • Parallelism is not required for biotherapeutics unless sample dilution is involved.

      • Parallelism is essential for biomarker assays that use a surrogate matrix.

    • The assay platform – LBA, LCMS or hybrid LBA/LCMS – should be selected based on the question to be answered by the assay data or, if multiple platforms are appropriate, on practical considerations; the most appropriate platform should be used unless otherwise justified. The critical steps identified and evaluated should be reconstructable. Good science should be used to identify the critical steps to include in validation, which may go beyond the basic recommendations.

    • In hybrid LBA/LCMS, it is recommended to perform the immunocapture upfront in order for the signature peptide(s) to accurately represent the biotherapeutic. Hybrid LBA/LCMS assay method validation must prove that the method is specific at all steps using the wider LBA criteria. Moreover, the wider LBA acceptance criteria can be applied for accepting hybrid LBA/LCMS data. LCMS assays using IA enrichment should consider critical reagent sourcing, resupply strategies and long-term management.

    Abbreviations
    ADA: Antidrug antibody
    ADC: Antibody drug conjugate
    BE: Bioequivalence
    BMV: Bioanalytical method validation
    CDR: Complementarity determining region
    ELISA: Enzyme-linked immunosorbent assay
    ELN: Electronic laboratory notebook
    GDUFA: Generic Drug User Fee Amendment
    HRMS: High-resolution mass spectrometry
    IA: Immunoaffinity
    IS: Internal standard
    ISS: Incurred sample stability
    LBA: Ligand binding assay
    LCMS: Liquid chromatography mass spectrometry
    mAb: Monoclonal antibody
    MRD: Minimum required dilution
    MRM: Multiple reaction monitoring
    PK: Pharmacokinetic
    QC: Quality control
    SIL: Stable-isotope label
    SOP: Standard operating procedure
    ULOQ: Upper limit of quantitation
    WRIB: Workshop on Recent Issues in Bioanalysis

    Key terms

    Biomarker: A measurable indicator of some biological state or condition.

    Hybrid LBA/LCMS: Used in this article as equivalent to IA-LCMS.

    Biotherapeutic: Therapeutic agent prepared using biological means.

    Reverse response curve: Varying amounts of labeled peptide are combined with a constant amount of unlabeled peptide, typically the endogenous peptide, to generate a response curve. In this way, the same matrix can be used for calibration as for sample analysis. This approach is commonly used in SISCAPA assays.

    Cytokines: Category of small (~5–20 kDa) signaling proteins secreted by cells.

    Chemokines: A family of small cytokines with the ability to induce directed chemotaxis in nearby cells.

    Biotransformation: The chemical modification made by an organism on a chemical compound.

    Proteoforms: Specific molecular forms of a protein product arising from a specific gene.

    Disclosure

    The views expressed in this article are those of the authors and do not reflect official policy of the US FDA, Europe EMA, Health Canada, France ANSM, The Netherlands MEB, Germany BfArM, Brazil ANVISA, Japan MHLW or the UK MHRA. No official endorsement by the FDA, EMA, Health Canada, ANSM, MEB, BfArM, ANVISA, MHLW or MHRA is intended or should be inferred.

    Acknowledgements

    The authors acknowledge the US FDA, Europe EMA, France ANSM, The Netherlands MEB, UK MHRA, Germany BfArM, Brazil ANVISA, Health Canada and Japan MHLW for supporting this workshop; E Fluhler (Pfizer), J Welink (EMA/Dutch MEB), B Ackermann (Eli Lilly), N Hughes (LifeLabs), F Garofolo (Angelini Pharma), A Song (Genentech), T Thway (Amgen), L Amaravadi (Biogen Idec) and H Myler (Bristol-Myers Squibb) for chairing the workshop and/or the White Paper discussions; all the workshop attendees and members of the bioanalytical community who sent comments and suggestions to complete this White Paper; W Garofolo, L Lu, X Wang, M Losauro, N Savoie, A Hernandez, S Schonert, D Cohen, K Kalaydjian and P de Souza for their assistance in organizing the event; and Future Science Group as a trusted partner.

    Financial & competing interests disclosure

    The authors have no relevant affiliations or financial involvement with any organization or entity with a financial interest in or financial conflict with the subject matter or materials discussed in the manuscript. This includes employment, consultancies, honoraria, stock ownership or options, expert testimony, grants or patents received or pending, or royalties.

    No writing assistance was utilized in the production of this manuscript.

    References

    • 1 Savoie N, Booth BP, Bradley T et al. 2008 White Paper: the 2nd Calibration and Validation Group Workshop on recent issues in good laboratory practice bioanalysis. Bioanalysis 1(1), 19–30 (2009).
    • 2 Savoie N, Garofolo F, van Amsterdam P et al. 2009 White Paper on recent issues in regulated bioanalysis from the 3rd Calibration and Validation Group Workshop. Bioanalysis 2(1), 53–68 (2010).
    • 3 Savoie N, Garofolo F, van Amsterdam P et al. 2010 White Paper on recent issues in regulated bioanalysis and global harmonization of bioanalytical guidance. Bioanalysis 2(12), 1945–1960 (2010).
    • 4 Garofolo F, Rocci M, Dumont I et al. 2011 White Paper on recent issues in bioanalysis and regulatory findings from audits and inspections. Bioanalysis 3(18), 2081–2096 (2011).
    • 5 DeSilva B, Garofolo F, Rocci M et al. 2012 White Paper on recent issues in bioanalysis and alignment of multiple guidelines. Bioanalysis 4(18), 2213–2226 (2012).
    • 6 Stevenson L, Rocci M, Garofolo F et al. 2013 White Paper on recent issues in bioanalysis: “Hybrid” - the best of LBA & LC/MS. Bioanalysis 5(23), 2903–2918 (2013).
    • 7 Fluhler E, Hayes R, Garofolo F et al. 2014 White Paper on recent issues in bioanalysis: a full immersion in bioanalysis (Part 1 – Small molecules by LCMS). Bioanalysis 6(22), 3039–3049 (2014).
    • 8 Dufield D, Neubert H, Garofolo F et al. 2014 White Paper on recent issues in bioanalysis: a full immersion in bioanalysis (Part 2 – Hybrid LBA/LCMS, ELN & regulatory agencies’ input). Bioanalysis 6(23), 3237–3249 (2014).
    • 9 Stevenson L, Amaravadi L, Myler H et al. 2014 White Paper on recent issues in bioanalysis: a full immersion in bioanalysis (Part 3 – LBA and immunogenicity). Bioanalysis 6(24), 3355–3368 (2014).
    • 10 US Department of Health and Human Services, US FDA, Center for Drug Evaluation and Research, Center for Veterinary Medicine. Draft Guidance for Industry, Bioanalytical Method Validation, Rockville, MD, USA (2013). www.fda.gov/downloads/drugs/guidancecomplianceregulatoryinformation/guidances/ucm368107.pdf.
    • 11 Berna M, Ott L, Engle S, Watson D, Solter P, Ackermann B. Quantification of NTproBNP in rat serum using immunoprecipitation and LC/MS/MS: a biomarker of drug-induced cardiac hypertrophy. Anal. Chem. 80(3), 561–566 (2008).
    • 12 Palandra J, Finelli A, Zhu M, Masferrer J, Neubert H. Highly specific and sensitive measurements of human and monkey interleukin 21 using sequential protein and tryptic peptide immunoaffinity LC-MS/MS. Anal. Chem. 85(11), 5522–5529 (2013).
    • 13 Wang W, Choi BK, Li W et al. Quantification of intact and truncated stromal cell-derived factor-1α in circulation by immunoaffinity enrichment and tandem mass spectrometry. J. Am. Soc. Mass Spectrom. 25(4), 614–625 (2014).
    • 14 Neubert H, Grace C, Rumpel K, James I. Assessing immunogenicity in the presence of excess protein therapeutic using immunoprecipitation and quantitative mass spectrometry. Anal. Chem. 80(18), 6907–6914 (2008).
    • 15 Jiang H, Xu W, Titsch CA et al. Innovative use of LC-MS/MS for simultaneous quantitation of neutralizing antibody, residual drug, and human immunoglobulin G in immunogenicity assay development. Anal. Chem. 86(5), 2673–2680 (2014).
    • 16 Bowen C, Kehler J, Mencken T, Orr B, Szapacs M. Utilizing LC-MS/MS to provide adaptable clinical bioanalytical support for an extended half-life bioactive peptide fused to an albumin-binding domain antibody. Anal. Methods 7, 237–243 (2015).
    • 17 O'Connor-Semmes RL, Lin J, Hodge RJ et al. GSK2374697, a novel albumin-binding domain antibody (AlbudAb), extends systemic exposure of exendin-4: first study in humans – PK/PD and safety. Clin. Pharmacol. Ther. 96(6), 704–712 (2014).
    • 18 Sloan JH, Siegel WR, Ivanova-Cox YT, Watson DE, Deeg MA, Konrad RJ. A novel high-sensitivity electrochemiluminescence (ECL) sandwich immunoassay for the specific quantitative measurement of plasma glucagon. Clin. Biochem. 45(18), 1640–1644 (2012).
    • 19 Yabe D, Watanabe K, Sugawara K et al. Comparison of incretin immunoassays with or without plasma extraction: incretin secretion in Japanese patients with type 2 diabetes. J. Diabetes Investig. 3, 70–79 (2012).
    • 20 Bak MJ, Albrechtsen NW, Pedersen J et al. Specificity and sensitivity of commercially available assays for glucagon and oxyntomodulin measurement in humans. Eur. J. Endocrinol. 170, 529–538 (2014).
    • 21 Wewer Albrechtsen NJ, Hartmann B, Veedfald S et al. Hyperglucagonaemia analysed by glucagon sandwich ELISA: nonspecific interference or truly elevated levels? Diabetologia 57, 1919–1926 (2014).
    • 22 van den Broek I, Niessen W, van Dongen W. Bioanalytical LC–MS/MS of protein-based biopharmaceuticals. J. Chromatogr. B 929, 161–179 (2013).
    • 23 Chappell D, Lassman ME, McAvoy T, Lin M, Spellman DS, Laterza OF. Quantitation of human peptides and proteins via MS: review of analytically validated assays. Bioanalysis 6(13), 1843–1857 (2014).
    • 24 Furlong M, Zheng O, Wu S et al. A universal surrogate peptide to enable LC-MS/MS bioanalysis of a diversity of human monoclonal antibody and human Fc-fusion protein drug candidates in pre-clinical animal studies. Biomed. Chromatogr. 26(8), 1024–1032 (2012).
    • 25 Abberley L. The Goldilocks paradigm: Finding the large molecule assay that's “just right”. Presented at: 9th Workshop on Recent Issues in Bioanalysis. Miami, FL, USA, 14–16 April 2015.
    • 26 Booth B, Arnold A, De Silva B et al. Workshop report: Crystal City V–quantitative bioanalytical method validation and implementation: the 2013 revised FDA guidance. AAPS J. 17(2), 277–288 (2015).
    • 27 Lowes S, LeLacheur R, Shoup R et al. Recommendations on incurred sample stability (ISS) by GCC. Bioanalysis 6(18), 2385–2390 (2014).
    • 28 ANVISA Technical Note N° 04/2014 – Guidelines regarding Article 7 of Resolution RDC No. 27 of May 17, 2012: tests to be carried out when there is concomitant administration of drugs during conduct of BE/BA study. Brasilia, Brazil (2014). http://portal.anvisa.gov.br/wps/wcm/connect/ba6a800045afa5e0a624afa9166895f7/NOTA+004+2014.PDF?MOD=AJPERES.
    • 29 Reflection paper on expectations for electronic source data and data transcribed to electronic data collection tools in clinical trials. European Medicines Agency, Good Clinical Practice Inspectors Working Group (GCP IWG), London, UK (2010). www.ema.europa.eu/docs/en_GB/document_library/Regulatory_and_procedural_guideline/2010/08/WC500095754.pdf.
    • 30 Reflection paper on GCP compliance in relation to trial master files (paper and/or electronic) for management, audit and inspection of clinical trials. European Medicines Agency, Good Clinical Practice Inspectors Working Group (GCP IWG), London, UK (2013). www.ema.europa.eu/docs/en_GB/document_library/Scientific_guideline/2013/02/WC500138893.pdf.
    • 31 MHRA GMP Data Integrity Definitions and Guidance for Industry, Medicines and Healthcare Products Regulatory Agency, London, UK (2015). www.gov.uk/government/publications/good-manufacturing-practice-data-integrity-definitions.
    • 32 Guideline on bioanalytical method validation. European Medicines Agency, Committee for Medicinal Products for Human Use (CHMP), London, UK (2011). www.ema.europa.eu/docs/en_GB/document_library/Scientific_guideline/2011/08/WC500109686.pdf.
    • 33 GENENTECH, INC.: US8679767 (2014).