
FDA Public Workshop Sheds Light On NGS Test Guidances

By Brian Krueger

October 14, 2016 | On Friday, September 23, 2016, the FDA convened a public workshop on Adapting Regulatory Oversight of NGS-Based Tests for discussion and comment on its recently released draft guidances for next generation sequencing (NGS)-based in vitro diagnostic tests (IVDs). These draft documents were the culmination of a series of workshops held by the FDA in 2015 and early 2016 seeking community input on how the agency should provide regulatory oversight for NGS-based IVDs. A webcast of the daylong event is available.

The public workshop was intended to be a focused community discussion on the two documents released July 8th of this year. The day was kicked off by a short introduction from FDA commissioner Robert Califf on why the FDA is pursuing regulation of NGS-based IVDs. He said this was largely the result of the presidential push into precision medicine, and that the FDA wants to be sure it gets the oversight right so that we can pursue the individualized treatments promised by the precision medicine initiative (PMI). However, to realize the goal of precision treatments, we need robust, population-level data created using safe and effective tests. This is where Califf believes the FDA can contribute the most to the PMI: by providing regulatory oversight for both the tests and the databases used to guide treatment decisions in the clinic.

Califf’s introduction was followed by a one-hour overview of the first draft guidance, Use of Standards in FDA Regulatory Oversight of Next Generation Sequencing (NGS)-Based In Vitro Diagnostics (IVDs) Used for Diagnosing Germline Diseases, by David Litwack. Litwack provided a more in-depth discussion of the document than had previously been given, seemingly motivated by feedback the FDA has received since the document’s initial release in July. His presentation set the theme running throughout the day: the FDA is proposing draft guidelines, these are not mandatory, and the agency is looking for constructive commentary on how to make them better.

Litwack spent the first half of his talk highlighting the challenges of regulating a technology like NGS: not only are the techniques rapidly evolving, but so are the analysis methods. Because of this, Litwack indicated that the FDA needs to approach regulation of these types of tests very differently, with policy that is dynamic and flexible. He supported this assertion by describing the continuum along which the agency envisions regulating NGS tests, which includes both specific standards (things like GIAB, coverage and analysis metrics, etc.) and design concept standards (how the test is designed and validated). The FDA believes that standardizing both the process and the metrics will clear up some of the uncertainty surrounding NGS-based IVDs, particularly the problem of discordant results, which can be attributed to a wide range of differences among NGS-based IVDs, including process, reagents, and analysis methods.

Litwack spent the remainder of the presentation skimming through the high points of the guidance document, ending nearly every slide by reiterating that this is still a draft and the agency is open to commentary. The document has been criticized for being oddly specific to a single NGS technology and its particular analysis methodology: this draft guidance does not apply to whole genome sequencing, somatic (tumor/oncology) sequencing, Sanger sequencing, PacBio sequencing, digital PCR, real-time PCR, microarrays, or any other high-throughput genotyping technology. A question later during the panel discussion indicated that the FDA is wondering how wide-reaching these draft guidance documents could be, but there was no explicit mention of regulation surrounding any technology other than cluster-based NGS.

After Litwack’s presentation, Zivana Tezak, FDA staff, led a panel discussion featuring representatives from both industry and academia: Sherri Bale, PhD, GeneDx; Joe Devaney, PhD, Children’s National Health System; Birgit Funke, PhD, Laboratory of Molecular Medicine, Harvard Medical School; John Pfeifer, MD, PhD, Washington University School of Medicine; Erasmus Schneider, PhD, Wadsworth Center, NY State Department of Health; and Lin Wu, PhD, Roche.

The industry representatives appeared to be more apprehensive about FDA oversight of NGS-based tests than the academics. While all agreed that regulation of NGS-based tests was valuable, they disagreed on the details. Bale and Funke argued that regulatory oversight of NGS-based laboratory-developed tests (LDTs) is already covered by CAP/CLIA and standard lab practices. (Tezak had stressed at the beginning of the panel that the discussion should focus on tests regulated by the FDA, not LDTs.) Schneider disagreed, encouraging the FDA’s involvement in national regulation because it might afford patients across the country the same protections that people in his state already have by virtue of their residency. Pfeifer expressed a similar sentiment, saying he was encouraged by the FDA’s progress and openness and thought that we shouldn’t wait for perfect regulation before implementing it. Finally, Wu cautioned against strict standards that would need to change as different technologies and techniques are brought into the diagnostic sequencing market.

The Database Question

The afternoon session started with a brief 30-minute overview of the second guidance document, Use of Public Human Genetic Variant Databases to Support Clinical Validity for Next Generation Sequencing (NGS)-Based In Vitro Diagnostics, presented by Laura Koontz of the personalized medicine staff at FDA. While calling variants accurately is important, understanding how those variants can inform treatment is key for the PMI’s success, she said. Koontz compared NGS testing to glucose monitoring to drive home the point that analysis of the analyte is only half the equation when we’re talking about translating data into a treatment.

Confusingly, the draft guidance only covers the regulation of public databases, but that doesn’t preclude private databases from being used to support NGS assays as long as those databases can prove clinical validity. For clarity, she compared the FDA-approved BRCA test from Myriad to the FDA-approved cystic fibrosis (CF) test from Illumina. The Myriad test uses its own internal database for clinical validity, while the Illumina CF assay uses the publicly available CFTR2 database. Koontz stressed that databases need to include both high-quality variant calls and genotype/phenotype relationships. In addition, for a database to be approved by the FDA, it will be evaluated in four main areas: aggregation methods, curation methods, interpretation methods, and how assertions (genotype/phenotype relationships) are made.

After initial approval by the FDA, the agency hopes database oversight will be a passive activity in which the FDA can evaluate the ongoing validity of the database as a resource through yearly inspections, without any interaction with the database administrators. The FDA has outlined the approval process, but Koontz made a point of mentioning that the process will be flexible and will evolve with the technology. She went into further detail, indicating that the FDA is concerned with data preservation and longevity. This is not limited to privacy, but also includes provisions for preserving the data source so it remains available long term. However, she stated that databases must include the most up-to-date information available, so it’s not clear how legacy databases will be maintained and managed if they’re still used despite having questionable clinical validity and no active custodian.

Koontz’s summary of the database document transitioned into a panel discussion about the use of public databases in clinical decision making. This panel included representatives from academia, industry, and patient advocacy groups: Peggy Carter, PhD, Novartis; Andrea Ferris, Lungevity; Madhuri Hegde, PhD, Emory University School of Medicine; Christa Martin, PhD, Geisinger Health System; Louis Staudt, MD, PhD, National Cancer Institute; and Mya Thomae, Illumina.

Once again the panel was very supportive of using public databases for clinical decision making, but, as in the first discussion, the details matter. The panel agreed that the general framework was sound, but also found it confusing as it relates to most types of databases. The guidance document places much of the responsibility on database administrators, so it’s hard to understand how public databases with multiple submitters, such as ClinGen and ClinVar, where hundreds of labs provide genetic data of varying quality and provenance, will be regulated.

Christa Martin wondered aloud on this point whether there should be different requirements for these types of databases, ones that guide both administrators and data submitters. This led to a discussion about the variable quality of data contained in databases, with all panelists agreeing that this is a huge problem in the field because questions about quality and phenotypic data can change variant assertions significantly. These problems can also lead to discordant calls when different databases are used. Some discordances are relatively innocuous under the ACMG pathogenicity guidelines, while others are much more serious, such as cases where one database calls a variant pathogenic while another calls it benign. Discordance is a major worry for Andrea Ferris, who said that from a patient advocacy standpoint it’s hard for the patient to understand what the right diagnosis is. The same is true for physicians; there are even recent lawsuits circulating in which a variant of unknown significance was detected but not reported, and was later found to be pathogenic after further investigation. This stresses the importance of not only database curation but also constant re-evaluation of assertions.

Madhuri Hegde highlighted this point multiple times, strongly voicing the opinion that public databases are just one of many tools required for genetic analysis. This work also requires the knowledge of experts and the use of high-quality private databases that, unfortunately, are paywalled. And finally, Mya Thomae asked how vendors and developers of IVDs are supposed to respond when databases change or when the clinical validity of variants contained in their panels changes. It was generally accepted that the FDA would be flexible, with the hope that a resubmission of the IVD would not be required for the vendor to make such changes to their assay. However, there are many complexities there that still need to be discussed.

The most important criticism of the FDA draft guidances came during the Q&A and public comment sessions, from Patrick Allen of Melagen. One item lacking from the FDA documents was the inclusion of ethnically stratified data sources. Unfortunately, it is widely known that the vast majority of the genetic data available to us comes from white Europeans. Because of compensatory mutations in different ethnic populations, mutations that are important causes of disease in one ethnicity may not be disease-causing in another. So a test that asserts causality also needs to ethnically stratify that call.

This is very hard, if not impossible, using the genetic data sources we currently have. The simple solution to this problem is to include more minority populations in sequencing studies, and Allen advocated for a significant push from the PMI in this direction, a point later echoed by Elizabeth Mansfield, Director of Personalized Medicine at FDA, as an area of concern for the PMI. This isn’t to say that such information isn’t used when it can be; the inclusion of approximately 20 ethnically common variants on Illumina’s CF139 assay is one example. Even in that panel, however, it is clear that Europeans have significantly more variants covered because we know much more about their genetics. Hopefully the field can work toward a time when precision diagnostics and treatments are accessible to everyone.

Brian Krueger, PhD is a technical director in research and development at Laboratory Corporation of America where he develops and validates high throughput sequencing assays for oncology and inherited disease.  He can be found on Twitter @h2so4hurts.