Lance Miller, Ph.D., co-director of the Cancer Genomics Core at the Comprehensive Cancer Center at Wake Forest Baptist Medical Center, defines precision medicine and who it is for, and describes how genomic testing is administered. Miller also presents evidence that precision medicine works and discusses its future.
LANCE MILLER: Really, the definition that I think is optimal for this talk and the way I want to present it is the following: the use of a high-content genomic diagnostic to explore alternative treatment strategies for patients. The term precision medicine has been around for a while, but it's only recently become the buzzword, as Ed mentioned. Obama's talking about it these days. The NIH is saying, hey, we want to fund more precision medicine research. And I think the reason -- I wanted to try to capture this for you at the beginning of the talk -- is that it rubs against the paradigm of cancer treatment. I think that's really where the interest is, and it's what's making cancer patients look at it and go, hey, wait, is that for me? What I mean by that is that historically, cancer has been treated by anatomical site. Clinical trials are set up that way. If there's a drug designed against a pathway, it's usually evaluated -- historically it's been evaluated -- in a particular cancer context. But here, we think about it in a different way now, because we know the pharmaceutical companies these days, more than ever, are leaning away from general types of treatments like chemotherapies and next-stage chemotherapies. They're really focusing their sights on genes and pathways. So for the patient, obviously, when one has failed frontline therapy and the only option is palliation, we need more options. In the context of genomic testing as a definition for precision medicine, there are a few leaders in the field that have commercially moved into this market. A few of them were mentioned earlier in terms of genetic testing. But when it comes to finding potential therapies to be applied adjuvantly, Foundation Medicine is one of the leaders. Foundation Medicine is all about genomic testing, and they use next-generation sequencing technologies to do so.
And in 2012, they released their first test, called Foundation One. Foundation One is the test I'll tell you about, largely because they're not only the leaders, but at Wake Forest Baptist we have a partnership with Foundation Medicine and some insurance companies to fund this test for patients. The Foundation One test is a pan-cancer assay to be given across multiple cancer types, although they do have a separate heme-based assay; this particular assay is mostly for solid tumors. It interrogates 315 genes, including 28 genes that are commonly rearranged in solid tumors. In 2013, they published a paper in Nature Biotechnology that described this technology platform and validated it in a clinical sense. One of their findings was that for each and every gene on the assay, they have greater than 99% sensitivity and specificity for correctly identifying a mutation in that particular gene. What's interesting about this assay is that the gold standard has typically been that if one is looking for a particular mutation that occurs in a particular cancer, one has a single test done on a single gene. But this -- and this is now an FDA-approved technology platform -- says that one can now look across hundreds of genes with that same sensitivity and specificity for identifying mutations. And importantly, this particular assay covers at least about 50 actionable mutations -- an actionable mutation being one for which a drug has already been developed and is either FDA approved or emerging in clinical trials. So I wanted to take you through the clinical workflow and the technological workflow for working up a sample. The very first step of this process is the processing of the sample itself. So what is needed? The tissue can either be a tumor fragment taken at surgery or a core needle biopsy. It's generally formalin-fixed and paraffin-embedded.
Usually that process does not damage the DNA too terribly, particularly when one is doing sequencing of the DNA. The target volume of tumor tissue is about one cubic millimeter or more, and usually that equates to about eight to ten 10-micron sections from the FFPE block, where the surface area of the tumor is about 25 square millimeters. The target tumor cellularity should be at least 20%. And Foundation Medicine has reported that when tumor samples meet these criteria, about 85% to 90% of the samples they receive are sufficient for analysis. As we get more molecular with this process, we get to step two, which is the deep sequencing. Deep sequencing requires DNA of pretty good quality and quantity -- at least 50 nanograms of intact double-stranded DNA. The sequencing procedure, which I want to walk you through, consists of several steps. First, the DNA gets fragmented into manageable sizes using shearing methods. The next step is to take those DNA fragments and isolate some of them. We're not trying to sequence the entire coding region of all human genes here; we're focusing on the panel of 315 genes. How do you do that? We do it with paired primers that are often referred to as exon baits. What these primers do is allow us to separate out -- or, the term we use, pull down -- specific regions of genes. In this image, you can see that these baits are designed against the exons, the coding regions of genes where the particular mutations of interest lie. Those baits then pull down the specific regions of the genes we want to study among the 315 on the assay. Once hybridization capture has taken place, the fragmented DNA of interest is ligated to adapters. These adapters have a couple of functions.
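To make those pre-analytic requirements concrete, here is a minimal sketch of a sample-intake check. The thresholds are the ones quoted in the talk (about 1 cubic millimeter of tumor, at least 20% tumor cellularity, at least 50 nanograms of double-stranded DNA); the function itself is my own illustration, not Foundation Medicine's actual intake logic.

```python
def sample_passes_qc(tumor_volume_mm3, tumor_cellularity, dna_ng):
    """Check a specimen against the pre-analytic thresholds quoted above.

    Returns (ok, reasons): ok is True when every threshold is met,
    and reasons lists each failed criterion.
    """
    reasons = []
    if tumor_volume_mm3 < 1.0:
        reasons.append("tumor volume below ~1 cubic millimeter")
    if tumor_cellularity < 0.20:
        reasons.append("tumor cellularity below 20%")
    if dna_ng < 50:
        reasons.append("less than 50 ng of double-stranded DNA")
    return (len(reasons) == 0, reasons)

# Example: a surgical fragment with plenty of tumor and DNA passes.
ok, reasons = sample_passes_qc(1.2, 0.35, 80)
print(ok, reasons)
```

Per the talk, roughly 85% to 90% of submitted samples clear checks like these.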
On the one hand, they act as barcodes that help us, later, once we have the sequence data, go back and identify each fragment's origin. The other function is to anchor the DNA onto what's called the sequencing flow cell, where the DNA actually gets sequenced. With next-generation sequencing technologies, we're able to sequence many hundreds of millions of fragments of DNA simultaneously. After the fragments attach to the flow cell, a process takes place called bridge amplification. It's literally a physical change in the DNA: the fragment loops over into an arcing structure, and because it hybridizes to another adapter on the glass, it can be sequenced from both ends. We call this paired-end sequencing of DNA, which pretty much doubles the coverage we can get when we're sequencing DNA. So for each of these fragments, just the ends are getting sequenced -- about 50 base pairs from each end in the Foundation One test. For research purposes, we're now up to around 150 bases on average that we can sequence from either end. And for each sample that's getting sequenced, we want to make sure these genes are covered at at least 500x on average, whereby 99% of the time we get at least 100x coverage. So what does x coverage mean? It's actually very important for identifying mutations in cancers. Coverage means that for each gene sequence we're interested in, when we ask whether there is a mutation in that gene, we don't just look at one sequence read representing that gene -- we try to look at at least 100, if not 500 or more. The reason it's important is that tumors are heterogeneous. As they grow and progress, clones sometimes grow out of these tumors, and those are the ones that are drug resistant. The mutation driving drug resistance may fall within a population of tumor cells that makes up only, say, 10% of the tumor.
But if you're not sequencing deeply enough, with enough coverage, you're going to miss that mutation. By sequencing deeply, we can better understand the heterogeneity of the cancer and identify those rare cells that might be the most important ones to think about when it comes to drug resistance, or to choosing different drugs to treat the patient. The third step of this process involves an analysis pipeline that tries to make sense of all this data. For each tumor that we sequence, we get back, on average, probably about 7 to 10 million sequence reads from these 315 genes of interest. Fortunately, we have algorithms now that process those sequences in an automated way: each read's base-pair sequence gets mapped onto a human genome template, aligning it back to the human genome. From that, a variant-calling algorithm can come in, look at those aligned sequences, and ask: is there a mutation or not? Now, the types of mutations that can be identified on this test include point mutations -- single-nucleotide changes; insertions or deletions; chromosomal amplifications, as in the case of HER2, for example, or deletions, as in the case of PTEN; or gene rearrangements that produce the fusion proteins that show up in a number of cancers. To understand whether these are in fact meaningful mutations, there has to be a further level of data curation, which typically draws on several databases. For example, a given variant could be a somatic mutation, or it could be a rare germline variant -- we're not sure. There are databases that help answer that question, including the 1000 Genomes Project database. And there are certain mutations that are oncogenic drivers.
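Going back to the coverage point for a moment, the arithmetic can be sketched with a simple binomial model. Assuming a heterozygous mutation confined to a 10% subclone in a sample that is 20% tumor, only about 0.5 x 0.10 x 0.20 = 1% of reads at that position carry the variant; the depths and the minimum-read threshold below are illustrative choices, not Foundation Medicine's actual calling rules.

```python
from math import comb

def detection_probability(depth, allele_fraction, min_alt_reads=5):
    """P(at least min_alt_reads reads carry the variant), binomial model."""
    p_fewer = sum(
        comb(depth, k) * allele_fraction**k * (1 - allele_fraction) ** (depth - k)
        for k in range(min_alt_reads)
    )
    return 1.0 - p_fewer

# Heterozygous variant in a 10% subclone of a 20%-cellularity sample:
af = 0.5 * 0.10 * 0.20  # ~1% of reads carry the variant
print(f"100x coverage: {detection_probability(100, af):.1%}")  # well under 1%
print(f"500x coverage: {detection_probability(500, af):.1%}")  # better than even odds
```

The sketch shows why the assay targets 500x average depth: at 100x, a variant this dilute is almost never supported by enough reads to call.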
The biological annotation of whether a gene is an oncogenic driver has been done very well for many years by the Sanger Institute, which created what they call the COSMIC database, and that type of database is accessed to help annotate the information. Of course, drug-gene interaction databases have to be consulted too, because at the heart of this is being able to offer a personalized treatment that may use a drug not commonly given to patients with that particular type of cancer -- and how drugs are designed against specific genes and pathways is well recorded in certain databases. Then clinical-trials databases have to be accessed as well, to give clinicians and patients an understanding of how they might get treated with those particular drugs. So that's the front-end part of the reporting. The back-end part of the reporting looks like this. The Foundation One report is usually about a 20-page document, and the front page always looks like this. This is a sample taken from a patient with lung adenocarcinoma. The first page is broken down into two categories -- patient results and therapeutic implications. I've zoomed in here so you can see this a little better, and you can see how intuitive it is for a patient to understand. It simply says there were eight genomic alterations found in this sample of lung adenocarcinoma, and those mutations are listed here with their gene names. There's additional information, such as disease-relevant genes: in the case of lung adenocarcinoma, ALK and EGFR are very relevant to how these patients are treated, but the report says these were negative -- we didn't find any mutations in these particular genes that are otherwise relevant to your cancer type. In terms of the therapeutic implications for the patient, those mutations have a summary that follows.
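The curation logic just described -- filtering likely germline variants, flagging known drivers, and looking up drug-gene pairings -- can be caricatured in a few lines. Everything here is a hypothetical stand-in: the in-memory tables substitute for real queries against 1000 Genomes, COSMIC, and a drug-gene interaction database, and the entries are illustrative only, not clinical assertions.

```python
# Toy stand-ins for the curation databases named in the talk.
COMMON_GERMLINE = {("TP53", "P72R")}  # illustrative common polymorphism
KNOWN_DRIVERS = {("KRAS", "G12C"), ("MET", "amplification")}
DRUG_GENE = {"MET": "MET inhibitor (illustrative pairing)"}

def curate(gene, alteration):
    """Classify one called variant the way a reporting pipeline might."""
    if (gene, alteration) in COMMON_GERMLINE:
        return "likely germline polymorphism - filtered out"
    label = ("known oncogenic driver"
             if (gene, alteration) in KNOWN_DRIVERS
             else "variant of unknown significance")
    therapy = DRUG_GENE.get(gene)
    return f"{label}; candidate therapy: {therapy}" if therapy else label

print(curate("MET", "amplification"))
```

The real pipeline adds a clinical-trials lookup on top of this, which is what feeds the therapeutic-implications section of the report.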
I'm showing you these top three genes because there is an indication that might be useful to the patient. In the first case, MET amplification, there's a drug that's FDA approved for treatment in that patient's tumor type, lung adenocarcinoma. The report also shows that two other genes were identified, AKT2 and KRAS; in both cases, while there are no drugs approved yet for treatment in that patient's tumor type, there are drugs approved by the FDA for other cancer types. And there's a column that gives information on clinical trials these patients might be eligible for. The remainder of the document -- as I mentioned, it's about a 20-page document -- goes into more detail on the clinical-trial options for these patients. OK. So, as I mentioned before, Wake Forest Baptist is now offering precision medicine alternatives for patients. A program was launched and commercially marketed beginning in about March of this year. And of course we are targeting patients -- not all patients -- we are targeting patients who have failed frontline treatment, where essentially their only option is palliative treatment. In the past six months, I just heard a report that over 300 tests have been performed. And we now know -- and this was actually vetted well before the release of the program -- that most insurance companies are now covering this, including MedCost, which is the insurance plan we have at our medical center. How are decisions made about how to treat patients? At Wake Forest, we have a molecular tumor board: clinical leaders, from surgeons to medical oncologists, who have a particular understanding of the genetic implications of treatment and who deliberate on each and every case to come up with a consensus on how that patient should or should not be treated based on the results of the test.
One thing we're interested in -- and being a cancer researcher, I have a particular interest in this -- is trying to understand what the benefit of precision medicine is to patients. Just because we're giving a patient a new and different drug that might be indicated based on how that drug was designed doesn't mean it's working for patients at all. There's actually been quite a bit of debate about how quickly we as a society are rushing into applications of this type of precision medicine. So what we're doing at Wake Forest is creating a database that integrates this genomic information -- which, by the way, can be many hundreds of gigabytes per patient -- with the medical records, so that we can track that information over time and learn from it. Some of the questions we want to ask are: of all the patients we give this test to, how many actually have actionable mutations? Of that fraction of patients, with what frequency are clinicians actually assigning a personalized treatment based on the results of the test? And then we'd love to know: what is the survival benefit to the patients who actually go on those treatments? Of course, it's going to take a huge database before we can understand that question in a cancer-specific context, but we could begin to ask it across all patients who meet the initial criteria for having the test done. So I had looked at the literature recently to see if there was anything out there on that yet, and I was surprised, a couple of days ago when I searched again, to discover a paper I had missed. It's from 2015, published in the JCO.
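Those tracking questions form a simple funnel: tested, actionable, treated, outcome. Here is a toy sketch of how that could be computed; the records, field names, and every number in them are made up for illustration and have nothing to do with the actual Wake Forest database.

```python
# Entirely fabricated patient records, for illustration only.
patients = [
    {"actionable": True,  "personalized_tx": True,  "months_survival": 14},
    {"actionable": True,  "personalized_tx": False, "months_survival": 7},
    {"actionable": False, "personalized_tx": False, "months_survival": 6},
    {"actionable": True,  "personalized_tx": True,  "months_survival": 11},
]

tested = len(patients)
actionable = [p for p in patients if p["actionable"]]
treated = [p for p in actionable if p["personalized_tx"]]
mean_survival = sum(p["months_survival"] for p in treated) / len(treated)

print(f"actionable mutation found: {len(actionable)}/{tested}")
print(f"personalized treatment assigned: {len(treated)}/{len(actionable)}")
print(f"mean survival on personalized treatment: {mean_survival:.1f} months")
```

Answering the survival question properly, of course, requires a much larger cohort and proper survival analysis, not a mean over a handful of records.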
That paper describes a large meta-analysis of multiple phase II clinical trials in which either protein-based or DNA-based biomarkers were used to look at treatment responses, where the decision to treat with a certain drug was based on the biomarker. I'll take a quote from the paper, which addressed the genomic biomarker class in particular. What they say is that across both the pooled analysis and the meta-analysis, the personalized arms -- the treatment arms that were personalized using a genomic biomarker -- had higher response rates and prolonged progression-free survival and overall survival, and the differences were statistically significant. When I read further into the paper, it was easy to deduce that the response rates, and progression-free survival in particular, were doubled or better in the pooled analysis of these patients. So I take this as statistical evidence that when you put it all together and look at the big picture, without breaking it out by cancer type, there is some benefit going to patients here -- something that could be seen as a doubling of favorable clinical responses. It's encouraging that rubbing against the paradigm this way is actually translating into benefits for patients. But I'm not here to say that precision medicine is the perfect way to go for everybody. It can certainly be argued differently. There was an interesting article published by a journalist in Massachusetts not long ago, who argued that the problem with precision medicine is that it's not quite precise -- at least not yet. I think he hit the nail on the head, because it is true: we talk about precision medicine as an end-all, be-all, and it's absolutely not. It's only part of the information we need to really determine the best treatment for a patient. It doesn't always tell us exactly what we need to know. So what do we need to know?
Well, a drug is usually designed not just against a gene but against a specific protein within a pathway, and we know that a lot of these mutations converge on similar pathways. So sometimes we need more information to really give us a better understanding of the biology. A good example of this, I think, again comes back to breast cancer -- think about HER2 analysis. Historically, immunohistochemical testing to determine protein levels, which gives us the 1-plus, 2-plus, 3-plus categories, is based on expression at the protein level, and yes, it's predictive of Herceptin responsiveness. But in more recent years, there have been studies looking at fluorescence in situ hybridization (FISH) and HER2 status. What that means is that they're not looking at the protein level; they're looking at chromosomal amplification of the gene, because that's how it gets started in breast cancer in virtually all cases. When you combine that information, you can enhance your diagnostic certainty: the patients who are 3-plus at the protein level, for example, or borderline 2-plus, but who are also FISH-positive for amplification, are likely to be the ones most likely to benefit from Herceptin. It's this type of thinking that I think is the future of precision medicine. And what I'm showing you here is that I like the idea -- I'm a believer in omics technologies and in what looking at hundreds and thousands of genes simultaneously can tell us, as long as we are capable of harnessing and interpreting that information. But what the Foundation One test addresses is only a portion of the information that is potentially available. If you look at this funny-looking figure, you're looking at slices of the genome -- that's the way I look at it. On one slice, we can look at sequence-level mutations. On another slice, called copy number, one can look at chromosomal amplifications and deletions.
The Foundation One test covers genes in terms of characterizing those types of alterations, but gene expression is not addressed. We can also look at the transcription of genes to see whether they're on or off. In fact, Oncotype DX for breast cancer does just that, and it's really the poster child for the multi-gene classifiers now emerging that are based solely on the expression levels of prognostic genes. The methylation status of the genome is becoming more important too, particularly as epigenetic drugs such as HDAC inhibitors are being developed that directly modulate cancer-associated epigenetic alterations. There's an as-yet largely clinically unexplored field of microRNAs, which we know regulate genes by the hundreds at a time. And there are now proteomics approaches: RPPA is a proteomics approach that looks at many hundreds of proteins and their phosphorylation status. Why? Because phosphorylation status, among other modifications to a protein, tells us whether that protein is functionally on or off. It has been the goal of The Cancer Genome Atlas (TCGA) -- a billion-dollar pilot study funded by the NCI that's been going on for about 10 years -- to really understand how these different planes of genomic and proteomic information can teach us about cancer in general. They've been systematically targeting 10 -- now, I think, around 15 to 20 -- different types of cancer, hundreds of tumors at a time, trying to get to at least 500 to 1,000 tumors from different institutions and then doing this full-on profiling of the genome to try to make sense of it. And when you're trying to make sense of it, you have to integrate that information.
The integration of that information has taught us that what I said before is true: a lot of this information identifies different genes across the genome, but their mutations -- their activation or inactivation -- converge on the activity of really a handful of pathways, estimated at something like 12 to 15. So, again, think about the HER2 example: we can look at the protein level, we can look at the copy-number level, and there are different ways to characterize functionality with these different omics technologies. I think the future very much holds the integration of these technology platforms to better inform us about the activity of these pathways and to identify drugs that target a pathway specifically. So, in summary: precision medicine today is largely driven by multiplex genomic tests that take advantage of next-generation sequencing, which is highly parallelized -- we can simultaneously look at many hundreds of genes at very fine resolution. We are offering this testing to treatment-refractory patients whose only other option is palliation. Early evidence now suggests that there is significant benefit for patients who receive genome-guided testing; the personalized treatments seem to be helping them as a whole. But I caution that precision medicine is still very early, and there's a lot of development to be done. It's going to grow through integration with other types of biomarkers that better infer the functional status of the pathways these mutations operate within. And with that, I'm happy to take any questions. Thanks.