AJMC®:
What is your view on personalized medicine, and what does the term encompass?
Dickson:
Personalized medicine is an evolution of precision medicine; it cannot exist without precision medicine as the starting point. Precision medicine is the idea that we can accurately identify what makes a patient or their disease distinct, and how the two interact, so that we can arrive at a very specific treatment that will lead to a specific outcome in that patient. Personalized medicine, to me, is about reducing the waste of treatments that have no chance of working or of improving quality of life. We need to reduce waste, improve quality of life, and keep people out of the hospital. Personalized medicine’s goal is to improve healthcare at a lower cost than is currently being realized, by avoiding waste and unnecessary treatments.
AJMC®:
How are registries important in gathering data to support personalized medicine?
Dickson:
Precision diagnostics is the core of precision medicine. The problem we run into right now is that this core has expanded rapidly over the past 15 years, much more quickly than the evidence associated with the individual diagnostics. We have powerful tools, but we don’t know how to use them. Registries have 2 important functions. One, registries can require that the testing taking place has undergone some type of standardization. One of the problems we run into with advanced diagnostics such as next-generation sequencing is that every laboratory does them a little differently. Most laboratories can identify certain common types of alterations no matter what methods they use, but there are more specific alterations that different laboratories identify very differently. Registries can provide standardization of the testing so that everyone is speaking the same language. Two, registries can track the outcomes associated with the treatments chosen on the basis of testing, so that we can show the testing is beneficial and learn which patients, with which diagnoses, given which treatments, achieve specific outcomes. Registries standardize testing and collect outcomes data that we can use to build a body of evidence that will allow us to deliver truly personalized medicine. Registries are a key part of the evolution of precision medicine into personalized medicine.
AJMC®:
What is the role of CURE-1 specifically in advancing personalized medicine?
Dickson:
CURE-1 was built to act as a catalyst, bringing payers, technology, and laboratories together so that new testing can be introduced, payers can understand the quality and benefits of that testing, and the testing can continue to be improved upon. Take next-generation sequencing, which is such a key tool in precision medicine. We have established a registry that allows a payer to come in and cover the testing, saying to the laboratory, “We’ll pay for the testing, but we need you to submit your data to this registry,” and to the clinicians using the test, “Please tell us how you used this test, what treatments you gave the patient, and how the patient responded.” We standardize the testing through medical oversight committees, essentially asking, “Laboratory, are you doing a reasonable job of validating your tests? Here are the standards we are looking for.” Once a laboratory has met the requirements, we work to get payers to require that standardization and registry participation before the testing takes place. The clinicians then report to the registries so that we can learn how the testing is being used, how it is useful, and how we can improve it.
AJMC®:
How do you manage the large amount of data generated through next-generation sequencing? Do you generally use the results in terms of specific biomarkers, or do you use the entire result for individual patients?
Dickson:
All of the above. Every next-generation sequencing test begins with DNA preparation, and the prep differs from lab to lab because of the different libraries used. We capture what type of library prep was used and how it was performed. Then we capture what instruments were used and what kind of sequencing was done (eg, whole exome, targeted genetic regions, comprehensive analysis); this is the metadata. We also collect the raw data file that comes off the instrument, then the files that come out of the bioinformatics pipeline that interprets the raw data, and finally the report itself, in a standardized fashion. In this process, we are collecting granular genomic data at a level no one else in the world is collecting. We do this because running a next-generation sequencing test involves so many different steps that you have to understand what is happening at each one. There are hundreds of thousands of permutations of how you can do next-generation sequencing: different laboratory preps, different bioinformatics pipeline tools, and other differences between laboratories. There are papers showing that if you alter any of these aspects, you will get different results for certain specific queries. Our group is saying that we are going to collect each one of those data files.
What do we do with each of those data files? The raw files are approximately 20 to 30 gigabytes; whole exome data sets are even larger, approximately 1 terabyte. This information goes into storage because we recognize that those files are going to help once we start collecting outcomes. Then we can drill down into specific files and analyze what happened with each patient. For example, a laboratory reports that a patient is EGFR [epidermal growth factor receptor] positive. That final report is only kilobytes in size, and it says the patient has an EGFR mutation. If the patient does not respond to anti-EGFR therapy, we can go back and analyze whether the bioinformatics failed to pick up something it should have, or whether the problem was the biology of the tumor. The only way to know is to work backward through each step to determine whether the alteration was really there, and that is where you need the raw data files.
I genuinely believe that next-generation sequencing is much more sensitive than other technologies, but the question is whether that sensitivity translates into utility, and that is where registries come in. With registries, we identify how many patients respond to certain types of treatments, we find ways to improve processes to get better clinical results, and we can go back into patients’ data files. Rather than sifting through all the data hoping to find the truth, you need good data points to look back at. For example, I want to look at EGFR-altered patients, at the files generated from next-generation sequencing, at the treatments those patients received, and at their response rates. That is how our registries connect this granular genomic information to high-level clinical outcomes. Payers are interested because it helps determine whether the tests are leading to appropriate treatments and appropriate outcomes. Our job is to identify those benefits and make the testing better. We need these registries to do this research.
When we started this nonprofit organization, there were no good standards from lab to lab, and there were no good shared data (data were either not collected or locked up in electronic health records). We need nonprofit groups that can come in, collect data in a precompetitive space, and allow everyone to have access.
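To make the layers Dickson describes concrete, here is a minimal sketch in Python of what one registry record might look like. The class and field names are illustrative assumptions, not CURE-1’s actual schema; they simply mirror the steps he lists: run metadata, the raw instrument file, the bioinformatics pipeline output, the final clinical report, and the clinician-reported treatment and outcome.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SequencingMetadata:
    """Layer 1: how the test was run, captured so results can be compared across labs."""
    laboratory: str      # performing laboratory
    library_prep: str    # library preparation kit or protocol
    instrument: str      # sequencing instrument
    assay_scope: str     # eg, "whole exome", "targeted panel", "comprehensive"

@dataclass
class RegistrySubmission:
    """One patient's record, mirroring the files collected at each step."""
    patient_id: str                  # de-identified registry identifier
    metadata: SequencingMetadata     # layer 1: run metadata
    raw_data_uri: str                # layer 2: raw instrument file (~20-30 GB), stored by reference
    pipeline_output_uri: str         # layer 3: bioinformatics pipeline output (variant calls)
    final_report: dict = field(default_factory=dict)  # layer 4: kilobyte-sized report
    treatment: Optional[str] = None  # clinician-reported treatment
    outcome: Optional[str] = None    # clinician-reported response

# Hypothetical example: the EGFR case above, traceable from report back to raw data.
record = RegistrySubmission(
    patient_id="anon-0001",
    metadata=SequencingMetadata(
        laboratory="Lab A",
        library_prep="hybrid-capture kit X",
        instrument="sequencer Y",
        assay_scope="targeted panel",
    ),
    raw_data_uri="registry-store://raw/anon-0001.fastq.gz",
    pipeline_output_uri="registry-store://calls/anon-0001.vcf",
    final_report={"EGFR": "mutation detected"},
    treatment="anti-EGFR therapy",
    outcome="no response",
)

Keeping the large files by reference while storing the small report inline reflects the workflow described above: routine queries use the kilobyte-sized report, and the raw files are pulled only when an unexpected outcome needs to be traced backward through the pipeline.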
AJMC®:
How are new methods of next-generation sequencing continuing to reduce the cost and change the value proposition of these diagnostic platforms?
Dickson:
We have to be careful to note that there is a human component in every part of this. The technology, the data, and the methods used to examine genes are so complicated that we need people to verify that what we are getting is what we think we are getting. At some point, we can reduce the cost of the sequencing itself to pennies on the dollar, but we will still need bioinformatics experts, bioinformatics mathematicians, and molecular pathologists to determine what the results mean. Overall sequencing costs will come down, but unless we get better at informatics, we will still see a personnel cost that will not drop below a certain level for quite some time. With that said, testing done by the best individuals in the nation still comes in below the $2000 to $3000 range. Compare that with a hospitalization or with a drug that does not work. These tests save costs when they let you avoid treatments that will not work, or identify and stop toxicities in patients who will not benefit.
AJMC®:
How are tests such as next-generation sequencing enabling more effective selection of currently available agents for subgroups of patients? Are there any important findings that have changed practice or changed considerations about what is already being done?
Dickson:
It is a reality of medicine that a pharmaceutical company wants to identify a group of patients who respond to a therapy, so it will appropriately enrich its patient population to show a benefit. To do that, you find a companion diagnostic, you lock it down, and you test those patients. We have done this with HER2: patients who are HER2 IHC 3+ or HER2 FISH-amplified will benefit from therapy. However, is it possible that patients with a lower level of HER2 will also respond? Probably, because the testing is not perfect. As testing improves, we are going to identify responders with tests other than the companion diagnostic the drug was approved with. Next-generation sequencing can allow us to find patients who would otherwise be missed. For example, consider EGFR and lung cancer. The original testing looked at very specific alterations, such as exon 19 and some other regions. We know we are missing patients with large translocations, insertions, and deletions that simply were not picked up by those diagnostics but that might be picked up by a more precise test such as next-generation sequencing. You can identify those patients and put them on a therapy they would not traditionally have received, because they did not fit the neat box defined by the companion diagnostic, and they, too, will benefit.
AJMC®:
Currently, many payers limit coverage of genetic testing to companion diagnostics approved with medications. How is this likely to change as next-generation sequencing becomes less costly over time and, as you mentioned, can potentially offer a more sensitive diagnosis in precision medicine?
Dickson:
We can never talk about sensitivity without talking about specificity. Sensitivity means I can identify a patient who has a biomarker that predicts response to a specific treatment, so I can personalize the medication to that patient. Specificity is about how many tested patients come back as false-positives. The problem with false-positives is that we put people on very expensive therapies that will not benefit them, because no test is 100% sensitive and specific. One-hundred percent sensitive means you identify everyone who has the alteration and will respond to the treatment, missing no one who could be treated. One-hundred percent specific means that everyone who tests positive truly has the alteration, with no false-positives. What we do not know in precision medicine is what the clinical sensitivity and specificity of these tests are. And so, with payers, we recognize that we have to find a balance. If a test misses patients who would have benefited at lower cost and with lower hospitalization rates, we end up treating those patients with broad-based chemotherapy instead. There is also the concern about false-positives: we put those patients on targeted therapy that is costly and will not help them, and we increase costs. We need to find the right balance between the 2, between sensitivity and specificity, and the only way to find it is by collecting data through registries.
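For reference, these are the standard definitions from test statistics (not specific to any registry), where TP, FP, TN, and FN count true-positive, false-positive, true-negative, and false-negative results:

\[
\text{sensitivity} = \frac{TP}{TP + FN}, \qquad \text{specificity} = \frac{TN}{TN + FP}
\]

A test that is 100% sensitive has no false-negatives (FN = 0), so no potential responder is missed; a test that is 100% specific has no false-positives (FP = 0), so no patient is put on a costly targeted therapy that cannot help.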
AJMC®:
How does precision medicine play into the shifting paradigm of drug review and approval, in which real-world data, registries, and big data analytics are used to make real change happen in the field of medicine?
Dickson:
I always tell people: you can digitize the complete works of Shakespeare and analyze them any way you want, and you will never find an answer to treat the common cold or the common tumor. The size of the data does not matter. What matters is the quality of the data, the granularity of the data, and how well you can access it. When we start looking at big data, the hope is that we have enough granularity to identify individual tests for individual patients. The problem we are running into is that we do not have standardization of next-generation sequencing, so different tests have different sensitivity and specificity levels. You could look through 100 patients tested by next-generation sequencing by various methods and conclude that you did not see a substantial benefit from the testing, not because the testing did not work but because there were so many variables in play, and so little granularity around them, that you could not determine the benefit.
What the FDA, payers, and laboratories are asking is, “Tell us if it works,” and the only way we can do that is with this level of granularity: being able to look at the test, at the patient, and at the treatments. As time goes on, registries are going to be great adjuncts to drug approvals, because registries can come back and answer questions that pharmaceutical companies did not ask initially when they had to choose a specific test, such as which patients we are missing from therapy. Consider Bristol-Myers Squibb’s experience in lung cancer: they chose a PD-L1 [programmed death ligand 1] cutoff so low that they did not meet significance. None of us really believed the drug didn’t work, just that they chose the wrong biomarker and the wrong level. Even when the FDA approves a drug with a companion diagnostic, we could potentially go back and find those other patients who would respond.
Here is a crucial point about lung cancer, though. There are so many different tests you have to run on a lung cancer patient. When a patient has metastatic lung cancer, the diagnosis often rests on a fine-needle aspiration biopsy, and we estimate you can get 3 to 4 companion diagnostic tests out of 1 biopsy. Any time we rebiopsy a patient, we cost the payer significant money. By doing next-generation sequencing on the tissue specimen up front, we can replace many of these companion diagnostics and save payers money by avoiding rebiopsies. Or, later in the disease course, when a liquid biopsy is possible, you can save money by avoiding a tissue biopsy altogether. It is not so much the diagnostic testing as the procedure to get the tissue where savings can be realized initially, particularly in the lung cancer arena.
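As a back-of-the-envelope illustration of the rebiopsy arithmetic (a minimal sketch in Python; every dollar figure below is a hypothetical placeholder, not a number from the interview or from CURE-1), suppose a patient’s workup requires 7 biomarkers and each biopsy yields tissue for about 4 single-marker tests:

import math

# Hypothetical illustration of the rebiopsy point; all dollar figures are
# made-up placeholders, and only the structure of the comparison matters.
MARKERS_NEEDED = 7      # biomarkers to assess for one patient (hypothetical)
TESTS_PER_BIOPSY = 4    # interview's estimate: 3 to 4 companion tests per biopsy
COST_BIOPSY = 5000      # placeholder cost of one tissue biopsy procedure
COST_SINGLE_TEST = 500  # placeholder cost of one companion diagnostic
COST_NGS_PANEL = 3000   # placeholder cost of an up-front NGS panel

# Sequential single-marker testing: tissue runs out, forcing rebiopsies.
biopsies = math.ceil(MARKERS_NEEDED / TESTS_PER_BIOPSY)  # 2 biopsies
sequential_cost = biopsies * COST_BIOPSY + MARKERS_NEEDED * COST_SINGLE_TEST

# Up-front NGS: one biopsy covers all markers in a single assay.
upfront_cost = COST_BIOPSY + COST_NGS_PANEL

print(f"sequential companion testing: ${sequential_cost}")  # $13,500
print(f"up-front NGS panel:           ${upfront_cost}")     # $8,000

Under these placeholder numbers, the savings come almost entirely from the avoided second biopsy, which is exactly the point above: it is the tissue-acquisition procedure, not the assay itself, that drives the cost.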
AJMC®:
How does next-generation sequencing help address the heterogeneous features of individual cancers, in which a cancer comprises multiple clones, each with different mutations, that evolve over time?
Dickson:
That is a great question. If I have a tumor with 5 different subtypes, and we identify 1 clone that makes up only 3% of the tumor burden, that finding may not matter for that specific patient. So there is an argument that we should be doing multiple biopsies to check for different clones: the initial biopsy may have shown only one type of biomarker, or may have shown no biomarker at all because we missed the area that is now proliferating. The problem is that I do not know of data showing a benefit for sequential biomarker testing, other than in lung cancer patients who were diagnosed with an EGFR alteration, fail anti-EGFR therapy, and go on to develop T790M resistance mutations. I think that is the best example of where serial testing can have benefit, but it may not require next-generation sequencing to detect that 1 very specific alteration. As time goes on, we may figure out how this testing needs to be used; maybe we do need to look at several different biopsies or liquid biopsies. But until we get more data on how that testing leads to treatments and outcomes, saying, “Let’s do multiple biopsies, but we aren’t sure if it’s going to make any difference to the patient,” is going to be hard for payers to swallow.
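Whether a 3% subclone is even visible depends on how deeply you sequence. As a rough illustration (a minimal sketch in Python under a simple binomial read model; the allele fraction, depths, and read threshold are all hypothetical, not taken from any assay discussed here):

from math import comb

def detection_probability(depth: int, allele_fraction: float, min_reads: int) -> float:
    """P(at least min_reads of depth total reads carry the variant), binomial model."""
    p_miss = sum(
        comb(depth, i) * allele_fraction**i * (1 - allele_fraction) ** (depth - i)
        for i in range(min_reads)
    )
    return 1.0 - p_miss

# Example: a subclonal variant present in ~1.5% of reads (hypothetical allele
# fraction), with at least 5 supporting reads required to call it.
for depth in (100, 500, 2000):
    p = detection_probability(depth, 0.015, 5)
    print(f"depth {depth:>4}: P(detect) = {p:.2f}")  # ~0.02, ~0.87, ~1.00

At shallow depth the subclone is essentially invisible, which is one reason serial or deeper testing pays off mainly when, as with T790M, a treatment decision is waiting on the answer.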
AJMC®:
As a practicing clinician, how have you already seen precision medicine change the way care is delivered, and what do you see for the future?
Dickson:
One case report: A 45-year-old woman comes in with metastatic non-small cell lung cancer. All the major labs say she is EGFR-negative. We give her 6 months of cytotoxic chemotherapy with no benefit, and with adverse effects so extensive that she is unable to work or do anything except come into the office, get treatment, and go home. We then order a test that her insurance company would not even pay for at the time and find that she has an EGFR-mutated lung cancer. Once we found the EGFR alteration and switched her to targeted therapy, everything changed: now, 4 years out, she is still working and still feeling great. From a quality standpoint, better testing led to a better diagnosis and better treatment. We are seeing this all the time.
When I started my career 17 years ago, we treated everyone as if they were the average patient. Then we entered the molecular era, which began with agents targeting the estrogen receptor: patients whose tumors tested positive for estrogen receptors could be treated with drugs like tamoxifen, sparing them additional treatment. First selective estrogen receptor modulators, then HER2-targeting agents, and now, in the most recent 5 to 10 years, we have started to define malignancy not by the tissue of origin but by the molecular basis of the disease. As time goes on, I expect it will not matter where the tumor comes from. In pancreatic cancer, for example, I might say, “This cancer is subtype 573,” and state the mutation burden with it. Also important: if anyone thinks sequencing is the endgame here, it is not. It is actually the starting point. We have to add proteins; we have to understand transcription, the epigenetics, the epibiome; we have to understand all these things together and put them together to identify what is going on in patients. As molecular and precision medicine evolves, we have to lock down what we know: now I know what sequencing does, and I can standardize that; now let’s look at proteins and see what we can learn by adding proteins to sequencing; now let’s look at immune function. We have to be able to dissect the tumor on a molecular basis to really determine what we can do with it. We need to build standardized registries, with standardized testing and with outcomes linked to that testing, so we can understand what the testing is doing and prove its value. If we don’t use those standards as we add further layers of precision medicine and precision diagnostics, we are going to end up with a lot of information, a lot of big data, and we will never know what to do with it, because it will be far too complicated and none of it will match the other big data out there.
AJMC®:
Finally, what would you say are the most important takeaways for payers in terms of understanding the future of personalized medicine, and how are their coverage policies likely to change as a result of everything happening in personalized medicine over the next few years?
Dickson:
Payers have more control than they will ever know over how medicine evolves in this nation, and they truly have the ability to embrace precision medicine in ways that can ultimately lead to decreased cost and better quality of care. But it is going to require that payers take an active, not a passive, role in the evolution of precision medicine. Payers can look at their policies and introduce precision medicine sooner rather than later in areas where we know it is beneficial, such as lung cancer, but if they do, they need to make sure that the laboratory is a good-quality laboratory and that it participates in further data-collection efforts. Without those efforts, we are never going to get further answers. Payers need to support not only precision medicine but also standards and data collection, not because the testing is experimental but because that information is necessary to get where we want to go. Payers need to be a key part of that, helping drive the standards and the data collection that will be so important to reducing cost and improving care for patients.