26 July 2013
The current culture surrounding clinical trials hinders our ability to learn from our failures. Currently available technology and a slightly different mindset can fix this.
One of the biggest problems with anti-cancer clinical trials today is publication bias, by which I mean the non-publication of trial failures. For instance, groups A, B, C, and D could all be testing drug X on breast cancer, but if only group B gets a positive result, then all researchers like Yours Truly would see in the literature is group B's result, and it would be positive. Yours Truly would have no knowledge of the failed trials from groups A, C, and D; if he did, he might think twice about pursuing further trials with drug X.
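To make the distortion concrete, here is a toy simulation (all numbers invented for illustration) of many trials of a drug with no real benefit. If only trials that happen to show a "benefit" get published, the literature reports a positive effect where none exists:

```python
import random

random.seed(42)

def run_trial(n=100, true_effect=0.0, base_rate=0.30):
    """Simulate one two-arm trial of a drug with NO real benefit.
    Returns the observed difference in response rates (treatment - control)."""
    control = sum(random.random() < base_rate for _ in range(n))
    treated = sum(random.random() < base_rate + true_effect for _ in range(n))
    return (treated - control) / n

# 1000 hypothetical trials of an ineffective drug (true_effect = 0)
results = [run_trial() for _ in range(1000)]

# Suppose only trials showing a noticeable "benefit" ever reach the literature
published = [r for r in results if r > 0.10]

print(f"All trials:       mean observed effect = {sum(results)/len(results):+.3f}")
print(f"Published trials: mean observed effect = {sum(published)/len(published):+.3f}")
```

The full set of trials averages out to roughly zero effect, while the "published" subset looks uniformly positive, which is exactly what a reader of the journals would see.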
In the last six months the All Trials campaign has been gaining momentum in its push to get all clinical trials published, regardless of their outcome.
Publication of negative results could keep other research groups from wasting their time, or let them learn from the failures of previous trials… and that’s where a “big data” approach could revolutionize cancer treatment.
I recently read “Big Data: A Revolution That Will Transform How We Live, Work, and Think” by Viktor Mayer-Schonberger and Kenneth Cukier.
Mayer-Schonberger and Cukier define “Big Data” as “the ability of society to harness information in novel ways to produce useful insights or goods and services of significant value.” In particular, this is driven by two things:
1) The cost of storing data has plummeted, making it feasible to keep data around that might not yet have a clearly defined use.
2) An altered mindset, where more emphasis is placed on “what” and less on “why.” Less emphasis on causation and more on correlation. For instance, Wal-Mart did not care why pop-tarts became an especially popular item right before hurricanes. In fact, the reason was irrelevant to them. They simply ordered extra pop-tarts for their stores ahead of impending natural disasters and boosted their sales. They were able to do this because data storage is so cheap these days that they could keep a record of what every customer bought, when, and what else was in their basket at checkout. From this macro view of buying trends they discovered something that they did not expect and could not have anticipated from common knowledge alone.
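The kind of analysis behind that anecdote is simple co-occurrence counting over transaction logs. A minimal sketch, using an entirely made-up toy dataset (this is not Wal-Mart's actual data or method), might look like:

```python
# Toy transaction log: (week_type, basket). Data is invented purely to
# illustrate the "what, not why" style of correlation mining.
transactions = [
    ("hurricane", {"pop-tarts", "batteries", "water"}),
    ("hurricane", {"pop-tarts", "flashlight"}),
    ("hurricane", {"water", "batteries"}),
    ("hurricane", {"pop-tarts", "water"}),
    ("normal",    {"milk", "bread"}),
    ("normal",    {"pop-tarts", "milk"}),
    ("normal",    {"bread", "eggs"}),
    ("normal",    {"milk", "eggs"}),
]

def purchase_rate(item, week_type):
    """Fraction of baskets in the given week type that contain the item."""
    baskets = [b for w, b in transactions if w == week_type]
    return sum(item in b for b in baskets) / len(baskets)

# "Lift": how much more often an item sells in hurricane weeks vs. normal weeks.
lift = purchase_rate("pop-tarts", "hurricane") / purchase_rate("pop-tarts", "normal")
print(f"pop-tarts hurricane lift: {lift:.1f}x")  # 0.75 / 0.25 = 3.0x
```

No causal story is needed: a lift well above 1 is reason enough to stock extra boxes before the next storm.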
In terms of cancer, there’s something else that has plummeted: sequencing costs. An entire exome can be sequenced for about $500 these days! That was almost unthinkable just a few years ago. Pretty soon whole-genome sequencing might be available for a comparable cost.
So, here’s a vision I have: a center that runs many cancer clinical trials is partnered with a center that can do exome or genome sequencing for every patient enrolled in a cancer clinical trial, and this data would be published* regardless of the success or failure of the trial. Sequencing would be done on the primary tumor mass from surgery or before treatment. If possible, sequencing would also be done on tumor(s) post-trial or even post-mortem.
Regardless of success or failure, the results from clinical trials in cancer would be published, and this data would be available for future analysis (to whom exactly, in what capacity, and with what security would need to be worked out, along with other ethical concerns) to find trends not otherwise comprehensible without such a macro view. Items that could be investigated include:
1) What genetic profiles predict response or non-response to therapy?
2) Are there similarities between cancer types?
3) Do mechanisms of resistance correlate between types of cancer and types of drugs?
4) Do these insights correlate with other defined risk factors?
5) Questions or correlations that have yet to be considered(!)
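The first question on that list reduces, in its crudest form, to tallying response rates per mutation across pooled trial records. A minimal sketch, with hypothetical gene names and invented outcomes (none of this is real trial data):

```python
from collections import defaultdict

# Hypothetical pooled records: (mutated_genes, responded_to_therapy).
patients = [
    ({"TP53", "KRAS"}, False),
    ({"TP53"}, False),
    ({"BRCA1"}, True),
    ({"BRCA1", "TP53"}, True),
    ({"KRAS"}, False),
    ({"BRCA1", "KRAS"}, True),
]

# Tally responders / total for every gene seen in any tumor profile.
counts = defaultdict(lambda: [0, 0])  # gene -> [responders, total]
for genes, responded in patients:
    for gene in genes:
        counts[gene][1] += 1
        counts[gene][0] += responded

for gene, (resp, total) in sorted(counts.items()):
    print(f"{gene}: {resp}/{total} responded ({resp/total:.0%})")
```

In this toy dataset every BRCA1-mutant tumor responded while the others mostly did not; at the scale of many pooled trials, the same tally (with proper statistics on top) is what would surface unexpected predictors, including the ones nobody thought to ask about.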
In summary, the recent plummeting costs of both sequencing and data storage can allow us to learn much more from our failures (as well as our successes) by enabling a macro view of which genetic changes correlate with response or non-response to cancer therapy.