
Steps on the Road to Predictive Medicine
Richard Simon, D.Sc.
Chief, Biometric Research Branch, National Cancer Institute
http://brb.nci.nih.gov

BRB Website (brb.nci.nih.gov)
- PowerPoint presentations
- Reprints & technical reports
- BRB-ArrayTools software
- Web-based sample size planning
  - Clinical trials using predictive biomarkers
  - Development of gene expression based predictive classifiers

- Many cancer treatments benefit only a minority of the patients to whom they are administered
  - Particularly true for molecularly targeted drugs
- Being able to predict which patients are likely to benefit would
  - Save patients from unnecessary toxicity and enhance their chance of receiving a drug that helps them
  - Help control medical costs
  - Improve the success rate of clinical drug development

- “Hypertension is not one single entity, neither is schizophrenia. It is likely that we will find 10 if we are lucky, or 50, if we are not very lucky, different disorders masquerading under the umbrella of hypertension. I don’t see how once we have that knowledge, we are not going to use it to genotype individuals and try to tailor therapies, because if they are that different, then they’re likely fundamentally … different problems…”
  - George Poste

Biomarkers
- Prognostic
  - Measured before treatment to indicate long-term outcome for patients untreated or receiving standard treatment
- Predictive
  - Measured before treatment to select good patient candidates for a particular treatment

Prognostic and Predictive Biomarkers in Oncology
- Single gene or protein measurement, e.g.
  - HER2 protein staining 2+ or 3+
  - HER2 amplification
  - KRAS mutation
- Scalar index or classifier that summarizes the contributions of multiple genes/proteins
  - Empirically determined by genome-wide correlation of gene expression with patient outcome after treatment

Prognostic Factors in Oncology
- Most prognostic factors are not used because they are not therapeutically relevant
- Most prognostic factor studies do not have a clear medical objective
  - They use a convenience sample of patients for whom tissue is available
  - Generally the patients are too heterogeneous to support therapeutically relevant conclusions

Prognostic Biomarkers Can Be Therapeutically Relevant
- <10% of node-negative ER+ breast cancer patients require or benefit from the cytotoxic chemotherapy that they receive
- OncotypeDX
  - 21-gene RT-PCR assay for FFPE tissue

Predictive Biomarkers
- In the past, often studied as unfocused post-hoc subset analyses of RCTs
  - Numerous subsets examined
  - Same data used to define the subsets and to compare treatments within the subsets
  - No control of type I error

- Statisticians have taught physicians not to trust subset analysis unless the overall treatment effect is significant
  - This was good advice for post-hoc, data-dredging subset analysis
- For many molecularly targeted cancer drugs being developed, subset analysis will be an essential component of the primary analysis, and analysis of the subsets will not be contingent on demonstrating that the overall effect is significant

Prospective Co-Development of Drugs and Companion Diagnostics
1. Develop a completely specified genomic classifier of the patients likely to benefit from a new drug
2. Establish analytical validity of the classifier
3. Use the completely specified classifier to design and analyze a new clinical trial to evaluate the effectiveness of the new treatment, with a pre-defined analysis plan that preserves the overall type I error of the study

Guiding Principle
- The data used to develop the classifier must be distinct from the data used to test hypotheses about treatment effect in subsets determined by the classifier
  - Developmental studies can be exploratory
  - Studies on which treatment effectiveness claims are to be based should be definitive studies that test a treatment hypothesis in a patient population completely pre-specified by the classifier

New Drug Developmental Strategy I
- Restrict entry to the phase III trial based on the binary predictive classifier, i.e. a targeted design

Schematic (Design I):
- Using phase II data, develop a predictor of response to the new drug
- Patient predicted responsive → randomize: new drug vs control
- Patient predicted non-responsive → off study

Applicability of Design I
- Primarily for settings where the classifier is based on a single gene whose protein product is the target of the drug
  - e.g. Herceptin
- With a substantial biological basis for the classifier, it may be ethically unacceptable to expose classifier-negative patients to the new drug

Evaluating the Efficiency of Strategy (I)
- Simon R and Maitournam A. Evaluating the efficiency of targeted designs for randomized clinical trials. Clinical Cancer Research 10:6759-63, 2004; Correction and supplement 12:3229, 2006
- Maitournam A and Simon R. On the efficiency of targeted clinical trials. Statistics in Medicine 24:329-339, 2005

- Relative efficiency of the targeted design depends on
  - Proportion of patients who test positive
  - Effectiveness of the new drug (compared to control) for test-negative patients
- When less than half of patients test positive and the drug has little or no benefit for test-negative patients, the targeted design requires dramatically fewer randomized patients
- The targeted design may require fewer or more screened patients than the standard design

Trastuzumab (Herceptin)
- Metastatic breast cancer
- 234 randomized patients per arm: 90% power for a 13.5% improvement in 1-year survival over a 67% baseline, at the 2-sided 0.05 level
- If the benefit were limited to the 25% test-positive patients, the overall improvement in survival would have been 3.375%
  - 4025 patients/arm would have been required (see the sketch below)
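As a rough illustration of the arithmetic behind this slide and the previous one, the sketch below (mine, not from the presentation) dilutes the 13.5% survival benefit by the 25% marker-positive prevalence and recomputes the per-arm sample size with a standard normal-approximation two-proportion formula. The slide's exact figures were presumably obtained with a different method, so the output lands close to, but not exactly on, the quoted 234 and 4025.

```python
from scipy.stats import norm

def n_per_arm(p_control, delta, alpha=0.05, power=0.90):
    """Per-arm sample size for comparing two proportions
    (normal approximation, two-sided alpha)."""
    p1, p2 = p_control, p_control + delta
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    p_bar = (p1 + p2) / 2
    num = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5 +
           z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return num / delta ** 2

prevalence = 0.25                    # fraction of patients who are test-positive
delta_pos = 0.135                    # 1-year survival benefit in test-positive patients
delta_all = prevalence * delta_pos   # diluted benefit if all-comers are enrolled

print(delta_all)                           # 0.03375, as on the slide
print(round(n_per_arm(0.67, delta_pos)))   # targeted design: a few hundred per arm
print(round(n_per_arm(0.67, delta_all)))   # untargeted design: a few thousand per arm
```

The ratio of the last two numbers is the point of the previous slide: diluting the effect by the prevalence inflates the randomized sample size roughly by the square of the dilution factor.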

Web-Based Software for Comparing Sample Size Requirements
- http://brb.nci.nih.gov

Developmental Strategy (II) schematic:
- Develop predictor of response to new Rx
- Predicted responsive to new Rx → randomize: new Rx vs control
- Predicted nonresponsive to new Rx → randomize: new Rx vs control

Developmental Strategy (II)
- Do not use the test to restrict eligibility, but to structure a prospective analysis plan
- Having a prospective analysis plan is essential
- "Stratifying" (balancing) the randomization is useful to ensure that all randomized patients have tissue available, but is not a substitute for a prospective analysis plan
- The purpose of the study is to evaluate the new treatment overall and in the pre-defined subsets, not to modify or refine the classifier
- The purpose is not to demonstrate that repeating the classifier development process on independent data results in the same classifier

Analysis Plan A
- Compare the new drug to the control for classifier-positive patients
  - If p+ > 0.05, make no claim of effectiveness
  - If p+ ≤ 0.05, claim effectiveness for the classifier-positive patients, and
    - Compare the new drug to the control for classifier-negative patients using a 0.05 threshold of significance

Analysis Plan B (limited confidence in the test)
- Compare the new drug to the control overall, for all patients, ignoring the classifier
  - If p_overall ≤ 0.03, claim effectiveness for the eligible population as a whole
- Otherwise, perform a single subset analysis evaluating the new drug in the classifier-positive patients
  - If p_subset ≤ 0.02, claim effectiveness for the classifier-positive patients

Analysis Plan C
- Test for a difference (interaction) between the treatment effect in test-positive patients and the treatment effect in test-negative patients
- If the interaction is significant at level α_int, then compare treatments separately for test-positive patients and test-negative patients
- Otherwise, compare treatments overall
(a decision-rule sketch covering Plans A, B, and C follows below)
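The three analysis plans are simple pre-specified gatekeeping rules on p-values. The sketch below spells them out as decision functions; it is not from the presentation, the function names are mine, and the α_int = 0.10 default and the 0.05 levels inside Plan C's branches are illustrative choices the slides do not fix.

```python
def plan_a(p_pos, p_neg):
    """Plan A: test classifier-positive patients first; only if significant,
    also test classifier-negative patients, each at the 0.05 level."""
    if p_pos > 0.05:
        return "no claim"
    claim = "effective in classifier-positive patients"
    if p_neg <= 0.05:
        claim += " and in classifier-negative patients"
    return claim

def plan_b(p_overall, p_pos):
    """Plan B: split the 0.05 alpha into 0.03 for the overall test
    and 0.02 for a single classifier-positive subset test."""
    if p_overall <= 0.03:
        return "effective in the eligible population as a whole"
    if p_pos <= 0.02:
        return "effective in classifier-positive patients"
    return "no claim"

def plan_c(p_interaction, p_pos, p_neg, p_overall, alpha_int=0.10):
    """Plan C: a treatment-by-marker interaction test decides between
    subset-specific comparisons and a single overall comparison."""
    if p_interaction <= alpha_int:
        return {"test-positive": p_pos <= 0.05, "test-negative": p_neg <= 0.05}
    return {"overall": p_overall <= 0.05}

# Example: a strong effect confined to marker-positive patients
print(plan_a(p_pos=0.01, p_neg=0.40))
print(plan_b(p_overall=0.08, p_pos=0.01))
print(plan_c(p_interaction=0.03, p_pos=0.01, p_neg=0.60, p_overall=0.08))
```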

Sample Size Planning for Analysis Plan C
- 88 events in test-positive patients are needed to detect a 50% reduction in hazard at the 5% two-sided significance level with 90% power
- If 25% of patients are positive, when there are 88 events in positive patients there will be about 264 events in negative patients
  - 264 events provide 90% power for detecting a 33% reduction in hazard at the 5% two-sided significance level
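These event counts can be approximately reproduced with the standard Schoenfeld approximation for the number of events required by a two-sided log-rank test with 1:1 randomization, D = 4(z_{1-α/2} + z_β)² / (ln HR)². The check below is mine, not part of the presentation; rounding conventions explain small differences from the quoted 88 and 264.

```python
from math import log, ceil
from scipy.stats import norm

def events_needed(hazard_ratio, alpha=0.05, power=0.90):
    """Schoenfeld approximation: events required by a two-sided log-rank
    test with 1:1 randomization."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil(4 * z ** 2 / log(hazard_ratio) ** 2)

print(events_needed(0.50))   # 50% hazard reduction -> about 88 events
print(events_needed(0.67))   # 33% hazard reduction -> roughly 260-265 events
```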

Biomarker Adaptive Threshold Design
Wenyu Jiang, Boris Freidlin & Richard Simon. JNCI 99:1036-43, 2007

Biomarker Adaptive Threshold Design
- Randomized phase III trial comparing new treatment E to control C
- Survival or DFS endpoint

Biomarker Adaptive Threshold Design
- A predictive index B has been identified that is thought to identify patients likely to benefit from E relative to C
- Eligibility is not restricted by the biomarker
- No threshold for the biomarker has been determined

Analysis Plan
- S(b) = log likelihood ratio statistic for the treatment versus control comparison in the subset of patients with B ≥ b
- Compute S(b) for all possible threshold values
- Determine T = max{S(b)}
- Compute the null distribution of T by permuting treatment labels
  - Permute the labels of which patients are in which treatment group
  - Re-analyze to determine T for the permuted data
  - Repeat for 10,000 permutations
- Compute point and bootstrap confidence interval estimates of the threshold b
(a permutation-test sketch follows below)
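To make the permutation procedure concrete, here is a minimal sketch of my own. It is simplified: the log likelihood ratio from a survival model is replaced by a plain squared z-statistic on a continuous outcome, and the bootstrap threshold estimate is omitted. The structure on the slide is kept: compute S(b) over all candidate thresholds, take T = max S(b), and build the null distribution of T by permuting treatment labels.

```python
import numpy as np

rng = np.random.default_rng(0)

def subset_stat(y, treat, biomarker, b):
    """Stand-in for S(b): squared z-statistic comparing treatment arms on
    outcome y within the subset of patients with biomarker >= b.
    (The published design uses a log likelihood ratio from a survival model;
    a simple z^2 keeps this sketch self-contained.)"""
    m = biomarker >= b
    y1, y0 = y[m & (treat == 1)], y[m & (treat == 0)]
    if len(y1) < 5 or len(y0) < 5:           # skip nearly empty subsets
        return -np.inf
    se = np.sqrt(y1.var(ddof=1) / len(y1) + y0.var(ddof=1) / len(y0))
    return ((y1.mean() - y0.mean()) / se) ** 2

def adaptive_threshold_test(y, treat, biomarker, n_perm=10_000):
    """T = max over candidate thresholds of S(b); permutation p-value for T."""
    thresholds = np.unique(biomarker)
    T_obs = max(subset_stat(y, treat, biomarker, b) for b in thresholds)
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(treat)         # permute treatment labels
        T_perm = max(subset_stat(y, perm, biomarker, b) for b in thresholds)
        count += T_perm >= T_obs
    return T_obs, (count + 1) / (n_perm + 1)

# Toy data: benefit confined to patients with biomarker above 0.6
n = 200
biomarker = rng.uniform(size=n)
treat = rng.integers(0, 2, size=n)
y = rng.normal(size=n) + 1.0 * treat * (biomarker > 0.6)
print(adaptive_threshold_test(y, treat, biomarker, n_perm=1000))
```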

DNA Microarray Technology
- Powerful tool for understanding mechanisms and enabling predictive medicine
- Challenges the ability of biomedical scientists to analyze data
- Challenges statisticians with new problems for which existing analysis paradigms are often inapplicable
- Excessive hype and skepticism

- Good microarray studies have clear objectives, but generally not gene-specific mechanistic hypotheses
- Design and analysis methods should be tailored to the study objectives

Class Prediction
- Predict which tumors will respond to a particular treatment
- Predict survival or relapse-free survival risk group

Class Prediction ≠ Class Comparison (Prediction is not Inference)
- The criteria for gene selection for class prediction and for class comparison are different
  - For class comparison, the false discovery rate is important
  - For class prediction, predictive accuracy is important
- Most statistical methods were not developed for p >> n prediction problems

Evaluating a Classifier
- "Prediction is difficult, especially the future."
  - Niels Bohr
- But easier than "understanding"

Validating a Predictive Classifier
- Goodness of fit is no evidence of prediction accuracy for independent data
- Demonstrating statistical significance of prognostic factors is not the same as demonstrating predictive accuracy
- Demonstrating stability of the selected genes is not demonstrating predictive accuracy of a model for independent data

Types of Validation for Prognostic and Predictive Biomarkers
- Analytical validation
  - When there is a gold standard: sensitivity, specificity
  - No gold standard: reproducibility and robustness
- Clinical validation
  - Does the biomarker predict what it's supposed to predict for independent data?
- Clinical utility
  - Does use of the biomarker result in patient benefit?
  - Depends on available treatments and practice standards

Internal Clinical Validation of a Predictive Classifier
- Split-sample validation
  - Training set
    - Used to select features, select the model type, and fit all parameters, including cut-off thresholds and tuning parameters
  - Test set
    - Count errors for the single, completely pre-specified model
- Cross-validation
  - Omit one sample
  - Build a completely specified classifier from scratch in the training set of n-1 samples
  - Classify the omitted sample
  - Repeat; total the number of classification errors

- Cross-validation is only valid if the test set is not used in any way in the development of the model. Using the complete set of samples to select genes violates this assumption and invalidates cross-validation
- The cross-validated estimate of misclassification error is an estimate of the prediction error of the model obtained by applying the specified algorithm to the full dataset
(see the sketch below)
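A minimal sketch of the gene-selection point, using simulated pure-noise data and components I chose for illustration (t-statistic filtering plus nearest-centroid classification; the presentation does not prescribe these). Selecting genes on the full dataset before cross-validating gives a misleadingly low error rate, while redoing the selection inside each leave-one-out fold correctly estimates an error around 50%.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, k = 40, 2000, 10           # samples, genes, genes selected
X = rng.normal(size=(n, p))      # pure noise: no gene is truly informative
y = np.repeat([0, 1], n // 2)

def select_genes(X, y, k):
    """Pick the k genes with the largest two-sample t-statistics."""
    m1, m0 = X[y == 1].mean(0), X[y == 0].mean(0)
    s = np.sqrt(X[y == 1].var(0, ddof=1) / (y == 1).sum()
                + X[y == 0].var(0, ddof=1) / (y == 0).sum())
    return np.argsort(-np.abs((m1 - m0) / s))[:k]

def nearest_centroid(X_tr, y_tr, x_new):
    """Assign x_new to the class with the closer training centroid."""
    d0 = np.linalg.norm(x_new - X_tr[y_tr == 0].mean(0))
    d1 = np.linalg.norm(x_new - X_tr[y_tr == 1].mean(0))
    return int(d1 < d0)

def loocv_error(X, y, k, select_inside):
    genes_all = select_genes(X, y, k)        # selection on the FULL data (wrong)
    errors = 0
    for i in range(len(y)):
        tr = np.arange(len(y)) != i
        genes = select_genes(X[tr], y[tr], k) if select_inside else genes_all
        pred = nearest_centroid(X[tr][:, genes], y[tr], X[i, genes])
        errors += pred != y[i]
    return errors / len(y)

print("selection outside the loop:", loocv_error(X, y, k, select_inside=False))
print("selection inside the loop: ", loocv_error(X, y, k, select_inside=True))
```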

Sample Size Planning References
- K Dobbin, R Simon. Sample size determination in microarray experiments for class comparison and prognostic classification. Biostatistics 6:27, 2005
- K Dobbin, R Simon. Sample size planning for developing classifiers using high dimensional DNA microarray data. Biostatistics 8:101, 2007
- K Dobbin, Y Zhao, R Simon. How large a training set is needed to develop a classifier for microarray data? Clinical Cancer Res 14:108, 2008

Sample Size Planning for Classifier Development
- The expected value (over training sets) of the probability of correct classification, PCC(n), should be within a specified tolerance γ of the maximum achievable PCC(∞)
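Written as a formula, the planning criterion on this slide is the following, where γ denotes the tolerance (a symbol dropped from the slide text) and PCC(n) ≤ PCC(∞):

```latex
\[
  \mathbb{E}_{\text{training sets}}\bigl[\mathrm{PCC}(n)\bigr] \;\ge\; \mathrm{PCC}(\infty) - \gamma
\]
```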

[Figure: Sample size as a function of effect size (log-base-2 fold change between classes divided by the standard deviation). Two different tolerances γ shown. Each class is equally represented in the population; 22,000 genes on an array.]

BRB-ArrayTools
- Architect – R Simon
- Developer – Emmes Corporation
- Contains a wide range of analysis tools that I have selected
- Designed for use by biomedical scientists
- Imports data from all gene expression and copy-number platforms
  - Automated import of data from the NCBI Gene Expression Omnibus
- Highly computationally efficient
- Extensive annotations for identified genes
- Integrated analysis of expression data, copy number data, pathway data, and other biological data

Predictive Classifiers in BRB-ArrayTools
- Classifiers
  - Diagonal linear discriminant
  - Compound covariate
  - Bayesian compound covariate
  - Support vector machine with inner product kernel
  - K-nearest neighbor
  - Nearest centroid
  - Shrunken centroid (PAM)
  - Random forest
  - Tree of binary classifiers for k classes
- Survival risk-group
  - Supervised PCs
  - With clinical covariates
  - Cross-validated K-M curves
- Predict quantitative trait
  - LARS, LASSO
- Feature selection options
  - Univariate t/F statistic
  - Hierarchical random variance model
  - Restricted by fold effect
  - Univariate classification power
  - Recursive feature elimination
  - Top-scoring pairs
- Validation methods
  - Split-sample
  - LOOCV
  - Repeated k-fold CV
  - .632+ bootstrap
  - Permutational statistical significance

[Figure: Distant event-free survival. Cross-validated Kaplan-Meier curves for risk groups using the 50th percentile cut-off; panels: gene model, covariates model, combined model.]

BRB-ArrayTools, July 2008
- 8934 registered users
- 68 countries
- 616 citations
- 19,628 hits/month to the website
- Registered users
  - 4655 in the US
    - 898 at NIH (387 at NCI)
    - 2994 US EDU
    - 1161 US Gov (non-NIH)
  - 4655 non-US

Countries with Most BRB-ArrayTools Registered Users
- Germany 292, France 289, Canada 287, UK 278, Italy 250, China 241, Netherlands 240, Taiwan 222, Korea 192, Japan 187, Spain 168
- Australia 155, India 139, Belgium 103, New Zealand 63, Brazil 54, Singapore 53, Denmark 52, Sweden 50, Israel 45

Conclusions
- New technology makes it increasingly feasible to identify which patients are most likely to benefit from a specified treatment
- Predictive oncology is feasible based on genomic characterization of a patient's tumor
- Targeting treatment can provide
  - Patient benefit
  - Economic benefit for society
  - Improved chance of success for new drug development
    - Not necessarily simpler or less expensive development

Conclusions
- Achieving the potential of new technology requires paradigm changes in the focus and methods of "correlative science"
- Effective interdisciplinary research requires increased emphasis on cross-education of laboratory, clinical, and statistical/computational scientists

Acknowledgements
- BRB Senior Staff
  - Boris Freidlin, Ed Korn, Lisa McShane, Joanna Shih, George Wright, Yingdong Zhao
- Post-docs
  - Kevin Dobbin, Alain Dupuy, Wenyu Jiang, Aboubakar Maitournam, Annette Molinaro, Michael Radmacher
- BRB-ArrayTools Development Team