Advisory Committee Chair
Gary R Cutter
Advisory Committee Members
Alfred A Bartolucci
Meredith L Kilgore
Charity J Morgan
Huifeng Yun
Document Type
Dissertation
Date of Award
2017
Degree Name by School
Doctor of Philosophy (PhD), School of Public Health
Abstract
In the clinical decision between two or more discrete competing diagnostic or therapeutic patient care strategies, the main consideration is typically the expected difference in the primary outcome of interest. When head-to-head randomized trial results are unavailable or insufficient, this expected difference can, in some settings, be estimated with an indirect comparison. Publication of formalized indirect comparisons and proposed statistical methods in the medical literature began in 1993. Despite growing acceptance of indirect treatment comparisons as credible evidence, the statistical underpinnings, properties, and performance of these methods, both individually and comparatively, remain incompletely explored and defined. This dissertation addresses several fundamental statistical questions concerning indirect treatment comparison methods: basic statistical properties, including estimation bias, variance, and mean squared error; definition and satisfaction of the methods’ core “similarity” assumption; estimators’ robustness to violations of this assumption; type 1 error and power of the indirect estimators’ hypothesis tests; and the potential utility of these methods to enhance clinical decision-making. The dissertation is divided into three parts. Part I comprehensively defines the existing and proposed statistical estimators of indirect treatment effects and examines their statistical properties. A framework for systematically and quantitatively, and thus more rigorously, evaluating satisfaction of the core similarity assumption is proposed and demonstrated with simulation. Based on this framework, a practical guide to indirect estimator selection is developed, conditional on robustness to known or potential departures from the similarity assumption. Part II defines the indirect treatment comparison hypothesis tests and examines their individual and comparative type 1 error and power, particularly under violations of similarity. A practical guide to selecting an estimator’s hypothesis test so that type 1 error is controlled at the nominal test size α with high probability is developed. Part III proposes two clinical decision algorithms informed by indirect treatment effect estimates. The first, grounded in indirect treatment effect hypothesis testing, is shown to outperform random treatment selection under equipoise, potentially reducing patient event rates. The second, based on indirect treatment estimates and pre-specified minimum clinically meaningful differences, also outperforms random treatment selection, with lower expected patient event rates.
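For context on the estimators and hypothesis tests the abstract refers to, the short Python sketch below illustrates the classic adjusted (anchored) indirect comparison of treatments A and B through a common comparator C, together with its Wald test and confidence interval. The function name, argument names, and numerical example are illustrative assumptions for this sketch and are not drawn from the dissertation itself.

import math
from statistics import NormalDist

def adjusted_indirect_comparison(d_AC, se_AC, d_BC, se_BC, alpha=0.05):
    # Indirect effect of A vs. B anchored on the common comparator C:
    # under the similarity assumption, d_AB = d_AC - d_BC, and the
    # variances of the two independent trial estimates add.
    d_AB = d_AC - d_BC
    se_AB = math.sqrt(se_AC ** 2 + se_BC ** 2)
    z = d_AB / se_AB                                  # Wald statistic for H0: d_AB = 0
    p = 2.0 * (1.0 - NormalDist().cdf(abs(z)))        # two-sided p-value
    crit = NormalDist().inv_cdf(1.0 - alpha / 2.0)    # critical value for a size-alpha test
    ci = (d_AB - crit * se_AB, d_AB + crit * se_AB)   # (1 - alpha) confidence interval
    return d_AB, se_AB, z, p, ci

# Hypothetical inputs: log odds ratios (with standard errors) of A vs. C and
# B vs. C estimated from two separate placebo-controlled trials.
print(adjusted_indirect_comparison(d_AC=-0.40, se_AC=0.15, d_BC=-0.10, se_BC=0.20))

The key design point, and the source of the similarity assumption discussed in the abstract, is that the two trial estimates are treated as exchangeable across trial populations; when that assumption is violated, the simple difference above can be biased even though its variance calculation remains formally correct.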
Recommended Citation
Hillegass, William, "Comparative Performance and Clinical Utility of Indirect Treatment Comparison Estimators" (2017). All ETDs from UAB. 1942.
https://digitalcommons.library.uab.edu/etd-collection/1942