
Advisory Committee Chair

Gary R Cutter

Advisory Committee Members

Alfred A Bartolucci

Meredith L Kilgore

Charity J Morgan

Huifeng Yun

Degree Name by School

Doctor of Philosophy (PhD), School of Public Health


Abstract

In the clinical decision between two or more discrete competing diagnostic or therapeutic patient care strategies, the main consideration is typically the expected difference in the primary outcome of interest. When head-to-head randomized trial results are unavailable or insufficient, this expected difference can, in some settings, be estimated with an indirect comparison. Formalized indirect comparisons and their proposed statistical methods began appearing in the medical literature in 1993. Despite the growing acceptance of indirect treatment comparisons as credible evidence, the statistical underpinnings, properties, and performance of these methods, both individually and comparatively, remain incompletely explored and defined. This dissertation addresses several fundamental statistical questions concerning indirect treatment comparison methods, ranging from basic statistical properties (estimation bias, variance, and mean square error), through the definition and satisfaction of the methods' core "similarity" assumption, the estimators' robustness to violations of that assumption, and the type 1 error and power of the indirect estimators' hypothesis tests, to the potential utility of these methods for enhancing clinical decision-making.

The dissertation is divided into three parts. Part I comprehensively defines the existing and proposed statistical estimators of indirect treatment effects and examines their statistical properties. A framework for systematically, quantitatively, and thus more rigorously evaluating satisfaction of the core similarity assumption is proposed and demonstrated with simulation. Based on this framework, a practical guide to indirect estimator selection is developed, conditioned on robustness to known or potential departures from the similarity assumption.

Part II defines the indirect treatment comparison hypothesis tests and examines their individual and comparative type 1 errors and powers, particularly under violations of similarity. A practical guide to selecting an estimator hypothesis test that controls type 1 error at the nominal indirect hypothesis test size α with high probability is developed.

Part III proposes two clinical decision algorithms informed by indirect treatment effect estimates. The first, grounded in indirect treatment effect hypothesis testing, is shown to outperform random treatment selection under equipoise, potentially reducing patient event rates. The second, based on indirect treatment estimates and pre-specified minimum clinically meaningful differences, also outperforms random treatment selection, with lower expected patient event rates.
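For readers unfamiliar with the estimators the abstract refers to, the best-known is the adjusted indirect comparison of Bucher et al. (1997): the A-versus-B effect is estimated as the difference of the A-versus-C and B-versus-C trial effects, with their variances summing. The sketch below (not drawn from the dissertation; the function name and argument names are illustrative) shows this estimator together with its normal-theory z-test and confidence interval, the kind of hypothesis test whose type 1 error and power Part II studies:

```python
import math

def bucher_indirect_comparison(d_ac, se_ac, d_bc, se_bc):
    """Adjusted indirect comparison of A vs. B through common comparator C
    (Bucher et al., 1997), on an additive scale such as log odds ratios.

    d_ac, se_ac: estimated effect and standard error of A vs. C
    d_bc, se_bc: estimated effect and standard error of B vs. C
    Returns (estimate, standard error, z statistic, two-sided p, 95% CI).
    """
    # Indirect point estimate: difference of the two direct estimates.
    d_ab = d_ac - d_bc
    # Independent trials, so the variances add.
    se_ab = math.sqrt(se_ac**2 + se_bc**2)
    # Normal-theory z-test of H0: no A-vs-B difference.
    z = d_ab / se_ab
    # Two-sided p-value via the standard normal CDF, Phi(x) = (1 + erf(x/sqrt(2)))/2.
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    ci = (d_ab - 1.96 * se_ab, d_ab + 1.96 * se_ab)
    return d_ab, se_ab, z, p, ci

# Illustrative inputs (hypothetical log odds ratios, not real trial data):
d_ab, se_ab, z, p, ci = bucher_indirect_comparison(0.5, 0.2, 0.2, 0.15)
```

The "similarity" assumption the abstract highlights enters here implicitly: differencing the two direct estimates only yields an unbiased A-versus-B effect if the two trials are comparable in the factors that modify treatment effect.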



