Neutralizing antibody immune correlates in COVAIL trial recipients of an mRNA second COVID-19 vaccine boost
Neutralizing antibody titer has been a surrogate endpoint for guiding COVID-19 vaccine approval and use, although the pandemic’s evolution and the introduction of variant-adapted vaccine boosters raise questions about this surrogate’s contemporary performance. For 985 recipients of a second, bivalent or monovalent, mRNA booster containing various Spike inserts [Prototype (Ancestral), Beta, Delta, and/or Omicron BA.1 or BA.4/5] in the COVAIL trial (NCT05289037), titers against 5 strains were assessed as correlates of risk of symptomatic COVID-19 (“COVID-19”) and as correlates of relative (Pfizer-BioNTech Omicron vs. Prototype) booster protection against COVID-19 over 6 months of follow-up during the BA.2-BA.5 Omicron-dominant period. Consistently across the Moderna and Pfizer-BioNTech vaccine platforms and across all variant Spike inserts assessed, both peak and exposure-proximal (“predicted-at-exposure”) titers correlated with lower Omicron COVID-19 risk in individuals previously infected with SARS-CoV-2, albeit significantly less so in naïve individuals [e.g., exposure-proximal hazard ratio per 10-fold increase in BA.1 titer: 0.74 (95% CI 0.59, 0.94) for naïve vs. 0.41 (95% CI 0.23, 0.64) for non-naïve; interaction p = 0.013]. Neutralizing antibody titer was thus a strong inverse correlate of Omicron COVID-19 in non-naïve individuals and a weaker correlate in naïve individuals, raising the question of how prior infection alters the neutralization correlate.
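As a concrete illustration of the correlates-of-risk analysis described above, the sketch below fits a Cox proportional hazards model in R with log10 titer and an infection-history interaction, so that exp(coef) is the hazard ratio per 10-fold increase in titer. The data frame `dat` and its columns (`time`, `covid`, `titer_ba1`, `naive`) are hypothetical placeholders, not the COVAIL analysis dataset or code.

```r
# Hedged sketch: a correlates-of-risk analysis in the style described above.
# Assumes a hypothetical data frame `dat` with columns:
#   time      - follow-up time to symptomatic COVID-19 or censoring
#   covid     - event indicator (1 = symptomatic COVID-19)
#   titer_ba1 - neutralizing antibody titer against Omicron BA.1
#   naive     - 1 if SARS-CoV-2 naive at boost, 0 if previously infected
library(survival)

fit <- coxph(Surv(time, covid) ~ log10(titer_ba1) * naive, data = dat)
summary(fit)
# exp(coef) on log10(titer_ba1) is the HR per 10-fold titer increase when
# naive = 0; the interaction term shifts that HR for naive individuals.
```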
Generalizing the intention-to-treat effect of an active control from historical placebo-controlled trials: A case study of the efficacy of daily oral TDF/FTC in the HPTN 084 study
In many clinical settings, an active-controlled trial design (e.g., a non-inferiority or superiority design) is used to compare an experimental medicine to an active control (e.g., an FDA-approved, standard therapy). One prominent example is a recent phase 3 efficacy trial, HIV Prevention Trials Network Study 084 (HPTN 084), comparing long-acting cabotegravir, a new HIV pre-exposure prophylaxis (PrEP) agent, to the FDA-approved daily oral tenofovir disoproxil fumarate plus emtricitabine (TDF/FTC) in a population of heterosexual women in 7 African countries. One key complication in interpreting the results of an active-controlled trial like HPTN 084 is the absence of a placebo arm: the efficacy of the active control (and hence of the experimental drug) relative to placebo can only be inferred by leveraging other data sources. In this article, we study statistical inference for the intention-to-treat (ITT) effect of the active control using data from relevant historical placebo-controlled trials under the potential outcomes (PO) framework. We highlight the role of adherence and unmeasured confounding, discuss in detail the identification assumptions and two modes of inference (point vs. partial identification), propose estimators under identification assumptions permitting point identification, and lay out the sensitivity analyses needed to relax those assumptions. We apply our framework to estimate the ITT effect of daily oral TDF/FTC versus placebo in HPTN 084 using data from an earlier phase 3, placebo-controlled trial of daily oral TDF/FTC (Partners PrEP). Supplementary materials for this article are available online, including a standardized description of the materials needed to reproduce this work.
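To make the point-identification mode concrete, the sketch below standardizes outcome models fitted in a historical placebo-controlled trial to the covariate distribution of the active-controlled trial. It is a minimal illustration under strong transportability assumptions; the data frames (`historical`, `current`) and their columns are hypothetical placeholders, not the article's estimator or the HPTN 084 data.

```r
# Hedged sketch: transporting the active-control-vs-placebo ITT contrast
# from a historical trial to the current trial's covariate distribution.
# Assumes `historical` has columns: y (HIV infection indicator),
# arm ("tdf_ftc" or "placebo"), and covariates x1, x2 shared with `current`.
fit_active  <- glm(y ~ x1 + x2, family = binomial,
                   data = subset(historical, arm == "tdf_ftc"))
fit_placebo <- glm(y ~ x1 + x2, family = binomial,
                   data = subset(historical, arm == "placebo"))

# Standardize each arm's predicted risk to the current trial's population.
risk_active  <- mean(predict(fit_active,  newdata = current, type = "response"))
risk_placebo <- mean(predict(fit_placebo, newdata = current, type = "response"))
itt_effect   <- risk_active - risk_placebo  # transported ITT risk difference
```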
Efficient Algorithms for Building Representative Matched Pairs with Enhanced Generalizability
Many recent efforts center on assessing the ability of real-world evidence (RWE) generated from non-randomized, observational data to produce results compatible with those from randomized controlled trials (RCTs); one notable endeavor is the RCT DUPLICATE initiative. To better reconcile findings from an observational study and an RCT, or from two observational studies based on different databases, it is desirable to eliminate differences between the study populations. We outline an efficient, network-flow-based statistical matching algorithm that constructs well-matched pairs from observational data whose covariate distributions resemble those of a target population, for instance the target RCT-eligible population in the RCT DUPLICATE initiative studies or a generic population of scientific interest. We demonstrate the usefulness of the method by revisiting the inconsistency regarding the cardioprotective effect of hormone replacement therapy (HRT) between the Women’s Health Initiative (WHI) clinical trial and the corresponding observational study. We find that the discrepancy between the trial and the observational study persists in a design that adjusts for the difference in the study populations' cardiovascular risk profiles, but seems to disappear in a design that further adjusts for differences in HRT initiation age and previous estrogen-plus-progestin use. The proposed method is implemented in the R package match2C.
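As a toy illustration of the matching machinery involved, the sketch below solves an optimal bipartite pair match on a Mahalanobis covariate distance using a linear sum assignment solver from the clue package; the full network-flow design with balancing constraints toward a target population is what the match2C package implements. The covariates here are simulated placeholders.

```r
# Hedged sketch: optimal pair matching posed as a linear sum assignment
# problem on simulated covariates (the balanced design itself is in match2C).
library(clue)

set.seed(1)
X_t <- matrix(rnorm(20 * 2), 20, 2)   # treated units' covariates
X_c <- matrix(rnorm(60 * 2), 60, 2)   # control pool's covariates

# Mahalanobis distance between every treated-control pair.
S_inv <- solve(cov(rbind(X_t, X_c)))
dist_mat <- outer(seq_len(nrow(X_t)), seq_len(nrow(X_c)),
                  Vectorize(function(i, j) {
                    d <- X_t[i, ] - X_c[j, ]
                    sqrt(drop(t(d) %*% S_inv %*% d))
                  }))

# Minimum total-distance assignment: treated i is paired with control pairs[i].
pairs <- solve_LSAP(dist_mat)
```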
Instrumental variables: To strengthen or not to strengthen?
Instrumental variables (IVs) are extensively used to estimate treatment effects when the treatment and outcome are confounded by unmeasured confounders; however, weak IVs are often encountered in empirical studies and can make estimates imprecise and inference fragile. Many studies have considered building a stronger IV from the original, possibly weak, IV in the design stage of a matched study, at the cost of not using some of the samples in the analysis. It is widely accepted that strengthening an IV tends to render nonparametric tests more powerful and will increase the power of sensitivity analyses in large samples. In this article, we re-evaluate this conventional wisdom to bring new insights into this topic. We consider matched observational studies from three perspectives. First, we evaluate the trade-off between IV strength and sample size for nonparametric tests assuming the IV is valid, and exhibit conditions under which strengthening an IV increases power as well as conditions under which it decreases power. Second, we derive a necessary condition for a valid sensitivity analysis model with continuous doses. We show that the Γ sensitivity analysis model, which has previously been used to conclude that strengthening an IV increases the power of sensitivity analyses in large samples, does not apply to the continuous-IV setting, so that previously reached conclusion may be invalid. Third, we quantify the bias of the Wald estimator with a possibly invalid IV under an oracle and leverage it to develop a valid sensitivity analysis framework; under this framework, we show that strengthening an IV may amplify or mitigate the bias of the estimator, and may or may not increase the power of sensitivity analyses. We also discuss how to better adjust for observed covariates when building an IV in matched studies.
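For intuition on the third perspective, the sketch below computes the Wald estimator cov(Y, Z)/cov(D, Z) under a weak versus a stronger binary IV; the data-generating process is a hypothetical illustration, not the article's setup.

```r
# Hedged sketch: behavior of the Wald estimator cov(Y,Z)/cov(D,Z) under a
# weak vs. a stronger binary IV; hypothetical data-generating process.
set.seed(1)
n <- 5000
wald <- function(pi_z) {            # pi_z: effect of the IV on treatment uptake
  u <- rnorm(n)                     # unmeasured confounder of D and Y
  z <- rbinom(n, 1, 0.5)            # binary instrument
  d <- rbinom(n, 1, plogis(-1 + pi_z * z + u))  # confounded treatment
  y <- d + u + rnorm(n)             # true treatment effect = 1
  cov(y, z) / cov(d, z)             # Wald / IV estimate
}
c(weak = wald(0.2), strong = wald(2.0))
# A valid but weak IV yields a much noisier, finite-sample-unstable estimate;
# strengthening the IV in the design trades sample size for IV strength.
```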
Matching One Sample According to Two Criteria in Observational Studies
Multivariate matching has two goals: (i) to construct treated and control groups that have similar distributions of observed covariates, and (ii) to produce matched pairs or sets that are homogeneous in a few key covariates. When there are only a few binary covariates, both goals may be achieved by matching exactly for these few covariates. Commonly, however, there are many covariates, so goals (i) and (ii) come apart and must be achieved by different means. As is also true in a randomized experiment, similar distributions can be achieved for a high-dimensional covariate, but close pairs can be achieved for only a few covariates. We introduce a new polynomial-time method for achieving both goals that substantially generalizes several existing methods; in particular, it can minimize the earthmover distance between two marginal distributions. The method involves minimum cost flow optimization in a network built around a tripartite graph, unlike the usual network built around a bipartite graph. In the tripartite graph, treated subjects appear twice, on the far left and the far right, with controls sandwiched between them; efforts to balance covariates are represented on the right, while efforts to find close individual pairs are represented on the left. In this way, the two efforts may be pursued simultaneously without conflict. The method is applied to our ongoing study, in the Medicare population, of the relationship between superior nursing and sepsis mortality. The match2C package in R implements the method.
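As a small numeric illustration of the balance criterion mentioned above, the sketch below computes the earthmover (1-Wasserstein) distance between the treated and control marginal distributions of a single covariate using the transport package; the data are simulated placeholders, and the tripartite network minimizes this kind of distance while separately pursuing close pairs.

```r
# Hedged sketch: the earthmover (1-Wasserstein) distance between treated and
# control marginals of one covariate, the balance criterion the tripartite
# network can minimize. Simulated data for illustration.
library(transport)

set.seed(1)
age_treated <- rnorm(200, mean = 62, sd = 8)   # hypothetical covariate values
age_control <- rnorm(200, mean = 65, sd = 9)

wasserstein1d(age_treated, age_control, p = 1)  # imbalance before matching
# Re-computing this distance on the matched controls quantifies how closely
# the two marginal distributions align after matching.
```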
Statistical matching and subclassification with a continuous dose: characterization, algorithm, and application to a health outcomes study
Subclassification and matching are often used to adjust for observed covariates in observational studies; however, they are largely restricted to relatively simple study designs with a binary treatment. One important exception is Lu et al. (2001), who considered optimal pair matching with a continuous treatment dose. In this article, we propose two criteria for optimal subclassification/full matching based on subclass homogeneity with a continuous treatment dose, and propose an efficient polynomial-time algorithm that is guaranteed to find an optimal subclassification with respect to one criterion and serves as a 2-approximation algorithm for the other. We discuss how to incorporate the treatment dose and use appropriate penalties to control the number of subclasses in the design. Via extensive simulations, we systematically examine the performance of the proposed method and demonstrate that combining the proposed subclassification scheme with regression adjustment helps reduce model dependence in parametric causal inference with a continuous treatment dose. We illustrate the new design, and how to conduct randomization-based statistical inference under it, using Medicare and Medicaid claims data to study the effect of transesophageal echocardiography (TEE) during coronary artery bypass graft (CABG) surgery on patients' 30-day mortality rate.
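To illustrate the two criteria in miniature, the sketch below greedily forms size-2 subclasses that are homogeneous in a covariate while rewarding within-subclass dose separation. This greedy heuristic on simulated data is for intuition only; it is not the article's optimal or 2-approximation algorithm.

```r
# Hedged sketch (not the article's algorithm): a greedy heuristic for the
# design criterion, i.e., subclasses homogeneous in a covariate x yet spread
# out in the continuous dose. Simulated data for intuition only.
set.seed(1)
n <- 100
x    <- rnorm(n)     # covariate to be homogeneous within a subclass
dose <- runif(n)     # continuous treatment dose

# Pairwise design "distance": covariate closeness minus a reward for dose
# separation; lambda trades off the two criteria.
lambda <- 0.5
D <- abs(outer(x, x, "-")) - lambda * abs(outer(dose, dose, "-"))
diag(D) <- Inf

# Peel off the best remaining pair to form size-2 subclasses.
subclasses <- list(); left <- rep(TRUE, n)
while (sum(left) >= 2) {
  Dv <- D; Dv[!left, ] <- Inf; Dv[, !left] <- Inf
  ij <- arrayInd(which.min(Dv), dim(Dv))
  subclasses[[length(subclasses) + 1]] <- as.vector(ij)
  left[ij] <- FALSE
}
```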
Estimating and improving dynamic treatment regimes with a time-varying instrumental variable
Estimating dynamic treatment regimes (DTRs) from retrospective observational data is challenging, as some degree of unmeasured confounding is often expected. In this work, we develop a framework for estimating properly defined optimal DTRs with a time-varying instrumental variable (IV) when unmeasured covariates confound the treatment and outcome, rendering the potential outcome distributions only partially identified. We derive a novel Bellman equation under partial identification, use it to define a generic class of estimands (termed IV-optimal DTRs), and study the associated estimation problem. We then extend the IV-optimality framework to tackle the policy improvement problem, delivering IV-improved DTRs that are guaranteed to perform no worse, and potentially better, than a pre-specified baseline DTR. Importantly, our IV-improvement framework opens up the possibility of strictly improving upon DTRs that are optimal under the no-unmeasured-confounding assumption (NUCA). We demonstrate via extensive simulations the superior performance of IV-optimal and IV-improved DTRs over DTRs that are optimal only under the NUCA. In a real data example, we embed retrospective observational registry data into a natural, two-stage experiment with noncompliance using a time-varying IV, and estimate useful IV-optimal DTRs that assign mothers to high-level or low-level neonatal intensive care units based on their prognostic variables.
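As a one-stage analogue of the IV-optimality idea, the sketch below partially identifies E[Y(a)] with a binary IV via Manski-type bounds and then picks the action whose worst case (lower bound) is best. The data-generating process and the bounding strategy are illustrative simplifications, not the article's time-varying, multi-stage method.

```r
# Hedged sketch: Manski-type IV bounds on E[Y(a)] for binary Y, followed by
# a maximin ("IV-optimal") action choice. Single-stage, simulated data.
set.seed(1)
n <- 10000
z <- rbinom(n, 1, 0.5)                        # binary instrument
u <- rbinom(n, 1, 0.5)                        # unmeasured confounder
a <- rbinom(n, 1, plogis(-1 + 1.5 * z + u))   # confounded binary treatment
y <- rbinom(n, 1, plogis(-0.5 + a + u))       # binary outcome

bounds <- function(act) {
  lo <- hi <- numeric(2)
  for (zz in 0:1) {
    s <- z == zz
    lo[zz + 1] <- mean(y[s] == 1 & a[s] == act)   # unobserved Y(act) set to 0
    hi[zz + 1] <- lo[zz + 1] + mean(a[s] != act)  # unobserved Y(act) set to 1
  }
  c(lower = max(lo), upper = min(hi))  # intersect the bounds across z arms
}
rbind(a0 = bounds(0), a1 = bounds(1))
# The IV-optimal (maximin) action is the one with the larger lower bound.
```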