Department of Biostatistics
HIV Working Group
2014 - 2015
ABSTRACT: The stepped wedge design (SWD) provides a rigorous, randomized scheme for evaluating interventions previously shown to be efficacious in some HIV/AIDS prevention projects. One example is assessing the effect of early ART access on retention and virological suppression, compared with standard-of-care timing of ART initiation. I will provide an introduction to the SWD, in which facilities or groups of facilities are randomized to the step at which they are phased into the intervention; by the study's end, all clusters receive the intervention, eliminating ethical concerns related to withholding efficacious treatments. Although SWDs have been successfully implemented, little statistical theory for their design exists. We developed exact formulas for power determination in the common setting of binary endpoints. We determined theoretical asymptotic power using Romberg integration over cluster random effects in the binary-outcome, two-treatment SWD setting. We used a linear mixed effects model focused on estimation of the individual-level risk difference and compared the power to test non-zero risk differences using a two-sided Wald test of size 0.05 with a closed-form approximation given by Hussey and Hughes (HH). Over a range of design parameters, for a fixed number of clients, the exact method provided designs that were between 9% and 2.4 times more efficient than designs based on HH. Using a theoretical asymptotic approach to power calculation will provide more efficient study designs for detecting a risk difference of pre-specified magnitude than the previously available method. This suggests that the SWD may be a more feasible study design for HIV prevention studies with binary endpoints and cluster-randomized interventions than has been previously appreciated.
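The exact method described above integrates over the cluster random effects; as a simpler point of reference, the power of a two-sided Wald test of size 0.05 under a normal approximation can be sketched as below. This is an illustrative ingredient only, not the authors' Romberg-based calculation; the risk difference `delta` and its standard error `se` are assumed inputs.

```python
from math import erf, sqrt

def normal_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def wald_power(delta, se, z_alpha=1.959964):
    # Approximate power of a two-sided Wald test of size 0.05 to detect
    # a risk difference delta whose estimator has standard error se.
    # In the SWD setting, se would come from the mixed-model design
    # calculation; here it is simply a given input.
    z = delta / se
    return normal_cdf(z - z_alpha) + normal_cdf(-z - z_alpha)
```

Under the null (delta = 0) this returns the nominal size 0.05, and power grows toward 1 as delta/se increases.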
ABSTRACT: Resource-efficient statistical analysis methods such as the case-control design are crucial in resource-limited settings. These analysis techniques are particularly useful in the developing world, where HIV/AIDS remains a major public health concern and multi-national groups such as the World Health Organization (WHO) provide assistance in the development, funding, and implementation of disease prevention and treatment programs. Cost-efficient and effective monitoring and evaluation of such programs is essential for their long-term success.
When the outcome of interest is binary and rare, the case-control study can be relatively inexpensive and quickly implemented compared with other study designs, providing equal power while requiring far fewer sampled individuals than prospective analyses. When stratified case-control sampling is performed in the presence of within-stratum correlation, the assumption of independence across sampling groups no longer holds, inducing estimator bias and invalidating inference for some existing case-control analysis techniques.
We propose fitting marginal models to case-control samples in the correlated data setting, using weighted generalized estimating equations (wGEE) for valid estimation and inference. The operating characteristics of the wGEE estimators under retrospective sampling in the correlated data setting are explored through simulation. We also investigate incorporating auxiliary group-level information into the wGEE estimators using the survey-sampling technique of calibration in an effort to take advantage of the increased power provided by exposure variability in ecological-level information. We present an algorithm for using calibration in the presence of cluster-correlated case-control data and demonstrate that calibration increases estimator efficiency in certain settings. The methods are illustrated using Malawi AIDS data from 2005-2007.
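The estimation core of the wGEE approach, under an independence working correlation, amounts to solving an inverse-probability-weighted score equation, where each unit's weight is the reciprocal of its case-control sampling probability. A minimal Newton-Raphson sketch of that weighted logistic fit (not the authors' full implementation, which also handles the working correlation and calibration step) might look like:

```python
import numpy as np

def weighted_logistic(X, y, w, n_iter=25):
    # Solve the weighted logistic score equation
    #   sum_i w_i * x_i * (y_i - expit(x_i' beta)) = 0
    # by Newton-Raphson. With an independence working correlation this is
    # the estimation step of a wGEE fit; w_i would be the inverse of unit
    # i's retrospective sampling probability.
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(X @ beta)))
        W = w * p * (1.0 - p)                 # IRLS working weights
        hess = X.T @ (W[:, None] * X)         # negative Hessian
        score = X.T @ (w * (y - p))           # weighted score
        beta = beta + np.linalg.solve(hess, score)
    return beta
```

Valid standard errors in the correlated-data setting would additionally require a sandwich variance estimator clustered on strata, which is omitted here.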
ABSTRACT: There is a need for incidence assays that accurately estimate HIV incidence based on cross-sectional specimens. Viral diversity-based assays have shown promise but are not particularly accurate. We hypothesize that certain viral genetic segments are more predictive of recent infection than others and aim to improve assay accuracy by employing classification algorithms that focus on these highly informative regions (HIRs).
We analyzed HIV gag sequences from a cohort in Botswana. Forty-two subjects newly infected by HIV-1 Subtype C were followed longitudinally through 500 days post-seroconversion. Using sliding window analysis, we screened for genetic segments within gag that best differentiate acute versus chronic infection. We used both non-parametric and parametric approaches to evaluate the discriminatory abilities of sequence segments. Segmented Shannon entropy measures on HIRs were aggregated to develop generalized entropy measures to improve prediction of recency, defined as infection within the past 6 months. With logistic regression as the basis for our classification algorithm, we evaluated the predictive power of these novel biomarkers and compared them with recently reported viral diversity measures using area under the curve (AUC) analysis. To further improve prediction, we also explored other diversity-related biomarkers.
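The core of the sliding-window screen can be sketched as follows: compute Shannon entropy per alignment column and average it within windows stepped along the sequence. This is a minimal illustration with made-up window and step sizes, not the study's actual parameters or aggregation scheme.

```python
from collections import Counter
from math import log2

def site_entropy(column):
    # Shannon entropy (in bits) of the residue distribution at one
    # alignment column (a list of characters, one per sequence).
    counts = Counter(column)
    n = sum(counts.values())
    return -sum((c / n) * log2(c / n) for c in counts.values())

def sliding_window_entropy(alignment, window=30, step=3):
    # Mean per-site entropy within windows slid across an alignment
    # (a list of equal-length sequences). Window and step sizes here
    # are illustrative placeholders.
    length = len(alignment[0])
    scores = []
    for start in range(0, length - window + 1, step):
        cols = range(start, start + window)
        scores.append(
            sum(site_entropy([seq[i] for seq in alignment]) for i in cols)
            / window
        )
    return scores
```

Windows with the highest between-group separation in such entropy profiles would be candidates for the highly informative regions described above.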
The change in diversity over time varied across sequence segments within gag. The top 50% most informative segments were identified through both non-parametric and parametric approaches; in both cases, HIRs were in non-flanking regions and were less likely to lie in the p24 coding region. These new indices outperformed previously reported viral-diversity-based biomarkers. Including skewness in the assay further improved the AUC (see Figure 1), whereas other existing methods added little predictive power. Sensitivity analysis suggests that antiretroviral use had little impact on assay performance. We also demonstrate that sensitivity and specificity depend on the dataset used and the underlying distribution of time-since-infection, which explains why we obtained AUC values different from those in previous studies.
Our generalized entropy measure of viral diversity demonstrates the potential for improving accuracy when identifying recent HIV-1 infections. We also show that, to properly compare and evaluate assay performance, the distribution of time-since-infection in the validation dataset needs to be accounted for.
ABSTRACT: The optimal timing of combination antiretroviral therapy (cART) initiation in asymptomatic HIV-positive individuals remains unknown. We compared the effectiveness of the three current cART initiation strategies in AIDS-free HIV-positive individuals: immediate universal cART initiation, cART initiation at a CD4 cell count below 500 cells/mm3, and cART initiation at a CD4 cell count below 350 cells/mm3.
We used data from the HIV-CAUSAL Collaboration of cohorts in Europe and the United States. We included 55,826 individuals diagnosed with HIV between 2000 and 2013 who were ART-naïve, AIDS-free, aged ≥ 18 years, and within 6 months of HIV diagnosis. We used the parametric g-formula, adjusting for baseline and time-varying confounders, to estimate the following quantities as they would have been observed under each cART initiation strategy at 7 years after HIV diagnosis: death, death or AIDS-defining illness, the proportion in need of cART, and the proportion with HIV RNA < 50 copies/mL.
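The idea behind the g-formula can be sketched in its simplest, point-treatment special case: set the treatment for everyone, predict the outcome risk from a fitted model, and average over the observed confounder distribution. The full parametric g-formula used in the study additionally simulates the time-varying confounders forward at each visit; that machinery is omitted in this toy sketch, and `predict_risk` stands in for any fitted outcome model.

```python
import numpy as np

def standardized_risk(predict_risk, covariate_rows, set_treatment):
    # Point-treatment special case of the g-formula (standardization):
    # apply the intervention to every subject's covariate row, predict
    # risk from the fitted outcome model, and average over the observed
    # confounder distribution.
    intervened = [set_treatment(row) for row in covariate_rows]
    return float(np.mean([predict_risk(row) for row in intervened]))
```

Contrasting the standardized risks under "treat everyone" versus "treat no one" then estimates the population-level effect of the strategy, under the usual identifiability assumptions.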
We found that immediate universal initiation increases survival and AIDS-free survival, although the overall benefit is small. Immediate initiation substantially increases both the proportion of individuals with suppressed virological replication and the proportion of individuals in need of cART. Earlier cART initiation might help increase the proportion of individuals with suppressed virological replication as long as resources exist to sustain the corresponding increase in the number of patients in need of cART.
ABSTRACT: Estimating incidence rates to monitor HIV-1 epidemic trends is essential for implementing prevention programs and evaluating their effectiveness. Critical to this enterprise is the development of novel and accurate classification assays that, based on cross-sectional specimens, help determine infection recency status. In this work we present a study that assesses some biases present in the evaluation of HIV recency classification algorithms that rely on measures of within-host viral diversity. In particular, we assess how the time since infection (TSI) distribution of the infected subjects from which viral samples are drawn affects performance metrics (e.g., area under the ROC curve, sensitivity, specificity, and positive predictive value), potentially leading to misguided conclusions about the efficacy of the classification methodologies. By comparing assay performance across six different TSI distributions (two empirical datasets with distinct ranges and shapes of TSI distribution, and four simulated TSI distributions representing different epidemic scenarios), we show that conclusions about the efficacy of an HIV incidence assay do indeed depend critically on the TSI distribution. This work underscores the importance of acknowledging and properly addressing evaluation biases upon the introduction of new HIV incidence assays.
ABSTRACT: None Given
ABSTRACT: Since the early 2000s, evidence has accumulated for a significant differential effect of first-line antiretroviral therapy (ART) regimens on HIV treatment outcomes, such as CD4 response and viral load suppression. This finding was replicated in our data from the Harvard President's Emergency Plan for AIDS Relief (PEPFAR) program in Nigeria. Investigators were interested in finding the source of these differences, i.e., understanding the mechanisms through which one regimen outperforms another, particularly via adherence. This amounts to a mediation question with adherence playing the role of mediator. Existing mediation analysis results, however, have relied on an assumption of no exposure-induced confounding of the intermediate variable, and generally require an assumption of no unmeasured confounding for nonparametric identification. Both assumptions are violated by the presence of drug toxicity. In this paper, we relax these assumptions and show that certain path-specific effects remain identified under weaker conditions. We focus on the path-specific effect solely mediated by adherence and not by toxicity and propose a suite of estimators for this effect, including a semiparametric-efficient, multiply-robust estimator. We illustrate with simulations and present results from a study applying the methodology to the Harvard PEPFAR data.
ABSTRACT: When to switch treatment is an important clinical question when, for example, the current therapy fails or shows suboptimal results. Switching strategies often depend on the evolution of an individual's time-dependent covariate(s). These so-called dynamic strategies can be directly compared in randomized trials. For example, consider a trial in which HIV-infected individuals receiving antiretroviral therapy are randomized to switching therapy within 90 days of HIV-RNA crossing above either 400 copies/mL (tight-control strategy) or 1000 copies/mL (loose-control strategy). Here we describe an approach to emulate this trial by applying inverse-probability weighting of a dynamic marginal structural model to observational data from the Antiretroviral Therapy Cohort Collaboration (ART-CC), the HIV-CAUSAL Collaboration, and the CFAR Network of Integrated Clinical Systems (CNICS). Of 43,803 individuals who initiated an acceptable initial antiretroviral therapy regimen in 2002 or later, 2,015 and 1,655 met the baseline inclusion criteria for the mortality and AIDS-or-death analyses, respectively. There were 21 deaths and 33 AIDS-or-death events in the tight-control group, and 28 deaths and 41 AIDS-or-death events in the loose-control group. Compared with tight control, the hazard ratios (95% CI) for loose control were 1.09 (0.73, 1.64) for death and 1.04 (0.84, 1.29) for AIDS or death, adjusting for baseline and time-varying variables. While our sample sizes were small and our estimates imprecise, the methodological approach described here will serve as a model for future analyses.
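The inverse-probability weighting step described above typically uses stabilized weights built as a cumulative product of visit-specific probability ratios; a minimal sketch of that construction (not the study's actual weight models) is:

```python
import numpy as np

def stabilized_ip_weights(num_probs, den_probs):
    # Stabilized inverse-probability weights for a time-varying regime.
    # num_probs and den_probs are (subjects x visits) arrays holding, at
    # each visit, the estimated probability of the treatment decision the
    # subject actually followed: the numerator model conditions on
    # baseline covariates only, the denominator model on the full
    # measured history. The visit-k weight is the cumulative product of
    # the visit-specific ratios up to k.
    num = np.asarray(num_probs, dtype=float)
    den = np.asarray(den_probs, dtype=float)
    return np.cumprod(num / den, axis=1)
```

Fitting the dynamic marginal structural model then amounts to a weighted outcome regression using these weights, with robust (sandwich) standard errors clustered on individuals.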
ABSTRACT: ACTG A5260s, a prospective metabolic substudy of A5257, was designed to evaluate and characterize the cardiovascular, anthropometric, and skeletal effects of antiretroviral therapy (ART) initiation with contemporary ART regimens. A total of 328 HIV-infected, ART-naïve persons at least 18 years of age, without known cardiovascular disease or diabetes mellitus, were randomly assigned to one of three regimens of tenofovir disoproxil fumarate-emtricitabine (TDF/FTC) plus either: atazanavir-ritonavir (ATV/r); darunavir-ritonavir (DRV/r); or raltegravir (RAL). Annual assessments over 3 years of follow-up measured carotid artery intima-media thickness (CIMT), flow-mediated vasodilation (FMD), peripheral and central body fat, lean mass, bone mineral density (BMD), and a multitude of biomarkers. Contrary to the primary hypothesis, the study's primary results demonstrated a slower rate of CIMT progression with ATV/r initial therapy compared with DRV/r; the rate of progression with RAL was intermediate. It is hypothesized that treatment-induced early changes in biomarkers mediate these differences in longer-term CIMT progression. However, traditional modeling approaches are problematic due to the large number of biomarkers and the temporal relationships among treatment, biomarkers, and outcomes. To examine this hypothesis, exploratory factor analysis is used to group biomarkers by latent factors and to examine whether these latent factors mediate treatment-associated changes in outcomes. This analysis uses 22 biomarkers at an early time point (either 24 or 48 weeks on treatment) to create latent factors, and fits structural equation models to relate treatment, baseline CD4 count, baseline viral load, and the latent factors to CIMT progression measured longitudinally.
ABSTRACT: None Given
ABSTRACT: None Given
ABSTRACT: None Given
Last Update: March 23, 2015