The article by Aldama et al.1 describes the main components and outcomes of PROGALIAM. The question we must ask ourselves is whether the changes observed are due to the intervention. In this case, the authors concluded that PROGALIAM had “achieved its 2 main goals”: increased survival and better access to primary revascularization for patients with ST-elevation acute myocardial infarction. They also reported a significant association between reduced mortality and an increase in primary percutaneous coronary interventions. These results are in line with those published from the RECALCAR project,2 which found lower mortality in regions with comprehensive programs for the management of myocardial infarction.
We believe that neither the regression analysis of the RECALCAR study nor the hazard ratio and before-after design of the study that assessed PROGALIAM are suitable for establishing a causal relationship with certainty.
Hazard ratios have a limitation: it cannot be assumed that the ratio remains constant over the whole follow-up period,3 especially when confounding variables may be unbalanced between the intervention and control groups.4 In the case of PROGALIAM, differences were observed in the clinical characteristics of the patients and in the presentation of ST-elevation acute myocardial infarction between the periods before and after its implementation. The use of historic controls5 in observational studies compounds this problem because of possible secular changes in medical practice: the bias may grow with longer study duration.
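As a toy numerical illustration of this limitation (the event counts below are hypothetical and are not taken from the PROGALIAM study), period-specific hazard ratios can drift far from any single pooled value when the proportional-hazards assumption fails:

```python
# Hypothetical per-period event counts in two groups of equal size
# (illustrative numbers only, not PROGALIAM data).
treated_events = [60, 30, 10]   # events per 1000 patients at risk, treated
control_events = [40, 40, 40]   # events per 1000 patients at risk, control

# Period-specific hazard ratios: far from constant over follow-up.
ratios = [t / c for t, c in zip(treated_events, control_events)]
print(ratios)  # [1.5, 0.75, 0.25] -> a single pooled HR would be misleading
```

In a scenario like this, reporting one overall hazard ratio would average an early excess of events against a later deficit, which is precisely the concern when groups are compared across different calendar periods.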
Randomized clinical trials are the reference standard in clinical research for comparing the average outcome with a treatment (the factual) against the average outcome in patients who do not receive the treatment (the counterfactual). When such an experimental design is unfeasible or unethical, assessing the impact of an intervention such as PROGALIAM, and thus determining whether the observed changes are due to it, requires a counterfactual analysis: comparing what happened with what would have happened in the absence of the intervention. The main challenge in assessing this impact is finding a good counterfactual, that is, a reasonable and convincing comparator group.
Methods that provide more robust counterfactual causal inference6 than before-after study designs include propensity scoring,7 difference-in-differences,8 instrumental variables,9 and regression discontinuity.10 These designs are applied in nonexperimental settings using systematically collected data.
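To make the contrast with a simple before-after comparison concrete, the difference-in-differences estimator can be sketched in a few lines. The mortality figures below are hypothetical, chosen only to show how the estimator subtracts the secular trend observed in a comparator region from the change observed in the intervention region:

```python
# Illustrative difference-in-differences sketch (hypothetical mortality
# rates, not PROGALIAM data). The estimator removes shared secular
# trends by subtracting the control region's before-after change from
# the intervention region's before-after change.

def difference_in_differences(treated_pre, treated_post,
                              control_pre, control_post):
    """Return the DiD estimate of the intervention effect."""
    return (treated_post - treated_pre) - (control_post - control_pre)

# Hypothetical 30-day mortality rates (%): both regions improve over
# time, but the intervention region improves more.
effect = difference_in_differences(
    treated_pre=10.0, treated_post=6.0,   # region with the program
    control_pre=9.5, control_post=8.0,    # region without it
)
print(effect)  # -2.5: extra reduction attributable to the program
```

A naive before-after analysis of the intervention region alone would attribute the full 4-point drop to the program; the comparator region shows that 1.5 points of it reflect the background trend.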
Stepped wedge designs also provide a way around the practical or ethical limitations of experimental evaluation of interventions.11 In these studies, clusters successively cross over to the intervention at different times, and this timing may even be randomized. Eventually, the whole population receives the intervention, but the randomization integrated over time provides concurrent intervention and control groups.
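The rollout logic of a stepped wedge design can be illustrated with a minimal sketch (the hospital labels and period count are hypothetical): clusters are randomized to a crossover order, and at every intermediate period both treated and untreated clusters coexist.

```python
import random

def stepped_wedge_schedule(clusters, n_periods, seed=0):
    """Randomize the order in which clusters cross over to the
    intervention (one cluster per period after a baseline period).
    Returns {cluster: [0/1 per period]}; 0 = control, 1 = intervention."""
    rng = random.Random(seed)
    order = clusters[:]
    rng.shuffle(order)  # randomized rollout order
    schedule = {}
    for step, cluster in enumerate(order, start=1):
        # This cluster receives the intervention from period `step` onward.
        schedule[cluster] = [1 if t >= step else 0 for t in range(n_periods)]
    return schedule

# Four hypothetical hospitals over five periods: all start as controls,
# all end up treated, and intermediate periods contain both arms.
plan = stepped_wedge_schedule(["A", "B", "C", "D"], n_periods=5)
for hospital, arms in plan.items():
    print(hospital, arms)
```

The key property, visible in the output, is that every cluster contributes both control and intervention observations, which is what allows effect estimation despite universal eventual coverage.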
These methodological approaches can generate valid evidence on the effect of an intervention, overcoming the inherent limitations of clinical research designs in such situations and reducing bias in estimates of the benefit of these interventions in real clinical practice.
FUNDING

Red Temática de Investigación en Servicios de Salud y Enfermedades Crónicas (REDISSEC RD16/0001/0019).