Improving statistical methods for estimating treatment effects from electronic health records in health economic evaluation.

Study type
Protocol
Date of Approval
Study reference ID
17_215
Lay Summary

Information about patients and the care they receive is routinely collected in public hospitals and general practices. Government agencies use this information to evaluate the benefits and value-for-money of different treatment options, for example for managing long-term conditions such as heart disease. However, policy-makers are worried that incorrect use and interpretation of routine data may lead to poor decisions and poor use of taxpayers' money.

Unlike clinical trials, routine data are not collected for research purposes. Therefore, the investigator has no control over how patients are allocated to different treatment groups. In such studies, the estimated treatment effect may be misleading because common factors affect both the treatment patients receive and how well they respond to it; this problem is known as confounding. Additionally, routine data tend to be incomplete because patients often do not respond to health questionnaires or fail to attend routine appointments.

This study will address these concerns by carefully developing, comparing and translating statistical methods to address confounding and non-response in health economic evaluations that use routine data. In doing so, the proposed research will help future studies to provide more reliable evidence of which treatments are most worthwhile.

Technical Summary

Big observational data are increasingly used to complement trial-based evidence on treatment effects for cost-effectiveness analysis (CEA) of health interventions. However, a major concern is that these studies may be biased due to time-varying confounding. A closely related problem is that outcomes and confounders tend to be incomplete, which further increases the risk of bias and raises additional challenges for addressing the confounding. Recent progress has been made in statistical methods for addressing confounding in CEA, but these developments have been limited to settings with time-invariant, observed confounding and no missing data.

This study will develop appropriate methods for tackling time-varying confounding and missing data in CEAs that use routinely collected data. Firstly, we will propose a new statistical approach (combining marginal structural models with multiple imputation) for jointly handling observed time-varying confounding and missing data in CEA, and compare it with existing methods (for example, inverse-probability weighting approaches for handling the missing data). Secondly, we will develop a sensitivity analysis framework for assessing the impact of departures from standard assumptions, such as 'no unobserved confounding' and 'missing at random'. Thirdly, we will consider how decision-analytic cost-effectiveness models can incorporate uncertainty from confounding and missing data, and assess the implications for decision-making.
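The protocol does not prescribe software, but the core of the first objective can be illustrated with a brief sketch. The Python code below uses simulated data and hypothetical variable names (cost, treat, confounder); it shows, in a simplified single-time-point setting, one way to combine multiple imputation of an incomplete confounder with inverse-probability-of-treatment weighting and to pool the results with Rubin's rules. It is an illustrative sketch of the general technique, not the study's analysis plan.

# Illustrative sketch: multiple imputation of an incomplete confounder,
# inverse-probability-of-treatment weighting, and Rubin's-rules pooling.
# Variable names and the simulated data are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(0)
n = 2000

# Simulate a confounder, a binary treatment and a cost outcome
confounder = rng.normal(size=n)
treat = rng.binomial(1, 1 / (1 + np.exp(-0.5 * confounder)))
cost = 1000 + 500 * treat + 300 * confounder + rng.normal(0, 100, n)

# Make the confounder missing at random (more often among treated patients)
miss = rng.binomial(1, np.where(treat == 1, 0.3, 0.1)).astype(bool)
df = pd.DataFrame({"cost": cost, "treat": treat,
                   "confounder": np.where(miss, np.nan, confounder)})

M = 20                      # number of imputations
estimates, variances = [], []
for m in range(M):
    # One posterior draw per imputed data set approximates multiple imputation
    imputer = IterativeImputer(sample_posterior=True, random_state=m)
    completed = pd.DataFrame(imputer.fit_transform(df), columns=df.columns)

    # Propensity-score model and inverse-probability-of-treatment weights
    ps_model = sm.Logit(completed["treat"],
                        sm.add_constant(completed["confounder"])).fit(disp=0)
    ps = ps_model.predict()
    w = np.where(completed["treat"] == 1, 1 / ps, 1 / (1 - ps))

    # Weighted outcome regression with robust standard errors; the coefficient
    # on 'treat' is the marginal incremental cost in this imputed data set
    out = sm.WLS(completed["cost"],
                 sm.add_constant(completed["treat"]),
                 weights=w).fit(cov_type="HC0")
    estimates.append(out.params["treat"])
    variances.append(out.bse["treat"] ** 2)

# Rubin's rules: pooled estimate, within- plus between-imputation variance
q_bar = np.mean(estimates)
total_var = np.mean(variances) + (1 + 1 / M) * np.var(estimates, ddof=1)
print(f"Incremental cost: {q_bar:.1f} (SE {np.sqrt(total_var):.1f})")

In practice the imputation and propensity models would be richer (repeated measures and time-varying covariates handled with a marginal structural model), but the same logic of imputing, weighting and pooling applies.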

Collaborators

Manuel Gomes - Chief Investigator - University College London (UCL)
Manuel Gomes - Corresponding Applicant - University College London (UCL)
Gianluca Baio - Collaborator - University College London (UCL)
James Carpenter - Collaborator - London School of Hygiene & Tropical Medicine (LSHTM)
Michail Katsoulis - Collaborator - Farr Institute of Health Informatics Research
Richard Grieve - Collaborator - London School of Hygiene & Tropical Medicine (LSHTM)

Linkages

HES Admitted Patient Care; ONS Death Registration Data; Patient Level Index of Multiple Deprivation