Understanding where we are starting from is essential to making progress; progress itself can only be measured as a difference from that baseline. When the baseline data are incomplete or missing, we face a dilemma about how to proceed. This problem is common across global health and is not unique to global surgery as a discipline. In the following sections, we draw on our observations conducting data systems research in Ethiopia this past summer to discuss how we fundamentally choose what data to collect, and we consider that choice in the larger context of global health metrics: does the data collected add value to our broader knowledge and interventions?
Question: How do we choose global health indicators that coincide with community priorities and improve local systems? In 2004, Larson and Mercer described features that “guide the selection of a global health indicator,” namely definition, validity, feasibility, and utility1. Let us apply their descriptions to rural Ethiopia, starting with Definition: “the indicator must be well defined, and the definition must be uniformly applied internationally.” Standardization matters for cross-comparing data and, in an international health setting, for tracking progress toward goals within invested countries. At the hospital level, however, the importance of a well-defined indicator shows in the quality of the data it produces. Take surgical site infection (SSI) rates. In Ethiopia, this indicator counts the total number of inpatient SSIs arising during the reporting period2. To reduce the mental and resource-intensive burden of collecting multiple indicators at different intervals, the majority of surgical indicators are collected monthly. Yet local providers will tell you that many SSIs are missed altogether when the patient is discharged and lost to follow-up, or presents to a different hospital. This challenge demonstrates the subtle ways priority shapes data systems: national health administrators must choose between counting everything that is actually an SSI, per the definition, and the practicality of doing so, which would require overhauling current systems and putting follow-up mechanisms for discharged patients in place. The end result often errs toward practicality, settling for a variable that is more constrained and not in its truest form. But wanting to collect data does not, by itself, make a system. After the SSI indicator was introduced and found not to capture the data it was meant to, its definition evolved: SSIs will now be collected quarterly, in the hope of including more outpatient SSIs and getting closer to the true value.
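The undercounting problem can be made concrete with a minimal sketch. All numbers below are invented for illustration, not Ethiopian data; the point is only that an inpatient-only definition mechanically deflates the reported rate when infections surface after discharge:

```python
# Hypothetical illustration of how an inpatient-only SSI definition
# undercounts infections that surface after discharge.
# All numbers are invented for demonstration.

surgeries = 200            # operations in the reporting period
inpatient_ssis = 6         # infections detected before discharge
post_discharge_ssis = 9    # infections surfacing after discharge
                           # (lost to follow-up or seen at another hospital)

reported_rate = inpatient_ssis / surgeries
actual_rate = (inpatient_ssis + post_discharge_ssis) / surgeries

print(f"Reported SSI rate: {reported_rate:.1%}")  # 3.0%
print(f"Actual SSI rate:   {actual_rate:.1%}")    # 7.5%
```

Under these assumed numbers, more than half of the true infections never enter the indicator, which is exactly the gap a quarterly, outpatient-inclusive definition is meant to narrow.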
In the case of Validity, “the indicator must be valid (it must actually measure what it is supposed to measure), reliable (replicable and consistent between settings) and readily interpretable1.” Unlike Definition, which is ironically vague, Validity gets at what valuable scientific data really is. Consider an example where the Definition aspect has already been established and reviewed in detail by the international community: the WHO Surgical Safety Checklist (SSC), introduced in 2009 and widely viewed as a new standard of quality and safety in surgery3. As an indicator, SSC utilization fails to meet every aspect of validity when measured in a local setting. Like other health indicators in Ethiopia, SSC utilization is recorded on paper: a single-page checklist intended to be completed at the start of surgery and then placed in the patient’s chart. The straightforwardness ends there, because each hospital has determined for itself where in its data collection system to verify that the checklist was in fact completed. Some randomly select 10 patient charts to see whether the checklist is filled out; others have put checkpoints in place that prevent a patient’s chart from proceeding to its usual destination without the completed form. With this variability, can we really say that the SSC utilization rate captures actual use of the SSC before a patient’s surgery? If the goal of this indicator is to assess compliance with a new surgical standard, is collecting data on whether certain boxes have been checked really the best way to ensure patient safety and staff buy-in? Though the SSC itself is a replicable instrument, data on its use is shaped by variability in when and how completion is verified, so its interpretation as an indicator falls short of capturing authentic compliance.
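The reliability problem described above can be sketched as a toy simulation, with all parameters invented: a hospital auditing 10 random charts and a hospital with a checkpoint verifying every chart can report noticeably different utilization rates even when the underlying completion behavior is identical.

```python
import random

random.seed(7)  # fixed seed so the sketch is reproducible

# Hypothetical ward: 1 = checklist present in chart, 0 = missing.
# Assume a true completion rate of about 80% across 500 charts.
charts = [1 if random.random() < 0.8 else 0 for _ in range(500)]

# Method A: audit a random sample of 10 charts, as some hospitals do.
sample = random.sample(charts, 10)
rate_sampled = sum(sample) / len(sample)

# Method B: a checkpoint that verifies every chart.
rate_full = sum(charts) / len(charts)

print(f"10-chart audit estimate: {rate_sampled:.0%}")
print(f"Full-chart checkpoint:   {rate_full:.0%}")
```

A 10-chart sample can only ever report multiples of 10%, so two hospitals with the same true compliance can land on quite different figures; this is one concrete sense in which the indicator is not “consistent between settings.”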
Among the selection features, Feasibility is the most realistic and relevant to the communities meant to benefit from the collection of global health indicator data. In choosing indicators, “the gathering of the required information must be technologically feasible and affordable and must not overburden the system1.” Larson and Mercer acknowledge the overwhelming number of health indicators in the international community, compiling a glossary of such indicators and discussing the phenomenon of “statistical overload.” For resource-limited hospitals, data collection, management, and quality improvement add strain to an already strained system. Beyond the work itself, lack of personnel, inadequate training, and deficits in technological infrastructure mean the current indicators fail to meet the feasibility criterion in rural Ethiopian hospitals. Well-meaning outside organizations create programs to bridge the gaps, providing much-needed infrastructure, resources, mentorship, and training to improve compliance in data collection and quality overall. When these projects invariably end, hospital data systems revert to their previous iteration, and CEOs struggle to make the interventions sustainable. Even university hospitals with more resources carry the burden of producing and managing healthcare data on extraneous international indicators; in one such setting, three full-time staff were employed solely for this purpose, and resident physicians were asked to do their part in managing aspects of the data collection process. At every level, third parties partner to strengthen systems and collect data, often creating additional burdens while leaving infrastructure unchanged. Where feasibility is concerned, global health data is only as good as the infrastructure in place.
Utility is the most meaningful factor in choosing international indicators for data collection. As defined, “the indicator should provide information that is useful to decision-makers and can be acted upon at various levels (local, national, and international)1.” While the focus here is on international decisions, the utility of this data in helping local communities allocate resources is fundamentally the goal of global health. If that were truly the case, however, as much attention would go to training and building community members’ capacity to use data for quality improvement projects as goes to collecting it. The reality in many rural Ethiopian hospitals is that the burden of data collection crowds out its use. Instead, the data itself comes to represent the hospital and its status, and among hospital and administrative workers it becomes something to fear rather than benefit from. This is compounded when rewards, penalties, or resource allocation decisions are determined exclusively by the data; such incentives contribute to inaccurate data, generated out of fear of the consequences. Most local providers are aware of this, as certain indicators, namely SSI, SSC utilization, and the perioperative mortality rate (POMR), show considerable discrepancies from third-party data checks2. From our own observations, the reported SSC utilization rate rarely falls below 100%, as hospitals were previously rewarded for high compliance. National health systems are aware of this trend and are now rewarding hospitals for “accurate” data instead. Will doctored decreases be next? It is too early to say; however, refocusing the utility of data on its benefit to the local community, rather than on big-brother monitoring by national and international systems, is likely to increase buy-in, compliance, and overall data quality in the long run.
The conditions under which data in rural Ethiopia is collected are not unique, and the lessons can be applied elsewhere. We have discussed the importance of reflecting on the value of the information being collected, what it truly measures, and how local context influences its quality and use. Whether you practice in Pennsylvania or Pakistan, data is powerful and can contribute to equity or inequity; accountability is therefore indispensable. Finally, much like the complex realities we hope to distill, indicators evolve, and healthcare systems need the capacity and infrastructure to accommodate that evolution5. It is imperative to think critically about the data systems already in place as well as the data needed to build future systems.
1. Larson C, Mercer A. Global health indicators: an overview. CMAJ. 2004;171(10):1199-1200. doi:10.1503/cmaj.1021409
2. Iverson KR, Drown L, Bari S, et al. Surgical key performance indicators in Ethiopia’s national health information system: answering the call for global surgery data. East Cent Afr J Surg. 2021;26(3):92-103. doi:10.4314/ecajs.v26i3.1
3. Enright K. Figuring it out: exploring metrics in global health [blog post]. November 5, 2020. Accessed October 3, 2022. https://www.globalpharmacyexchange.org/post/metrics
4. Jain D, Sharma R, Reddy S. WHO safe surgery checklist: barriers to universal acceptance. J Anaesthesiol Clin Pharmacol. 2018;34(1):7. doi:10.4103/joacp.joacp_307_16
5. Nambiar D, Sankar H, Negi J, Nair A, Sadanandan R. Field-testing of primary health-care indicators, India. Bull World Health Organ. 2020;98(11):747-753. doi:10.2471/BLT.19.249565
6. Davies JI, Gelb AW, Gore-Booth J, et al. Global surgery, obstetric, and anaesthesia indicator definitions and reporting: an Utstein consensus report. PLoS Med. 2021;18(8). doi:10.1371/journal.pmed.1003749
7. Shiffman J, Shawar YR. Strengthening accountability of the global health metrics enterprise. Lancet. 2020;395(10234):1452-1456. doi:10.1016/S0140-6736(20)30416-5