Research Article | Open Access
Quality indicators and performance measures for prison healthcare: a scoping review
Health & Justice volume 10, Article number: 13 (2022)
Internationally, people in prison should receive a standard of healthcare provision equivalent to people living in the community. Yet efforts to assess the quality of healthcare through the use of quality indicators or performance measures have been much more widely reported in the community than in the prison setting. This review aims to provide an overview of research undertaken to develop quality indicators suitable for prison healthcare.
An international scoping review was conducted of articles published in English between 2004 and 2021. Searches of six electronic databases (MEDLINE, CINAHL, Scopus, Embase, PsycInfo and Criminal Justice Abstracts) were supplemented with journal searches, author searches, and forwards and backwards citation tracking.
Twelve articles were included in the review, all of which were from the United States. Quality indicator selection processes varied in rigour, and there was no evidence of patient involvement in consultation activities. Selected indicators predominantly measured healthcare processes rather than health outcomes or healthcare structure. Difficulties identified in developing performance measures for the prison setting included resource constraints, data system functionality, and the comparability of the prison population to the non-incarcerated population.
Selecting performance measures for healthcare that are evidence-based, relevant to the population and feasible requires rigorous and transparent processes. Balanced sets of indicators for prison healthcare need to reflect prison population trends, be operable within data systems and be aligned with equivalence principles. More effort needs to be made to meaningfully engage people with lived experience in stakeholder consultations on prison healthcare quality. Monitoring healthcare structure, processes and outcomes in prison settings will provide evidence to improve care quality with the aim of reducing health inequalities experienced by people living in prison.
In 2018 the number of people in penal institutions worldwide was at least 10.74 million, an average of around 140 people per 100,000 of the world’s population (Walmsley, 2018). Although epidemiological data is limited in some countries (Kinner & Young, 2018), evidence suggests that people who experience incarceration are more likely to be disproportionately impacted by structural health inequalities than those who have not lived in prison (Brinkley-Rubinstein, 2013; De Viggiani, 2007). Mental health problems, substance misuse, cognitive disability, communicable and non-communicable diseases (Fazel, Hayes, Bartellas, Clerici, & Trestman, 2016; Stürup-Toft, O’Moore, & Plugge, 2018; Thomas, Wang, Curry, & Chen, 2016; Tyler, Miles, Karadag, & Rogers, 2019; World Health Organisation, 2019), alongside lower levels of health service engagement (Begun, Early, & Hodge, 2016; Hopkin, Evans-Lacko, Forrester, Shaw, & Thornicroft, 2018), are more prevalent amongst people who have experienced incarceration.
Since reducing health inequalities is a fundamental principle of global public health policies (Stürup-Toft et al., 2018), there is a clear imperative to address the complex health needs of prison populations. Statutory responsibilities towards the human rights of prisoners – including their health - are outlined in the United Nations’ Standard Minimum Rules for Treatment of Prisoners (known as the Nelson Mandela Rules), which state that people living in prison ‘should enjoy the same standards of health care that are available in the community’ (Rule 24.1, a stance known as the equivalence principle) and that prison healthcare services are responsible for ‘evaluating, promoting, protecting and improving’ the health of incarcerated people (Rule 25.1) (United Nations General Assembly, 2015). Thus prison, which represents an opportunity to improve the health of underserved populations (Ginn, 2013; McLeod et al., 2020), is charged with dual and related objectives of providing equivalent care (healthcare process) and improving health (health outcomes). Yet whether the provision of an equivalent standard of care - given the health inequities between the prison population and the population as a whole - will reduce inequalities satisfactorily is a matter of some debate. Several authors have argued that the primary goal of prison healthcare should be a reduction in health inequities through greater, rather than equal, intensity of service provision (Birmingham, Wilson, & Adshead, 2006; Charles & Draper, 2012; Exworthy, Wilson, & Forrester, 2011; Ismail & de Viggiani, 2018; Jotterand & Wangmo, 2014; Lines, 2006; Niveau, 2007). What is not disputed is that - whichever goal is given primacy - prison healthcare globally needs to generate reliable evidence on healthcare provision and to be more accountable (McLeod et al., 2020). 
This could be facilitated in part by the implementation of transparent monitoring systems to measure evidence-based performance of prison healthcare and identify areas for improvement (Asch et al., 2011; Greifinger, 2012; Halachmi, 2002; Mainz, 2003). Such performance measurement would enable regular internal analyses of the quality of healthcare within a single prison, and permit external comparisons with healthcare provided in other prison establishments and in the community.
Selecting appropriate measures of performance, however, is not unproblematic (Kötter, Blozik, & Scherer, 2012; Loeb, 2004). There may be more than one set of evidence-based standards from which to develop quality indicators (Castro, 2014; Greenhalgh, Howick, & Maskrey, 2014; Willis et al., 2017), or, as was the case until recently for women in prison, a dearth of rigorously developed standards (McCann, Peden, Phipps, Plugge, & O’Moore, 2019). Translating an evidence-based standard into a quantifiable measure involves multiple decisions, and this process is often poorly reported (Kötter et al., 2012). Additionally, resource constraints limit the collection and analysis of data to a relatively small number of indicators, which inevitably privileges some health conditions and, by extension, some populations over others; decisions therefore have to be made regarding the potential for positive impact for patients (Rushforth et al., 2015), with some stakeholders inevitably having more input into the selection process than others. Further, given the unique nature of delivering healthcare in prison, some quality indicators cannot simply be taken from community primary care and “parachuted” into the prison setting: there are significant differences in disease prevalence, premature physiological ageing (Omolade, 2014; Williams, Stern, Mellow, Safer, & Greifinger, 2012), the short periods of time for which many people are incarcerated, and limited functionality for linkage between community and prison clinical systems (Stone, Kaiser, & Mantese, 2006). Therefore, it is essential to explore the challenges particular to measuring performance in this context. The aim of this international scoping review is to identify and synthesise previous research conducted on the selection and development of quality indicators in the prison setting.
A scoping review is a method of mapping the conceptual terrain of a particular topic (Arksey & O’Malley, 2005; Peters et al., 2020; Tricco et al., 2016). In comparison to systematic reviews, which aim to synthesise evidence on specific questions often relating to interventions, scoping reviews explore the breadth and depth of available literature, define key concepts, outline methodological approaches and identify knowledge gaps. As such, scoping reviews tend to have broad research questions, and take an inclusive stance towards evidence sources. Although scoping review methodology has historically been poorly defined in comparison to systematic reviews, recent efforts to standardise scoping reviews have resulted in the establishment of the PRISMA-ScR, a reporting checklist (Tricco et al., 2018). The conduct of this study has been guided by the items on the PRISMA-ScR. The research question for this study is:
What is known from the research literature about the development and selection of quality indicators for primary healthcare in the prison setting?
The focus for this international review was the development or selection of quality indicators for healthcare within the prison context. Papers that focussed on the transition of people between prison and the community, or healthcare delivery in criminal justice settings in the community were excluded.
We searched six databases that we anticipated would index relevant sources: CINAHL and Criminal Justice Abstracts (via the Ebsco platform), MEDLINE, PsycInfo and Embase (via the Ovid platform) and Scopus, from January 2004 to April 2021. 2004 was chosen as the start date as it marked the beginning of the prison healthcare governance transition from the Home Office to the National Health Service in the UK, and was also a time when authors were reflecting on growing accountability and strategic management models in prison systems in other countries (Coyle, 2004; K. N. Wright, 2005). The electronic database search strategy was informed by a published search strategy on primary care, quality indicators and severe mental illness (Kronenberg et al., 2017) and was constructed around three key concepts: quality indicators/ performance measurement, primary care and prison healthcare. An academic librarian developed the search syntax (see Appendix for a sample search strategy). Research papers, commentaries, editorials and grey literature were included. Since the purpose of this review was to provide a descriptive overview of the body of literature on quality indicators in the prison setting, rather than to assess the robustness of clinical evidence underpinning quality indicators, sources were not subjected to critical appraisal.
Three supplementary search strategies were employed: journal searches, author searches, and forwards and backwards citation tracking. The five journals handsearched from January 2004 to April 2021 were: International Journal of Prisoner Health, Journal of Correctional Health Care, British Journal of General Practice, BMC Health & Justice (from Volume 1, 2013) and The Prison Journal. Author searches, and forwards and backwards citation tracking were conducted following identification of key papers.
The electronic search returned 1739 hits. A further 93 sources were identified through the supplementary searches. Following automated and manual deduplication of the combined total of 1832 sources, 1598 unique sources were available for screening (see Fig. 1). Title, abstract and full-text screening was conducted independently by two researchers (SB and KC), using inclusion and exclusion criteria listed in Table 1, with each reviewing the other’s exclusions, and any disputes were resolved in discussion with a third member of the team (LS).
Twelve sources from the United States were included in the review (Table 2); no sources from any other country were identified. The date range of sources was 2004–2016. Three of the publications, Asch et al. (2011), Teleki et al. (2011) and Damberg et al. (2011), were part of the same research project and were published in a special issue of the Journal of Correctional Health Care. The study was organised into three workpackages: an expert consultation process reported in Asch et al. (2011), with the resulting list of indicators published by Teleki et al. (2011); interviews, site visits and document reviews within the California Department of Corrections and Rehabilitation (Teleki et al., 2011); and a review of performance measurement activities in six correctional systems (Damberg, Shaw, Teleki, Hiatt, & Asch, 2011). None of the remaining sources were linked to each other.
A data charting table was constructed using generic study features informed by the Joanna Briggs Institute Manual for Evidence Synthesis (Peters et al., 2020). Bespoke elements were integrated iteratively following detailed reading of the texts selected after full-text review. The table was constructed by one researcher (SB) and reviewed by two others (KC and LS).
Data items relating to the features of the study were extracted, such as the country of origin, the year, the study type, study aims and key findings. In addition, contextual elements relating to the development of quality indicators were charted, including drivers for the development of performance measurement, the challenges and constraints of the prison environment, issues relating to the transfer of performance measures from a community setting, and stakeholder engagement in decision-making processes.
Five studies developed quality indicators or performance measures (Asch et al., 2011; Greifinger, 2012; Hoge, B., Lundquist, & Mellow, 2009; Stone et al., 2006; K. N. Wright, 2005), two sources reviewed indicators or approaches to performance measurement (Damberg et al., 2011; Teleki et al., 2011), one described implementation (Raimer & Stobo, 2004), and one commentary paper advised on implementation (Laffan, 2016). Two sources described approaches to developing and testing indicators across US prisons (Bisset & Harrison, 2012; Watts, 2015), and one developed then tested performance measures of diabetes screening in one prison (Castro, 2014).
Quality indicators and performance measures for the prison setting
Several papers in the review described methods of selecting performance measures or quality indicators (Asch et al., 2011; Greifinger, 2012; Hoge et al., 2009; Stone et al., 2006; Watts, 2015; K. N. Wright, 2005), with the quality indicators resulting from Asch et al.’s (2011) consultation process reported in the sister paper by Teleki et al. (2011). Issues raised by the authors of this group of papers include the interrelated notions of comparability and transferability: the extent to which the prison population has comparable health needs and health behaviours to people living in the community, whether the prison healthcare setting bears similarity to those in the community, and hence whether indicators from community healthcare settings have ‘external validity’ (Stone et al., 2006, p.94) and can reasonably be transferred, with the same benchmarks, to the prison setting. A further area of interest is the extent to which each criminal justice setting should be able to customise recommended indicators to align with its mission statement and priorities, despite the impact this would have on standardisation and benchmarking, and which stakeholder voices are privileged in selection processes and which go unheard. Finally, pragmatism was observed to be an important aspect of quality measurement; staff and IT resources constrain the number of indicators for which it is practicable to collect and analyse data.
Processes of selecting performance measures and quality indicators
Greifinger’s (2012) performance measures are orientated towards improving the safety of people living in prison. Drawing on national and international prison healthcare standards, community patient safety standards relevant to the prison setting, and his own experience of reviewing correctional healthcare, he compiled a guide of measures covering 30 domains of prison healthcare, including (but not limited to) access to care, chronic disease management, mental health assessment and treatment, medical record keeping, sexually transmitted infections, and mortality reviews.
In contrast to this individual approach to compiling performance measures, other authors described consensus approaches to indicator selection. Asch et al. (2011), for instance, utilised a modified Delphi method, drawing on the expertise of a panel comprising nine senior people with clinical experience in correctional healthcare as well as relevant experience in other areas such as prison directorships, court-appointed monitorships and membership of clinical guideline committees. Following preparatory investigations (Damberg et al., 2011; Teleki et al., 2011), 16 healthcare topics were chosen for further investigation, and 1069 relevant indicators were identified and classified using Donabedian’s structure-process-outcome taxonomy (Donabedian, 1988). Content reviewers evaluated groups of indicators using criteria including importance to prison health care, focus on primary care, scientific evidence base, implementability and interpretability. As a result of this process, 111 indicators were presented to the panel for validity and feasibility assessment, with a 0–9 rating requested from panel members both before and during the meeting. Ultimately, 79 measures were retained, 62 of which were process indicators, 10 outcome indicators and 7 access indicators. The panel remarked that while these quantitative measures were valuable means of assessing quality, they needed to be augmented by implicit quality measures such as mortality reviews and patient experience surveys for a more comprehensive view of prison healthcare quality. Processes to select guidelines, perform content reviews, and engage an expert panel for the selection process were clearly articulated; the expertise of the reviewers was described and the rationales for selection and elimination of indicators were coherent. 
However, testing and implementing the measures was beyond the scope of the study, and it is possible that a set of 79 indicators, in an environment where requirements for data collection for purposes other than quality assessment can be onerous (Teleki et al., 2011), may be too burdensome.
While others have used consultation methods to identify quality indicators and performance measures, none match Asch et al.’s (2011) rigorous multi-staged approach. Stone et al. (2006), for instance, in their development of a quality indicator matrix for the Missouri Department of Corrections, appeared to rely only on the research team to identify the domains of healthcare delivery for which to identify standards and quality indicators, although administrators and medical staff were involved in selecting the final 32 indicators from an original list of 150. Where Stone et al.’s (2006) work differed from Asch et al.’s (2011) was in their attempts to define performance benchmarks based on community benchmark data for similar indicators. This involved some modification of the indicators, for example, age range adjustments, to more closely align the prison population – often perceived as prematurely aged (Omolade, 2014; Williams et al., 2012) - with the population as a whole.
Another study that sought to adapt community indicators for the prison setting was Hoge et al.’s (2009) selection of performance measures for mental health care in prisons. Twenty-nine participants, including for-profit and independent mental health practitioners and researchers, took part in a 6-hour roundtable discussion to reach consensus on meaningful indicators drawn from national standards. According to the authors, consensus was reached on nearly every subject, but how ‘consensus’ was defined and assessed is not clearly articulated.
Watts (2015) reports on the development of a quality indicator set based on the Healthcare Effectiveness Data and Information Set (HEDIS®) metrics, the work conducted by the RAND organisation in 2011 (Asch et al., 2011; Damberg et al., 2011; Teleki et al., 2011) and the Vermont Department of Corrections internal measurement system. However, very little information is given on the processes through which some of the measures were adapted for the prison setting. Similarly, Laffan (2016), Bisset and Harrison (2012) and Raimer and Stobo (2004) provide short lists of measures but only minimal detail on the origin or development of the indicators.
Wright (2005) recounts the Association of State Correctional Administrators’ (ASCA) preliminary efforts to identify eight domains across the spectrum of activities in correctional systems that could be subject to a national performance measurement system, enabling a greater degree of transparency and accountability. Using seven comprehensive prison performance models, an ASCA subcommittee selected the eight most pertinent areas of correctional performance to assess, two of which were health-related: ‘substance abuse and mental health’ and ‘health’. The subcommittee then selected three of the eight for their preliminary performance measurement system, including ‘substance abuse and mental health’ but excluding ‘health’. Following some debate, the subcommittee decided upon performance indicators for each domain; for substance abuse and mental health, they chose average daily rates of people receiving treatment for both conditions to be the indicators of performance.
None of the papers in this review that developed or selected indicators explicitly included the patient perspective, drawing instead on researcher, healthcare provider or manager input. However, one group of authors, Asch et al. (2011), noted that people on the receiving end of care may have different priorities for performance measures, perhaps placing more value on outcome indicators, which measure changes in health status or highlight risks of mortality, than on those relating to healthcare processes.
Processes used to identify performance measures or quality indicators for the prison setting are summarised in Table 3.
Identifying the problem and benchmarking
Setting performance targets for quality indicators to enable meaningful benchmarking has been less well developed in this body of literature. Stone et al., in their 2006 development of a matrix of prison healthcare quality indicators, modified community healthcare quality indicators to facilitate comparison between prison and community healthcare. Greifinger (2012) set a 90% target for the majority of his performance indicators, yet the rationale for settling on this figure was not evident; similarly, Watts (2015) suggested an 85% target, rising to 90% by the second year and 95% by the third, again with no rationale given. Other authors, while providing clearly delineated numerators and denominators, did not suggest what an acceptable level of performance would be.
Format of quality indicators and performance measures used in the prison setting
Most of the literature included in the review listed quality indicators or performance measures, although the content varied widely from a few illustrative examples (Asch et al., 2011; Bisset & Harrison, 2012; Laffan, 2016; Raimer & Stobo, 2004; K. N. Wright, 2005) to extensive lists (Greifinger, 2012; Hoge et al., 2009; Stone et al., 2006; Teleki et al., 2011; Watts, 2015). Further variation was found in the format of measures, with some authors providing ‘explicit’ quality indicators (Asch et al., 2011; Raimer & Stobo, 2004; Stone et al., 2006; Teleki et al., 2011; Watts, 2015) - defined by Damberg et al. (2011) as objective, evidence-based measures that provide a standardised means of measuring quality across prisons - while others provided more broadly stated performance measures (Bisset & Harrison, 2012; Greifinger, 2012; Hoge et al., 2009; Laffan, 2016; K. N. Wright, 2005). Explicit indicators, Damberg et al. suggest, are distinguishable by their format; they have a clearly expressed denominator i.e. the number of people eligible for a particular measure, and a specified numerator i.e. the number of people from the denominator who satisfy the measure. Further parameters are often included, such as a reporting period (for example, the last 12 months) or particular diagnostic codes. The measure is then expressed as a percentage, calculated by dividing the numerator by the denominator and multiplying by 100. Explicit quality indicators typically fall into one of three classifications: ‘structure’ indicators, relating to resources, ‘process’ indicators, focussing on care delivery, or ‘outcome’ indicators, which measure the achievement of a particular health outcome (Donabedian, 1988), as shown in Table 4.
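The numerator/denominator format that Damberg et al. (2011) use to characterise explicit indicators can be illustrated with a short, purely hypothetical sketch. The records, the diabetes/HbA1c indicator and all field names below are invented for illustration and do not come from any of the reviewed papers; the sketch simply shows how an eligible population (denominator), a qualifying subset (numerator) and a reporting period combine into a percentage.

```python
# Illustrative sketch only: an explicit quality indicator in the
# denominator/numerator format described by Damberg et al. (2011).
# The indicator, records and field names are hypothetical examples.
from datetime import date

# Hypothetical patient records: diagnoses plus the date of the most
# recent relevant test (None if the test was never performed).
records = [
    {"diagnoses": ["diabetes"], "last_hba1c": date(2021, 3, 1)},
    {"diagnoses": ["diabetes"], "last_hba1c": None},
    {"diagnoses": ["asthma"],   "last_hba1c": None},
    {"diagnoses": ["diabetes"], "last_hba1c": date(2020, 1, 15)},
]

def indicator_pct(records, period_start, period_end):
    """Percentage of people with diabetes (denominator) who received an
    HbA1c test within the reporting period (numerator)."""
    denominator = [r for r in records if "diabetes" in r["diagnoses"]]
    numerator = [
        r for r in denominator
        if r["last_hba1c"] is not None
        and period_start <= r["last_hba1c"] <= period_end
    ]
    if not denominator:
        return None  # no eligible patients: indicator is undefined
    # Percentage = (numerator / denominator) * 100
    return 100 * len(numerator) / len(denominator)

# One of the three eligible patients was tested in the period -> ~33.3%
print(indicator_pct(records, date(2020, 5, 1), date(2021, 4, 30)))
```

The guard for an empty denominator reflects the point raised later in the review that, for some conditions, the eligible population in a single prison may be too small for a meaningful analysis.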
In the reviewed body of literature, performance measures provided ways to assess prison healthcare quality, but the numerators, denominators and reporting periods were typically implied rather than specified. Greifinger (2012), for example, appended a list of questions that could identify areas for clinical performance improvement through the interrogation of randomly selected small samples of healthcare records: taking ten records of people with positive tests for syphilis, gonorrhoea and chlamydia, he suggested that a measure of quality would be the proportion who had received an appropriate prescription to treat their condition within 3 days. Similarly, Hoge et al. (2009) suggested that people in prison who screen positive on a validated suicide risk assessment measure should ‘receive a referral to a mental health staff member for evaluation. All inmates deemed to be an acute risk should be placed on suicide watch immediately and be immediately referred to the mental health team’ (p.643). Thus, the numerators and denominators are implicit in these measures of healthcare quality, but further work would be required to clarify the parameters of the metrics before they could be implemented in practice; clarifying denominators in the prison population, for instance, is particularly challenging given the transience of the population as people move between the community and the prison estate or are transferred between prisons.
To create a concise list, and following Damberg et al. (2011) and Kronenberg et al.’s (2017) lead, quality indicators and performance measures identified in the sources have been merged and summarised under broad headings in Table 5.
Challenges and constraints of implementing quality assessment in the prison setting
Authors of papers in this review described a range of challenges to the implementation of performance measurement systems in the prison setting, including the changing demographics of the prison population, the functionality of the data system, staffing and resourcing issues, and challenges to standardising quality of care measurement across different organisations.
Changes in prison populations
Prison populations in the US have undergone significant changes in recent decades, with an increase of over 700% in the size of the prison population between 1970 and 2009 (Karstedt, Bergin, & Koch, 2019). Although numbers have fallen in the past decade, the US prison population per capita (655 per 100,000) is still the highest in the world (Walmsley, 2018).
In addition to the increase in numbers towards the end of the 20th and the first years of the 21st centuries, the demographics of the prison population have changed. Most notably, the prison population is ageing (Maschi & Viola, 2013; Stürup-Toft et al., 2018) and evidence suggests that the prevalence of chronic conditions in US prisons is increasing (Binswanger, Krueger, & Steiner, 2009; Wilper et al., 2009). Additionally, multi-morbidity may be a problem in the older prison population; 85% of the over 50s in prison are reported to have three or more chronic health conditions, while four out of five people aged 65 years and over have a chronic condition that impacts on their physical function (Kintz, 2013). The changing landscape of prison health needs may require a re-evaluation of existing sets of quality indicators to assess the quality of healthcare for co- and multimorbid conditions (Asch et al., 2011).
Data system functionality
The inadequacies of existing data systems in the prison setting were highlighted in most of the reviewed sources, with key issues being poor co-ordination and a lack of functionality in key areas such as capture and extraction of data (Castro, 2014; Hoge et al., 2009; Watts, 2015), interface with other prison systems (Damberg et al., 2011), prison pharmacies (Castro, 2014; Teleki et al., 2011) and community health care settings (Watts, 2015). A lack of co-ordination with community health care settings leads to clinicians’ reliance on patient self-report, which can compromise measures of prison health care quality. However, integrating prison health systems with those of community healthcare settings can be, as Bisset and Harrison (2012) noted, ‘unfamiliar and daunting territory’ (p.3). Inconsistency in data input was also reported as a problem that could adversely affect the reliability of analyses (Bisset & Harrison, 2012; Damberg et al., 2011; Teleki et al., 2011).
The absence of prison-specific benchmark data was also cited as an inhibiting factor to quality assessment (Damberg et al., 2011; Stone et al., 2006; K. N. Wright, 2005). Additionally, the capacity of the data collection system was perceived to be problematic, with requirements to collect data for legal purposes competing with the collection of data for quality monitoring purposes (Teleki et al., 2011; Watts, 2015): Teleki et al. (2011) observed that there are ‘too many metrics being tracked for too many different purposes’ (p.110) which can dilute performance measurement efforts. The same authors also identified difficulties clarifying the numerator and denominator, and a concern that the amount of data for some conditions would be too small to conduct a meaningful analysis.
Some authors highlighted the difference in priorities between the medical staff and the prison administrators (Hoge et al., 2009; Laffan, 2016), noting that healthcare budgets may be managed by people lacking experience of healthcare delivery (Watts, 2015) and that effective quality assessment of healthcare required collaboration between the two systems.
High levels of staff turnover (Hoge et al., 2009) and the need to employ a data analyst to write and run queries (Damberg et al., 2011) were seen as difficulties that could jeopardise attempts to measure the quality of healthcare. In addition, the lack of a feedback loop for staff to gain insights into under-performing services can impede quality improvement activities (Teleki et al., 2011).
A further issue raised is whether standardisation should occur when institutions have varying mission statements, legal structures and populations (K. N. Wright, 2005). Standardisation can also be compromised by the lack of universal agreement on disease management for chronic health conditions.
To the authors’ knowledge, this is the first scoping review on quality indicators and performance measurement for healthcare in the prison setting. While all the evidence sources identified originated from the US, a number of significant issues have been identified with relevance to performance management in prison healthcare systems beyond America. First, this review found that selection processes varied both in rigour and in stakeholder involvement, with none including patient representation. Secondly, indicators were predominantly process-oriented, with few measures of outcomes or structure. Finally, a range of challenges to performance measurement for prison healthcare was identified, including the comparability of prison and community populations, limited data functionality and resource constraints.
Rigour in development
Kötter et al. (2012) have provided a useful systematic review describing and comparing methods of quality indicator development for healthcare delivery. While they affirm that there is no ‘gold standard’ for developing indicators from clinical guidelines, they identify six steps in the rigorous development and implementation of quality indicators: topic selection, guideline selection, extraction of recommendations, quality indicator selection, practice test and implementation. To ensure the establishment of quality indicators that meet certain criteria – relevance to the population, evidence-based, feasible, reliable, understandable, achievable, measureable and amenable to change – selection methods, they argue, should have a high degree of transparency and rigour.
The selection processes identified in this review were largely opaque, with Asch et al.’s (2011) RAND/University of California, Los Angeles (UCLA) modified Delphi approach the most systematic and transparent. Consultation methods in Wright’s (2005), Stone et al.’s (2006), Hoge et al.’s (2009) and Watts’ (2015) work, while present, were less clearly articulated, with little detail given on the participants or the process. There was no evidence of consultation processes in other published lists of performance measures (Greifinger, 2012; Laffan, 2016; Raimer & Stobo, 2004). Across all attempts to develop quality indicators or performance measures described in this review, there was no indication that patients had been involved, despite recognition from the RAND research team that patient experience is an important facet of quality assessment efforts (Asch et al., 2011; Damberg et al., 2011; Teleki et al., 2011) and that patient acceptability of a treatment or intervention is a well-established component of quality in both health and behavioural sciences (Gainforth, Sheals, Atkins, Jackson, & Michie, 2016; Maxwell, 1992). Currently, however, there are relatively few examples of patient engagement in prison healthcare organisation, and greater efforts to meaningfully engage people who have lived in prison are warranted.
Transferability and adaptation
In their conceptualisations of quality in health services, both Kessner et al. (1973) and Maxwell (1992) highlighted the importance of quality indicators being relevant and appropriate to the population served by the health system. In this group of papers, Stone et al. (2006) most clearly attempted to gain evidence of the comparability of prison population demographics to those of the community in order to ascertain whether quality indicators used in the community could be utilised in the prison setting, although other authors quoted prevalence statistics of particular health conditions or evidence of poor quality care to substantiate their attempts to create performance measures. While it must be acknowledged that many of the papers were written when the ageing prison population was perhaps less evident, little reference was made to the benefits of including indicators that account for high levels of co- and multi-morbid mental and physical health conditions (Stürup-Toft et al., 2018; Tyler et al., 2019). Additionally, colorectal and cervical cancer screening indicators were included by only a few authors, and none of the papers included in this review incorporated dementia indicators. Little is known about the prevalence of dementia in prison populations (Brooke, Diaz-Gil, & Jackson, 2018), but it is likely that, with increasing numbers of people in prison over the age of 50, and developing awareness that dementia can affect people under 65 years old (Carter, Oyebode, & Koopmans, 2018), prison health services will be required to provide screening and support for people with dementia.
Use of community indicators in prison healthcare services presents opportunities to assess equivalence. The quality of primary care in community general practice in England is monitored by the Quality and Outcomes Framework (BMA & NHS England, 2021); however, reporting on this indicator set is not mandated in English prisons and is hence inconsistent across the sector (N. Wright, Hankins, & Hearty, 2021). In the USA, community healthcare performance measures include the Healthcare Effectiveness Data and Information Set (HEDIS®) and the Uniform Data System (Health Resources and Services Administration, 2021). This latter set may provide particularly useful data since it is reported on by Federally Qualified Health Centers which serve vulnerable communities demographically similar to incarcerated populations. Use of these indicator sets makes it possible to understand how healthcare can be compared across populations, but ongoing debates about the interpretation of the equivalence principle mean questions remain about what should be compared.
Equivalence of care or outcomes?
Assessing the performance of health services requires a multi-faceted conceptualisation of quality. According to Maxwell (1992), population relevance, effectiveness, efficiency, acceptability, access and equity are all criteria that should be satisfied by quality measurement processes. Access and equity, he notes, are sometimes conflated on the basis of the assumption that inequities are created by unequal access. Maxwell counters this conceptual stance, proposing that inequities caused by, for example, institutionalised racism, cannot be subsumed within the notion of access. In essence, this standpoint about the distinction between access and equity is at the heart of discourse around the equivalence principle, which, it has been argued, is typically interpreted as equivalence of care rather than equivalence of outcome (Birmingham et al., 2006; Charles & Draper, 2012; Exworthy et al., 2011; Jotterand & Wangmo, 2014; Niveau, 2007). The tacit assumption within the notion of the equivalence of care is that the prison population is comparable to the population as a whole - rather than ‘inherently dissimilar’, as Exworthy et al. (2011) (p. 201) would have it - and therefore that the same standard of health services will produce equivalent health outcomes. The greater disease burden experienced by prison populations on account of socioeconomic determinants (Stürup-Toft et al., 2018; Tyler et al., 2019), combined with accelerated physiological ageing (Williams et al., 2012), constraints on their autonomy (Jotterand & Wangmo, 2014) and life in an environment not conducive to healthy lifestyle choices (Ginn, 2013), undermines the comparability of the prison population to the population as a whole.
Hence, to maximise health equity, that is, to improve the health status of people in prison to a level comparable with the non-incarcerated population, the equivalence principle could be expanded to incorporate equivalence of outcomes, which may require health services in prison to exceed, rather than match, those in the community setting (Lines, 2006). Equivalence of outcomes for socially excluded prison populations, however, remains a substantial challenge given the significant socioeconomic barriers to health that incarcerated people face.
It is notable that, in this review, the majority of the measures identified were process measures rather than outcome measures. This may be due in part to landmark legal proceedings in America in the 20th century (in particular, the case Estelle v Gamble in 1976) which identified poor access to care in prison to be a violation of the 8th Amendment, and subsequently triggered a focus on prison healthcare processes (Damberg et al., 2011; Hoge et al., 2009; Raimer & Stobo, 2004; Teleki et al., 2011; Wilper et al., 2009). However, a primary focus on process rather than outcome indicators has been similarly identified in studies of primary care quality indicators in UK community settings (Kronenberg et al., 2017; Ryan & Doran, 2012). Accountability for process is more readily ascribed than for health outcomes, which are subject to a range of confounding factors including medication adherence, lifestyle choices, and unpredictable trajectories of conditions. However, it is reasonable to suggest that people on the receiving end of care are likely to be more interested in outcome – the chance of an improvement in health status, or the risk of further morbidity or mortality – than in the proportion of people who received a particular intervention (Asch et al., 2011), and that inclusion of patients in stakeholder consultations may shift the process-outcome indicator balance.
Virtually absent from the reviewed papers is the third category of quality indicators described by Donabedian (1988): structure. Structural indicators relate to the healthcare setting, and include measures relating to resources such as budgets, clinical spaces, equipment, staff licensing, training and peer review processes. Structure, process and outcome, according to Donabedian (1988), are causally linked: quality in terms of structure creates conditions that are conducive to quality processes, which in turn are likely to promote good outcomes, and a comprehensive picture of quality relies on a combination of all three types of indicators. In this review, structural indicators were rarely included by authors; none of the indicators in Asch et al.’s (2011) or Stone et al.’s (2006) lists related to structure. Only Laffan (2016) and Greifinger (2012) included structural indicators in their lists of performance measures. Structural indicators may receive less focus because human and material resources within the prison setting, for example the number of clinic rooms, may lie outside the influence of the healthcare team, who therefore could not be held accountable for them. Secondly, while process and outcome indicators provide data at a patient population level, for example, people living with diabetes, structural indicator data is contextual, relating to the setting of healthcare delivery, and may be of less interest to healthcare providers trained to prioritise patient need. However, in line with the above argument on the equivalence principle, where increasing healthcare services could potentially reduce health disparities between the prison and community populations (Lines, 2006), structural indicators, which provide a way to measure, benchmark and monitor the available healthcare resources in the prison environment, may become more apposite.
This review aimed to identify international research on quality indicators and performance measurement in the prison setting; however, only literature from the US context was identified, even with the use of supplementary searches. Our academic search strategies did not identify any reports on indicator development from within correctional or prison healthcare services, and we would encourage transparent reporting of such processes within the peer-reviewed literature. The quality of clinical evidence underpinning the listed indicators was not appraised. Articles not published in the English language may have held valuable content that we were not able to access. Although we approached the literature with a critical stance, we did not use formal critical appraisal tools to eliminate any sources from the review, which meant that the included sources varied considerably in quality.
Developing a robust set of evidence-based indicators will enable prison establishments to monitor quality of care through both internal and external comparisons and to identify areas for improvement. Challenges exist, however. Selecting indicators is complicated by the number of available guidelines, the unique constraints of the prison setting, the functionality and compatibility of the data infrastructure, and community-prison population comparability. Future research should select indicators that can be implemented using routinely collected data in prison estates. Where possible, indicators that enable comparison with community settings should be included to reveal imbalances between the quality of prison and community healthcare. Prison healthcare services could consider adopting community indicators that are in operation in their country, such as the Uniform Data System in the US and the Quality and Outcomes Framework in England. Achieving an appropriate balance of structure, process and outcome indicators would address the dual objectives set out in the Nelson Mandela Rules, and would make progress towards improving both care quality and health outcomes. Finally, selecting measures of performance requires a rigorous, multi-stakeholder approach in which recipients of prison healthcare are represented alongside healthcare commissioners and providers.
Availability of data and materials
Data sharing is not applicable to this article as no datasets were generated or analysed during the current study.
Arksey, H., & O'Malley, L. (2005). Scoping reviews: Towards a methodological framework. International Journal of Social Research Methodology, 8(1), 19–32. https://doi.org/10.1080/1364557032000119616.
Asch, S. M., Damberg, C. L., Hiatt, L., Teleki, S. S., Shaw, R., Hill, T. E., … Grudzen, C. R. (2011). Selecting performance indicators for prison health care. Journal of Correctional Health Care, 17(2), 138–149. https://doi.org/10.1177/1078345810397712.
Begun, A. L., Early, T. J., & Hodge, A. (2016). Mental health and substance abuse service engagement by men and women during community reentry following incarceration. Administration and Policy in Mental Health and Mental Health Services Research, 43(2), 207–218. https://doi.org/10.1007/s10488-015-0632-2.
Binswanger, I. A., Krueger, P. M., & Steiner, J. F. (2009). Prevalence of chronic medical conditions among jail and prison inmates in the USA compared with the general population. Journal of Epidemiology and Community Health, 63(11), 912–919. https://doi.org/10.1136/jech.2009.090662.
Birmingham, L., Wilson, S., & Adshead, G. (2006). Prison medicine: Ethics and equivalence. British Journal of Psychiatry, 188(1), 4–6. https://doi.org/10.1192/bjp.bp.105.010488.
Bisset, M. M., & Harrison, E. A. (2012). Health outcomes in corrections: Health information technology and the correctional health outcome and resource data set.
BMA, & NHS England. (2021). Quality and outcomes framework guidance 2021/22. Retrieved from London: https://www.england.nhs.uk/wp-content/uploads/2021/03/B0456-update-on-quality-outcomes-framework-changes-for-21-22-.pdf. Accessed 14 Feb 2022.
Brinkley-Rubinstein, L. (2013). Incarceration as a catalyst for worsening health. BMC Health and Justice, 1(3). https://doi.org/10.1186/2194-7899-1-3.
Brooke, J., Diaz-Gil, A., & Jackson, D. (2018). The impact of dementia in the prison setting: A systematic review. Dementia., 19(5), 1509–1531. https://doi.org/10.1177/1471301218801715.
Carter, J. E., Oyebode, J. R., & Koopmans, R. T. C. M. (2018). Young-onset dementia and the need for specialist care: A national and international perspective. Aging & Mental Health, 22(4), 468–473. https://doi.org/10.1080/13607863.2016.1257563.
Castro, M. E. (2014). Diabetes screening in inmates: A quality improvement pilot project (pp. 468). Doctoral Dissertations. https://opencommons.uconn.edu/dissertations/468.
Charles, A., & Draper, H. (2012). `Equivalence of care’ in prison medicine: Is equivalence of process the right measure of equity? Journal of Medical Ethics, 38(4), 215–218. https://doi.org/10.1136/medethics-2011-100083.
Coyle, A. (2004). Prison reform efforts around the world: The role of prison administrators. Pace Law Review, 24(2), 825–832.
Damberg, C. L., Shaw, R., Teleki, S. S., Hiatt, L., & Asch, S. M. (2011). A review of quality measures used by state and federal prisons. Journal of Correctional Health Care, 17(2), 122–137. https://doi.org/10.1177/1078345810397605.
De Viggiani, N. (2007). Unhealthy prisons: Exploring structural determinants of prison health. Sociology of Health & Illness, 29(1), 115–135. https://doi.org/10.1111/j.1467-9566.2007.00474.x.
Donabedian, A. (1988). The quality of care: How can it be assessed. Journal of the American Medical Association, 260(12), 1743–1748. https://doi.org/10.1001/jama.1988.03410120089033.
Exworthy, T., Wilson, S., & Forrester, A. (2011). Beyond equivalence: Prisoners' right to health. The Psychiatrist, 35(6), 201–202. https://doi.org/10.1192/pb.bp.110.033084.
Fazel, S., Hayes, A. J., Bartellas, K., Clerici, M., & Trestman, R. (2016). Mental health of prisoners: Prevalence, adverse outcomes, and interventions. The Lancet Psychiatry, 3(9), 871–881. https://doi.org/10.1016/S2215-0366(16)30142-0.
Gainforth, H. L., Sheals, K., Atkins, L., Jackson, R., & Michie, S. (2016). Developing interventions to change recycling behaviors: A case study of applying behavioral science. Applied Environmental Education & Communication, 15(4), 325–339. https://doi.org/10.1080/1533015X.2016.1241166.
Ginn, S. (2013). Promoting health in prison. British Medical Journal, 346(7910), 19–21.
Greenhalgh, T., Howick, J., & Maskrey, N. (2014). Evidence-based medicine: A movement in crisis? British Medical Journal, 348. https://doi.org/10.1136/bmj.g3725.
Greifinger, R. B. (2012). Independent review of clinical health services for prisoners. International Journal of Prison Health, 8(3-4), 141–150. https://doi.org/10.1108/17449201211285012.
Halachmi, A. (2002). Performance measurement, accountability and improved performance. Public Performance and Management, 25(4), 370–374. https://doi.org/10.1080/15309576.2002.11643674.
Health Resources and Services Administration. (2021). Uniform Data System Reporting Requirements for 2021 Health Center Data. Retrieved from https://bphc.hrsa.gov/sites/default/files/bphc/datareporting/pdf/2021-uds-manual.pdf. Accessed 14 Feb 2022.
Hoge, S. K., Greifinger, R. B., Lundquist, T., & Mellow, J. (2009). Mental health performance measurement in corrections. International Journal of Offender Therapy & Comparative Criminology, 53(6), 634–647. https://doi.org/10.1177/0306624X08322692.
Hopkin, G., Evans-Lacko, S., Forrester, A., Shaw, J., & Thornicroft, G. (2018). Interventions at the transition from prison to the community for prisoners with mental illness: A systematic review. Administration and Policy in Mental Health and Mental Health Services Research, 45(4), 623–634. https://doi.org/10.1007/s10488-018-0848-z.
Ismail, N., & de Viggiani, N. (2018). How do policymakers interpret and implement the principle of equivalence with regard to prison health? A qualitative study among key policymakers in England. Journal of Medical Ethics, 44(11), 746–750. https://doi.org/10.1136/medethics-2017-104692.
Jotterand, F., & Wangmo, T. (2014). The principle of equivalence reconsidered: Assessing the relevance of the principle of equivalence in prison medicine. The American Journal of Bioethics : AJOB, 14(7), 4–12. https://doi.org/10.1080/15265161.2014.919365.
Karstedt, S., Bergin, T., & Koch, M. (2019). Critical junctures and conditions of change: Exploring the fall of prison populations in US states. Social & Legal Studies, 28(1), 58–80. https://doi.org/10.1177/0964663917747342.
Kessner, D. M., Kalk, C. E., & Singer, J. (1973). Assessing health quality - the case for tracers. The New England Journal of Medicine, 288(4), 189–194. https://doi.org/10.1056/NEJM197301252880406.
Kinner, S. A., & Young, J. T. (2018). Understanding and improving the health of people who experience incarceration: An overview and synthesis. Epidemiologic Reviews, 40(1), 4–11. https://doi.org/10.1093/epirev/mxx018.
Kintz, K. E. (2013). Quality measures in correctional health care.
Kötter, T., Blozik, E., & Scherer, M. (2012). Methods for the guideline-based development of quality indicators - a systematic review. Implementation Science, 7(1), 7. https://doi.org/10.1186/1748-5908-7-21.
Kronenberg, C., Doran, T., Goddard, M., Kendrick, T., Gilbody, S., Dare, C. R., … Jacobs, R. (2017). Identifying primary care quality indicators for people with serious mental illness: A systematic review. British Journal of General Practice., 67(661), e519–e530. https://doi.org/10.3399/bjgp17X691721.
Laffan, S. (2016). Evaluation of your medical department. American Jails, 30(2), 62–64.
Lines, R. (2006). From equivalence of standards to equivalence of objectives: The entitlement of prisoners to health care standards higher than those outside prisons. International Journal of Prisoner Health, 2(4), 269–280. https://doi.org/10.1080/17449200601069676.
Loeb, J. M. (2004). The current state of performance measurement in health care. International Journal for Quality in Health Care, 16(suppl_1), i5–i9. https://doi.org/10.1093/intqhc/mzh007.
Mainz, J. (2003). Defining and classifying clinical indicators for quality improvement. International Journal for Quality in Health Care, 15(6), 523–530. https://doi.org/10.1093/intqhc/mzg081.
Maschi, T., & Viola, D. (2013). The high cost of the international aging prisoner crisis: Well-being as the common denominator for action. The Gerontologist, 53(4), 543–554. https://doi.org/10.1093/geront/gns125.
Maxwell, R. J. (1992). Dimensions of quality revisited: From thought to action. Quality in Health Care, 1(3), 171–177. https://doi.org/10.1136/qshc.1.3.171.
McCann, L. J., Peden, J., Phipps, E., Plugge, E., & O'Moore, É. J. (2019). Developing gender-specific evidence-based standards to improve the health and wellbeing of women in prison in England: A literature review and modified eDelphi survey. International Journal of Prisoner Health, 16(1), 17–28. https://doi.org/10.1108/IJPH-02-2019-0010.
McLeod, K. E., Butler, A., Young, J. T., Southalan, L., Borschmann, R., Sturup-Toft, S., … Kinner, S. A. (2020). Global prison health care governance and health equity: A critical lack of evidence. American Journal of Public Health, 110(3), 303–308. https://doi.org/10.2105/AJPH.2019.305465.
Niveau, G. (2007). Relevance and limits of the principle of “equivalence of care” in prison medicine. Journal of Medical Ethics, 33(10), 610–613. https://doi.org/10.1136/jme.2006.018077.
Omolade, S. (2014). Analytical summary 2014: The needs and characteristics of older prisoners: Results from the surveying prisoner crime reduction (SPCR) survey. London: Ministry of Justice (UK).
Peters, M. D. J., Godfrey, C., McInerney, P., Munn, Z., Tricco, A. C., & Khalil, H. (2020). Scoping reviews. In E. Aromataris, & Z. Munn (Eds.), JBI manual for evidence synthesis: JBI.
Raimer, B. G., & Stobo, J. D. (2004). Health care delivery in the Texas prison system: The role of academic medicine. Journal of the American Medical Association, 292(4), 485–489. https://doi.org/10.1001/jama.292.4.485.
Rushforth, B., Stokes, T., Andrews, E., Willis, T. A., McEachan, R., Faulkner, S., & Foy, R. (2015). Developing ‘high impact’ guideline-based quality indicators for UK primary care: A multi-stage consensus process. BMC Family Practice, 16, 16. https://doi.org/10.1186/s12875-015-0350-6.
Ryan, A. M., & Doran, T. (2012). The effect of improving processes of care on patient outcomes: Evidence from the United Kingdom's Quality and Outcomes Framework. Medical Care, 50(3), 191–199. https://doi.org/10.1097/MLR.0b013e318244e6b5.
Stone, T. T., Kaiser, R. M., & Mantese, A. (2006). Health care quality in prisons: A comprehensive matrix for evaluation. Journal of Correctional Health Care, 12(2), 89–103. https://doi.org/10.1177/1078345806288948.
Stürup-Toft, K. A., O'Moore, É. J., & Plugge, E. H. (2018). Looking behind the bars: Emerging health issues for people in prison. British Medical Bulletin, 125(1), 15–23. https://doi.org/10.1093/bmb/ldx052.
Teleki, S. S., Damberg, C. L., Shaw, R., Hiatt, L., Williams, B., Hill, T. E., & Asch, S. M. (2011). The current state of quality of care measurement in the California Department of Corrections and Rehabilitation. Journal of Correctional Health Care, 17(2), 100–121. https://doi.org/10.1177/1078345810397498.
Thomas, E., Wang, E., Curry, L., & Chen, P. (2016). Patients' experiences managing cardiovascular disease and risk factors in prison. Health & Justice, 4(1), 1–8. https://doi.org/10.1186/s40352-016-0035-9.
Tricco, A. C., Lillie, E., Zarin, W., O'Brien, K., Colquhoun, H., Kastner, M., … Straus, S. E. (2016). A scoping review on the conduct and reporting of scoping reviews. BMC Medical Research Methodology, 16(15), 15. https://doi.org/10.1186/s12874-016-0116-4.
Tricco, A. C., Lillie, E., Zarin, W., O'Brien, K. K., Colquhoun, H., Levac, D., … Straus, S. E. (2018). PRISMA extension for scoping reviews (PRISMA-ScR): Checklist and explanation. Annals of Internal Medicine, 169(7), 467–473. https://doi.org/10.7326/M18-0850.
Tyler, N., Miles, H. L., Karadag, B., & Rogers, G. (2019). An updated picture of the mental health needs of male and female prisoners in the UK: Prevalence, comorbidity, and gender differences. Social Psychiatry and Psychiatric Epidemiology, 54(9), 1143–1152. https://doi.org/10.1007/s00127-019-01690-1.
United Nations General Assembly (2015). United Nations Standard Minimum Rules for the Treatment of Prisoners (the Nelson Mandela Rules): resolution / adopted by the General Assembly, 8 January 2016, A/RES/70/175. Available at: https://www.refworld.org/docid/5698a3a44.html. Accessed 30 June 2021.
Walmsley, R. (2018). World Prison Population List. Retrieved from https://www.prisonstudies.org/sites/default/files/resources/downloads/wppl_12.pdf. Accessed 14 Feb 2022.
Watts, B. (2015). Development of a performance-based RFP for correctional health Care Services in Vermont.
Williams, B. A., Stern, M. F., Mellow, J., Safer, M., & Greifinger, R. B. (2012). Aging in correctional custody: Setting a policy agenda for older prisoner health care. American Journal of Public Health, 102(8), 1475–1481. https://doi.org/10.2105/AJPH.2012.300704.
Willis, T. A., West, R., Rushforth, B., Stokes, T., Glidewell, L., Carder, P., … Foy, R. (2017). Variations in achievement of evidence-based, high-impact quality indicators in general practice: An observational study. PLoS One, 12(7), e0177949. https://doi.org/10.1371/journal.pone.0177949.
Wilper, A. P., Woolhandler, S., Wesley Boyd, J., Lasser, K. E., McCormick, D., Bor, D. H., & Himmelstein, D. U. (2009). The health and health care of US prisoners: Results of a nationwide survey. American Journal of Public Health, 99(4), 666–672. https://doi.org/10.2105/AJPH.2008.144279.
World Health Organisation. (2019). Status report on prison health in the WHO European region. Denmark.
Wright, K. N. (2005). Designing a national performance measurement system. The Prison Journal, 85(3), 368–393. https://doi.org/10.1177/0032885505279389.
Wright, N., Hankins, F., & Hearty, P. (2021). Long-term condition management for prisoners: Improving the processes between community and prison. BMC Family Practice, 22(80), 80. https://doi.org/10.1186/s12875-021-01417-9.
The authors would like to thank the study project team and oversight committee.
This study (“Understanding and improving the quality of primary care for prisoners: a mixed methods study”) is funded by the National Institute for Health Research (NIHR) Health Services and Delivery Programme, UK (reference number: HS&DR 17/05/26). The views expressed are those of the authors and not necessarily those of the NIHR or the Department of Health and Social Care.
The authors declare that they have no competing interests.
Cite this article
Bellass, S., Canvin, K., McLintock, K. et al. Quality indicators and performance measures for prison healthcare: a scoping review. Health Justice 10, 13 (2022). https://doi.org/10.1186/s40352-022-00175-9
- Quality indicators
- Performance measurement
- Prison healthcare
- Correctional healthcare
- Quality of prison healthcare