Health Psychologist (MSc)
Body & Gestalt Psychotherapist (ECP)

Can qualitative research methods make a useful contribution to the evaluation of interventions for the promotion and protection of health?

While the idea of health promotion appeared early in the twentieth century, the first published definition was provided by Lalonde (1974, in Kumar & Preetha, 2012): “a strategy aimed at informing, influencing and assisting both individuals and organizations so that they will accept more responsibility and be more active in matters affecting mental and physical health”.

More recently, the “Ottawa Charter for Health Promotion” (World Health Organization [WHO], 1986) defined health promotion as “the process of enabling people to increase control over and to improve their health”. Nutbeam’s (1985) conceptualization further clarified the relationship between individuals and communities, what is to be controlled, and a potential causal mechanism: “the process of enabling people to increase control over the determinants of health and thereby improve their health”.

All these definitions stress empowerment, which is widely accepted as the fundamental principle of health promotion (Rootman, Goodstadt, Hyndman, McQueen, Potvin, Springett, & Ziglio, 2001). They indicate that health promotion involves enabling individuals and communities to assume the power to which they are entitled. Thus, an intervention is considered health promoting only if it includes empowering activities (Rootman et al., 2001). Health promotion calls for actions to improve the political, cultural, social, economic, environmental, biological and behavioral factors which influence people’s health (WHO, 1986). In this context, health is seen as a state of complete physical, mental and social well-being and a resource for everyday life which goes beyond the province of the health sector alone.

Health promotion requires the coordinated action of all sectors concerned: health, social and economic sectors, governments, nongovernmental and voluntary organizations, local authorities, industry and the media. It also involves people in all walks of life as individuals, families and communities. Professional and social groups and health staff need to mediate between various interests in society to promote health (WHO, 1986). In addition, health promotion emphasizes equal opportunities and resources for all people to attain their full health potential. This entails action towards achieving a secure base in a supportive environment, access to information, life skills and opportunities for making healthy choices (WHO, 1986). Within these principles, the approach of health promotion can be applied in the domains of prevention, treatment, rehabilitation and long-term care (Rootman et al., 2001).

Health promotion and protection are necessities, given the estimated mortality rates from chronic diseases provided by WHO for the world population in 2009 (WHO, 2009). In particular, heart disease, stroke, cancer, chronic obstructive pulmonary disease (COPD), chronic renal failure, diabetes, noninfectious digestive diseases such as liver cirrhosis, and neurological disorders such as Alzheimer’s disease caused about 60% (36 million) of all deaths worldwide, while the remaining 40% (24 million) were attributed to largely acute conditions: infectious/parasitic diseases, accidents, violence, malnutrition, congenital anomalies, maternal conditions, and perinatal disease (low birth weight/failure to thrive) (Harris, 2013).

Given their crucial aim, interventions designed for health promotion purposes need to be evaluated in terms of their actual effect on the target population, their cost-effectiveness, whether expected outcomes have been achieved, and potential improvements (Round, Marshall, & Horton, 2005). WHO (1998) stresses that “evaluation aims to produce information that can be used by those who have an interest in the improvement and effectiveness of interventions”. Good evaluation provides for systematic documentation, dissemination and promotion of effective practice (Garrard, Lewis, Keleher, Tunny, Burke, Harper, & Round, 2004). It is especially required in generating an evidence base for integrated health promotion programs which can improve health outcomes and reduce the demand for health services (Round et al., 2005).

Evaluation may broadly refer to process, impact and outcome. Process evaluation determines the quality, appropriateness and scope of the intervention by assessing the components of its development and delivery. It can be applied at any stage from planning to implementation, so that any problems identified can be resolved immediately, with the least effect on the program (Round et al., 2005). Impact evaluation examines the extent to which an intervention has achieved its objectives; it measures the intervention’s direct effects once the implementation stages have been completed. Finally, outcome evaluation assesses the degree to which intervention goals have been attained by measuring its longer-term effects, such as changes in mortality rates, incidence/prevalence of health conditions, sustained behaviour, environmental conditions, equity and quality of life (Round et al., 2005).

In general, health promotion evaluation employs quantitative and qualitative methods. Quantitative methods involve collecting numerical data (e.g. pre/post surveys with or without a comparison group) and are often used in impact evaluation to measure the effect of an intervention on the target group compared with a “control group” which did not receive the intervention. Qualitative methods, on the other hand, involve collecting written or spoken data (e.g. interviews, focus groups, case studies, document analysis) and are frequently used in process evaluation (Round et al., 2005).

The choice of data collection methods is determined by the purpose of evaluation, the issues to be explored, financial resources, skills and time demands. To measure change (over time or across groups) in health status (impacts or outcomes), behaviours, knowledge or intentions, or to enable generalization of results from the sample used to the whole target population, quantitative methods are most suitable (Round et al., 2005). Conversely, to explore participants’ experiences and issues of “how” and “why”, qualitative methods are more pertinent. Qualitative methods can also be combined with quantitative methods to clarify potential effects or outcomes indicated by the quantitative data. For example, in the evaluation of a nationwide French health promotion (HP) program aiming to enhance primary school pupils’ social, emotional and physical health through improvement of teachers’ HP practices and promotion of a healthy school environment (Pommier, Guével, & Jourdan, 2010), qualitative and quantitative methods were mixed in all stages of the project, from design through data collection to data interpretation. Quantitative numerical data was collected from questionnaires and forms at the same time as qualitative data (text data, transcripts and memos) gathered from open-ended questions in the questionnaires and forms and from semi-directed interviews. The data was analyzed through standard quantitative and qualitative techniques, and interpretation was quantitative, qualitative and combined: quantitative results were clarified by the qualitative results so as to generalize the findings (Pommier et al., 2010).

In addition, Patton (2002, in Round et al., 2005) argues that evaluation can be reinforced via “triangulation”, that is, combining several evaluation methods, qualitative, quantitative or both (Round et al., 2005). For example, in the evaluation of the “In SHAPE” health promotion program designed for people with serious mental illness (SMI) (Shiner, Whitley, Van Citters, Pratt, & Bartels, 2008), informal conversations with “In SHAPE” health mentors followed the interviews with participants in order to expand the data set, thus allowing for analytical triangulation. Also, in the ACT Consortium projects (Chandler, Reynolds, Palmer, & Hutchinson, 2013), concerning the development and evaluation of delivery mechanisms of Artemisinin Combination Treatment (ACT) for malaria in Africa and Asia, data collected from focus groups were often “triangulated” with data from individual interviews or questionnaires so as to validate or broaden the interpretation. The ACT Consortium (Chandler et al., 2013) employed qualitative methods to explore the perceptions and behaviors of the community, patients and health workers regarding malaria, its diagnosis and treatment in various settings. Qualitative methods were also used in outcome evaluation to collect and compare perceptions before and after an intervention or between intervention and control groups. Moreover, in process evaluation, qualitative methods were employed to understand in detail the elements of interventions delivered to participants and thus to interpret the interventions’ impact and inform policy.

Quantitative data collection methods in health promotion evaluation include surveys, structured observation, population statistics or other record data, environmental audits and quantitative content analysis (such as analysis of policies) (Garrard et al., 2004). Quantitative surveys involve closed-ended questions with responses chosen from a predetermined set of answers, which allows for quantification of the effects being examined. Surveys are administered by phone, fax, email or mail, or in person (Round et al., 2005). Structured observation concerns watching or assessing individuals or groups to evaluate the effectiveness of an intervention. For example, to evaluate the efficacy of a media campaign within the SunSmart program, which aims to reduce the burden of skin cancer in Victoria, Australia (Cancer Council Victoria, 2018), researchers would observe and measure skin protection behaviours such as wearing long-sleeved clothes and hats and using sunscreen. Population statistics involve the collection of data sets (e.g. participation numbers, health data) by health and other agencies (Garrard et al., 2004).

On the other hand, qualitative data collection methods in health promotion evaluation include in-depth interviews, focus group discussions, participant observation, qualitative document and record analysis, and open-ended surveys (Garrard et al., 2004).

In-depth interviews are unstructured or semi-structured interviews conducted in person or by telephone. They are useful for gathering the views of key persons involved in a program or for discussing sensitive issues with individuals or small numbers of people, and they provide thorough understanding of complex issues (Garrard et al., 2004). For example, in outcome evaluation of ACT Consortium projects, in-depth interviews produce findings concerning perceptions and practices, which may be compared between groups or time points, and concerning participants’ experiences during the intervention. They also help explain the context of intervention and control groups (Chandler et al., 2013).

Focus groups are semi-structured discussions with up to 12 participants, led by a facilitator, around key issues and questions which are usually prepared in advance and sometimes made known to the participants at the start of the session. Proceedings are audio-recorded or captured by a note taker and transcribed later. Focus groups provide in-depth information about stakeholders’ (those participating in or implementing the program) experiences, beliefs, concerns, attitudes and explanations, and may identify issues which can be used in quantitative survey work or in explaining the meaning of quantitative data that has been gathered (Round et al., 2005). For example, in ACT Consortium projects, some of the data collected from focus groups may concern possible responses to fever, which are then interpreted in terms of what each response may refer to or mean (Chandler et al., 2013).

In participant observation, the evaluator observes people’s activities while exerting no influence (unobtrusive form) or participates in activities after stating their role as evaluator (Round et al., 2005). The recorded qualitative notes (field notes) provide information about the physical/social context, the dynamics, what is happening and why. Such information can be used to develop further data collection and to supplement quantitative impact data (Garrard et al., 2004). For example, direct observation was used in ACT Consortium projects (Chandler et al., 2013) to collect data about the context in which an intervention occurs and about how participants experience and interpret the intervention and its consequences.

Document analysis involves diaries or journals of activities, experiences and responses recorded by stakeholders over a specific period or section of the program. Diaries help explore processes (e.g. how the program is going and obstacles that have emerged) and impacts (changes that have occurred due to program participation). Journals are useful in mapping change over time as they provide detailed descriptions of specific aspects of the program and continuing documentation by stakeholders (Round et al., 2005). Finally, open-ended surveys involve a fixed set of questions which can be answered in the respondents’ own words. They can be used to provide greater depth and illuminate particular issues or to explore closed-ended (quantitative) survey results. They can be conducted face-to-face, by telephone, by mail or electronically (Round et al., 2005).

As we can see from the above, qualitative data can help explain and provide greater insight into quantitative data. Overall, while quantitative findings can be statistically sound, consistent, reliable, precise and generalizable (provided that the sample used is large enough and carefully selected to be representative of the target population), they usually cannot explain the context of a phenomenon or other complex issues (Copenhagen School of Global Health, 2018). On the other hand, qualitative data can supplement and refine quantitative data, yield comprehensive information to explain complex issues, and address sensitive subjects; its collection, although time-consuming (as qualitative data comes in large quantities), can be cost-effective (Copenhagen School of Global Health, 2018). For instance, regarding the “In SHAPE” health promotion program, only qualitative approaches can capture participants’ individualized experiences and provide deeper understanding of the process of engaging individuals with SMI in health behavior modification (Shiner et al., 2008).

Because quantitative approaches involve structured questionnaires with closed-ended questions, they produce a limited set of outcomes (those defined in the research proposal), which may not represent what actually happens in a generalizable form (Sudeshna, 2016). Moreover, while qualitative data is collected from a small group of people, quantitative methods generally require a large sample size to represent the target population and to obtain thorough responses on a subject matter, which often renders data collection far more expensive than qualitative approaches (Sudeshna, 2016). Furthermore, in contrast to quantitative research, qualitative studies do not require statistical analysis or the use of specialized computer programs, which are often difficult to carry out, especially for those who lack experience in statistics (Sudeshna, 2016).

Quantitative methods are appropriate and beneficial in exploring “what” and “to what extent”, and when systematic and standardised comparisons are required, but they often fail to explain “why” and “how” (WeeTech Solution, 2018). In contrast, qualitative methods mostly focus on investigating the issues concerned in depth and making sense of the actual problem, and they enable researchers to answer many types of questions (WeeTech Solution, 2018). To this end, qualitative data analysis in health promotion evaluation employs a variety of methods, including thematic analysis, grounded theory and Interpretative Phenomenological Analysis (IPA).

Thematic analysis is particularly useful in describing and categorizing the ‘what’ and ‘how’ of data. It involves categorizing and summarizing themes (common, prominent, shared ideas), such as attitudes, perceptions and reported behavior, that occur in the collected material, including diaries, journals, video or film footage, summaries of participants’ comments and other sources (Chandler et al., 2013). Respondents’ ideas are coded into categories according to a pre-defined scheme (potentially generated from discussions with fieldworkers and research coordinators) or a flexible scheme derived from the actual data. Such coding can summarize participant issues around specific topics; for instance, in ACT projects, these might include treatment-seeking patterns (such as “where do community members seek treatment for fever?”) and practical constraints on access to health care facilities (Chandler et al., 2013). Respondents’ ideas may then be interpreted via comparative methods to examine relationships between different themes or concepts identified in the coding stage: in ACT projects, for instance, the relationship between perceptions of the high cost of new antimalarials and new tests and perceptions of patient demands, or variation in perceptions between different groups such as male and female health workers (Chandler et al., 2013).
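To make the coding and comparison step more concrete, the sketch below shows, in deliberately simplified form, how coded excerpts might be tallied by theme and compared across respondent groups. It is an illustrative sketch only, assuming a flat list of already-coded excerpts: the groups, themes and counts are invented and are not drawn from the ACT Consortium projects, and in practice this step is usually performed with dedicated qualitative analysis software and interpretive judgement rather than ad hoc scripts.

```python
# A minimal, hypothetical sketch of the tallying step in thematic coding.
# The respondent groups, themes and excerpts are invented for illustration;
# they are not data from the ACT Consortium projects.
from collections import Counter
from typing import NamedTuple


class CodedExcerpt(NamedTuple):
    group: str   # respondent group, e.g. "male health worker"
    theme: str   # code assigned to the excerpt during thematic analysis


excerpts = [
    CodedExcerpt("male health worker", "cost of new antimalarials"),
    CodedExcerpt("female health worker", "patient demand for treatment"),
    CodedExcerpt("female health worker", "cost of new antimalarials"),
    CodedExcerpt("male health worker", "patient demand for treatment"),
    CodedExcerpt("male health worker", "cost of new antimalarials"),
]

# Tally how often each theme occurs within each respondent group.
theme_counts = {}
for excerpt in excerpts:
    theme_counts.setdefault(excerpt.group, Counter())[excerpt.theme] += 1

# A simple cross-group comparison: which themes dominate in each group?
for group, counts in theme_counts.items():
    print(group)
    for theme, count in counts.most_common():
        print(f"  {theme}: {count} excerpt(s)")
```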

Thematic analysis helps reveal the breadth of perceptions among participants and potential contradictions, which can illuminate processes to target in behavior-change strategies (Chandler et al., 2013). For example, thematic analysis was used in the evaluation of the individualized health promotion program “In SHAPE”, which comprises elements that increase engagement in physical exercise, dietary modifications, lifestyle changes and preventive health care so as to promote physical health improvement for people with SMI (Shiner et al., 2008). The evaluation aimed to identify aspects of “In SHAPE” that individuals with SMI perceived to be most helpful in achieving physical health improvements, and to use these understandings to improve the program. Thematic analysis of participants’ transcripts revealed three main themes as factors of success in “In SHAPE”: (1) individualized interventions to promote engagement in the program; (2) relationships formed in the program; and (3) enhanced self-confidence due to program participation. The findings indicated that relationships, self-confidence built through long-term engagement in healthy behaviors, support from fitness trainers, and activities integrated into community settings are key factors in promoting long-term engagement in health behavior change and should be incorporated into any health promotion program designed for people with SMI. These insights feed into the performance improvement and sustainability of “In SHAPE” (Shiner et al., 2008).

Grounded theory goes further, exploring ‘why’ and focusing on phenomena in the data which are treated as instances of more generalizable concepts. The objective is to generate theory from the data, for instance concerning stigma in mental health (e.g. Medved, 2014) or beliefs and perceptions of mental illness in different ethnic groups (e.g. Knifton, 2012), rather than imposing theory on the data by arranging it in a pre-determined structure (Chandler et al., 2013). This method employs flexible coding and comparative analysis techniques (described above) to derive analytical categories along with their dimensions, and to identify relationships between them (Spencer, Ritchie, & O’Connor, 2003). Theory is built from concepts which identify the general phenomenon represented in a segment of data (or collection of descriptive codes) undergoing analysis. The theory development process ensures that deviant cases are included, so as to produce a detailed, thorough account which is grounded in the empirical data (Chandler et al., 2013). Data collection and conceptualization continue until categories and relationships become “saturated”, so that new data no longer adds to the developing theory (Spencer et al., 2003). Grounded theory is more intensive and time-consuming than thematic analysis, yet it can better illuminate how participants see the world in a conceptual rather than merely practical manner. It provides in-depth understanding of why target groups behave in specific ways, which may indicate acceptable and effective mechanisms for change (Chandler et al., 2013). A grounded theory approach was used, for instance, in Pommier et al.’s (2010) HP program. Following the realistic evaluation framework of Pawson and Tilley (1997, in Pommier et al., 2010), the program was regarded as a system of assumptions, i.e. action mechanisms producing expected outcomes, which the evaluation tested in order to build a theory that can be applied to, and amended for, the particular program in different contexts. The underlying rationale was that studying the mechanisms triggered when the program is implemented in a particular context, and demonstrating their relationship with the observed outcomes, can reveal how, for whom and under what circumstances the program works (Pommier et al., 2010).

At this point, we need to emphasize that, in order to be effective, qualitative approaches to HP evaluation require a carefully chosen methodology that is appropriate for the issues and components to be evaluated and that is rigorously implemented so as to produce knowledge pertinent to action (Pommier et al., 2010). Moreover, the analysis of qualitative data calls for a variety of operations, for systematic, transparent and comprehensive processes to detect patterns, themes and theoretical constructs in the data, and for continuous reflection to explore these (Chandler et al., 2013).

Provided that these requirements are met, the main contribution of qualitative approaches to the evaluation of health promotion interventions is that they provide valuable data on topics and research questions which cannot be studied via conventional quantitative methods. Their exploratory nature enables researchers to describe, understand and explain specific phenomena, addressing questions of “what?”, “why?” and “how?” rather than “how many?” or “how frequently?” as quantitative methods do. They are therefore particularly suited to studying and documenting processes: how outcomes are attained, what mechanisms are involved, how situations or changes evolve, what difficulties are met and how they are perceived and dealt with, and how clinicians, patients and managers make decisions. Importantly, they can illuminate why the implementation of specific interventions succeeds or fails (Barbour, 2000). While they are not appropriate for questions of prevalence, prediction, outcomes, or cause and effect, and their findings are not statistically generalizable, they can provide in-depth, contextualized accounts and explanations for (sometimes unexpected or anomalous) quantitative findings. They may also offer insight into the mechanisms which underlie correlations or relationships found by quantitative studies (Barbour, 2000). Either on their own or combined with quantitative methods, qualitative methods can produce “knowledge for use”, help explain social processes and how these can be modified to achieve desired ends (Harding & Gantley, 1998, in Barbour, 2000), and shed light on how non-clinical factors influence a wide range of decisions (e.g. whether to refer children for tonsillectomies, or how to manage hospital waiting lists).

Article Author: Panagiota Kypraiou MSc Health Psychology, MBPsS - Body & Gestalt Psychotherapist (ECP) - Body Psychotherapy Supervisor - Parents' Education Groups Coordinator https://www.psychotherapeia.net.gr

References

Barbour, R. S. (2000). The role of qualitative research in broadening the “evidence base” for clinical practice. Journal of Evaluation in Clinical Practice, 6(2), 155–163.

Cancer Council Victoria. (2018). SunSmart program. Retrieved from http://www.sunsmart.com.au/about/sunsmart-program.

Chandler, C.I.R., Reynolds, J., Palmer, J.J., & Hutchinson, E. (2013). ACT Consortium Guidance: Qualitative Methods for International Health Intervention Research. Retrieved from http://www.actconsortium.org/resources.php/72/qualitative-methods-for-international-health-intervention-research.

Copenhagen School of Global Health. (2018). Strengths and limitations. Retrieved from http://betterthesis.dk/research-methods/lesson-1different-approaches-to-research/strengths-and-limitations.

Garrard, J., Lewis, B., Keleher, H., Tunny, N., Burke, L., Harper, S., and Round, R. (2004). Planning for healthy communities: reducing the risk of cardiovascular disease and type 2 diabetes through healthier environments and lifestyles. Victorian Government Department of Human Services, Melbourne.

Harris, R. (2013). Global epidemiology of chronic diseases: the epidemiologic transition. In Harris, R. (Ed.), Epidemiology of Chronic Disease: Global Perspectives, (pp. 1-24). Jones & Bartlett Learning.

Knifton, L. (2012, September). Understanding and addressing the stigma of mental illness with ethnic minority communities. Health Sociology Review, 21(3), 287–298. doi: 10.5172/hesr.2012.21.3.287.

Kumar, S., & Preetha, G. (2012, January). Health Promotion: An Effective Tool for Global Health. Indian Journal of Community Medicine: Official Publication of Indian Association of Preventive & Social Medicine, 37(1), 5–12. doi: 10.4103/0970-0218.94009.

Medved, D. G. (2014). A grounded theory investigation of public stigma, internalized stigma, and mental health recovery in the Wellness Management and Recovery program (Doctoral dissertation). Theses and Dissertations, 1781. Retrieved from http://utdr.utoledo.edu/theses-dissertations/1781.

Nutbeam, D. (1985). Health promotion glossary. Copenhagen: WHO Regional Office for Europe (document ICP/HBI 503 (GO 4)).

Pommier, J., Guével, M.-R., & Jourdan, D. (2010, January). Evaluation of health promotion in schools: a realistic evaluation approach using mixed methods. BMC Public Health, 10, 43. doi: 10.1186/1471-2458-10-43.

Rootman, I., Goodstadt, M., Hyndman, B., McQueen, D. V., Potvin, L., Springett, J., and Ziglio, E. (2001). Evaluation in health promotion: principles and perspectives. WHO Regional Publications, European Series, No. 92.

Round, R., Marshall, B. & Horton, K. (2005). Planning for effective health promotion evaluation. Victorian Government Department of Human Services, Melbourne.

Shiner, B., Whitley, R., Van Citters, A. D., Pratt, S. I., and Bartels, S. J. (2008, June). Learning what matters for patients: qualitative evaluation of a health promotion program for those with serious mental illness. Health Promotion International, 23(1), 275–282. doi: 10.1093/heapro/dan018