By: Rebecca Herrington
Picking up where we left off last time with Qualitative Rigor in Design, this week’s blog focuses on ensuring Qualitative Rigor in Implementation, Data Cleaning, and Analysis. If you have not had time to read our first blog on Qualitative Rigor, we highly recommend that you start there as it lays out Headlight’s framing and some key principles for Qualitative Rigor writ large.
Implementation Phase
Implementation is a core phase for quality control of qualitative approaches. Just as with quantitative approaches, it is essential during implementation to adhere to the tool design, meet sampling thresholds, and capture and store data in a safe, secure, and accurate manner.
First and foremost, the design of evaluative efforts in international development work often happens during the proposal phase. This can mean that the team that writes the original design is different from the team later formed to implement data collection. Especially for qualitative efforts, where there is more nuance and methods are often misinterpreted, a solid handover between the design and implementation teams is essential. That handover should build a robust, nuanced understanding of the original design and the rationale for design decisions, and include a collaborative discussion of how any limitations or contextual concerns may have shifted and what design alterations are most appropriate. Other key considerations to build qualitative rigor during implementation include:
- Review tool questions to make sure they appropriately anonymize data, align with the relevant evaluation questions, are lean and contextually relevant, and do not bias respondents toward a particular answer;
- Conduct double-blind translations of data collection tools. This helps ensure that accurate information is collected from respondents and that questions within tools account for any contextual or interviewee sensitivities;
- Practice interviews with others on the data collection team to maintain consistency in protocol application and build interviewers’ capacity to pivot or dig deeper as needed;
- Conduct daily check-ins with yourself and team members during data collection to create intentional space to debrief, clarify, adapt, and reflect;
- Keep clean notes with consistent capture of respondent information, ideally through live transcription. This may require note-taking training for the team before fieldwork;
- Establish and adhere to a strict protocol for any post-data-collection transcription, uploading/saving, and processing so that no data is lost or corrupted before cleaning and coding (see the file-integrity sketch after this list); and,
- Push back against donors who want to see “preliminary findings” immediately after fieldwork. Without time for analysis, these “findings” are based on recall bias and a handful of statements that stuck out to a particular interviewer. Presenting “findings” this way does not adhere to core principles of rigor, sets poor expectations with the client about the evaluative work, and can bias the analysis by altering the codebook.
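As one illustration of what a strict post-fieldwork data-handling protocol can include, the minimal sketch below verifies SHA-256 checksums so that a transcript can be confirmed intact after it is uploaded or moved to secure storage. This is a generic example rather than a required template, and the folder and manifest names are hypothetical.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 checksum of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_manifest(folder: Path, manifest: Path) -> None:
    """Record a checksum for every transcript before it is moved or uploaded."""
    checksums = {p.name: sha256_of(p) for p in sorted(folder.glob("*.docx"))}
    manifest.write_text(json.dumps(checksums, indent=2))

def verify_manifest(folder: Path, manifest: Path) -> list[str]:
    """Return the names of any files that are missing or no longer match their checksum."""
    expected = json.loads(manifest.read_text())
    return [
        name
        for name, checksum in expected.items()
        if not (folder / name).exists() or sha256_of(folder / name) != checksum
    ]

# Hypothetical usage: build the manifest right after fieldwork, then verify the
# copy that lands in shared, secure storage before deleting anything local.
# build_manifest(Path("fieldwork/transcripts"), Path("fieldwork/manifest.json"))
# problems = verify_manifest(Path("secure_drive/transcripts"), Path("fieldwork/manifest.json"))
```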
Data Cleaning and Coding Phase
The next phase in evaluative efforts is often data cleaning and coding. Qualitative coding has many rigor pitfalls and can quickly become disastrous if certain protocols and standards are not followed. At a minimum:
- Review all data BEFORE uploading it into any qualitative analysis software. Make sure notes are clean and fix any misspelled words, acronyms, or shorthand used during note-taking. Copy and paste segments to the corresponding interview protocol questions if any misalignment occurred during note-taking in the interviews;
- Name files consistently and clearly, and double-check anonymity, especially if you are bringing in additional staff to support coding and analysis;
- Uploads should be in the best format for the qualitative analysis software you are using. For example, when using Dedoose, clean Word documents with no tables or PDF documents work best, although the software manages a wide range of additional formats. Headlight has a forthcoming blog to help you choose which qualitative analysis software is right for you;
- Ensure you have a well-defined codebook. The codebook should be structured around the evaluation questions, NOT “preliminary findings or trends.” All codes should be well-defined in layman’s terms so that any coder can check for proper application. The codebook should have space to evolve based on sub-trends identified during coding, but those sub-trends should be identified under the core codebook skeleton aligned with the evaluation questions (a minimal example of this structure is sketched after this list);
- Code the appropriate amount of information. Ideally, this means coding just enough that another person who has not read the interview would understand the key point being made. Sometimes this is a phrase, sometimes a sentence, sometimes a few sentences. You do not want to code only a few words and then not know what they refer to during analysis, but you also do not want to code only whole paragraphs and have to re-read everything during analysis;
- When onboarding new coders, start with double-blind coding. This is useful even with trained coders when you have three or more people coding a large project. Have everyone use the codebook to code an interview or document. Trade documents between coders and highlight any code applications that may be missing, may not align with the definitions, or capture too much or too little information. Conduct a collaborative feedback session led by the codebook designer and/or the evaluation lead to confirm and refine codebook application (a simple agreement check is also sketched after this list); and,
- Keep track of emergent sub-trends and start coding them as soon as possible (after two or so identified instances). Avoid back-coding to minimize bias. It is better to have a few codes with insufficient data to analyze than multiple rounds of coding and circling back through documents, which can overemphasize trends identified later in the coding process.
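To make the codebook structure referenced above concrete, here is one minimal way to represent it in data form. The evaluation questions, codes, and definitions are invented placeholders; the point is the shape: parent codes sit under the evaluation questions, each code carries a plain-language definition, and emergent sub-trends nest under an existing code in the core skeleton rather than floating free.

```python
# A minimal, illustrative codebook skeleton (all names are placeholder examples).
codebook = {
    "EQ1: To what extent did the training change teaching practice?": {
        "Practice change": {
            "definition": "Respondent describes doing something differently "
                          "in the classroom because of the training.",
            "sub_trends": ["Lesson planning", "Student feedback"],  # added during coding
        },
        "Barriers": {
            "definition": "Respondent names something that prevented them from "
                          "applying what they learned.",
            "sub_trends": [],
        },
    },
    "EQ2: How sustainable are the observed changes?": {
        "Continuation plans": {
            "definition": "Respondent states concrete intentions or resources for "
                          "continuing the practice after the project ends.",
            "sub_trends": [],
        },
    },
}

def add_sub_trend(question: str, code: str, sub_trend: str) -> None:
    """Register an emergent sub-trend under an existing code, keeping the skeleton intact."""
    codebook[question][code]["sub_trends"].append(sub_trend)
```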
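Separately, the double-blind comparison described above can be supplemented with a quick numeric check of how often two coders applied the same codes. A chance-corrected agreement statistic such as Cohen’s kappa is not part of the guidance above; it is offered here only as an optional sanity check, and the sketch assumes each coder recorded a single code per excerpt. The codes and values are hypothetical.

```python
from collections import Counter

def cohens_kappa(coder_a: list[str], coder_b: list[str]) -> float:
    """Chance-corrected agreement between two coders over the same excerpts."""
    assert len(coder_a) == len(coder_b), "both coders must label the same excerpts"
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b[c] for c in set(coder_a) | set(coder_b)) / n ** 2
    return 1.0 if expected == 1 else (observed - expected) / (1 - expected)

# Hypothetical example: the code each coder applied to the same six excerpts.
coder_a = ["Practice change", "Barriers", "Barriers", "Practice change", "Continuation plans", "Barriers"]
coder_b = ["Practice change", "Barriers", "Practice change", "Practice change", "Continuation plans", "Barriers"]
print(f"Cohen's kappa: {cohens_kappa(coder_a, coder_b):.2f}")
```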
Analysis Phase
The analysis phase can also suffer from a lack of rigor due to misinterpretation of findings; lack of distinction between findings, conclusions, and recommendations; untriangulated findings; and unfounded and unactionable recommendations. Headlight has a wealth of information to share to improve qualitative analysis, which we will continue to share via blogs and our forthcoming Methods Memos. However, the core aspects of analysis you need to remember to ensure Qualitative Rigor are:
- Analysis must be done through a structured process that goes beyond summarizing the data. You cannot achieve Qualitative Rigor through summarization of qualitative data. Headlight strongly recommends using a Findings, Conclusions, Recommendations Matrix, which we will detail in a forthcoming blog post. Using this structure will help, but it is not by itself sufficient to ensure rigor in analysis;
- A clear separation of Findings, Conclusions, and Recommendations is critical to Qualitative Rigor. Common pitfalls include insufficient triangulation of findings, presenting findings as conclusions, and failing to logically connect recommendations to findings and conclusions, amongst other issues. The following definitions and guidance will help separate findings, conclusions, and recommendations into their distinct and respective boxes:
  - Findings are only what the data says. Findings include illustrative quotes, quantitative counts of code applications, and identification of which trends have been triangulated. Findings should not include any summarization of multiple points of data nor any interpretation of the data. It is important to triangulate findings: single data points, or multiple mentions of the same point by a single interviewee, are not sufficient to report a finding or make it actionable. Key findings should be triangulated from at least three distinct data sources (one way to operationalize this threshold is sketched after this list);
  - Conclusions are the “So, What?” that stems from the findings. Conclusions are a direct interpretation of a finding, or of multiple findings together, that explains why something happened or why it is meaningful. Conclusions should not include additional interpretation of observations unless those observations are explicitly recorded in the findings and were intentionally captured as data included in coding. Conclusions should be detailed enough to present a fuller picture of what worked or did not work, in a form that is easily digestible for the client and/or implementer. Ideally, an Executive Summary provides an overview of Key Conclusions, not Key Findings, as the “So, What?” is often what clients are most interested in;
  - Recommendations must stem directly from Findings and Conclusions. Recommendations should be actionable, with enough information for the client and/or implementer to devise an action plan and implement an adaptation directly from the recommendation/evaluation report content. This clarity comes from explicitly naming the who, the how, the constraints, and the why of taking a specific action in response to a conclusion. Unclear, unactionable recommendations prevent the uptake and integration of evidence into development work, wasting MEL resources. Ideally, recommendations are prioritized by the strength of the data so that clients can operationalize changes based on the strongest evidence; and,
- Qualitative Rigor in analysis often requires secondary coding and analysis. Initial coding will identify the core trends, but to properly leverage the data available from qualitative evaluative efforts, secondary analysis under the larger trends (triangulated trends with substantial excerpts underneath them) is necessary to provide an actionable level of nuance. Headlight will provide additional how-to resources on secondary analysis in the near future.
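One way to operationalize the three-source triangulation threshold described under Findings is sketched below. It assumes each coded excerpt can be exported from your analysis software as a (code, source) pair; the codes and source IDs are invented for illustration.

```python
from collections import defaultdict

# Hypothetical coded excerpts exported as (code, source_id) pairs.
excerpts = [
    ("Practice change", "KII-03"),
    ("Practice change", "KII-03"),      # repeat mentions by one interviewee count once
    ("Practice change", "FGD-01"),
    ("Practice change", "Doc-Review-02"),
    ("Barriers", "KII-05"),
    ("Barriers", "KII-05"),
]

def triangulated_codes(pairs: list[tuple[str, str]], threshold: int = 3) -> dict[str, bool]:
    """Flag whether each code is supported by at least `threshold` distinct sources."""
    sources = defaultdict(set)
    for code, source in pairs:
        sources[code].add(source)
    return {code: len(srcs) >= threshold for code, srcs in sources.items()}

print(triangulated_codes(excerpts))
# -> {'Practice change': True, 'Barriers': False}
```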
There is currently very low application of these techniques across the evaluation field, both in guidance and understanding from donors (see the Raising the Bar evaluation report from the Alliance for Peacebuilding) and in the training of evaluators (Dewey et al., 2008). Application of qualitative rigor is often self-taught or learned on the job, with few resources providing the depth of information needed to become truly skilled in these techniques. As such, there are minimal expectations for application from donors and even within the MEL community. Headlight wishes to share what we have learned about improving rigor in qualitative approaches to contribute to the advancement and continued growth of the evaluation field. Headlight specializes in qualitative methods and rigor, so if you’d like help on your next qualitative evaluative effort, training for staff on qualitative approaches, or associated technical assistance, reach out to us at <info@headlightconsultingservices.com>. Our upcoming blog posts will delve deeper into qualitative analysis, choosing qualitative analysis software, and using a Findings, Conclusions, and Recommendations Matrix.