How triangulation contributes to meaningful data analysis

When one hears monitoring and evaluation (M&E) experts speak, the concept ‘triangulation’ often comes up. In a space that can be incredibly technical in nature, M&E can sometimes be too ‘jargony’. Therefore, unpacking M&E terms is a critical part of ensuring that M&E is comprehensible to as many people as possible. Tshikululu believes that this, ultimately, will allow for more effective social investment decision-making across the board. So what is triangulation, and why is it so important?

Triangulation refers to the use of multiple methods or data sources in qualitative research to develop a comprehensive understanding of phenomena[1]. It improves the quality of evaluation findings and makes analysis more reliable by examining data from different points of view, collected through different methods from varying stakeholders. It enriches findings by exploring and explaining different aspects of an issue. In addition, it allows evaluators to better unpack unexpected (or expected) findings that emerge through the evaluation process. For example, evaluators may find that points highlighted by a specific group of stakeholders contradict each other, thereby reducing the reliability of the information. In such a case, additional data collection – using a different method – may be needed to determine the reasons for the differing accounts. In a recent community needs assessment conducted by Tshikululu, for instance, youth highlighted recreational facilities, skills development and bursary programmes as their most urgent community needs, while business forums stressed business opportunities and the refurbishment of roads as the top priority. Depending on the kind of stakeholder and their role in the community or programme, feedback will inevitably vary.

The use of triangulation can also reduce (or even eliminate) the biases of different stakeholders, including the evaluators themselves. For example, Tshikululu conducted an evaluation that looked into the outcomes achieved through a school infrastructure project. Survey responses from schools indicated that the infrastructure was in good condition. However, when the evaluation team went to observe the infrastructure in person, we found that it was not being maintained and that the sustainability of results was threatened.

When conducting and managing evaluations, one of Tshikululu’s ‘cardinal rules’ is to ensure there is sufficient triangulation. For example, at a minimum each evaluation question should be answered through at least two data sources. The following are some of the key data collection methods used by Tshikululu to make this happen:

  • Desktop research: The collection of secondary data from easily accessible (usually online) information (e.g. literature, research, policy documents, community demographics, national/regional statistics, etc.).
  • Project document review: The systematic review of accessible documents related to the project, programme or organisation being evaluated. These include project conceptualisation or strategy documents, performance monitoring reports, quantitative/qualitative data and previous evaluations that have been completed.
  • Interviews: Interviews provide the opportunity to ‘dive deeply’ into stakeholder experiences during the planning and/or implementation of the programme/project/organisation being evaluated.
  • Focus group discussions: A focus group discussion involves facilitating an in-depth conversation between a group of participants about a given topic or issue with the assistance of an external moderator. This method is used to gauge participants’ attitudes, perceptions, knowledge, and experiences toward a given topic or issue.
  • Surveys: Surveys consist of closed-ended or open-ended questions and are used for collecting data from large groups of subjects on a specific topic.
  • Photovoice: This method utilises the power of storytelling through photography. Photovoice allows researchers and participants to use photos to interpret emotions, experiences and reality.

Once data has been collected, triangulation and analysis allow us to clearly identify the areas where information is consistent or contradictory, and then to investigate the reasons for this. The credibility, validity and reliability of evaluation findings are unquestionably stronger as a result. Too often, we see evaluations that skew too far in one direction, or make ‘bold claims’ on un-triangulated evidence. For example, if one or only a few key informants report through an interview that a programme was not relevant or effective, this finding is important to highlight. However, if it is an isolated data point – and all other participants gave different (positive) feedback – it must be noted as such.

Evaluators must understand the importance of triangulation. When done effectively, triangulation strengthens evaluation findings and improves the chances of evaluation use. If the involved stakeholders trust and agree with the evaluation findings, they are more likely to use the findings and implement recommendations. Just as importantly, it prevents a situation where flawed evaluation findings – which have not been triangulated – are used to make decisions about the future direction of a project, programme or organisation.

There is a need to promote evaluations as a tool for learning and improving development interventions, rather than as a ‘tick-box’ exercise. Strategic social investment ought to be informed by monitoring and evaluation data; when it is, the chances of maximising social impact are greatly increased.