Five key questions to determine if and when to evaluate your social investments

As social investors move away from simply “doing good” and become intentional about making impactful investments, high-quality formal evaluations are crucial to inform improved design and implementation.

The primary aim of evaluations is to assess the extent to which interventions are achieving their intended objectives. Specific processes and procedures are usually followed when conducting an evaluation, and how they are undertaken can either improve or hinder the utility and value of the evaluation. Understanding barriers to the use of evaluations, as well as factors that enhance effective utilisation of findings, is therefore essential. Tshikululu actively seeks to maximise evaluation utilisation by focusing on five key questions.

  • Is it the right time to evaluate?

Often, investors are obsessed with measuring the ‘impact’ of their investments. As per the Organisation for Economic Co-operation and Development (OECD), impact refers to the positive and negative changes produced by a development intervention, directly or indirectly, intended or unintended. Reporting on the impact of a programme is a fair client expectation at face value, but measuring impact is sometimes unrealistic, as impact may not be observable immediately upon project implementation or completion. Tshikululu advises our clients to consider appropriate evaluation types depending on the project implementation stage and the information available to make an informed judgement of the programme.

Specifically, social investors should ideally conduct an evaluability assessment before evaluating programmes. This assessment helps evaluators determine whether an evaluation is worthwhile. In instances where it might be too early, Tshikululu advises giving end beneficiaries time to properly experience the intervention before investing in an (often costly) evaluation. Data gaps are also a reality; sometimes there simply isn’t enough data or evidence for an impact assessment. In these cases, social investors should opt for an outcomes evaluation rather than an impact evaluation.

  • Is the evaluation framed objectively?

Evaluators often face resistance from programme partners and/or beneficiaries, as they can be viewed as ‘police’ whose aim is to identify and pick apart what’s going wrong. It’s important for evaluators to build rapport with the implementation team and potential evaluation participants (while of course not getting so close that the independence of the evaluator may be questioned). Most importantly, participants and organisations should understand that the evaluation process is meant for learning, not to “get anyone in trouble”. In that way, there is a better chance of establishing trust, which will improve the validity of findings. Validity, in turn, enhances the uptake of evaluation recommendations.

When engaging end-beneficiaries, substantial power dynamics may come into play. For example, beneficiaries may be concerned that providing negative feedback will lead to the loss of funding or resources. This can result in beneficiaries painting a perfect picture of the programme or saying what they think the evaluator ‘needs’ to hear. By making use of various participatory approaches that enable beneficiaries to “own” their impact stories without fear, Tshikululu aims to improve the validity of qualitative findings even further.

In addition, Tshikululu invests in building transparent relationships with our partner organisations. These relationships are characterised by honest conversations and feedback. In instances where it makes sense to exit a programme, Tshikululu notifies the organisation and ensures that there is a well-communicated and structured exit process, including an exit grant where appropriate. This process can help to eliminate fear from the partner, and manage inevitable donor-donee power dynamics.

  • Who are the most important stakeholders to involve?

The participation of all relevant stakeholders in an evaluation is a key enabler for effective utilisation. Without meaningful participation of the key players, buy-in and support for both the process (from inception to close-out) and the findings may be limited. Evaluation stakeholders may include clients, end-beneficiaries, programme partners and/or sector experts, and they can be involved at different stages of the evaluation process. For example, it’s important to involve the client during evaluation design, as they need to provide input on the methodology and instruments. During data collection, the partner must be involved for accurate feedback on implementation, as they have the on-the-ground programme experience. The end-beneficiaries are best placed to report on the impact of the programme, as they were the ultimate recipients of the intervention.

  • How should we communicate our evaluation findings and recommendations?

Lengthy reports that are too complicated and full of technical jargon run the risk of not being used. Ever wonder what happens to your 100-page evaluation report? It could be sitting on someone’s shelf collecting dust, or perhaps acting as a nice doorstop! It’s important to produce simplified, accessible and comprehensible evaluation reports. If people are able to read, enjoy and understand your evaluation report, utilisation is improved. Tshikululu achieves this by providing concise versions of evaluation reports (an executive summary that also includes visuals, infographics, etc.) in addition to a detailed final report.

  • Are the recommendations realistic and implementable?

Evaluators sometimes don’t think about the practicality of their recommendations – they take a “Rolls Royce” approach when there is only budget, time or energy for a Volkswagen! To improve evaluation use, recommendations must be feasible and consider the time and resources required to implement them. As a means of enhancing utilisation, Tshikululu workshops recommendations with the client to establish the extent to which they are feasible and economical. For example, in the education space, recommending a coach for every teacher is unrealistic, inefficient and far too costly. However, virtual coaching or more realistic teacher-to-coach ratios could be considered for effective resource utilisation and teacher development.

In conclusion, although external factors may lead to evaluations not being utilised (e.g. changes in government policy or in a social investor’s strategy), evaluators have many ways of influencing the uptake of findings for the better. By taking a utilisation-focused approach when carrying out evaluations, Tshikululu aims to do just that.