Getting started with loneliness evaluation


Understanding and evaluating the impact of interventions aimed at addressing loneliness can pose a unique set of challenges. In a recent member co-working session, we worked through some of the challenges members had with their own evaluation methods. This blog post synthesises the key takeaways from the discussion. If you’re a member of the Hub, visit the Evaluation Group to access a copy of the slides and continue the discussion.


Evaluating interventions to tackle loneliness

We began the session by recapping the insights from Evaluation of interventions to tackle loneliness, a report which aimed to understand the different types of loneliness interventions and their context, and the factors that enable or limit robust loneliness measurement.

The report's recommendations acknowledged current challenges with loneliness evaluation. They included: developing new guidance on evaluating interventions to tackle loneliness; providing high-quality training and support on how to use the recommended measures; and ensuring project funding covers evaluation resourcing. We explored the recommendations in full at a recent Hub event – watch the recording.

Evaluation methods and processes

We revisited the key stages in the evaluation process – design, data gathering, analysis and dissemination – as well as qualitative evaluation approaches such as running focus groups.

As a rule of thumb, our evaluation methods need to be:

  • Proportionate to the capacity of our organisation and the nature of our project.
  • Acceptable to participants and to those carrying out the evaluation work.
  • Useful and meaningful to key stakeholders.
  • Carried out in a rigorous way.
  • Adequately resourced (time, money, skills).

Nuances of evaluation

One question to ask ourselves when evaluating a project is: how do we know it was us that caused this change? This is why it’s helpful to think about attribution analysis, such as setting up pre and post measures, benchmarking, and testing the results against other potential causes, as well as contribution analysis, including creating a theory of change and exploring ripple effect mapping.

Involving everyone in the evaluation process

The second focal point of our discussion addressed how we can get everyone on board in cross-team evaluation work, as good collaboration between funders, commissioners and delivery organisations can help to ensure a successful evaluation process. Top tips for this included:

  • Avoiding a wholly top-down approach.
  • Co-producing for joint ownership.
  • Providing clarity around why we evaluate.
  • Discussing how the evaluation data will be useful for your organisation.

Getting started with your own evaluation

Knowing where to start with your own evaluation can feel daunting. But breaking things down into small, manageable steps – and acknowledging what is possible given the resources you have – can make the process feel much easier. Focus on what you do best, do that a little better, then adapt to get it right over time.

As we collectively learn from experiences and adapt our approaches, we can contribute to a growing body of knowledge that enhances the effectiveness of interventions and improves the lives of people experiencing loneliness.
