Realist evaluations: a tool for repairing programs

All the tools an evaluator needs. There are so many evaluation tools to pick from and none are right for every job.

Everyone who runs a social service or a community program wants it to make a difference. We want recipients to be healthier, more successful at school, or better clinicians. Unfortunately, all too often our programs do not work, or work only for a small group of people led by someone deeply committed. The effects are not replicated when the first leader leaves and someone else runs the program, or when the program is offered in another town.

The other day a tweep alerted me to a terrific post that described all of the reasons why (non-experimental) evaluations of small-scale programs give misleadingly positive results that lull us into thinking that a program has worked.

The message is that if your evaluation doesn’t have a control group, you can rarely trust the results. It is hard to argue with that. These failings have been known for a long time, and the solution is to have an experimental or at least a rigorous quasi-experimental study. (Actually there are quite a few other reasons why small studies produce artificially positive evaluation results. See the seminal article by John Ioannidis, which should be on every evaluator’s hard drive.)

But an evaluation that simply concludes that a program either worked or did not work is not what most program managers need. Managers need more detail: under what circumstances it works, which elements work, and for whom.

Even with control groups, a conventional evaluation would not tell you how to run a really great program for your people or services, because context matters. Some people are more receptive to a particular learning style or to financial inducements. Some organizations have the capacity to take on new projects and some do not.

The people who fund or run these programs want to know not only if a program made a difference but specifically what part of the program worked for which people or which communities.

The ‘realist evaluation’ approach is emerging as a tool to answer these questions.

I first encountered the method when I was part of a large project on the drivers of improved routine immunization coverage in Africa. Each research team (I led the team in Ghana) went to four districts to investigate what had been done to improve coverage and what had enabled the health workers or community members to make that change.

I recently used it for an evaluation of a three-month course for start-up social entrepreneurs.

A realist evaluation, as the name implies, uncovers how a program actually works. It employs mixed methods, that is, both quantitative and qualitative data collection techniques, and its logic is derived from rigorous qualitative research designs that test theories through careful comparison of plausible causes and consequences. A realist evaluation usually sets out to test the program’s theory of change, but because it uses qualitative methods it retains the capacity to uncover findings the theory did not anticipate.

A realist evaluation encourages each participant to tell a story about their situation before the program started, what happened during the program, and what has changed as a result. The analysis looks for evidence of links in the Context-Mechanism-Outcome chain. This helps improve the program by targeting the people most likely to benefit and offering the experiences most likely to lead to positive outcomes.

The downside of realist evaluations is that they are much more analytically intensive than conventional program evaluations. The conventional approach is to capture which aspects of the program participants liked and what they achieved, and to assume that one caused the other. Realist evaluations dig deeper to probe the conundrum of a positively received intervention that does not result in change. Organizing the data and making sense of the patterns takes a long time. A recent article in the American Journal of Evaluation describes the time-consuming process of coding hundreds of chains from only 11 twenty-minute interviews.
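To give a sense of what that coding work involves, here is a minimal sketch (in Python, and not from any of the studies mentioned here) of how coded Context-Mechanism-Outcome chains from interviews might be organized and tallied. The field names and example data are entirely hypothetical; in practice the coding itself is the slow, skilled part, and tools only help with the bookkeeping.

```python
from collections import Counter
from dataclasses import dataclass

# A single coded Context-Mechanism-Outcome (CMO) chain from one interview.
# Field names and example values are hypothetical, for illustration only.
@dataclass(frozen=True)
class CMOChain:
    interview_id: str
    context: str     # e.g. "has decision-making authority"
    mechanism: str   # e.g. "applied course tools to own enterprise"
    outcome: str     # e.g. "changed business model"

# Hypothetical coded chains, as an analyst might extract them from transcripts.
chains = [
    CMOChain("P01", "has decision-making authority", "applied course tools", "changed business model"),
    CMOChain("P02", "no decision-making authority", "applied course tools", "no change"),
    CMOChain("P03", "has decision-making authority", "peer feedback", "secured funding"),
    CMOChain("P01", "has decision-making authority", "peer feedback", "changed business model"),
]

# Tally how often each context-mechanism pairing leads to each outcome;
# this is where patterns such as "only decision-makers benefit" start to show.
configurations = Counter((c.context, c.mechanism, c.outcome) for c in chains)

for (context, mechanism, outcome), count in configurations.most_common():
    print(f"{count}x  context={context!r}  mechanism={mechanism!r}  outcome={outcome!r}")
```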

But the practical findings that emerge from the analysis are worth it. As an evaluator I find realist evaluations worthwhile because I can give my client very specific feedback.

For example, we found that districts in Africa that did not have basic immunization infrastructure (vaccines, refrigerators and so forth) would not be able to benefit from better team management. In the training program for social entrepreneurs, the results clearly showed that while everyone loved most of the course, only people with decision-making powers were able to benefit from it.

Realist evaluations can also show which program elements are directly related to outcomes and which are not. Participants may have loved particular activities, but if no one describes how those activities helped them progress their enterprise or increase immunization coverage, then those activities probably aren’t needed.

That’s a good evaluation tool.
