Balancing Rigour With Reality: Designing Evaluations for Community/Organization-Led Implementation Initiatives

By Lauren Tessier, Implementation Support Consultant




As a PhD student in health services outcomes and evaluation research, I’ve spent a lot of time thinking and reading about evaluation. Both in the classroom and in innumerable peer-reviewed readings, evaluation is presented as an endeavour that requires the utmost rigour. Even its definition (at least in the public health sphere) communicates this idea: the use of social research methods to systematically investigate the effectiveness of social intervention programs in ways that are adapted to their political and organizational environments and are designed to inform social action to improve social/health conditions (Rossi, 2004).

This is a great definition. There is nothing inaccurate about it. It communicates exactly what an evaluation should entail under ideal circumstances. However, as many practitioners and individuals working on the frontline know, the real world rarely produces ideal circumstances. The problem that follows is a continued widening of the research-practice gap.

Bridging the gap between implementation science evaluations and implementation practice

At one end of the spectrum are researchers and academics in a topic area who use implementation science to guide research projects. This lends itself to rigorous, complex evaluations, often with a mixed-methods design, containing many different data points collected at many different points in time. This is what we hold as the standard of evidence when we talk about evidence-based programs. While the publication potential is high, these types of evaluations are often inaccessible and unrealistic for implementers with less evaluation experience, who find themselves at the other end of the spectrum. People in implementation practice tend to shy away from logic models and evaluation designs, which means they run the risk of conducting low-quality evaluations, if they conduct any at all, potentially missing opportunities to capture great outcomes or data on how to enhance implementation.

So, where do we go from here? Ideally, implementation practitioners should strive to evaluate their programs with as much rigour as is feasible. This is the best route to a “middle ground” in evaluation. An important first step toward that middle ground is remembering that, if you’re an implementer with less experience, the question should never be whether or not to evaluate. Always evaluate! And re-evaluate later on!

You don’t need to do it alone – there are amazing evaluators out there, ready to support organizations and communities to build realistic evaluations. The most important questions to ask yourself are: What is the current state? What needs to change? What information do I need to plan to gather so that I can tell whether that change happened? It is better to start small than to not start at all. I think a lot of people shy away from evaluation because they fear negative results. They fear failure. Negative results are not an indication of failure. They represent opportunities to learn, go back, and improve.

