Making Evidence Locally Meaningful and Relevant

By Dr. Sobia Khan, Director of Implementation




We often think that the terms “evidence” and “evidence-based” are universally understood and accepted. However, depending on individual and collective experiences and worldviews, notions of what constitutes evidence can be vastly different. Questions that I have heard people mull over include (but are not limited to):

  • Does evidence have to be quantitative to be meaningful?

  • Is a randomized controlled trial truly the gold standard for generating evidence for all interventions?

  • Does lived experience count as evidence?

While there are exceptions to the rule, the general trend I have seen in my own career is that people working in clinical health care tend to be influenced by the principles of evidence-based medicine and often adhere faithfully to the evidence pyramid; in that view, evidence typically has to be quantitative to be considered most meaningful. In contrast, those working in settings where the target of a program or intervention is a population or community (e.g., public health, social justice, prevention), and where multiple stakeholders are involved in implementation, often seek other sources of evidence because they view the evidence pyramid as too limiting for their settings.

Add to this discussion the fact that, regardless of the field you work in, applying evidence requires a deep understanding of how that evidence was produced, for whom it was produced, and what it means. Trials are often controlled to such an extent that their results are difficult to replicate in real-world settings where uncertainty reigns. This is where questions of “adapting evidence” arise, and there are no concrete answers on how best to do this. Moreover, inequities exist in whom the evidence applies to. At the systems and organizational levels, higher-resource settings (e.g., countries, communities, hospitals) tend to be the focus of research studies. At the individual level, women, people of colour, and marginalized populations are still underrepresented in research. Again, meaning has to be derived from, and adaptations made to, the evidence that exists in order for it to make sense in a given context.

This reasonably leads to the following thoughts about evidence: that evidence is important and non-negotiable (which is why we keep thinking and talking about it); that the “best level of evidence” might differ across interventions and settings (for example, randomized controlled trials may still be the standard for clinical research, but pragmatic trials, pre-post designs, or cohort studies might be a better standard for other fields); and that evidence has to be meaningful to all of those involved in implementing and using it.

