Is Good Program Design Essential for a Quality Evaluation?
I have asked myself this very question. Can I design and deliver a quality evaluation of a program or project that isn't well designed, isn't implemented well, or isn't managed appropriately? There are many reasons any of these may be true, and I'm not trying to throw project managers under the bus, but I have found myself trying to evaluate projects that aren't going well.
Of course, formative, process, implementation, or even developmental evaluation may all be very helpful to get an errant program back on track, but let’s think about outcome evaluation. Can an evaluator comment on whether or not a program has achieved its intended outcomes if it wasn’t implemented as intended?
I will say that good program design, which I also encounter often, lays the foundation for a quality evaluation. With good program design and implementation, the learnings presented in the evaluation are usually accurate, actionable, and can be shared with confidence. If good program design and implementation make good evaluation so much easier, what impact does the opposite have?
Program Design and Its Impact on Evaluation
A good design serves as a blueprint that guides the implementation process and aligns the efforts of all partners. Key elements of good program design (and implementation), such as clear intended outcomes, a workable plan backed by leadership and resources, and genuine engagement with partners, each shape what your evaluation can credibly say.
So, what do you do if you think the project you have been tasked with evaluating is poorly designed, implemented or managed?
Of course, the obvious answer is that we report these things. We can always report that no, outcomes were not achieved, or that there was no implementation fidelity.
My background is heavy in quality improvement, with light touches of implementation science, so it's second nature for me to want to marry these lenses with my evaluation lens.
My answer to these questions is often the same: it depends. It may depend on whether there is even a person you could raise it with; without a clear person in charge, your concerns may have nowhere to land. It may depend on your relationship with that person. It may depend on the stage of program design and implementation at which the evaluation was brought in; it makes far more sense to share concerns when you're at the design table than when you're brought in right at the end!
I think one strategy is to play the fool. As Shelby writes, it is our job to ask questions. You can likely raise your concerns in the form of a question: "Can you share your communication strategy with me? I want to make sure the survey I send to frontline staff covers all the ways you engaged them." This may be a subtle(?) way to highlight that there is no communication or engagement strategy for frontline staff.
Another strategy is to use your evaluation tools to highlight any of these risks or gaps. Engaging the team in developing a logic model or theory of change can build commitment to attainable objectives and ensure a logical framework. Developing a stakeholder matrix may help to ensure adequate oversight and engagement with partners.
Good program design isn't essential for a good evaluation, but it does provide the foundation for clear, consistent, and relevant evaluations that produce actionable insights. A well-designed program knows what it wants to achieve, has a clear workplan backed by leadership and resources, engages and communicates with all partners, and keeps an eye on 'what next?'. Evaluation can then provide that program with evidence for decision-making, continuous improvement, and greater impact.