Is Good Program Design Essential for a Quality Evaluation?

I have asked myself this very question. Can I design and deliver a quality evaluation of a program or project that isn’t well designed, well implemented, or appropriately managed? There are lots of reasons a project can end up in that state, and I’m not trying to throw project managers under the bus, but I have found myself trying to evaluate projects that aren’t going well.

Of course, formative, process, implementation, or even developmental evaluation may all be very helpful to get an errant program back on track, but let’s think about outcome evaluation. Can an evaluator comment on whether or not a program has achieved its intended outcomes if it wasn’t implemented as intended?

I will say that good program design, which I also encounter often, lays the foundation for a quality evaluation. With good program design and implementation, the learnings presented in the evaluation are usually accurate, actionable, and stated with confidence. If good program design and implementation make good evaluation so easy, what impact does the opposite have?


Program Design and Impact on Evaluation

A good design serves as a blueprint that guides the implementation process and aligns the efforts of all partners. Here are some key elements that constitute a good program design (and implementation), each followed by its impact on your evaluation:
Clear, Attainable Objectives: Programs must know what they are trying to achieve and have agreement on that understanding. These objectives (or goals, targets, aims, or outcomes) provide direction against which progress can be measured. I worked on a project where one partner group thought the primary goal was to test a new implementation approach so that it could be used for future innovations, while another thought the goal was to assess the effectiveness of this particular innovation in a specific setting. These are different objectives. After learning mid-project about this divergence in understanding, my evaluation scrambled to do both but ultimately fell short of some partner expectations. The divergence also led to different priorities amongst the project team and a less-than-cohesive implementation strategy.
Impact on evaluation: Without clear, agreed-upon objectives, evaluators may struggle to determine what constitutes success, leading to ambiguous or inconsistent evaluations. Similarly, programs with vague, overly broad, or clearly unattainable objectives make it difficult to measure success and may lead to subjective or inconclusive findings.
Logical Framework: No, I don’t mean a logic model or theory of change, although those would check this box. At the very least, good program design should link the activities to the objectives: knowing that if the program does X activities, Y is a reasonable outcome. Doing 100 jumping jacks is unlikely to improve your math skills, but sometimes it feels like that’s what evaluators are asked to measure.
Impact on evaluation: By clearly linking inputs, activities, and outcomes, evaluators can better determine cause-and-effect relationships. This is crucial for understanding what aspects of the program are effective and why. Without this logical framework, evaluators may find it hard to determine whether observed changes are due to the program or to other external factors.
Leadership: Good projects need good project leaders. There are a couple of important points here: 1) that a project leader exists at all ensures the project has the attention it needs to stay on track, and 2) an experienced project lead is likely skilled at identifying and mitigating risks, proactively planning for anticipated challenges, and having clear answers to questions about roles, responsibilities, and other project matters. A dedicated project lead can work with an evaluator to confirm the evaluation is meeting their needs, to provide feedback about feasibility, and to champion the evaluation with staff or team members. A good project lead enables data collection by making connections, opening opportunities, and knowing who to go to for what.
Impact on evaluation: Poor or non-existent leadership can be difficult for evaluators to overcome. Evaluators need a dedicated point-person or liaison, someone who is tasked with being the decision-maker. Poor leadership may leave evaluators to make decisions that are unfeasible or that take the evaluation in the wrong direction. Inexperienced leads may also introduce ethical risk, which can come into play around data sharing or putting participants at risk.
Engagement: Good program designs include engagement: who and when. Good program designs will have communication plans or even a RACI matrix (or something similar) so that everyone knows what they need to know, when (or before!) they need to know it. Very little can be done without engagement. I once evaluated a project in healthcare. When it came time to ask the frontline staff what they thought of this novel program, most of them had never heard of it. I couldn’t believe it. How could an entire program be implemented in their day-to-day setting without their knowledge? Poor engagement was the answer. The project team hadn’t focused on communication and engagement. As you can imagine, it’s hard to get the perspective of a key population group when they have no idea what you’re asking about. From a program perspective, poor engagement likely means poor implementation: these projects will likely lack people who buy in and are willing to follow protocols or take the extra step.
Impact on evaluation: Poor engagement can make it difficult to gather key perspectives, to access the right people, and even to access the right data.
Proper Resource Allocation: Adequate and appropriate allocation of resources, including time, money, and personnel, is essential. Sure, the budget for evaluation may be smaller than we’d like, but we know that going in, and often we’ve agreed to it. One of the challenges around budgets is when clients start asking for, or expecting(!), more than the original agreement. We all know that things change and plans are rarely followed exactly, but it can be difficult for an evaluator to manage a budget when implementation plans go too far off track. Sometimes it all comes down to capacity: human capacity to manage evaluations can be a hugely limiting factor, and availability can make or break a quality evaluation. Without the leadership discussed earlier, the evaluation will flounder. Without feedback from those doing the work, the evaluation is at risk of missing the mark or going off track. And time: I’d guess maybe 80% of my projects underestimate the time it takes to get things done. Share data? No problem, we’ll send that over … until three months pass, you’re trying to put together privacy impact assessments, and there’s still no data.
Impact on evaluation: Poor resource allocation leads to incomplete evaluations. The planned data capture activity is cancelled because we ran out of time, or the document reviews don’t happen because no one took the time to share the documents with you.
Plans to Use the Evaluation: Ok, I may be getting a little too evaluation-focused here, but I do believe that good program design includes an actual plan for the evaluation that has been commissioned. That is, evaluation is not a box-checking exercise done because it was mandatory in the grant agreement. I can usually tell when a project team actually cares about an evaluation because they have good answers to questions and solid rationales. They’re quick to tell me things like “No, that’s not something I need” and also “How are you going to get this particular piece of information that I will need?” These are the groups that are on board with data parties or sense-making sessions. These are the groups that know, when you’re creating your evaluation plan, what deliverables they want.
Impact on evaluation: A well-designed program ensures that the evaluation addresses relevant questions and leads to actionable insights. It aligns the evaluation with the goals and needs of partners, making the findings more likely to be used for decision-making and improvement. On the other hand, when a group isn’t familiar with evaluation or doesn’t have a clear plan, you’ll find them saying yes to anything you propose, risking your evaluation timeline and budget. These are the groups that spring unexpected asks on you: “Hey, uh, can you do a presentation to the board next week?” or “I just had the thought that maybe we should do a public survey!” Without a plan for the evaluation, your evaluation gets blown around in the wind, trying to accommodate whims.

So, what do you do if you think the project you have been tasked with evaluating is poorly designed, implemented, or managed?

Of course, the obvious answer is that we report these things. We can always report that no, outcomes were not achieved, or that there was no implementation fidelity.

But to me, the question is actually about the role of the evaluator: is it within the scope of our role to raise these issues?

My background is heavy in quality improvement, with light touches of implementation science, so it’s second nature to me to want to marry these lenses with my evaluation lens.

My answer to these questions is often the same: it depends. It may depend on whether or not there is a person you could even raise it to; without a clear person in charge, your concerns may have nowhere to land. It may depend on your relationship with that person. It may depend on the stage of program design and implementation at which the evaluation was brought in; if you’re at the design table, it makes far more sense to share concerns than if you’re brought in right at the end!

I think one strategy is to play the fool. As Shelby writes, it is our job to ask questions. It’s likely that you can raise your concerns in the form of a question, “Can you share your communication strategy with me? I want to make sure the survey I send to frontline staff covers all the ways you engaged them.” This may be a subtle(?) way to highlight that there is no communication or engagement strategy for frontline staff.

Another strategy is to use your evaluation tools to highlight any of these risks or gaps. Engaging the team in developing a logic model or theory of change will help build commitment to attainable objectives and ensure a logical framework. Developing a stakeholder matrix may help to ensure adequate oversight and engagement with partners.

Good program design isn’t essential for a good evaluation, but it does provide the necessary foundation for clear, consistent, and relevant evaluations that produce actionable insights. A well-designed program knows what it wants to achieve, has a clear workplan supported with leadership and resources, engages and communicates with all partners, and has a mind toward ‘what next?’. Evaluations can support this type of program with evidence to support decision-making, continuous improvement, and greater impact.


Do you have a story of evaluating a poorly designed or poorly implemented program? Share it with us!
