When Not To Evaluate
February 2025
One of the questions we often get asked by our clients is "When is the right time to start evaluation?" Typically, our answer is "yesterday." As evaluators, we can lend critical insights across the program lifecycle and ensure that evaluative thinking is embedded throughout the program. However, there are certain situations when it's best not to engage in evaluation. Tools such as evaluability assessments can help determine whether to evaluate, but they take time and money to conduct, and many of the categories they assess are things an evaluator can support a program to create. For example, many evaluability assessments look at whether a program has clearly identified measurable indicators. Very often, one of my jobs as an evaluator is to work with clients to determine what indicators we can use and to create data collection tools to gather the data.
Instead of conducting an evaluability assessment, I propose there are some yellow and red flags that suggest it might not be the right time to evaluate. I'll approach these flags as if you are someone looking to conduct an evaluation of your program. If you’re an evaluator, these flags might help you discuss the feasibility of an evaluation with the program lead.
Yellow flags are issues that can often be resolved in the short term; they indicate that you should slow down and address them before engaging in evaluation.
Red flags are bigger issues that typically cannot be resolved in the short term and stem from larger, organizational-level problems. They usually cannot be worked through with the support of an evaluator.
So, when do you not evaluate?
When your objectives are unclear
Evaluators can help you measure what you are doing, but without a benchmark or end goal in mind, it's very hard to determine whether you're doing what you intended and making a difference. Without some program logic — that is, a sense of what you are doing and how it will get you to where you want to go — it's difficult to conduct a thorough evaluation. I will add a caveat here: there is a place for evaluation in developmental initiatives, where outcomes and impacts are still emerging. Even then, there must be an overarching logic or vision guiding the work.
When you are planning your activities
Evaluation can easily become intertwined with program development, as evaluators can ask questions to help clarify how activities and outcomes are related. However, evaluators do not need to be involved in planning your program. If you are still figuring out what you are doing, how many staff you will hire, or what infrastructure you need to support your program, this is not the time to engage in evaluation. Once you have an idea of what you are going to do, an evaluator can help you set clear and measurable goals and objectives and align your intended activities and outputs to those objectives. We can then help you collect the right data to measure whether your activities lead to your intended outcomes.
When there is no intention to use the findings
Evaluation is resource-intensive. Yes, we try to make it as painless (and dare I say, fun) as possible for our clients, but there's no way to sugarcoat it — evaluation takes time and human resources to conduct. It requires cooperation and time from our clients, their partners, and sometimes even program participants.
When there is no plan to use the findings of an evaluation or to learn from the process, there's limited value in conducting one. Sure, there are times you need to evaluate because it's a requirement of your funding; in these cases, we hope there's still an intention to use the evaluation beyond checking a box on your funding requirements. I'm sure most projects don't set out to complete an evaluation intending to move on without taking any action. But we still talk about "dusty evaluation reports" sitting on shelves, barely read and unused (see our article on tips to get your results used). Those commissioning an evaluation should have a clear intention to use the findings. A good evaluator will help you determine who needs to know what about the evaluation to ensure we are reporting the right things to the right people. In our evaluation plan checklist, we have a section on "intended users," where we work with the client to outline who the key partners in the program are and what they need to know from the evaluation. We also include a section on reporting products, identifying how key findings will be communicated back to the right groups. If you cannot determine how you would use the results of an evaluation, maybe it's not the right time to evaluate.
When you don't have a way to collect data
Evaluators can help you determine which data collection methods are best and can develop tools, such as surveys, interviews, and focus groups, to collect data. But an evaluator cannot help you when there simply isn't a way to collect the key data you need to evaluate your program.
Let me explain with an example. One client I was working with wanted to understand how well they were serving clients in rural locations. Sounds easy, right? The problem was that they didn't have the ability to collect information about whether the clients accessing their anonymous phone program lived in rural locations. This wasn't a case of needing to add a check box to a data collection form or a column to a spreadsheet. There wasn't a way for service providers to reasonably ask program users whether they were in a rural location, nor could we infer a program user's location from the information they provided or the services they received. So as much as we wanted to evaluate whether the program was reaching this specific population, there just wasn't a feasible way to collect the data we needed at that time. Happily, the client was already aware of their data collection constraints and was undergoing several organizational changes that would allow this data to be collected accurately in the future.
When the timelines are too short to determine if you've made a difference
A key consideration for evaluation is: has enough time passed for your project to have made a difference? This is typically easier to determine for summative evaluations, where you are looking to see whether your desired outcomes were achieved. If you don't leave enough time to assess whether those outcomes happened, it's hard to say whether your program met its goal. For example, if you want to see whether your smoking cessation program has helped people stop smoking for over a year, you need to measure that outcome at least a year after a significant portion of your participants have completed the program. Your evaluation timeline (and budget) needs to extend well past a year of service provision. If you don't have the funds or ability to evaluate your program over that period, you cannot tell whether you've achieved that specific outcome. When you don't have the time or budget to employ an evaluator to examine these longer-term outcomes, a summative evaluation might not be for you.
This logic also applies to formative and other types of evaluation. If enough time hasn't elapsed to determine whether what you did is working, or whether processes have been adhered to, evaluation won't be able to answer those questions. Typically, we see this issue arise when project timelines keep getting pushed back and time is running out on an evaluation contract.
When you don't have buy-in from key partners
Evaluation can be an exercise that brings your partners and intended users together and ensures that everyone is (mostly) on the same page. However, if there are major disagreements between key partners about what the program is or what it intends to do, your evaluation isn't likely to be successful. If partners refuse to cooperate with each other or do not believe they need to be involved in the program or evaluation, it will be very difficult to conduct a robust evaluation.
Here at Three Hive Consulting, we rarely conduct evaluability assessments, in part because most clients have hired us specifically to conduct an evaluation, and it would be a little silly to turn around and tell them we don't think they are ready. Instead, we act as the "critical friend," asking questions to help our clients work through the challenges in their programs and identifying what falls outside the scope of our evaluation due to constraints. There have been times, in the context of larger evaluations, when we have highlighted that a specific initiative may not quite be ready for evaluation support and have worked with the organization to identify what conditions need to be met before evaluating their initiative.