Building Better Evaluations: What Atomic Habits Can Teach Us About Program Evaluation
One of my goals for 2025 is to read more, and the book I just finished was James Clear’s Atomic Habits. While it’s primarily a book about personal growth, I couldn’t help but notice how many of its lessons resonate with the field of program evaluation. Clear’s central idea is that small, consistent changes lead to extraordinary results, a principle that can transform how we approach evaluation.
As evaluators, we juggle complex projects, balance diverse partner needs, and aim to deliver actionable insights. By applying the principles from Atomic Habits, I think that we can build stronger practices, streamline our workflows, and ultimately deliver more meaningful evaluations.
In this article, I’ll explore five key lessons from Atomic Habits and how evaluators can use them to enhance their work. These include:
Focus on systems, not just goals
The power of small changes
Make good habits easy and bad habits difficult
Identity-based habits
Build momentum through habit stacking
1. Focus on systems, not just goals
Clear explains that while goals provide direction, it’s the systems (the actions, activities, and structures we rely on) that determine progress. In evaluation, this principle underscores the importance of advocating for programs and organizations to focus not only on their desired outcomes but also on the systems that drive those outcomes. As evaluators, we should also ensure that our evaluations examine these systems, not just the end goals.
Applying this in evaluation:
Make efforts to evaluate the systems driving outcomes: When designing an evaluation, consider including questions that assess the effectiveness of the systems and processes used to achieve program goals if there is the budget, timeline, and scope to do so. For example, evaluate whether staff training, data collection methods, or communication strategies support the desired outcomes.
Help clients connect actions to goals: Work with programs to clarify how their daily activities and systems (e.g., workflows, partnerships, or delivery mechanisms) align with their long-term goals. Encourage them to make adjustments to these systems as needed.
Advocate for continuous system improvement: Emphasize the importance of monitoring and refining systems over time. A strong system will ensure that progress toward goals is sustained and adaptable to challenges.
Key takeaway: As evaluators, we play a critical role in advocating for programs to focus on the systems and actions that lead to their goals. By evaluating and improving these systems, we not only help programs achieve better results but also create a foundation for long-term success.
2. The power of small changes
Clear’s concept of small, incremental changes compounding into significant results is particularly relevant to program evaluation. While it’s natural to focus on the “big picture” outcomes, evaluators must also pay attention to the smaller, day-to-day changes that contribute to these larger impacts. By acknowledging and measuring these smaller shifts, evaluators can highlight incremental progress, validate efforts, and provide actionable insights that keep programs on track.
Applying this in evaluation:
Focus on small contributions using tools like contribution analysis: Evaluators can assess how incremental changes, such as improved workflows, staff training, or minor adjustments in service delivery, contribute to broader outcomes. Contribution analysis is especially useful for identifying how these smaller changes collectively influence long-term goals, making “invisible” progress visible.
Measure incremental improvements: Include metrics in your evaluation that capture small but meaningful changes, such as increased participant satisfaction, slight gains in attendance rates, or improved clarity in communications. These shifts might seem minor but often signal whether larger outcomes are likely to be achieved. This mirrors the logic of a Theory of Change, where short-term outcomes are expected to lead to long-term outcomes through a chain of causal steps, each building on the last to drive progress toward broader goals.
Encourage iterative program improvements: Advocate for programs to experiment with small, testable changes to their processes and activities, such as piloting new outreach strategies or tweaking curriculum delivery. Evaluators can document these changes and assess their impact to help programs adapt in real time. Check out our article on Developmental Evaluation and Plan-Do-Study-Act.
Incorporate reflection into evaluation processes: After each evaluation phase, take time to reflect with partners on what smaller changes are working and which ones need adjustment. For example, if a pilot survey format results in a higher response rate, ensure that success is captured and incorporated into future designs. Check out our article on the power of self-reflection in evaluation.
Key takeaway: Small changes matter. As evaluators, we should advocate for tracking and analyzing these adjustments, ensuring that programs not only celebrate their small wins but also learn how these changes contribute to broader success. By integrating tools like contribution analysis and Theories of Change, and fostering a mindset that values the incremental, we create evaluations that reflect the complexity and nuance of real-world progress.
Note: These two points, focusing on systems and small changes, are interconnected. As Clear explains, systems provide the structure that drives progress, while small, incremental changes refine and optimize those systems over time. In evaluation, this means examining the big picture of how systems support outcomes while also tracking the smaller adjustments that indicate whether those systems are effective. Together, they offer a holistic approach to understanding and improving program success.
3. Make good habits easy and bad habits difficult
Clear emphasizes that our environment strongly influences our behaviour. For evaluators, this means creating structures and tools that make it easier to adopt and maintain good evaluation practices, like consistent documentation or thoughtful partner engagement, while discouraging unproductive habits, such as skipping critical reflection or prioritizing speed over quality. The goal is not just to streamline tasks but to intentionally design environments and processes that foster better evaluation outcomes.
Applying this in evaluation:
Design systems that encourage consistency in small actions: Simplify processes to ensure that essential steps, like data cleaning or key partner follow-ups, happen regularly and without unnecessary barriers. For example, design indicators and create workflows that make it easier to log participant feedback or track progress toward evaluation milestones.
Break down larger tasks into smaller, more manageable ones: Encourage incremental progress by making large, daunting tasks, such as developing a final report, feel achievable. Breaking these tasks into smaller, actionable steps (e.g., drafting individual sections over time) helps evaluators and teams maintain momentum.
Build reflection and adaptation into your workflow: Make it easy to pause and assess how well an evaluation system or tool is working. For instance, include a step in your evaluation checklist to reflect on whether your data collection tools captured meaningful insights or whether partner needs shifted during the process.
Increase friction for poor practices: Identify inefficiencies in your evaluation process that contribute to missed opportunities. For example, if rushing to finalize reports prevents thoughtful analysis, build in non-negotiable review periods. Similarly, adopting efficient tools, such as automated survey software, raises the friction of poor practices by making outdated or inefficient methods harder to fall back on and reducing the temptation to cut corners. Check out our recent article on AI in evaluation for more insights!
Examples of making good habits easier:
Evaluation planning: Use a standardized evaluation framework template to ensure no critical components are overlooked, such as logic models or evaluation questions.
Partner communication: Schedule recurring check-ins with partners and predefine agendas to keep conversations focused and productive.
Data management: Automate parts of the data collection or analysis process, such as using a survey platform that exports directly into analysis software, reducing manual effort and errors.
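To make the data-management example concrete, here is a minimal sketch of what automating a survey export might look like. The column names and data are hypothetical; in practice the CSV would come directly from your survey platform’s export feature, and the checks would match your own indicators.

```python
import io
import pandas as pd

# Hypothetical survey export (stand-in for a file downloaded from a
# survey platform). One respondent skipped the satisfaction question,
# which is common in real exports.
raw_export = io.StringIO(
    "participant_id,session,satisfaction\n"
    "1,intake,4\n"
    "2,intake,5\n"
    "3,follow_up,3\n"
    "4,follow_up,\n"
)

df = pd.read_csv(raw_export)  # blank fields are read as missing (NaN)

# Routine checks that are easy to forget when done by hand:
completion_rate = df["satisfaction"].notna().mean()
mean_satisfaction = df["satisfaction"].mean()

print(f"Completion rate: {completion_rate:.0%}")   # 75%
print(f"Mean satisfaction: {mean_satisfaction:.2f}")  # 4.00
```

Scripting even small steps like this reduces manual copy-paste errors and makes the same checks trivially repeatable for every data collection wave.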
Examples of increasing friction for bad practices:
Prevent rushed decisions: Require a pre-approval process for major evaluation changes, ensuring new approaches are well thought out.
Limit reactive work patterns: Block off uninterrupted time for complex tasks, such as coding qualitative data, and ensure team members and clients respect these boundaries.
Key takeaway: To foster good habits in evaluation, build systems that make high-quality practices the path of least resistance while discouraging unproductive ones. As evaluators, we should advocate for environments that prioritize thoughtful processes, incremental improvements, and consistent reflection. By intentionally shaping how we approach our work, we not only improve the quality of evaluations but also set programs up for sustained success.
4. Identity-based habits
Clear states that habits are most effective when they align with your identity, reflecting not just what you want to achieve but who you want to become. For evaluators, this means grounding your daily actions in the values, principles, and professional identity of the field. Instead of focusing solely on outcomes like producing a high-quality report or completing a stakeholder meeting, focus on embodying the traits of an ethical, thoughtful, and growth-oriented evaluator.
Applying this in evaluation:
Commit to ethical evaluation practices: Ethical principles, such as transparency, equity, and inclusion, are foundational to evaluation. Build habits that align with these principles to ensure your work consistently reflects the highest standards of the profession.
Adopt a learning mindset: Evaluators must constantly adapt to new methods, technologies, and social contexts. Building habits that foster curiosity and continuous learning can help you stay informed and innovative. For example, schedule regular professional development time each month, such as attending a webinar, reading a journal article, or exploring new tools for data visualization.
Reinforce your role as a facilitator of learning: As evaluators, we often help programs and organizations reflect on their own practices and make decisions based on evidence. Build habits that reinforce your identity as a thoughtful facilitator by prioritizing questions that prompt learning and reflection. For example:
In partner meetings, consistently ask reflective questions like, “What does success look like for you?” or “How can this evaluation support your goals beyond this project?”
During data collection and analysis, habitually consider not just what the data says but what it teaches, and how you can present it in ways that key partners can act upon.
Align your habits with your values as an evaluator: Beyond professional skills, habits that reflect your values help strengthen your identity as an evaluator. If you value collaboration, for example, build routines that involve regular communication and co-creation with stakeholders. Before starting any project, take a moment to reflect on how it aligns with your identity as an evaluator. For example:
“How can I ensure this evaluation is inclusive of diverse perspectives?”
“What opportunities does this project provide for me to grow as a professional?”
Key takeaway: Identity-based habits help evaluators align their actions with the values and principles of their profession. By cultivating habits that reflect ethical practice, continuous learning, and thoughtful facilitation, evaluators strengthen their own routines, improve partner relationships, and deepen the trust and value they bring to their clients and the broader field of evaluation.
5. Build momentum through habit stacking
Clear introduces “habit stacking” as a way to incorporate new habits by linking them to existing ones. This strategy leverages routines that are already part of your day to build momentum for meaningful change. For evaluators, habit stacking can create efficiencies and encourage reflective, thoughtful practices that improve the flow and quality of evaluation work. Instead of trying to implement habits in isolation, pairing them with tasks already ingrained in your workflow can make them easier to adopt and sustain.
Applying this in evaluation:
Integrate reflection into daily tasks: After a client/key partner meeting or data collection session, take five minutes to jot down key insights or questions while the conversation is still fresh in your mind. Pairing this habit with meetings ensures you consistently capture valuable information that might otherwise be forgotten.
Link data analysis to report planning: Once you’ve completed data analysis, stack a habit of drafting a rough outline for the report. This ensures that the momentum from interpreting data flows naturally into planning your deliverable, reducing the gap between analysis and communication.
Pair routine tasks with reflective habits: Use repetitive or predictable tasks as anchors for meaningful reflection. For example:
When compiling survey results, take a moment to reflect on how the findings align with the evaluation questions.
During report writing, pause at the end of each section to ask yourself, “Does this connect back to client/partner needs?” or “What story is this data telling?”
Evaluation is inherently a layered, iterative process. Habit stacking aligns with this complexity by ensuring that productive habits are embedded at every step. This approach helps evaluators:
Maintain momentum between phases of an evaluation, such as transitioning smoothly from data collection to analysis to reporting.
Create opportunities for reflection and course correction in real time, rather than waiting until the end of a project.
Build habits that strengthen collaboration and communication with clients/key partners, which are critical to the success of any evaluation.
Key takeaway: By pairing new habits with existing routines, evaluators can improve their workflows without adding unnecessary complexity. Habit stacking builds momentum for thoughtful, reflective practices that enhance the quality and impact of evaluation work.
Final thoughts
Atomic Habits reminded me that meaningful change doesn’t come from grand transformations but from consistent, intentional effort. As evaluators, applying these principles can help us:
Build systems that support rigorous, efficient evaluations.
Improve client/partner relationships through incremental, thoughtful changes.
Adopt habits that align with the values of ethical, impactful evaluation practice.
By focusing on the small habits that shape our daily work, we can strengthen our processes, deliver better results, and contribute to the growth of the evaluation field.