Use this checklist when setting up or reviewing your program evaluation, and check off each item as you complete it. The note under each item flags what is commonly overlooked, so don't skip them.
Phase 1 — Define What You're Measuring
- Write your theory of change
  One paragraph: If we do X, then Y will happen, because Z. Be specific.
- Identify 3–5 priority outcomes (not outputs)
  Outcomes = what changes in participants. Outputs = what you delivered. Distinguish them clearly.
- Define a measurable indicator for each outcome
  Example: "% of participants who score 80%+ on post-assessment", not "improved knowledge".
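
  A minimal sketch of computing an indicator like this one, assuming a pandas DataFrame with one row per participant and a `score` column (all data and names here are hypothetical):

  ```python
  import pandas as pd

  # Hypothetical post-assessment scores, one row per participant
  post = pd.DataFrame({"participant_id": ["A1", "A2", "A3", "A4"],
                       "score": [85, 72, 91, 80]})

  # Indicator: % of participants scoring 80%+ on the post-assessment
  pct_at_benchmark = (post["score"] >= 80).mean() * 100
  print(f"{pct_at_benchmark:.1f}% met the 80% benchmark")  # 75.0%
  ```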
- Set a target or benchmark for each indicator
  What does "success" look like? 70% meeting benchmark? 10-point average improvement?
- Build or update your logic model
  Map inputs → activities → outputs → short-term outcomes → long-term impact.
Phase 2 — Design Data Collection
- Choose your primary data collection method
  Pre/post survey, administrative records, interview, observation, or validated scale.
- Design or select your survey/assessment instrument
  Match questions directly to your outcome indicators. Use validated scales when possible.
- Confirm baseline (pre) data collection happens BEFORE the intervention starts
  Post-only data is weaker evidence. Pre/post is the minimum standard for most funders.
- Identify who is responsible for collecting data
  One named person owns data collection for each instrument. Shared ownership = no ownership.
- Create a data collection calendar
  When exactly will you administer pre-surveys, mid-surveys, and post-surveys?
- Plan for a comparison or baseline
  Control group, historical baseline, or population benchmark. Something to compare against.
Phase 3 — Data Management
- Set up a data storage system
  Spreadsheet, database, or evaluation platform. Must be accessible and backed up.
- Establish a participant ID system for matching pre/post records
  Use an anonymized ID (not a name) to link pre and post surveys for the same person.
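
  One common approach is a salted hash of a stable enrollment key, so the same person gets the same ID at pre and post without names appearing in the dataset. A minimal sketch; the salt value and key format are assumptions:

  ```python
  import hashlib

  SALT = "keep-this-secret"  # store separately from the data, never publish it

  def participant_id(stable_key: str) -> str:
      """Derive a short anonymized ID from a stable key (e.g., an enrollment number)."""
      digest = hashlib.sha256((SALT + stable_key.strip().lower()).encode()).hexdigest()
      return digest[:8]

  # The same input at pre and at post yields the same ID, so records can be matched
  print(participant_id("enrollment-0042"))
  ```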
- Address consent and data privacy requirements
  Do participants know how their data will be used? Is consent documented for sensitive data?
- Plan for missing data
  What is the minimum response rate you need? How will you handle participants who complete only the pre-survey or only the post-survey?
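
  A sketch of matching pre/post records and quantifying attrition, assuming two DataFrames keyed by `participant_id` (hypothetical data):

  ```python
  import pandas as pd

  pre = pd.DataFrame({"participant_id": ["A1", "A2", "A3"], "score_pre": [60, 55, 70]})
  post = pd.DataFrame({"participant_id": ["A1", "A3", "A4"], "score_post": [80, 85, 75]})

  # An outer merge with an indicator column shows who is missing which wave
  merged = pre.merge(post, on="participant_id", how="outer", indicator=True)
  matched = merged[merged["_merge"] == "both"]

  print(merged["_merge"].value_counts())  # both / left_only / right_only counts
  print(f"Matched pre/post rate: {len(matched) / len(merged):.0%}")  # 50% here
  ```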
Phase 4 — Analysis
- Write your analysis plan before collecting data
  Decide which statistics you'll run before you see results. Prevents cherry-picking.
- Run descriptive statistics (means, percentages, distributions)
  Basic summary of what you collected. Essential foundation before any comparison analysis.
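
  A sketch, assuming matched pre/post scores in one DataFrame (hypothetical data; the 80-point benchmark is an assumed target):

  ```python
  import pandas as pd

  df = pd.DataFrame({"score_pre": [60, 55, 70, 62, 58, 66],
                     "score_post": [80, 68, 85, 75, 64, 79]})

  # Count, mean, standard deviation, and quartiles for each wave
  print(df.describe())

  # Percentage at or above the benchmark at post
  print(f"{(df['score_post'] >= 80).mean():.0%} at or above benchmark")
  ```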
- Run pre/post comparison with significance test
  Paired t-test or Wilcoxon signed-rank test. Shows whether change is statistically meaningful.
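
  A sketch using SciPy, with the same hypothetical matched scores as above:

  ```python
  import pandas as pd
  from scipy import stats

  df = pd.DataFrame({"score_pre": [60, 55, 70, 62, 58, 66],
                     "score_post": [80, 68, 85, 75, 64, 79]})

  # Paired t-test: is the mean pre-to-post change different from zero?
  t_stat, p_value = stats.ttest_rel(df["score_post"], df["score_pre"])
  print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")

  # Wilcoxon signed-rank: non-parametric alternative for small or skewed samples
  w_stat, w_p = stats.wilcoxon(df["score_post"], df["score_pre"])
  print(f"Wilcoxon W = {w_stat:.1f}, p = {w_p:.4f}")
  ```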
- Calculate effect size (Cohen's d or equivalent)
  Effect size shows practical significance, not just statistical significance. Essential for funder reports.
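
  For paired pre/post data, one common convention is d_z: the mean change divided by the standard deviation of the change scores. Whether d_z or another variant fits depends on your design; a sketch with the same hypothetical data:

  ```python
  import pandas as pd

  df = pd.DataFrame({"score_pre": [60, 55, 70, 62, 58, 66],
                     "score_post": [80, 68, 85, 75, 64, 79]})

  change = df["score_post"] - df["score_pre"]
  # Cohen's d for paired samples (d_z): mean change / SD of the change scores
  d_z = change.mean() / change.std(ddof=1)
  print(f"Cohen's d_z = {d_z:.2f}")  # rough guide: 0.2 small, 0.5 medium, 0.8 large
  ```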
- Disaggregate results by key demographics
  Are outcomes consistent across age, gender, race, location? Disaggregation reveals equity gaps.
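
  A sketch of disaggregating by one variable (a hypothetical `site` column; the same pattern works for age, gender, or race):

  ```python
  import pandas as pd

  df = pd.DataFrame({
      "site": ["North", "North", "South", "South", "South", "North"],
      "score_pre": [60, 55, 70, 62, 58, 66],
      "score_post": [80, 68, 85, 75, 64, 79],
  })
  df["change"] = df["score_post"] - df["score_pre"]

  # Sample size and mean change per subgroup; large gaps flag equity issues
  print(df.groupby("site").agg(n=("change", "size"), mean_change=("change", "mean")))
  ```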
Phase 5 — Reporting & Learning
- Schedule an internal learning review session
  Block time for staff to discuss findings before writing the funder report. What does the data tell you?
- Document program adaptations based on findings
  What will you change next cycle? Funders love seeing evidence of learning and adaptation.
- Draft outcome narrative for funder report
  Lead with the most meaningful result. Follow with context, then methodology. Use plain language.
- Include limitations and caveats
  What can't you claim? Small sample, no control group, self-report bias. Honest reporting builds trust.
- Archive data and report for future comparisons
  Year-over-year trend analysis is powerful evidence. Protect your historical data.