Built by Three Flights | PhD Evaluation Research

You've collected the data. We'll tell you what it means.

Outcome Radar is an AI-assisted evaluation platform that turns your program data into rigorous, funder-ready impact reports — built on real evaluation methodology, designed for organizations without internal evaluation teams.

Generate Your First Report →
Book a Free Walkthrough
See a sample report →
"OutcomeRadar helped us submit our strongest grant renewal in 10 years. We finally had the statistical evidence to back up what we knew was working — and our funder noticed."
PD
Program Director
Youth Services Organization
"We'd been tracking outcomes for years but never had a way to analyze them rigorously. This platform turned our spreadsheets into a report we were proud to hand to our board."
ED
Executive Director
Community Health Nonprofit
"The methodology section alone saved us weeks of back-and-forth with our evaluator. Everything was documented, clearly explained, and ready to go."
GW
Grant Writer
Workforce Development Organization

Data stays private

Your program data is never shared, sold, or used to train AI models. You own your data, always.

No PII uploaded

Built-in detection flags personal identifiers before upload. Participant privacy is protected at every step.

Funder-ready reports

Every report includes the methodology and limitations your most rigorous grant reviewers will trust.

See the output

What a Funder-Ready Report Looks Like

Every report includes an executive summary, statistical analysis with visualizations, and a methodology section your most rigorous funders will trust — all in a polished Word document.

AI-generated executive summary
Program Impact Evaluation Report
Youth Leadership Initiative · 2024–2025

Participants demonstrated statistically significant improvement across all three primary outcome domains following program completion. Effect sizes were in the moderate-to-large range, suggesting the program produced meaningful real-world change beyond what would be expected by chance.

Self-Efficacy Score: +34%
Effect Size (Cohen's d): 0.72
Statistical Significance: p < .001

These findings support continued investment in the model. The evaluation used a pre/post design with validated measures and n = 48 participants across two cohorts.

Statistical analysis with visual charts
Self-Efficacy: Pre vs. Post Score (chart comparing pre-scores and post-scores)
Coefficient Estimates (95% CI):
Self-Efficacy: +6.4*
Goal-Setting: +5.1*
Social Skills: +3.9
Export to Word or PDF
Methodology & Limitations
Section 4 of 6

A paired-samples comparison design was used to assess participant-level change from pre-program to post-program on validated outcome measures. Because each participant serves as their own baseline, this approach removes between-person variability and provides the statistical power necessary to detect moderate effects.

Design: Pre/post within-subjects
Test: Wilcoxon signed-rank (paired)
Effect size: Cohen's d, Hedges' g
CI level: 95% (two-tailed)
Sample: n = 48 (complete data)
Download .docx
Export PDF
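For readers curious what sits behind that methodology summary, the paired pre/post analysis it describes can be sketched in a few lines of Python with SciPy. The scores below are made-up illustrations, not data from the product, and the exact computation Outcome Radar runs may differ:

```python
import numpy as np
from scipy import stats

# Made-up pre/post self-efficacy scores for eight paired participants
pre = np.array([52.0, 48.0, 55.0, 60.0, 47.0, 51.0, 58.0, 49.0])
post = np.array([61.0, 57.0, 60.0, 68.0, 55.0, 59.0, 66.0, 54.0])

# Wilcoxon signed-rank test: nonparametric paired pre/post comparison
w_stat, p_value = stats.wilcoxon(pre, post)

# Cohen's d for paired samples: mean change / SD of the change scores
diff = post - pre
n = len(diff)
d = diff.mean() / diff.std(ddof=1)

# Hedges' g applies a small-sample bias correction to d
g = d * (1 - 3 / (4 * (n - 1) - 1))

# 95% confidence interval for the mean change (two-tailed, t distribution)
ci_low, ci_high = stats.t.interval(0.95, df=n - 1,
                                   loc=diff.mean(), scale=stats.sem(diff))
```

Reporting both d and the bias-corrected g is a common convention when samples are small, which is why the methodology card lists both.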
Try It Free with Sample Data →

No credit card required  ·  First report free  ·  Results in minutes

Get Early Access

Join organizations already using Outcome Radar to prove their impact.

The Problem

Most nonprofits are sitting on data they can't use.

Your funders want evidence. Your board wants outcomes. Your staff knows the program is working — but turning program data into credible impact evidence requires evaluation expertise most organizations don't have on staff.

External evaluation consultants typically cost 10–15% of your total program budget. For a $500K program, that's $50,000–$75,000 for a single evaluation cycle. And the report is often outdated by the time it's delivered.

Meanwhile, 85% of nonprofits track program outputs — how many people they served — but fewer than 70% track actual participant outcomes. The gap between counting people and proving impact is exactly where grant renewals are won or lost.

10–15%: typical cost of rigorous external evaluation, as a share of program budget (Source: Knowledge Advisory Group)
6–12 months: typical turnaround for a traditional evaluation report (Source: industry standard)
46%: share of nonprofit finance leaders who say producing outcome metrics is a top unmet need (Source: Sage Nonprofit Technology Trends, 2024)
What It Does

Rigorous evaluation methodology. Accessible to any organization.

📊

Outcome Analysis

Upload your participant data and Outcome Radar runs the statistical tests — paired comparisons, effect size calculations, confidence intervals — that evaluation researchers use. Built on validated methodology, not just dashboards with charts.

📝

Funder-Ready Reports

Statistical results are translated into clear, plain-language impact narratives your grant reviewers will understand. Download a polished Word document ready to drop into your grant report.

🔒

Data Privacy Built In

Outcome Radar uses coded participant IDs — never names, emails, or personal identifiers. Built-in PII detection flags personal data before it is ever uploaded. Your participants' privacy is protected at every step.
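As a rough illustration of how rule-based PII flagging can work, here is a minimal sketch. The patterns and function names are hypothetical, not the product's actual detector:

```python
import re

# Illustrative patterns only; a real detector covers far more identifier types
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def flag_pii(rows):
    """Return (row_index, column, kind) for every cell that looks like PII."""
    flags = []
    for i, row in enumerate(rows):
        for col, value in row.items():
            for kind, pattern in PII_PATTERNS.items():
                if pattern.search(str(value)):
                    flags.append((i, col, kind))
    return flags
```

A production detector would also catch names, addresses, and dates of birth, and typically combines patterns like these with column-name heuristics. The point of flagging before upload is that the problem cell never leaves the user's machine.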

🎓

Evidence You Can Stand Behind

Every report includes the methodology behind the analysis — what tests were run, what the results mean, and what the limitations are. Because credible impact evidence requires transparency, not just numbers.

Built by an evaluation researcher. Not a software company.

Outcome Radar was built by Jill Antonishak, PhD — a published evaluation researcher and community psychologist with more than 15 years of program evaluation experience across federal agencies, nonprofits, and international governments.

Her work includes a randomized controlled trial demonstrating the effectiveness of a nonprofit program, peer-reviewed research on human-centered evaluation methodology, and program evaluation with the VA's Whole Health initiative, the US Air Force, and government agencies across two countries.

Outcome Radar exists because rigorous, credible program evaluation shouldn't require a large research budget. Every organization doing important work deserves to know — and be able to prove — that it's working.

Learn more about Three Flights →

From data to funder-ready report — in three steps.

1

Upload your program data

Export your participant outcome data as a CSV file. Outcome Radar accepts pre/post survey data, assessment scores, and program outcome variables. A downloadable data template makes formatting simple.
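To make the expected shape concrete, here is a sketch of a minimal upload and a validation pass over it. The column names are made up for illustration; the platform's actual template may differ:

```python
import csv
import io

# Illustrative template: one row per participant, coded IDs only, no names.
TEMPLATE = """participant_id,cohort,self_efficacy_pre,self_efficacy_post
P-001,2024,52,61
P-002,2024,48,57
"""

REQUIRED = {"participant_id", "self_efficacy_pre", "self_efficacy_post"}

def validate(csv_text):
    """Parse a CSV export and confirm the required columns are present."""
    reader = csv.DictReader(io.StringIO(csv_text))
    missing = REQUIRED - set(reader.fieldnames or [])
    if missing:
        raise ValueError(f"missing columns: {sorted(missing)}")
    return list(reader)

rows = validate(TEMPLATE)
```

One row per participant with matched pre and post columns is what makes the paired analysis in step 2 possible.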

2

We analyze your outcomes

Outcome Radar automatically detects your data types, runs the appropriate statistical tests, calculates effect sizes, and checks that the methodology is sound. No data science background required.
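One common way to automate test selection, sketched below as an assumption rather than the platform's actual logic: check whether the change scores look normally distributed, then choose a paired t-test or its nonparametric counterpart accordingly.

```python
import numpy as np
from scipy import stats

def choose_paired_test(pre, post, alpha=0.05):
    """Pick a paired test based on the normality of the change scores."""
    diff = np.asarray(post, dtype=float) - np.asarray(pre, dtype=float)
    if stats.shapiro(diff).pvalue > alpha:
        # Change scores look roughly normal: parametric paired t-test
        name, result = "paired t-test", stats.ttest_rel(pre, post)
    else:
        # Otherwise, fall back to the Wilcoxon signed-rank test
        name, result = "Wilcoxon signed-rank", stats.wilcoxon(pre, post)
    return name, result.pvalue

name, p = choose_paired_test([3.1, 2.8, 3.5, 2.9, 3.3, 3.0, 2.7, 3.4],
                             [3.8, 3.2, 3.9, 3.1, 3.7, 3.6, 3.0, 3.9])
```

Encoding this decision rule is what lets a tool claim "no data science background required": the user supplies paired scores, and the appropriate test is chosen and documented for them.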

3

You receive your report

Download a polished, funder-ready impact report in Word format — complete with statistical findings, plain-language interpretation, and a methodology section your most rigorous funders will trust.

Generate Your First Report →

Every program should be able to prove it works.

Your first report is free. No credit card required. Upload your data, see your results, and download a report you can actually use — before you spend a dollar.

Generate Your First Report →
Book a Free Walkthrough

Request a Demo

See Outcome Radar in action with your own data. We'll walk you through everything — no prep required.
