
Insights from AEA 2025: Advancing Evaluation Through Community Engagement and Methodological Innovation

Two of our team members recently returned from the 2025 American Evaluation Association conference in Kansas City, Missouri, energized by conversations exploring the latest innovations in evaluation practice. With the conference theme “Engaging Community, Sharing Leadership,” they contributed their expertise and gleaned insights about how evaluators can lead in supporting communities.

Dr. Mariana de Santibañes and Dr. Anthony Clairmont at the American Evaluation Association Conference

Dr. Anthony Clairmont — Leading Methodological Conversations

Our Lead Methodologist, Dr. Anthony Clairmont, played a central role in the conference as the elected Chair of the Quantitative Methods Topic Interest Group (TIG). In this capacity, he hosted two key events that brought evaluators together to strengthen their methodological skills.

At the Annual Meeting of the Quantitative Methods TIG, Anthony hosted an organizing session for researchers passionate about advancing quantitative approaches in evaluation. He also helped facilitate the Quant/Qual Café, an innovative gathering where TIG members provided methodological consultation to evaluators wrestling with thorny design questions. Anthony worked with practitioners to troubleshoot real challenges from their evaluation projects, such as how to design international multi-site evaluations and how to transition from ad hoc questionnaires to more valid forms of measurement.

Three conceptual themes from this year’s AEA sessions and discussions stood out to Dr. Clairmont: the use of foresight analysis, the enduring popularity of realist evaluation, and the opportunities afforded by participatory data analysis.

Drawing from futures studies and the strategic planning literature, foresight analysis moves beyond retrospective assessment to incorporate anticipatory thinking into evaluation design. This approach builds on the work of foresight scholars like Michel Godet, who developed prospective methods for strategic planning, drawing in turn on the French philosophers Gaston Berger and Maurice Blondel, the latter of whom famously said: “The future is not forecasted, it is prepared.” By integrating these anticipatory approaches with evaluation and strategic planning practice, we can help programs develop the capacity to detect emerging patterns, explore alternative scenarios, and adapt strategies proactively rather than simply documenting outcomes after the fact.

Realist evaluation continues to be a popular approach among new and seasoned evaluators alike. Developed by Ray Pawson and Nick Tilley, realist evaluation asks not just whether a program works, but how underlying mechanisms produce outcomes in specific contexts. This focus on Context-Mechanism-Outcome (CMO) configurations represents a fundamental departure from the experimental paradigm’s emphasis on average treatment effects. Instead, realist evaluation embraces heterogeneity and more complex designs, seeking to understand why interventions work differently for different populations under different circumstances.
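To make the idea of heterogeneity concrete, here is a minimal quantitative sketch, not drawn from any conference session, in which an interaction term lets an estimated program effect vary with a contextual moderator rather than being averaged away. The data and variable names are entirely invented for illustration.

```python
# Hypothetical sketch: letting a program effect vary by context via an interaction term.
# All data and variable names are simulated and purely illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 500
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),           # program participation (0/1)
    "context_support": rng.integers(0, 2, n),   # e.g., low vs. high local support (0/1)
})
# Simulate an outcome where the program works mainly in supportive contexts
df["outcome"] = (
    0.2 * df["treated"]
    + 0.8 * df["treated"] * df["context_support"]
    + rng.normal(0, 1, n)
)

# The treated:context_support interaction estimates how the effect differs by context,
# rather than collapsing everything into a single average treatment effect
model = smf.ols("outcome ~ treated * context_support", data=df).fit()
print(model.summary().tables[1])
```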

Participatory data analysis builds on a rich tradition of participatory action research dating back to Kurt Lewin in the Northern tradition and Paulo Freire and Orlando Fals-Borda in the Southern tradition. Contemporary participatory approaches in evaluation treat community members as co-researchers whose analytical insights are brought directly into the data analysis process. This collaboration can surface alternative explanations, challenge evaluators’ assumptions, and inform priors for statistical analysis, ultimately producing more robust and contextually useful findings.
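As one small, hypothetical illustration of how community input can inform priors, the sketch below encodes partners’ expectations about a program’s success rate as a Beta prior and combines it with observed data in a simple Beta-Binomial update. Every number here is invented for illustration and is not from any evaluation discussed at the conference.

```python
# Hypothetical sketch: a community-informed prior in a simple Beta-Binomial update.
# All figures are illustrative.
from scipy import stats

# Prior elicited (hypothetically) from community co-researchers:
# "roughly 6 successes out of 10" -> Beta(6, 4)
prior_alpha, prior_beta = 6, 4

# Observed evaluation data: 18 successes among 25 participants
successes, trials = 18, 25

# Conjugate update of the Beta prior with the Binomial data
post_alpha = prior_alpha + successes
post_beta = prior_beta + (trials - successes)
posterior = stats.beta(post_alpha, post_beta)

print(f"Posterior mean success rate: {posterior.mean():.2f}")
print(f"95% credible interval: {posterior.interval(0.95)}")
```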

These themes connect to the Quantitative Methods TIG’s mission: ensuring that rigorous quantitative approaches remain accessible, relevant, and responsive to the complex questions evaluators face in the field.


Dr. Mariana de Santibañes — Reimagining Evaluation in Cultural Context

Dr. Mariana de Santibañes facilitated a “Birds of a Feather” session that explored how to adapt evaluation methodologies when working with Indigenous communities and other contexts where conventional evaluation approaches may be ill-suited. Her session created space for honest reflection on when standard approaches fall short—and what evaluators can do about it.

Mariana leading the “Birds of a Feather” roundtable discussion.

Mariana began by presenting a case study of an evaluation process in which evaluators collaborated with program staff and cultural experts to expand their approach, integrating qualitative narratives, staff observations, and community perspectives. By using collaborative methods and incorporating Indigenous knowledge systems, the evaluation more authentically represented participants’ experiences and better captured program effectiveness within its cultural context.

Mariana then guided participants through three central questions that cut to the heart of evaluation practice with special populations:

  1. When evaluation tools capture some impacts but render invisible the transformations communities identify as most meaningful, at what point does “incomplete” become “inadequate”—and who gets to decide? 
  2. What does it mean that discussions of “building community capacity” for evaluation rarely address building evaluators’ capacity to work within community epistemologies? 
  3. How do funding structures and policy requirements shape evaluation work, what would practitioners do differently without those constraints, and what do those gaps reveal?

The session offered practitioners a critical space to reflect on strategies for adapting evaluation designs when distinct cultural frameworks, knowledge systems, or program characteristics demand methodological flexibility. It contributed to vital ongoing conversations about cultural responsiveness in evaluation—moving the field toward approaches that honor diverse ways of knowing and understanding impact.


Both sessions exemplified the conference theme by demonstrating how evaluators can share leadership with communities and engage meaningfully across methodological and cultural boundaries. Our team returned inspired by the field’s commitment to making evaluation more responsive, rigorous, and genuinely useful to the communities it serves.

“If you can’t measure it, you can’t improve it.”

– Peter Drucker

Let’s Connect