July 19, 2012

I am sometimes asked about the difference between research and evaluation. I can understand this confusion, as evaluation is, in many ways, a form of applied research. And many of us here began our careers in the research world. Both fields rely heavily on collecting and interpreting data. As evaluators, we use the tools of social science research in a setting and with a purpose somewhat different from those of traditional research. In general terms, research helps move ideas and society forward by exploring new frontiers, while evaluation focuses on bringing that information to practitioners and helping them use data to better understand their impact.

The debate about what is research and what is evaluation is not a new one. We have read several articles and had numerous conversations about this topic over the years (for example, “Unlearning Some of Our Social Scientist Habits” by E. Jane Davidson was a great read for our monthly “research group”). I think Ann K. Emery, a fellow evaluator, did an excellent job of briefly explaining some key differences and similarities on her own blog recently. Her blog entry is posted below with her permission. It is great being part of a professional community with so many people willing to share thoughts and ideas with each other.

Researchers vs. evaluators: How much do we have in common?

Original post on May 24, 2012 by Ann K. Emery

I started working as a research assistant on various psychology, education, and public policy projects during college. While friends spent their summers waitressing or babysitting, I was entering data, cleaning data, and transcribing interviews. Yay. Thankfully, those days are mostly behind me…

A few years ago, I (unintentionally) accepted an evaluation position, and the contrast between research and evaluation hit me like a brick. Now, I’m fully adapted to the evaluation field, but a few of my researcher friends have asked me to blog about the similarities and differences between researchers and evaluators.

Researchers and evaluators often look similar on the outside. We might use the same statistical formulas and methods, and we often write reports at the end of projects. But our approaches, motivations, priorities, and questions are a little different.

The researcher asks:

 What’s most relevant to my field? How can I contribute new knowledge? What hasn’t been studied before, or hasn’t been studied in my unique environment? What’s most interesting to study?

 What type of theory or model would describe my results?

 What are the hypothesized outcomes of the study?

 What type of situation or context will affect the stimulus?

 Is there a causal relationship between my independent and dependent variables?

 How can I get my research plan approved by the Institutional Review Board as fast as possible?

The evaluator asks:

 What’s most relevant to the client? How can I make sure that the evaluation serves the information needs of the intended users?

 What’s the best method available, given my limited budget, limited time, and limited staff capacity? How can I adapt rigorous methods to fit my clients and my program participants?

 When is the information needed? When’s the meeting in which the decision-makers will be discussing the evaluation results?

 How can I create a culture of learning within the program, school, or organization that I’m working with?

 How can I design a realistic, prudent, diplomatic, and frugal evaluation?

 How can I use graphic design and data visualization techniques to share my results?

 How can program staff use the results of the evaluation and benefit from the process of participating in an evaluation cycle?

 What type of report (or handout, dashboard, presentation, etc.) will be the best communication tool for my specific program staff?

 What type of capacity-building and technical assistance support can I provide throughout the evaluation? What can I teach non-evaluators about evaluation?

 How can we turn results into action by improving programs, policies, and procedures?

 How can we use logic models and other graphic organizers to describe the program’s theory of change?

 What are the intended outcomes of the program, and is there a clear link between the activities and outcomes?

 How can I keep working in the evaluation field for as long as possible so I can (usually) avoid the Institutional Review Board altogether?

Researchers and evaluators are both concerned with:

 Conducting legal and ethical studies

 Protecting privacy and confidentiality

 Conveying accurate information

 Reminding the general public that correlation does not equal causation

What else would you add to these lists? I’ve been out of the research mindset for a few years, so I’d appreciate feedback on these ideas. Thank you!

– Ann Emery | Adventures of a Nonprofit and Foundations Evaluator

Follow Ann on Twitter @annkemery