Evaluations have always been very important in international cooperation. Some of us evaluate, others are evaluated or manage evaluation contracts and processes. Some of us find evaluation processes rewarding; many find them frustrating. Too often we hear that too little is done with the outcomes of evaluations, and that no real learning comes from them. We also see that important stakeholders are not heard, or cannot influence the outcomes. Then there is the question of the quality of evaluation and evaluators: too often evaluators are hired for their field experience while knowing little about evaluation and research methodology.
What is it that makes evaluating or being evaluated so frustrating? How can we do things differently? In our experience, the way in which evaluations are planned and conducted often limits their potential for learning and change. Genuine participation, too, is restricted for a wide variety of reasons - always very valid ones, of course. Downward accountability is seldom built into evaluation processes. What can be done to remedy this? These are some of the things we looked at:
1. Some methods and tools limit the space for learning and participation; others actually enhance learning or (downward) accountability. Easily measurable indicators are often overrated compared to evidence that actually informs learning and changes behavior.
2. Our behavior (as evaluators but also as "objects" of evaluations) influences the space for participation in evaluation processes and tools. Sometimes this behavior is intentional; sometimes it is reactive or even unconscious. The interaction between individual behavior and organizational culture and processes is also relevant.
3. Evaluation processes tend to be too verbal: images can provoke much more active learning.
4. Participation can be promoted by using, for example, the internet when people cannot easily meet face to face.

Triggered by our frustrations and inspired by the ideas we developed to address them, we decided to create opportunities to share experiences. To do so we offer a compact series of tailor-made, modular - open subscription - learning events, the first to take place in June 2010. We also provide practical tailor-made support to (re)design and implement evaluation and monitoring processes. In the learning events, participants - a diverse group of development practitioners - explore ideas and develop new skills to help them manage, organize, conduct and experience evaluation differently. We address standards and processes of evaluation and look at methodologies, tools and behaviors. We also look into the use of images, video and the internet as new media that may generate different behaviors and ideas, and that can support participatory processes. The learning events will involve evaluators, as well as people who commission evaluations and people who are "being" evaluated. We aim to involve people from the South as well as from the North. Before and after the event, participants network using the internet platform we have set up.
Concrete experiences from participants, as well as our own, together with some theory, are the main ingredients of the learning events. We are a professional capacity development evaluator, an M&E specialist experimenting with video, and a manager/advisor working on issues of diversity and gender equality. Between the three of us we have at least 50 years of evaluation experience: doing evaluations, being evaluated, and hiring and managing evaluators. In our philosophy, the best cure for frustration is fun and the best learning is learning from each other! We are not subsidized or sponsored. Participation in the events will be low-cost but not free.