Educated as a social researcher (MA in Public Administration and Organisational Sciences), Bob van der Winden graduated with a thesis on evaluation (‘Do not beat a drum with an axe’ – Responsive Evaluation in an International Context, University of Utrecht, 2004).
Since the main aim of evaluation, for me, is formative (directed towards lessons for the future, drawn from the past) and participative (reflecting the basic relationships between partners, including donors), I have mainly worked with evaluation approaches of the same character, chiefly ‘Fourth generation evaluation’ and the Most Significant Change methodology, including case studies, topic interviews, participative workshops, etc.
Fourth Generation Evaluation
The evaluation approach called ‘Fourth generation evaluation’ (more about it on the www.BWsupport.nl website) can be explained as follows: nowadays, in every evaluation we try to learn about output (e.g. how many students did we have?), outcome (e.g. have they learned anything?) and impact (did anything change?).
In First generation evaluation, accountability – ‘counting’ output – was the main goal. The Second generation (starting around the 1960s as a reaction to the first) consisted mostly of a (thick) description of what had happened in the programme or process. In the Third generation – most in fashion nowadays – the judgment (the conclusions and recommendations of the evaluator) is paramount. The Fourth generation – which is participative, responsive to the stakeholders and constructivist in its methodology – is a relatively new development of the last decade, mainly in the USA, and is still little used in international cooperation projects.
The focal points of Fourth generation evaluation are:
- a thorough stakeholder analysis (including the question: are there victims of the project?);
- interviews with stakeholders, going from one to the next;
- the use of Claims, Concerns and Issues (CC&I) to inform the process of the evaluation: where there is consensus about a good outcome, that is a claim; where there is consensus about the need to change things, that is a concern; and the rest (where people differ) are issues, which need to be discussed in a stakeholder meeting in such a way that an agenda for the future emerges;
- workshops throughout the evaluation, in order to guarantee the participation of stakeholders and learning during the evaluation.
Evaluation in practice
With this methodology I have conducted several evaluations in Zimbabwe, South Africa, Senegal, Ghana, Nigeria, Egypt and the Netherlands, for clients such as ACPD Zimbabwe, NiZA Netherlands, ArmsTrade Campaign Netherlands, Oxfam-Novib, FreeVoice Netherlands, IPAO Dakar, MFWA Accra, E-Motive Netherlands, Phaphama South Africa, Foundation against Senseless Violence Netherlands, the Ministry of Foreign Affairs and World Press Photo Netherlands, CIC Cairo, NIJ Lagos, and Diversity Joy Netherlands.
A typical ‘fourth generation’ evaluation has the following form:
1. The client writes a terms of reference.
2. I draw up a quotation that also addresses content, proposing the way of evaluating, the number of workshops, interview days, etc.
3. If we agree, the next step is a first desk study.
4. After the desk study, a kick-off workshop follows.
The kick-off entails:
a. Exchanging expectations for the evaluation
b. Jointly ‘building’ an analytical model
c. A joint stakeholder analysis
d. Setting priorities for the evaluation
e. Eliciting stakeholders’ perceptions of the success of the project (indicators)
f. A SWOT analysis of the project
5. I write a report on the kick-off, ending with a preliminary list of Claims, Concerns and Issues, a topic list of questions, and an adjusted evaluation planning.
6. The client can react, and the lists are adjusted.
7. A second desk study takes place.
8. I conduct interviews with different stakeholders (sometimes by phone or Skype); a case study is also possible, as are focus groups, circulating questionnaires, etc.
9. Sometimes an intermediate workshop around specific issues is necessary with a group of stakeholders (e.g. a workshop to get an important history right and have it confirmed by all participants, or to shed light on an important challenge).
10. I write an interim report, with a revised list of CC&I.
11. That is the input for a final verification workshop with as many different stakeholders as possible: where consensus emerges, an agenda for the future follows; only those issues remain about which there is no consensus.
12. The evaluation ends with the final report, containing the CC&I and the conclusions and recommendations of the evaluator,
13. which is discussed with the funder and the evaluated parties.
Now we have a host of interviews, case stories, field visits and desk study results. How do we ‘bake chocolate’ from all these data, as we say in Dutch – in other words, how do we make sense of them? Or, in the words of Quinn Patton: “Analysis brings moments of terror that nothing sensible will emerge and times of exhilaration from the certainty of having discovered ultimate truth. In between are long periods of hard work, deep thinking, and weightlifting volumes of material...”
Fortunately, there are methodological rules and tactics to guide us, and a lot of material to help us in this respect:
- In the first place, the evaluation report gives a ‘thick description’ (as thick as I dare to afford within the framework of an evaluation) of what happened in the project. That description lets readers see the opinions, interpretations and expectations of the interviewees themselves, giving every reader a much more in-depth knowledge of what is going on; in other words, people can draw their own conclusions!
- Of course the material has been written by me, but I take care to include so many points of view (the golden rule of journalism: always use at least two sources!) and so many different sources (reports, literature, field visits, interviews with partners, stakeholders, media, participants, guest tutors, staff) that the triangulation contained in the material gives good reason to consider the findings validated.
- In the second place, we have the Terms of Reference, in which the research questions are formulated: they are the final guide for the conclusions and recommendations.
- Then we have the stakeholders’ expectations for the evaluation (from the kick-off workshop).
- Finally, we have the most important informing principles of this kind of evaluation, first formulated directly after the kick-off workshop (see annex 2): the Claims, Concerns and Issues.
After the data-collection phase (field visits and interviews with partner organisations and staff), the Claims, Concerns and Issues – as first formulated after the kick-off workshop – are always reformulated in the light of the knowledge acquired during the field visits, etc.
These are then presented once more at a final stakeholder workshop. The outcome of this ‘agenda’ workshop is thus a validated list of Claims, Concerns and Issues: what the different stakeholders do and do not agree upon.
After this iterative and participative process, and based on all the gathered material, I return to the research questions of the evaluation, drawing conclusions and formulating recommendations, and thus finalise the evaluation report.
See also: Stufflebeam (2007), Evaluation; Guba & Lincoln (1989), Fourth Generation Evaluation; and Bob van der Winden (2004), ‘Do not beat a drum with an axe’ – Responsive Evaluation in an International Context.