
EVALUATION USE IN NON-GOVERNMENTAL ORGANIZATIONS Unlocking the Do Learn Plan Continuum

A Thesis Presented to the Faculty Of The Fletcher School of Law and Diplomacy

By

LAKSHMI KARAN

In partial fulfillment of the requirements for the Degree of Doctor of Philosophy APRIL 2009

Dissertation Committee KAREN JACOBSEN, Co-Chair JOHN HAMMOCK, Co-Chair ADIL NAJAM


Lakshmi Karan Mountain View, CA lkaran@hotmail.com Highly creative and experienced leader with over 15 years of strategy and management experience in the non-profit and high-tech sector
UNIQUE SKILLS

Understand Effectiveness: A deep understanding of how organizations can sustainably scale towards higher impact.
Develop Strategies: Working with leadership and program teams, develop strategies to increase effectiveness and efficiency. Translate strategies into workable action plans.
Drive Impact: Define key success indicators and mobilize the team to focus on these indicators to achieve impact. Work with stakeholders to create shared understanding. Measure results.
Deliver Results: Leverage opportunities and strengths to maximize impact through effective tools/processes and focused execution. Inspire a learning culture.
PROFESSIONAL EXPERIENCE

The Skoll Foundation
Director, Impact Assessment, Learning and Utilization
2007 - present

Developed and led processes that review grant portfolio performance at midpoint and for follow-on investment. Built systems to track metrics and guide staff learning to inform program decision-making. Implemented process efficiencies in the selection of Skoll Awardees. Prepared and presented investment recommendations and learning to the Board. Programmed annual convening of Awardees at the Skoll World Forum at Oxford.

Alchemy Project, Boston, MA
Program Manager
2002 - 2004

Developed the criteria for selection of refugee livelihood programs in Africa and disbursed over $200,000 in grants. Initiated and created an analytical model using SPSS to measure and assess program performance. This model enabled the project to report statistically on its achievements and also track its long-term impact. Prepared annual reports for donors and funding proposals. Managed a team of 10 field researchers, including contract negotiations, budget allocation, and logistical and technical support.

Reebok Human Rights Program, Canton, MA
Management Consultant
2001 - 2002

Monitored compliance of Reebok supplier factories, worldwide, to worker standards guidelines. Commissioned audits; designed and implemented corrective action steps.

Cap Gemini, Boston, MA
Senior Consultant
1994 - 1998

Led a team that designed, developed and implemented a call-center tracking system for a healthcare management center. This system reduced customer service response delays by 30% and reduced client staffing costs by 10%. Migrated a mainframe-based human resources system to PeopleSoft for an insurance firm. This required careful process mapping, rationalization of data conversion options and creation of testing modules. Collaborated with the sales team and developed several client proposals. Expanded Cap Gemini's opportunities to bring in additional revenue of over $1 million. Mentored junior consultants and summer interns.

EDUCATION

The Fletcher School of Law and Diplomacy, Tufts University, Medford, MA
Ph.D. in International Relations (2009). Fields of Study: Organizational Learning; Nonprofit Management and Evaluation. Thesis: Evaluation Use in Non-Governmental Organizations: Unlocking the Do Learn Plan Continuum.
Master of Arts in Law and Diplomacy (2000), Slawson Fellow. Fields of Study: Human Rights, Humanitarian Studies and Conflict Resolution. Thesis: Combating Human-Trafficking: A Case Study of Nepal.

National Institute of Information Technology, Madras, India
Graduate Diploma in Systems Management (1992), Excellent Honors.

Madras University, Madras, India
Bachelor of Science, Mathematics (1990).

OTHER
Volunteer, MAITRI, a support organization for domestic violence victims.
Board Member, Inspire America Media.

ABSTRACT
This dissertation explored the factors that influence evaluation use and the challenges non-governmental organizations (NGOs) face in adapting learning practices and systems that enable use. While much theoretical work has been done on evaluation use and learning in general, how NGOs can build systems and practices to promote use has been missing. The research addressed this gap: it developed a utility model that identifies the key factors that influence use and the practical steps NGOs can take to implement the model.

To get at the answers, the research reviewed the theoretical models - within evaluation and organizational theory - that promote use, and conducted a survey to understand the current state of use within the NGO sector and the systems that provide an effective link between doing evaluations, knowing the results and learning from them.

The final evaluation utility model presents a fundamental shift in how NGOs must approach program evaluation. It challenges the conventional thinking in the NGO sector with the notion that it is no longer sufficient to focus on use only at the program level. The utility model revealed that influencing factors must extend to include the larger context of organizational behavior and learning.

Dedicated to fellow travelers who seek learning

ACKNOWLEDGEMENTS
There are many people to whom I owe a great debt of gratitude for assisting me throughout my doctoral studies. While I name only a few here, it must be acknowledged that this accomplishment has been the result of a collective effort of goodwill, support and encouragement from friends and family from around the world. First, I would like to express my sincere appreciation to my Committee, without whom this dissertation would neither have been started nor completed. My Co-Chairs: Dr. Karen Jacobsen, who over the years provided the gentle nudge that helped me maintain momentum throughout, and Dr. John Hammock, whose commitment and confidence were invaluable in the final home stretch. Dr. Adil Najam, whose critical and insightful feedback helped develop a higher quality product. I have many sets of families to thank who helped me along the way. Work colleagues at the Skoll Foundation for their support, encouragement and confidence. To my parents and sister, whose steadfast love and prayers always kept the positive energy flowing around me, I offer my eternal love and gratitude. To Doss, my best friend, I am grateful for many things but singularly for keeping faith when I faltered. Thank you for the sacrifices you have made over these years so I can achieve this dream. Finally, this dissertation would not have been a reality without Sonu, who has been a loving companion these long years and continues to teach me the simple joys of being in the moment and relishing life.

LIST OF TABLES & CHARTS


Table 2.1 - Primary programming contexts of organizations participating in the survey .... 23
Table 2.2 - Role of the survey respondents .... 26
Table 2.3 - Experience level of survey respondents .... 27
Table 2.4 - Participating organizations along with the number of respondents from each organization .... 28
Table 3.1 - Advantages/Disadvantages of Internal and External Evaluations .... 38
Table 3.2 - A model of Outcomes of Evaluation Influence .... 52
Table 3.3 - Changes in U.S. International NGO Sector, 1970-94 .... 69
Table 3.4 - Growth in Revenue of Northern NGOs Involved in International Relief and Development .... 70
Table 3.5 - Statistics on the U.S. Nonprofit sector .... 71
Table 3.6 - Outcome Mapping factors that enhance utilization .... 89
Table 3.7 - Organizational Learning Definitions .... 102
Table 4.1 - Intended users grouping .... 127
Chart 4.1 - Intended users grouping .... 127
Table 4.2 - Involvement of potential users in planning an evaluation .... 128
Table 4.3 - Importance of involving potential users .... 129
Table 4.4 - Uses of program evaluations .... 130
Chart 4.2 - Uses of program evaluations .... 131
Table 4.5 - Criteria that impact evaluation use .... 132
Chart 4.3 - Criteria that impact evaluation use .... 133
Table 4.6 - Participation in evaluation planning .... 134
Table 4.7 - Evaluation report interests .... 135
Chart 4.4 - Evaluation report interests .... 135
Table 4.8 - Program evaluation timing .... 136
Table 4.9 - Evaluation reports expectations .... 137
Table 4.10 - Evaluation recommendations specificity #1 .... 138
Table 4.11 - Evaluation recommendations specificity #2 .... 138
Table 4.12 - Evaluation follow-up .... 139
Table 4.13 - Decision-making models .... 140
Table 4.14 - Drivers of program change .... 141
Chart 4.5 - Drivers of program change .... 142
Table 4.15 - Prevalence of evaluation use process .... 143
Table 5.1 - Mapping practical steps to the factors that influence evaluation use .... 163

LIST OF FIGURES
Figure 3.1 - Kirkhart's integrated theory of influence .... 49
Figure 3.2 - Evaluation Use Relationships .... 55
Figure 3.3 - Campbell's implicit process-model .... 57
Figure 3.4 - Scriven's summative model .... 58
Figure 3.5 - Weiss's implicit decision model .... 59
Figure 3.6 - Wholey's resource-dependent model .... 59
Figure 3.7 - Cronbach's process model .... 60
Figure 3.8 - Rossi's process model .... 60
Figure 3.9 - Green's participatory evaluation process .... 61
Figure 3.10 - Cousins and Leithwood utilization model .... 62
Figure 3.11 - Alkin's factor model .... 63
Figure 3.12 - Patton's utilization-focused evaluation framework .... 64
Figure 3.13 - Evaluations filed in ALNAP Evaluative Reports Database .... 75
Figure 3.14 - The Research and Policy in Development Framework .... 84
Figure 3.15 - Outcome Mapping Framework .... 88
Figure 4.1 - Tools to keep evaluation findings current in organization memory .... 144
Figure 4.2 - Processes that can increase use .... 146
Figure 4.3 - Reasons why evaluations get referred or not .... 148
Figure 5.1 - The Utility Model .... 150
Figure 5.2 - Evaluation use and decision-making groups .... 154
Figure 5.3 - Practical Steps at the Planning and Execution Phase .... 164
Figure 5.4 - Practical Steps at the Follow-up Phase .... 169

TABLE OF CONTENTS
Chapter 1: Introduction .... 1
  Research Context .... 1
  The Problem: the under-utilization of evaluation in NGOs .... 3
  Purpose of this Research .... 5
  Methodology .... 7
  Theories that Frame Research .... 9
  Research Findings and Conclusion .... 12
  Dissertation Organization .... 13
Chapter 2: Methodology .... 14
  Proposition and Research Questions .... 14
  Research Structure .... 16
    Stage 1: Theoretical Review .... 18
    Stage 2: Survey .... 22
Chapter 3: Literature review .... 34
  Evaluation Utilization .... 34
    Definitions .... 34
    1960s through 1970s: The Foundation Years .... 41
    1980s through 1990s: The rise of context in evaluation theory .... 43
    The 21st Century: Stretching the boundaries beyond use .... 48
    Process Models of Evaluation Use .... 57
  Program Evaluation Systems in NGOs .... 65
    Definitions .... 65
    Growth of the NGO Sector .... 69
    Current Use of Evaluations in NGOs .... 74
    Barriers to Evaluation Use in NGOs .... 95
  Organizational Learning .... 102
    Definitions .... 102
    Types of Learning .... 104
    Levels of Learning .... 106
    Leading Theorists .... 108
    Main Constructs .... 114
    Evaluation Use and Organization Learning .... 122
Chapter 4: Presentation of Survey Results .... 126
  Stage 1: Evaluation Planning .... 126
  Stage 2: Evaluation Implementation .... 133
  Stage 3: Evaluation Follow-Up .... 139
Chapter 5: The Utility Model .... 149
  Explanation of Model .... 150
  Steps to Implement the Model .... 161
    Practical Steps at the Program Level .... 164
    Practical Steps at the Organization Level .... 171
Chapter 6: Conclusion .... 176
  Recommendations for Future Research .... 179
REFERENCE LIST .... 181
Appendix A - Evaluation Use in Non-Governmental Organizations Survey .... 190
Appendix B - Master List of US Based NGOs with an International Focus .... 198
Appendix C - Survey Population .... 212

Chapter 1: Introduction

Research Context

Over the last two decades there has been a dramatic growth in the number of non-governmental organizations (NGOs) involved in development and humanitarian aid, in both developed and developing countries. The total amount of public funds being channeled through NGOs has grown significantly, and the proportion of aid going through NGOs, relative to bilateral or multilateral agencies, has also increased. European Union funding for international NGOs, which stood at USD 3.2 million in the mid-1970s, reached an estimated USD 1 billion by 1995, accounting for somewhere between 15-20% of all EU foreign aid.1 In 2006, the EU budget for the non-profit sector as a whole was close to 55 billion.2 Strengthened by enormous funding commitments, NGOs grew in number worldwide and began to establish themselves as experts in all aspects of development and humanitarian issues.

Associated with this growth has been an increasing concern about the efficiency of NGO policies and practices.3 These debates were greatly influenced by the changing donor environment, whose emphasis on quality management resulted in several NGOs adopting processes that contribute to increased transparency, monitoring and evaluation and, to an extent, organizational accountability. Some of the processes that evolved to address these concerns include the development of codes of conduct, benchmarks and standards that enhance operations.4 NGOs set up partnership networks in their different fields to share and learn from common experiences. An example of such a network in the US is InterAction, a coalition of more than 175 humanitarian organizations working on disaster relief, refugee-assistance, and sustainable development programs worldwide. While such partnerships provided large amounts of information, their impact was all but lost as organizations struggled to assimilate this knowledge.

1 Jerker Carlsson, Gunnar Kohlin, and Anders Ekbom, The Political Economy of Evaluation: International Aid Agencies and the Effectiveness of Aid (New York: St. Martin's Press, 1994).
2 http://www.idseurope.org/en/budget2006.en.pdf and http://ec.europa.eu/budget/index_en.htm
3 Harry P. Hatry and Linda M. Lampkin, "An Agenda for Action: Outcome Management for Nonprofit Organizations" (Washington DC: The Urban Institute, 2001).

The increasing demand on NGOs to provide more services, combined with a higher level of competition for funds, has created challenges for the organizations, pushing them to find ways to become more effective and provide greater social and economic impact. According to Margaret Plantz et al., the nonprofit sector has been measuring certain aspects of performance for several decades; these include financial accountability, inputs, cost, program products or outputs, adherence to quality in service delivery and client satisfaction.5 The authors suggest that while these measures yield critical information about the services the nonprofits are providing, they seldom reveal whether the NGOs' efforts made a difference. In other words, was anyone better off as a result of the service from the NGO? Consequently, they encouraged NGOs to engage in effective planning and management. This requires systematic assessments of past activities and their results, and utilizing the learning for informed decision-making. Strengthening organizational capacity for evaluation and learning systems continues to be a growing concern. Paul Light notes that today NGOs have to make strategic allocations of resources to learning.6 He states that much of the lean and mean rhetoric that preoccupied private firms and government agencies during the 1980s and early 1990s has now filtered over to the nonprofit sector. While NGOs devote more time to service delivery than program evaluation, even less is devoted to learning from these evaluations.

4 Koenraad Van Brabant, "Organizational and Institutional Learning in the Humanitarian Sector: Opening the Dialogue" (London: Overseas Development Institute, 1997).
5 Margaret C. Plantz, Martha Taylor Greenway, and Michael Hendricks, "Outcome Measurement: Showing Results in the Nonprofit Sector," New Directions for Program Evaluation, no. 75 (1997).

The Problem: the under-utilization of evaluation in NGOs

Within the NGO sector it was only in the late 1980s, under increasing pressure from donor agencies, that there began an earnest attempt to examine the quality of evaluation utilization.7 Given that billions of dollars have been spent by NGOs over the last decade on projects, and millions spent on their evaluations, why has it been so difficult to set up a process of critical reflection and learn from experience? With increasing competition for funding and growing societal problems, how does one distinguish effective from ineffective, efficient from inefficient programs? How can organizations avoid expending precious resources on an evaluation only to produce reports that gather dust on bookshelves, unread and, more importantly, unused?

6 Paul C. Light, Making Nonprofits Work: A Report on the Tides of Nonprofit Management Reform (Washington, DC: The Aspen Institute / Brookings Institution Press, 2000).
7 The terms "use" and "utilization" are applied interchangeably.

Several reasons emerge as to why agencies don't maximize the use of evaluation findings. They range all the way from inept and badly conducted evaluations to a deliberate attempt by organizational decision-makers to ignore findings and recommendations that may undercut their program plans.8 NGO evaluations were found to have inadequate information to support decision-making. Key deficiencies identified include weak methodological set-up of evaluations, weak data collection methods, and limited attention to cross-cutting issues and broader lessons learned. Moreover, in the absence of formal, structured follow-up procedures when the evaluation report is completed, it falls into the organizational abyss: low priority, neglect and indifference among the potential users. Evaluations are often viewed as an onerous box to check rather than an opportunity to inform program decision-making. An Organization for Economic Cooperation and Development (OECD) commissioned report comparing developmental NGOs' evaluations concluded that most evaluations lacked follow-up because they were commissioned by donors without the participation of NGO staff.9 These evaluation results were geared more towards decisions on funding than towards critical assessments of the programs.

While the literature of program evaluation has made significant advances in identifying the factors that influence use, few of these recommendations have crossed over and been applied to NGO practice. The primary reason for this is that NGOs do not have a simple, practical framework that guides them towards increasing utilization. Also, often facing a scarcity of resources, in funds and personnel, NGOs are overwhelmed by the lists of factors that have to be addressed and the complexity of processes that need to be established to maximize evaluation use.10 Recent research evinces a call for a way to make evaluation utilization simpler and more scalable, whereby a simplified framework for use enhances NGO strengths and mitigates their constraints.

8 R. C. Riddell et al., "Searching for Impact and Methods: NGO Evaluation Synthesis Study" (OECD/DAC Expert Group, 1997).
9 Ibid.

Purpose of this Research

The purpose of this research was to develop a practical model that enables NGOs to maximize evaluation use. The study examined the factors that influence evaluation use and explored the challenges NGOs face in adapting learning practices and systems that enable use. While much theoretical work has been done on evaluation use and learning in general, how NGOs can build systems and practices to promote use has been missing. The research addressed this gap: it developed a utility model that identifies the key factors that influence use and the practical steps NGOs can take to implement the model. To get at the answers, the research reviewed the theoretical models - within evaluation and organizational theory - that promote use, and conducted a survey to understand the current state of use within the NGO sector and the systems that provide an effective link between doing evaluations, knowing the results and learning from them. Within this frame it explored the following sequence of questions:

(1) What is evaluation use?
(2) What are the factors that influence evaluation use?
(3) How are these factors applied within the NGO sector?
(4) What are the challenges in promoting use in NGOs?
(5) What are the processes and systems that can increase evaluation utilization in NGOs?

10 Vic Murray, "The State of Evaluation Tools and Systems for Nonprofit Organizations," New Directions for Philanthropic Fundraising, no. 31 (2001).

Methodology

In order to understand the motivators and inhibitors of evaluation utilization, this research began with a review of the literature to discover what others have suggested might be factors that influence use. Chapter 3 elaborates on the theories and works of these authors. Methods included the review of existing documentation and a survey of NGO staff, including program members and senior management. Primary and secondary documents included published and unpublished works about evaluation use and organizational learning, and NGO case studies of tracking use written by scholars, practitioners, evaluation consultants and NGOs.

A survey was conducted in order to gather first-hand, primary evidence of the types of factors that influence use and to better understand the processes and systems that promote use. Altogether, 111 respondents from 40 NGOs provided background and relevant data that contributed significantly to the creation of the utility model. The purpose of the survey was not only to validate the utilization factors that emerged from the literature but also to identify additional necessary but missing factors as seen from within the NGO sector.

A single survey structure and content maintained uniformity among respondents. The survey was semi-structured with several open-ended questions. After several rounds of communication with potential respondents to explain the purpose of this research and the survey and to gauge their willingness to participate, the survey was sent electronically. The survey of practitioners served to flesh out the intricacies of use within different types of organizations, the political dimensions of use in decision-making and the systems they identified as essential to promote use.

In pursuing information through documentation and the survey, the author studied the general utilization ecosystem in NGOs at the program and organizational levels. NGOs were chosen for the survey through purposive sampling. This research targeted US based NGOs with an international programmatic focus. Within the domain of purposive sampling, a combination of Expert and Snowball methods was used. Expert sampling involves assembling a sample of persons with known or demonstrable experience and expertise in some area. Sampling for the survey first targeted staff in NGOs who are program experts and have a close knowledge of evaluations. Snowball methods were used to expand the participant list within NGOs: the first respondents from each NGO were asked to recommend others they knew who also met the criteria. Although this method does not lead to representative samples, it was useful for reaching respondents who could provide multiple perspectives from within the same organization. Details on the sampling are provided in the Methodology chapter.

Theories that Frame Research

Evaluation is an elastic word that stretches to cover judgments of many kinds.11 While there can be any number of categories and dimensions of evaluation, what they all share is the notion of judging merit; simply put, it is weighing an event against some explicit or implicit yardstick. Since the 1960s, when the practice of evaluation emerged with academic rigor, there has been a systematic push to mold and shape its content. This effort successfully delivered different approaches to evaluation, structures of data collection and guidelines for practice. However, what lagged behind was the understanding of how best to use the findings of evaluations. While people were focused on the mechanics of conducting a good evaluation, they left the results to automatically affect decisions. Why would an organization spend time and resources to conduct an evaluation if it didn't intend to use the results? One can argue that if a comprehensive evaluation was done and the report presented in a clear manner, the results will be used for program decision-making. Unfortunately, this is not what happens in reality.

It wasn't until the late 1970s that evaluation theorists found many factors that intervened between the completion of an evaluation study and its application to practice.12 Michael Quinn Patton's theoretical framework of evaluation utilization identified patterns and regularities that provide a better understanding of where, by whom, and under what conditions evaluation results are most likely to be applied. Carol Weiss concluded that some of the problems that plagued evaluation utilization were inadequate preparation; practitioner suspicion and resistance; access to data; limited time for follow-up; and inadequacies of money and staffing. Despite differences in the emphasis and approaches to useful evaluations, the common theme from various studies is based on the premise that the primary purpose of evaluations is their contribution to the rationalization of decision-making.13

11 Carol H. Weiss, Evaluation Research: Methods for Assessing Program Effectiveness (New Jersey: Prentice-Hall, 1972).
12 Leonard Rutman, Evaluation Research Methods: A Basic Guide, 2nd ed. (Beverly Hills, CA: Sage Publications, 1984).

While evaluation theorists were struggling to understand utilization, NGO management theorists were fighting their own battles with efficiency and organizational performance. There has been a steady stream of experimentation with specific methods, especially those focusing on participatory approaches to M&E and impact assessment. A number of NGOs produced their own guides on monitoring and evaluation. Recent books on NGO management are giving specific attention to assessing performance and the management of information.14 As well as doing their own evaluations, some NGOs are now doing meta-evaluations (of methods) and syntheses (of results) of their evaluations to date.15 Similar but larger scale studies have been commissioned by bilateral funding agencies. All of these efforts have attempted to develop a wider perspective on NGO effectiveness, looking beyond individual projects, across sectors and country programs. Overall, NGOs have become much more aware of the need for evaluation, compared to the 1980s when there was some outright hostility.16 However, there is still a struggle over how best to structure processes within the organization to increase utilization.

13 M. C. Alkin et al., "Evaluation and Decision Making: The Title VII Experience," in CSE Monograph No. 4 (Los Angeles: UCLA Center for the Study of Evaluation, 1974).
14 Vandana Desai and Robert Potter, The Companion to Development Studies (London: Arnold, 2002).
15 A. Fowler, Striking a Balance: A Guide to Enhancing the Effectiveness of Non-Governmental Organizations in International Development (London: Earthscan, 1997).

Organizational Learning (OL) literature can provide evaluation use researchers a helpful framework for understanding and creating cultural and structural change and promoting long-term adaptation and learning in complex organizations operating in dynamic environments. Constructs within OL literature provide several links between evaluation utilization practices and learning in organizations. It focuses on fostering learning by embedding utilization processes into the everyday practices, leadership, communication and culture of the organization: staff become involved in the evaluation process, and staff interest and ability in exploring critical issues using evaluation logic increase.

Drawing from these theories this study explored how NGOs can increase the utilization of evaluation findings to affect program and organization effectiveness.

16 M. Howes, "Linking Paradigms and Practice: Key Issues in the Appraisal, Monitoring and Evaluation of British NGO Projects," Journal of International Development 4, no. 4 (1992).

Research Findings and Conclusion

This thesis developed an evaluation utility model that NGOs can implement to increase use. It addresses a key gap that has existed in the field and has moved the dialogue on evaluation utilization forward by identifying the key factors that influence use and by providing a practical framework that highlights the interrelatedness of these factors. The model presents a fundamental shift in how NGOs must approach program evaluation. It challenges the conventional thinking in the NGO sector with the notion that it is no longer sufficient to focus on use only at the program level. The utility model revealed that influencing factors must extend to include the larger context of organizational behavior and learning. This is a significant contribution to the current understanding and derives strongly from the survey of practitioners. Specifically, the primary research highlighted that evaluation use is a multi-dimensional phenomenon that is interdependent with human, evaluation and organizational factors. Within this context, the utilization process is not a static, linear process but one that is dynamic, open and multi-dimensional, driven by relevance, quality and rigor. The model attempts to capture this environment, focused on the central premise that whether an evaluation is formative or summative, internal or external, scientific, qualitative or participatory, the primary reason for conducting evaluations is to increase the rationality of decision-making. The model challenges NGOs to make evaluation utilization an essential function of their operations and offers practical steps on how organizations can operationalize this. This model adds to the knowledge of evaluation use in NGOs by expanding its focus from being restricted to the program level to include the external realities at the organization level.

Dissertation Organization

The six chapters of the dissertation are set forth as follows. Chapter 1, the Introduction, has briefly described the context of the study. Chapter 2 outlines the methodology used to answer the research questions in this study; it discusses the rationale behind the survey and data collection methods, followed by the research questions to be examined. The detailed review of the literature is presented in Chapter 3, which examines the theories and models around evaluation utilization, organizational learning and NGO evaluation practice. Chapter 4 presents the findings of the survey of NGOs. Chapter 5 presents the utility model and draws together the summary findings of this research; this forms the core of the analysis, explaining how the theories reviewed and the survey results respond to the research questions. The Conclusion, Chapter 6, provides an interpretation of the research findings along with suggestions for future research.


Chapter 2: Methodology

This chapter explains the methodology that was used to study evaluation utilization in NGOs. It begins with a discussion of the research questions that were explored in the study. Second, it describes the research structure: the theories explored and the data collection strategies used. Third, it describes how the data were analyzed. Fourth, it addresses the limitations of the methodology.

Proposition and Research Questions

Research Proposition: In NGOs, successful utilization results when the principles of use are embedded throughout the lifecycle of an evaluation: planning, implementation and follow-up.

Research Questions

What is Evaluation Use? This research began by exploring the concept of evaluation use. What are the theoretical origins of use? How has it evolved? How is it measured? When do we know use occurs? Answering these questions provided the foundation for understanding what the mechanisms are to achieve and maximize use.

What are the factors that influence evaluation use? Every phenomenon has factors that trigger its behavior. While seeking to understand the processes to increase evaluation utilization, this study examined the push and pull factors that influence use. What are these factors? Is there a pattern in how they are manifested? What is the relationship among them?

How are these factors applied within the NGO sector? What are the challenges in promoting use in NGOs? As this study focused on the NGO sector, it was important to understand how the factors of use are currently operationalized. Why do certain factors result in use while others don't? How are NGOs tracking use? What are the barriers to use? What are their attempts to overcome these barriers?

What are the processes and systems that can increase evaluation utilization in NGOs? This is the final answer that this research ascertained. If there are certain factors that help to maximize use, then how can they be triggered to achieve the results? What must an NGO do to build and/or strengthen these triggers? What are the challenges in implementing such processes and systems? How can they be mitigated?


Research Structure

The research was conducted in three stages:

1. Theoretical review - a review of evaluation theory, NGO program evaluation practice and organizational learning literature.

2. Survey of NGOs - to gather descriptive information on the extent and type of evaluation utilization occurring in NGOs, assess the key factors identified by the literature and understand the systems NGOs employ to promote use.

3. Development of utility model - drawing from the data collected, an evaluation utility model for NGOs was developed, along with a list of practical steps that can be implemented to increase utilization.

Below is a representation of how the research questions were covered across the first two stages. As evident from the list below, this research relied to a large extent on the literature review. However, the survey provided an important means of validating the factors that influence use, identifying the processes that trigger their effectiveness and the barriers that inhibit them.


Questions and Data Collection
What is evaluation use? - Literature review
What are the factors that influence evaluation use? - Literature review
How are these factors applied within the NGO sector? - Survey & Literature review
What are the challenges in promoting use in NGOs? - Survey & Literature review
What are the processes and systems that can increase evaluation utilization in NGOs? - Survey & Literature review


Stage 1: Theoretical Review


This phase involved an in-depth examination of the different utility models to identify correlations between evaluation theory and NGO practice and develop a systems understanding of evaluation use. Books and journals contributed to nearly 90% of the theoretical review. The rest was supplemented by online references. Key journals included:

American Journal of Evaluation
Evaluation Practice
Evaluation and Program Planning
Journal of Management Studies
Nonprofit Management and Leadership
Nonprofit and Voluntary Sector Quarterly

First, a study of evaluation theory was undertaken. The University of Minnesota archives (Wilson Library in Humanities and Social Science) provided a valuable microfiche of literature dating back to the 1970s. This formed the basis for further exploration to build a comprehensive bibliography. While the central books that defined evaluation theory were easy to obtain, it was a challenge to track down some key and relevant articles published in conferences and journals that are now discontinued. These were subsequently obtained from the online databases of the Evaluation Center at Western Michigan University17 and the American Journal of Evaluation.18 The theory of evaluation use is presented as a historical review rather than thematically, because the concept of evaluation utilization had been an underlying theme from the early years and only emerged as a distinct sub-branch in the late 1990s. Also, this approach gave a clearer understanding of the challenges and key revisions that helped shape the utilization models as they evolved.

17 "The Evaluation Center," www.wmich.edu/evalctr/.
18 "The American Journal of Evaluation," aje.sagepub.com.

Mining the literature around NGO evaluation practice was a bit more circuitous as there are not many dedicated researchers in this space. This research started out in the context of NGO program management, developing an understanding about program rationale and decision making. In order to understand the challenges of program evaluation use it was important to first understand what drives program decision making and the internal dynamics of organizational management. Questions explored here include: how do NGOs decide on programs? What are the organizational structures that enable effective program management? What are the models of decision-making? The interest was to explore the extent to which NGOs incorporate the concept of utilization into their practice and understand the practical challenges to effective use. To this effect, this study draws on the earlier work done by networks like InterAction and ALNAP. InterAction is the largest coalition of U.S.-based international NGOs focused on the world's poor and most vulnerable people. Collectively, InterAction's members work in every developing country. The U.S. contributions to InterAction members total around $6 billion annually. InterAction's comparative advantage rests on the uniquely field and practitioner-based expertise of its members, who assist in compiling data on the impact of NGO programs as a basis for promoting best practices and for evidence-based public policy formulation. The Active Learning Network for Accountability and Performance in Humanitarian Action (ALNAP) was established in 1997, following the multi-agency evaluation of the Rwanda genocide. It is a collective response by the humanitarian sector, dedicated to improving humanitarian performance through increased learning and accountability. ALNAP is a unique network in that its 60 members include donors, NGOs, the UN and academic institutions. The network's objective is to improve organization performance through learning and accountability. ALNAP's key initiative, the Evaluative Reports Database, was created to facilitate information sharing and lesson learning amongst humanitarian organizations. Evaluative Reports are submitted to the database by member organizations and made available online. Findings from reports in the database are regularly distilled and circulated to a wider audience through ALNAP's publications.

Information from these two networks contributed to this study on two fronts:
(a) to provide a strong list of candidates for the survey - NGOs that are interested in and committed to improvements around evaluation use; and
(b) their research provided a rich background for understanding the challenges NGOs face in planning and implementing evaluation use.


The final review was in the field of Organizational Learning. The focus of this study within the vast literature of OL was to understand what organizations, in this case NGOs, can do in a practical, systems way to increase utilization and learning. While there is a whole branch of study that revolves around building a learning organization, this research was focused on understanding the organizational and individual indicators necessary to drive effective evaluation use, the key constructs of OL and their linkages with evaluation utilization.


Stage 2: Survey

A survey of NGOs was conducted to understand the extent and type of evaluation utilization occurring in NGOs as well as to assess the key factors, identified by the literature, as influencing use. The survey data was collected over a year (2005). The survey questionnaire contained sections pertaining to evaluation use and organizational learning within the framework of NGO practice. Appendix A contains the survey questionnaire.

Data Collection

Characteristics of NGOs Surveyed
NGOs vary in many different ways - in size, type of services provided and geographic location. This survey targeted organizations that met the following two criteria: (1) an international program focus and a presence in the United States; and (2) a strong program evaluation practice.

Table 2.1 depicts the breakdown of the primary programming context of the organizations. As shown below, about one third of the surveyed NGOs were primarily concerned with economic development, followed by 19% with health, disaster response at 15%, environment at 13% and human rights and social development at 10%. At the bottom of the list, with 5%, was education; another 5% of the respondents specified other categories. All of these, however, seem more like activities or strategies that the organizations use to achieve their objectives. They are not programming contexts. For example, an organization could be using advocacy or research to work in the context of human rights and social development. Mapping the "other" responses to the respondents' organizations - civil society (CARE), advocacy (Conservation International), research-based advocacy (Earth Watch Institute), research and policy (Physicians for Human Rights) and campaigning (World Wildlife Fund) - it becomes clear that they could map onto any of the options provided in the response list.

Table 2.1 - Primary programming contexts of organizations participating in the survey

#4: How would you categorize the overall programming of the NGO, which is the context for your responses? (please select only the most appropriate)

Answer Options - Response Percent - Response Count
Disaster Response / Humanitarian Assistance - 15% - 17
Economic Development - 34% - 38
Environment - 13% - 14
Education - 5% - 5
Human Rights and Social Development - 10% - 11
Health - 19% - 21
Other (please specify) - 5% - 5
Total - 100% - 111

Other (please specify): Civil Society; Advocacy; research based advocacy; Research and Policy; Campaigning

Selection of NGOs and respondents
NGOs were chosen for the survey through purposive sampling. In purposive sampling, the sample is selected with a purpose in mind that seeks one or more specific predefined groups. This research targeted US based NGOs with an international program focus that have an active engagement in evaluation improvement. The first step was to verify that the NGOs met the criteria for being in the sample.

(1) First, a master list of all US based NGOs that work in the above issue areas and have an international focus was created from the IRS Exempt Database registry (resulting in 492 organizations - Appendix B).19
(2) To identify organizations within this master list that have an active engagement/interest in evaluation improvement and learning, the list was cross-referenced with member lists from the ALNAP and InterAction networks to create a short list of 163 NGOs (Appendix C).
(3) These 163 NGOs were put in a column in a spreadsheet. Then a second column of random numbers was generated from Excel's random number generator. By sorting using the second column as the sort key, the NGO names were put in a random order (an illustrative sketch of this step follows the list below).
(4) The first 100 NGOs were then contacted via email to request participation in the survey. Of these, the acceptance rate was 27% - 27 organizations.
(5) To increase the participation rate, the next 50 NGOs on the list were contacted. From these, 13 organizations accepted to participate.
(6) This resulted in a final response count of 40 NGOs and 111 respondents.

19 "Internal Revenue Service - Charities and Non-Profits (Extract Date October 4, 2005)," http://www.irs.gov/charities/article/0,,id=96136,00.html.
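To make the cross-referencing and random-ordering steps concrete, here is a minimal illustrative sketch in Python of the same logic. The organization names, list contents and fixed random seed are hypothetical placeholders; the actual work described in this study was done in an Excel spreadsheet rather than in code.

    import random

    # Hypothetical inputs: the IRS-derived master list and the two network member lists.
    master_list = ["ActionAid International", "Africare", "CARE International"]  # 492 names in practice
    alnap_members = {"CARE International", "Africare"}
    interaction_members = {"ActionAid International", "CARE International"}

    # Step 2: keep only master-list NGOs that appear in either network's member list.
    short_list = [ngo for ngo in master_list
                  if ngo in alnap_members or ngo in interaction_members]

    # Step 3: the equivalent of adding a random-number column and sorting by it.
    random.seed(2005)  # fixed seed only so this example is reproducible
    shuffled = sorted(short_list, key=lambda _: random.random())

    # Steps 4-5: contact the first 100 organizations, then the next 50.
    first_wave, second_wave = shuffled[:100], shuffled[100:150]
    print(first_wave, second_wave)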

Within the domain of purposive sampling, a combination of Expert and Snowball methods was used to solicit survey respondents. Sampling for the survey first targeted staff in NGOs who are program experts and have a close knowledge of evaluations. Targeted personnel within the NGOs were program staff (e.g. Program Officers) and program senior management (e.g. Program Director, Vice President). The focus was on those who were directly involved in program evaluation and/or management and who had been working in the NGO program context for over 6 months. The advantage of doing this is to reach those individuals who understand the issue of program evaluation use. The disadvantage is that even these experts can be wrong in their assessments.

In Table 2.2 below there is a breakdown of the respondents' professional level within the organization. Nearly 86% of the respondents were program staff, either as managers or team members, and 11% identified themselves as part of the senior management team. From a decision-making lens, assuming that program managers make decisions about their programs, there is almost 49% representation of decision-makers in the survey (adding the program managers at 38% and senior management at 11%).

Table 2.2 - Role of the survey respondents

#5: Please select one option that relates closely to your current role.

Answer Options - Response Percent - Response Count
Program Manager - 38% - 42
Program Team Member - 48% - 53
Senior Management (Director and above) - 11% - 12
Board Member - 0% - 0
Other (please specify) - 3% - 3
Total - 100% - 111

Other (please specify): Operations team not programs; Advocacy officer; Evaluations manager

Table 2.3 below shows the experience level of respondents with NGO programs. This helps assess the level of understanding they bring about program evaluations, their use and the barriers to use. Over half the respondents have a significant number of years in programming; only 4% identified as having less than a year.

Table 2.3 - Experience level of survey respondents

#3: No. of years experience with NGO programs?

Answer Options - Response Percent - Response Count
Less than 1 year - 4% - 4
Between 1 - 5 years - 31% - 34
Between 5 - 10 years - 15% - 17
Over 10 years - 50% - 56
Total - 100% - 111

Snowball methods were used to expand the participant list within NGOs. Although the survey was addressed to a specific expert at each organization, often within the program team, they were requested to forward the survey to others within the organization involved with evaluation and program decision-making. While the first point of contact was pre-determined, the subsequent respondents were not pre-selected. This approach yielded multiple respondents within several organizations, contributing to a potentially more diverse understanding of evaluation utilization practices in a specific context than if there had been only one respondent for each organization. However, even in this situation there is a possibility that the respondents could have collectively provided an unbalanced depiction of evaluation use within the organization.


Table 2.4 below shows the distribution of the 111 respondents across 40 NGOs. Four NGOs had 4 respondents each; 27 NGOs had 3 each; 5 NGOs had 2 each; and 4 NGOs had one respondent each.

Table 2.4 - Participating organizations along with the number of respondents from each organization

ActionAid International; Advocacy Institute; Africare; American Red Cross; American Refugee Committee; CARE International; Catholic Relief Services; CONCERN Worldwide; Conservation International; Doctors without Borders; Earth Watch Institute; Global Fund for Women; Grassroots International; Habitat for Humanity International; Heifer International; Human Rights Watch; Institute for Sustainable Communities; Interaction; International Rescue Committee; IPAS USA; Jesuit Refugee Service; Mercy Corps; National Committee on American Foreign Policy; Open Society Institute; OXFAM; PACT; Pan American Health Organization; Peace Corps; Physicians for Human Rights; Population Services International; Refugees International; Salvation Army World Service Office; Save the Children; The Lutheran World Federation; Unitarian Universalist Service Committee; Weatherhead Center for International Affairs; Women's Commission for Refugees; World Council of Churches; World Vision; World Wildlife Fund.

Process for soliciting participants from NGOs: An initial email explained the context for the research. Whenever contact information was available, there was a follow-up phone call to clarify questions and to ensure that the participant was directly involved in evaluation or program management. The link to the online survey was then shared. In several cases, the initial contact person declined to participate or referred other staff within the organization who were a better fit for the research.

It took over a year from starting to source participants to when the surveys were all completed. The main reasons cited by participants for their interest in this study are as follows:


o They acknowledge the problem of evaluation under-utilization
o To share their internal systems/approaches
o To learn what systems they can put in place to improve use

Development and Distribution of the Survey
Among the several online survey tools, SurveyMonkey.com was selected primarily for its ease of design and robust functionality. Survey questions can be divided into two broad types: structured and unstructured. Within the structured format there are (1) dichotomous questions, with Yes/No or Agree/Disagree responses, and (2) questions based on a level of measurement/ranking. Respondents were also allowed to comment on most questions to capture options that may have been overlooked. Unstructured questions are open-ended and gather respondent perspectives on specific issues.

To ensure content validity and technical functionality, the survey was pre-tested with four organizations: CONCERN Worldwide, Human Rights Watch, International Rescue Committee and Jesuit Refugee Services. Pre-testers were asked to answer the following six questions:

(1) How long did it take you to complete the survey?
(2) Did you find any questions confusing (in terms of grammar, vocabulary, etc.)? If so, what was confusing?
(3) Did you find any answer choices confusing (in terms of grammar, vocabulary, etc.)? If so, what was confusing?
(4) Are there any significant questions you feel should have been asked in this context that were omitted?
(5) Please comment on the technical functionality of accessing and completing the survey online.
(6) Is there anything else you feel would be helpful for this research?

The pre-test confirmed that respondents could complete the survey within 15-20 minutes. As a result of the pre-test, no new questions were added, but open-ended comment fields were added to some questions to capture options not provided in the choices. Organizations that participated in the pre-test also completed the final survey.


Data Analysis

Quantitative data was imported into Microsoft Excel from SurveyMonkey.com and analyzed using basic descriptive statistics such as frequencies and cross-tabulations, as well as measures of central tendency where appropriate. Qualitative data from open-ended questions were analyzed using an inductive process to identify key themes. Content analysis was used to identify, code, and categorize the primary patterns in the data.20 This is a research method that uses a set of procedures to make valid inferences from text. The rules of the inferential process vary according to the theoretical and substantive interests of the investigator. It is often used to code open-ended questions in surveys.

20 Kimberly A. Neuendorf, "The Content Analysis Guidebook Online," <http://academic.csuohio.edu/kneuendorf/content/>.
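To make the tabulation steps concrete, the sketch below shows one way frequencies, cross-tabulations, and a rough first pass at theme counting could be reproduced from a SurveyMonkey.com export. It is a minimal illustration only, not the procedure actually used in this study: the file name and the column names (role, years_experience, open_comment) are hypothetical placeholders, and the keyword matching merely stands in for the inductive, analyst-driven coding described above.

import pandas as pd

# Hypothetical CSV export of the survey responses; column names are illustrative only.
responses = pd.read_csv("survey_responses.csv")

# Frequencies (counts and percentages) for respondent role, cf. Table 2.2.
role_counts = responses["role"].value_counts()
role_percent = responses["role"].value_counts(normalize=True).mul(100).round(1)

# Cross-tabulation of role against years of NGO program experience, cf. Table 2.3.
role_by_experience = pd.crosstab(responses["role"], responses["years_experience"])

# Crude keyword tally over open-ended comments (a stand-in for manual inductive coding).
themes = {"learning": "learn", "barriers": "barrier", "donor pressure": "donor"}
theme_counts = {
    name: int(responses["open_comment"].str.contains(kw, case=False, na=False).sum())
    for name, kw in themes.items()
}

print(role_counts, role_percent, role_by_experience, theme_counts, sep="\n\n")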


Limitations to the Survey
The limitations can be grouped into three categories: technical, human and logistical.
Technical: Since the survey used many definitions, there was a threat that they were inadequate or inaccurate representations of meaning. Efforts were made to minimize this threat by pre-testing the survey. Also, in some cases respondents were allowed to add to the choices, providing increased flexibility. The second threat was that the sample set was small and targeted. As a result, research findings may not generalize widely within the NGO sector. Nevertheless, the research provides a unique opportunity to test and refine the data collection instruments for future use in larger studies that utilize random sample selection.
Human: First, because respondents were selected based on their proximity to program evaluations, there was a possibility that they would wish to appear to use evaluation findings in their decision-making. To minimize this, the survey ensured respondent confidentiality, and several respondents completed the survey anonymously. Second, respondent bias and inaccurate representation of utilization experiences is a potential limitation. This was minimized to a certain extent by ensuring there were multiple respondents from most organizations to provide, as much as possible, a balanced interpretation.
Logistical: The main limitation here was the potentially low response rate. To increase the likelihood of responses, initial contacts within the organizations were requested to suggest others who could participate, and the survey was provided online for ease of completion.


Chapter 3: Literature review

Evaluation Utilization

Definitions
What is an evaluation? A simple definition of the term evaluation is the systematic determination of the quality or value of something. Evaluation may be done for the purpose of improvement, to help make decisions about the best course of action, and/or to learn about the reasons for successes and failures. Even though the context of an evaluation can vary dramatically, it has a common methodology which includes:21 (1) systematic analysis to determine what criteria distinguish high quality/value from low quality/value; (2) further research to ascertain what levels of performance should constitute excellent vs. mediocre vs. poor performance on those criteria; (3) measurement of performance; and (4) combining all of the above information to make judgments about the validity of the information and of the inferences we derive from it.

21 Carol Weiss, Evaluation, 2nd ed. (Saddle River, NJ: Prentice Hall, 1997).


Approaches to Evaluation
Over the years, evaluators have borrowed from different fields of study to shape the approaches and strategies for conducting evaluations. The three major approaches are presented below:
Scientific-experimental approach: Derives its methods from the pure and the social sciences. It focuses on the need for objectivity in methods and on the reliability and validity of the information and data that are generated. The most prominent examples of the scientific-experimental models of evaluation are the various types of experimental and quasi-experimental approaches to data gathering.22
Qualitative/anthropological approach: Emphasizes the importance of observation and the value of subjective human interpretation in the evaluation process. Included in this category are the various approaches known in evaluation as naturalistic inquiry, a paradigm that allows for the study of phenomena within their natural setting.23
Participant-oriented approach: Emphasizes the importance of the participants in the process, especially the beneficiaries or users of the object of evaluation. User- and utilization-focused, client-centered, and stakeholder-based approaches are examples of participant-oriented models of evaluation.24 A basic tenet of utilization-focused evaluation is that one must prioritize intended users, uses, and evaluation purposes.

22 Donald T. Campbell and Julian C. Stanley, Experimental and Quasi-Experimental Designs for Research (Chicago: Rand McNally, 1963).
23 Y. Lincoln and E. Guba, Naturalistic Inquiry (Thousand Oaks, CA: Sage Publications, 1985).
24 M.Q. Patton, Utilization-Focused Evaluation, 2nd ed. (Beverly Hills, CA: Sage, 1986).


In reality, most evaluations will blend these three approaches in various ratios to achieve results as there is no inherent incompatibility between these broad strategies - each of them brings something valuable to the process.

Types of Evaluation
There are two broad categories in the types of evaluations: formative and summative.25 Formative evaluations strengthen or improve the object being evaluated and are undertaken when the object is active or forming -- they help by examining the delivery of the program or product, the quality of its implementation, and the assessment of the organizational context, personnel, procedures, inputs, and so on.

Formative evaluations are useful for various purposes:

 They may help catch problems early on, while they can still be corrected.
 They are an evaluation of process, so they may be useful in understanding why different outcomes emerge and in improving program management.
 They provide an opportunity to collect baseline data for future summative (or "impact") evaluations.
 They help identify appropriate outcomes for summative evaluations.

25 W.R. Shadish, T.D. Cook, and L.C. Leviton, Foundations of Program Evaluation: Theories of Practice (Newbury Park, CA: Sage Publications, Inc., 1991).


Summative evaluations, in contrast, examine the effects or outcomes -- they summarize the program by describing what happens subsequent to delivery of the program or product; assessing whether the object of the evaluation can be said to have caused the outcome; determining the overall impact of the causal factor beyond only the immediate target outcomes; and estimating the relative costs associated with the object. Some advantages of summative evaluations include:

 They can provide evidence for a cause-and-effect relationship.
 They assess long-term effects and provide data on change across time.
 They can be effective for measuring impact.
 They measure cost-effectiveness to address questions of efficiency.
 They allow for a secondary analysis of existing data to address new questions or uses.
 They offer a meta-evaluation that integrates the outcomes from multiple studies to arrive at a summary judgment on an evaluation question.

Evaluations can be internal, undertaken by program or organizational staff. However, there are occasions when it is useful and important to conduct an external evaluation, such as when you want to learn about the longer term impact of a program in relation to the broader issues in the field. Some of the advantages and disadvantages of conducting internal and external evaluations are outlined below.


Table 3.1 - Advantages/Disadvantages of Internal and External Evaluations

Internal evaluator
  Advantages: Familiar with the program and needs less time to learn about the organization and its interests. Known to staff and therefore perceived as less of a threat.
  Disadvantages: May know the program too well and find it difficult to be objective; may not have any specific evaluation training or experience. Could hold a position of power and authority, and personal gain may influence his or her findings and/or recommendations.

External evaluator
  Advantages: Not personally involved in the program and can therefore be more objective when collecting and analyzing data and presenting the results; the outsider is not part of the power structure. Can take a fresh look at the program or organization.
  Disadvantages: May not fully understand the goals and objectives of the program or its context. Can be expensive, time consuming, and disruptive of ongoing progress; may cause anxiety among program staff when they are unsure of the motives of the evaluation/evaluator.


Evaluation Use
For the purpose of this research the definition of evaluation use is derived from Weiss's 1966 paper; though four decades old, its relevance still rings true.

The basic rationale for evaluation is that it provides information for action. Its primary justification is that it contributes to the rationalization of decision making. Although it can serve such other functions as knowledge-building and theory-testing, unless it gains serious hearing when program decisions are made, it fails in its major purpose.

Types of Use
In a simplistic view of utilization we can say that anytime anyone uses anything from an evaluation for any purpose, that is utilization. With this lens one can argue that utilization occurs in almost every case. On the other end is the restrictive view that says utilization occurs only when an intended user makes a specific decision immediately following the evaluation report and based solely on the findings of that report. The spectrum of evaluation use can be grouped into the following categories26:

(1) Instrumental brings about changes in practice and procedures as a direct result of the evaluation findings. Change occurs through specific action.

26 Marvin Alkin, Richard Daillak, and Peter White, Using Evaluations: Does Evaluation Make a Difference? (Beverly Hills: Sage Publications, 1979).


Evidence for this type of utilization involves decisions and actions that arise from the evaluation, including the implementation of recommendations.

(2) Conceptual is more indirect and relates to an increased understanding of the topic. This type of use occurs first in the thoughts and feelings of stakeholders. Over time achieving conceptual use can lead to more actionable instrumental use.

(3) Symbolic is when an evaluation is conducted merely to demonstrate compliance to an external factor or to justify a pre-existing position of an agency. For example, an evaluation is conducted with no intention of utilizing the findings but merely to justify program decisions already made.

(4) Strategic - is to persuade others or to use evaluation findings to gain particular outcomes.27 It is often seen when findings influence decisions beyond the scope of the evaluation, for example to change the course of programming or to inform the larger strategic vision of the organization.

(5) Process - ways in which being engaged in the processes of evaluation can be useful quite apart from the findings that may emerge from these processes. It could lead to changes in beliefs and behaviors of participants and ultimately lead to organizational change.

27 W.R. Shadish, T.D. Cook, and L.C. Leviton, Foundations of Program Evaluation: Theories of Practice (Newbury Park, CA: Sage Publications, Inc., 1991).


1960s through 1970s: The Foundation Years

The prominence of evaluation research in the late 1960s and early 1970s can be attributed to studies at that time which documented the low degree of utilization of social research data in policy making and program improvement in governmental operations. The mainstream view was that evaluations seldom influence program decision-making, and there was little hope that evaluation would ever have any real impact on programs. Therefore the initial debates were on whether evaluations did in fact make a difference.

Carol Weiss's 1966 paper "Utilization of Evaluation: Toward Comparative Study" signaled the beginning of the organized study of evaluation utilization.28 In it Weiss laid out what was later widely accepted as the primary argument for doing evaluations: to increase the rationality of program decision-making. Measuring by this standard, Weiss not only found some instances of effective utilization but also observed a high degree of non-utilization. In presenting the factors that might account for this non-utilization she focused on two main categories:

organizational systems and evaluation practice

By organizational systems she refers to the informal goals and social structures influencing decision-making that are often overlooked by classic evaluation models geared towards formal goals of the organization. Weiss also strongly criticized the
28 Alkin, Daillak, and White, Using Evaluations: Does Evaluation Make a Difference?


evaluation practice at that time of "inadequate academic preparation...low status in academic circles...practitioner resistance...inadequate time to follow-up...inadequacies of money and staffing," etc.29 She established the need for a systematic study of conditions and factors associated with the utilization of evaluation results. Her initial groupings included not only organizational and political factors but also more practical, technical and operational factors. Weiss's paper generated much excitement in evaluation circles, and it was the impetus for more rigorous research on further categorization of potential factors -- drawing from other fields of study like education theory, decision theory, organizational theory and communication theory30 -- and how they aid or impede utilization.

The second stage of advancement in the study of evaluation utilization came in the mid-70s. Researchers Marvin Alkin and Michael Quinn Patton, working separately, came up with a more comprehensive listing of potential utilization factors. But the shortcoming of these lists was that they came from a theoretical base rather than from any empirical evidence.31 It was not until the late 70s that the factors drawn out of program research were published. Through large-scale surveys, smaller interview studies, case studies, observations and the collection of anecdotes, researchers further discovered how prospective users made use of research and evaluation findings. The mainstream view on evaluation held since the 1960s now began

29 Carol H. Weiss, ed., Utilization of Evaluation: Toward Comparative Study, Evaluating Action Programs: Readings in Social Action and Education (Boston: Allyn and Bacon, 1972).
30 H. R. Davis and S. E. Salasin, eds., The Utilization of Evaluation, vol. 1, Handbook of Evaluation Research (Beverly Hills: Sage Publications, 1975).
31 Scarvia B. Anderson and Samuel Ball, The Profession and Practice of Program Evaluation (San Francisco: Jossey-Bass, 1978).


shifting towards a different conclusion that evaluations do influence programs in important and useful ways.32

1980s through 1990s: The rise of context in evaluation theory

In the early 1980s there was general agreement that evaluation use was a multi-dimensional phenomenon best described by the interaction of several dimensions, namely the instrumental (decision support and problem solving function), conceptual (educative function), and symbolic (political function) dimensions.33 As researchers continued to produce indicators and predictors of use along these dimensions, Cousins and Leithwood's (1986) meta-analytic work went a step further to assess the relative weight of factors in their ability to predict use. Their findings indicated that the quality, sophistication and intensity of evaluation methods were among the most potent in influencing the use of findings.34 This report, along with Greene's35 observations, set the direction for future research, arguing that it is not enough simply to describe different types of use and to catalogue the contributing factors; the real need was in specifying the relative weight of influential factors. However, researchers soon emerged with findings that contradicted each other, failing to establish a clear hierarchy of influential factors.

32 Carol Weiss, Social Science Research and Decision-Making (New York: Columbia University Press, 1980).
33 Lyn M. Shulha and J. Bradley Cousins, "Evaluation Use: Theory, Research and Practice since 1986," American Journal of Evaluation 18, no. 1 (1997).
34 Ibid.
35 Jennifer C. Greene, "Stakeholder Participation and Utilization in Program Evaluation," Evaluation Review 12, no. 2 (1988).


While Cousins and Leithwood emphasized evaluation methods, Levin,36 applying the same framework, concluded that contextual factors were pivotal in explaining patterns of use. Another perspective, proposed by the works of Greene (1990),37 King (1988)38 and Weiss, treats political activity as inextricably linked to effective use. They argue that decision makers do not act alone and face an onslaught of decision-relevant information from competing interest groups and changes in program circumstances. This finding was further strengthened by Mowbray (1992),39 who, using political frames of reference, described how the loss or acquisition of resources during an evaluation significantly changed the effects of the evaluation. At the same time, another group of researchers linked organizational structure and process to effective use. Research by Mathison (1994)40 and Owen and Lambert (1995)41 also found that the levels of bureaucracy within an organization, the lines of communication within and across these levels, and the degree of decision-making autonomy within program units contributed to increased utility of evaluation findings.

Patton (1997) added an extra dimension to the factors influencing use by examining the interaction between the evaluator and the program context. In arguing that evaluations must serve the intended use of intended users, Patton positions the
36 B. Levin, "The Uses of Research: A Case Study in Research and Policy," The Canadian Journal of Program Evaluation 2, no. 1 (1987).
37 J. C. Greene, "Technical Quality Vs. User Responsiveness in Evaluation Practice," Evaluation and Program Planning 13 (1990).
38 J.A. King, "Research on Evaluation and Its Implications for Evaluation Research and Practice," Studies in Educational Evaluation 14 (1988).
39 C.T. Mowbray, "The Role of Evaluation in Restructuring of the Public Mental Health System," Evaluation and Program Planning 15 (1992).
40 S. Mathison, "Rethinking the Evaluator Role: Partnerships between Organizations and Evaluators," Evaluation and Program Planning 17, no. 3 (1994).
41 J.M. Owen and F.C. Lambert, "Roles for Evaluation in Learning Organizations," Evaluation 1, no. 2 (1995).


evaluator in the thick of the program context.42 Drawing from the work of organizational learning scholars Chris Argyris and Donald Schön, he constructed the theory of a user-focused approach to evaluation use.43 In this theory the evaluator's task is to facilitate intended users, including program personnel, in articulating their operating objectives. Patton argues that involving potential users in constructing and planning an evaluation creates more ownership of the results produced and thereby increases the likelihood of use. Based on his case studies, in 1997 Patton presented a basic framework of the utilization-focused evaluation process.

The flow of processes within this framework is as follows: (1) identify intended users of the evaluation; (2) identify intended uses; (3) reach agreement on the methods/measures and design of the evaluation; (4) involve intended users actively and directly in interpreting findings, making judgments based on the data and generating recommendations; and finally (5) disseminate findings to intended users.

While the framework provides ample room for flexibility within different contexts it does have a major point of vulnerability -- the turnover of primary intended users.44 The framework depends heavily on the active engagement of

42 Shulha and Cousins, "Evaluation Use: Theory, Research and Practice since 1986."
43 Michael Quinn Patton, Utilization Focused Evaluations (Beverly Hills, CA: Sage Publications, 1997).
44 Ibid.


intended users; losing users along the way to job transitions, reorganizations and reassignments can undermine eventual use. Patton acknowledges that replacement users who join the process late seldom come with the same agenda as those who were present at the beginning. He offers two solutions to this problem. The first is maintaining a large enough pool of intended users so that the departure of a few will not impact utilization. The second option, in the event of a large-scale turnover of intended users, is to renegotiate the design and use commitments with the new set of users. Even though this will delay the evaluation process, it will pay off in eventual use. Patton's work set in motion some of the more innovative research on evaluation use: how improving evaluation process use can lead to organizational learning.

Several studies (Ayers, 1987;45 Patton, 1994;46 Preskill, 199447) have shown linkages between intended users' participation and increased personal learning, which then led to improved program practice. With the notion of evaluation for organizational learning attracting considerable attention, researchers started looking beyond the effects of evaluation use on specific program practice. Several theorists made strong cases for understanding evaluation impact in an organizational context.48 The findings by Preskill (1994) showed signs of a relationship between evaluation activities and the development of organizational capacity. While evaluations are undertaken along the lines of an organization's formal goals, the integration of findings into practice has to

45 Toby Diane Ayers, "Stakeholders as Partners in Evaluation: A Stakeholder-Collaborative Approach," Evaluation and Program Planning, no. 10 (1987).
46 M.Q. Patton, "Developmental Evaluation," Evaluation Practice 15, no. 3 (1994).
47 H. Preskill, "Evaluation's Role in Enhancing Organizational Learning," Evaluation and Program Planning 17, no. 3 (1994).
48 Shulha and Cousins, "Evaluation Use: Theory, Research and Practice since 1986."


fit into the numerous informal goals and structures within any organization, some of which might have their own cultures and imperatives. Preskill cautions that an essential element in successfully linking evaluation and organizational learning is the willingness of the organization's management to support the collaborative process and accept evaluative information.49

49 Ibid.


The 21st Century: Stretching the boundaries beyond use

Carol Weiss's 1998 article50 "Have We Learned Anything New About the Use of Evaluation?" sets the tone of the challenges currently facing evaluation use theorists. She states that while there have been many achievements in the last three decades, most of the learning has come from applying new constructs and perspectives rather than from empirical research on evaluation use. She further argues that with the growing realization of how complicated the phenomenon of use is, and how different situations and contexts can be from each other, it is conceptually and theoretically difficult to reduce the elements of use to a set of quantitative factors. Mark and Henry (2004) further corroborate Weiss's observations by stating that the study of evaluation use is an "overgrown thicket" because very different positions have been advocated as to its scope. As a result of a myriad of theories and conflicting literature, they say that even after three decades of research evaluators may not have a common understanding of what it means for an evaluation to be used, or of what an evaluator means when he/she refers to use.

As a response to such an overgrowth within the taxonomies of use, Kirkhart51 developed the integrated theory of influence. She broadens the question from how the results of an evaluation study are used to how, and to what extent, an evaluation shapes, affects, supports and changes persons and systems. To answer this she proposes a framework that shifts the focus from use to influence, as the term use is limited to
Carol Weiss, "Have We Learned Anything New About the Use of Evaluation?," American Journal of Evaluation 19, no. 1 (1998). 51 K. E. Kirkhart, "Reconceptualizing Evaluation Use: An Integrated Theory of Influence," New Directions for Evaluation, no. 88 (2000).
50


results-based measures and does not include unintended effects of evaluation and the gradual emergence of impact over time. Evaluative influence is defined by Kirkhart as the capacity or power of persons or things to produce effects on others by intangible or indirect means.

In Kirkhart's model (Figure 3.1) the source of influence can arise from either the evaluation process or the evaluation results. She does acknowledge that some of the influence that comes from the evaluation process will impact on the results of the study, and thus the two sources of influence are interrelated. The second dimension is the intention of the influence, defined as the extent to which evaluation influence is purposefully directed, consciously recognized and anticipated. The final dimension is the timing of the influence: immediate (during the study), end of cycle, and long term. One of the key benefits Kirkhart proposes from this model is the ability to distinguish between use and misuse; by tracking influences around an evaluation study and the evolving patterns of influence over time, she contends that one can map the outcomes of use as beneficial or not.

Figure 3.1 - Kirkhart's integrated theory of influence (three dimensions: Source - process, results; Intention - intended, unintended; Time - immediate, end of cycle, long term)


Henry and Mark (2003)52 and Mark and Henry (2004)53 further advanced the discussion of evaluation use, building on Kirkhart's theory of influence. They propose a set of theoretical categories - mediators and pathways - through which evaluation can exercise influence. Drawing from the social science literature, they developed a theory of change to apply to the consequences of evaluation at the individual, interpersonal and collective levels.

In Table 3.2 below Mark and Henry present the general influence outcomes as the fundamental architecture of change: even though they may not yield any change by themselves, they are likely to indirectly set into motion some change in the cognitive/affective, motivational or behavioral outcomes. For example, consider the influence of elaboration - an individual simply spending time thinking about an evaluation finding does not create any measurable use unless those thoughts lead to attitude valence (positive or negative). Even though elaboration does not directly deliver use, it is an important immediate consequence of evaluation, without which changes in behavior might not occur. Elaboration can be measured by assessing how much time or effort an individual spends thinking in response to a message. An evaluation report, a conversation about an evaluation, or a newspaper article about an evaluation could trigger such cognitive processing. For example, a recently publicized evaluation about the positive effects of primary feeding centers may cause a reader at

52 Melvin Mark and Gary Henry, "The Mechanisms and Outcomes of Evaluation Influence," Evaluation 10, no. 1 (2004).
53 Gary Henry and Melvin Mark, "Beyond Use: Understanding Evaluation's Influence on Attitudes and Actions," American Journal of Evaluation 24, no. 3 (2003).


another location to think more about her views on nutrition in refugee camps. Such a change may be exactly what some evaluators consider enlightenment. Of course, an evaluator would be interested not only in whether someone engaged in elaboration, but also in what changes, if any, this led to in the person's attitudes, motivations and actions. Still, elaboration itself is an important immediate consequence of evaluation, which might in turn produce a change in the individual's opinion about nutrition programs and, perhaps, subsequent change in behavior. General influence processes can occur at all three levels - the individual, the interpersonal, and the collective - as indicated in Table 3.2. Consideration of these influence processes is important for understanding how evaluation can influence attitudes and actions. Cognitive and affective outcomes refer to shifts in thoughts and feelings, such as a step towards action as in agenda setting. Mark and Henry argue that although motivational outcomes, which refer to human responses to perceived rewards and punishments, have received less attention in the literature, they might be more important as an intermediate tool to influence practitioner behavior towards increasing evaluation use than as a long-term outcome. Behavioral outcomes refer to measurable changes in actions that can be both short-term and long-term. These would include changes in a teacher's instructional practices at the individual level or a government policy change at the collective level. Thus, behavioral processes often comprise the long-term outcomes of interest in a chain of influence processes.

Mark and Henry further attempt to tie the traditional forms of use to the above outcomes. Instances of instrumental use (where change occurs in action) fall


within the behavioral row of Table 3.2. Conceptual use (where change occurs in thoughts and feelings) corresponds to the cognitive and affective processes row. Symbolic use (where the evaluation is used to justify a pre-existing position) ties into a limited set: justification at the interpersonal level and ritualism at the collective level. In contrast, process use does not correspond to specific rows of Table 3.2, as changes occur as a result of the process of evaluation rather than as a result of an evaluation finding.

Table 3.2 - A Model of Outcomes of Evaluation Influence54

General influence
  At an individual level: Elaboration; Heuristics; Priming; Skills acquisition
  At an interpersonal level: Justification; Persuasion; Change agent; Minority-opinion influence
  At the collective level: Standard setting; Policy consideration; Ritualism; Legislative hearings; Coalition formation; Drafting legislation

Cognitive and affective
  At an individual level: Salience; Opinion/attitude valence
  At an interpersonal level: Local descriptive norms
  At the collective level: Agenda setting; Policy-oriented learning

Motivational
  At an individual level: Personal goals and aspirations
  At an interpersonal level: Injunctive norms; Social reward; Exchange
  At the collective level: Structural incentives; Market forces

Behavioral
  At an individual level: New skill performance; Individual change in practice
  At an interpersonal level: Collaborative change in practice
  At the collective level: Program continuation, cessation or change; Policy change; Diffusion

54 Mark and Henry, "The Mechanisms and Outcomes of Evaluation Influence."

What follows is a brief review of an area that has emerged as an important focus within evaluation theory: misuse. It is important to differentiate misuse from non-use. Non-use occurs when there is a rational or unintended reason for ignoring an evaluation, such as the poor quality of the report or a change in strategic direction. Misuse, on the other hand, can occur if an evaluation is commissioned with no intention of acting upon it or when there are deliberate attempts to subvert the


process and/or the findings. Among the first notable researchers on misuse, Alkin and Coyle55 described several distinct variations:

(1) Justified non-use: When the user is aware that the evaluation was technically flawed or erroneous, he or she would be justified in not incorporating the information into decision-making.
(2) Unintentional non-use: When the evaluation was of sufficient technical quality but potential users are unaware of its existence, or inadvertently fail to process the information.
(3) Abuse: When the information is known to be of superior quality but is suppressed or distorted by a potential user for whatever reason (political or otherwise covert).

Stevens and Dial56 (1994) outlined a list of practices that constitute misuse, such as changing evaluation conclusions, selectively reporting results, ascribing findings to a study that differ from actual results, oversimplifying results and failing to qualify results. However, as noted in Alkin57 (1990), as with evaluation use, scholars continue to struggle with the complexity of misuse and the challenge of establishing a standardized framework in which to gauge misuse. Below is Alkin's attempt to classify the causalities that lead to misuse.

55 Marvin Alkin and Karin Coyle, "Thoughts on Evaluation Utilization, Misutilization and Non-Utilization," Studies in Educational Evaluation 14, no. 3 (1988).
56 C. L. Stevens and M. Dial, eds., What Constitutes Misuse?, New Directions for Program Evaluation: Guiding Principles for Evaluators (San Francisco: Jossey-Bass, 1994).
57 Marvin C. Alkin, Debates on Evaluation (Newbury Park, California: Sage Publications, 1990).


Figure 3.2 Evaluation Use Relationships

Irrespective of how misuse or non-use is categorized, the fact remains that precious resources - effort, time and money - are wasted and opportunity costs incurred when they occur. In reality, use, non-use and misuse can overlap in one evaluation, strongly influenced by the interests/motives of the users and the organizational context. It is important to note that this research does not presuppose that all evaluation recommendations are the best and therefore should be implemented. Program evaluations share the complexity of the work they are evaluating. They are at best a set of informed judgments made in specific contexts. As a result, the


recommendations of even the best evaluation can be disputed or rejected on perfectly rational grounds resulting in non-use.

The emergence of several evaluation utilization frameworks over the last decade, based on collective experience and findings from other fields, has strengthened the knowledge about the processes underlying evaluation utilization. Weiss (1998) summarized the current state of evaluation use research thus: we may not have solved the problem, but we are thinking about it in more interesting ways.


Process Models of Evaluation Use

Theoretical process models, espoused by various evaluation use scholars, attempt to integrate the factors that affect use into systems, showing the interrelationship among factors and their environment. What follows is a list of models derived from the empirical research and from the theoretical literature.58

Implicit evaluation utilization process-models
These are models where individual factor influences are implied but not directly depicted in the construct. The first theorist with an implicit process model is Campbell, who in the 1960s contended that the major responsibility for the use of evaluations lies in the political process, not with the evaluator.59 He views the evaluator as a scientist who conducts the evaluation using the best methods possible, but does not directly promote the use of findings. His assumption was, similar to other early theorists, that evaluations will be used when they are well done.
Figure 3.3 Campbell's implicit process-model

Program evaluation reports of past programs -> Consideration by policy-makers along with other information -> Instrumental use

58 Burke R. Johnson, "Toward a Theoretical Model of Evaluation Utilization," Evaluation and Program Planning 21 (1998).
59 Campbell and Stanley, Experimental and Quasi-Experimental Designs for Research.


Scriven's model takes a summative approach, in which the evaluator examines the comparative strengths and weaknesses of a program and makes a final judgment of worth: is the program good or bad? Program decision-makers are viewed as similar to consumers of other products, in that they make rational choices based on the final judgment.60

Figure 3.4 Scriven's summative model
Final summative evaluation report -> Marketplace of ideas and information -> Use by people interested in the program (all within the organizational environment)

Weiss's model of evaluation use focuses on the individual level.61 She contends that decisions are the result of three major influences: (1) information, (2) ideology and (3) interests. The influence of these three factors is tempered by the organizational environment in which the individual resides. Furthermore, decisions are guided by two questions: does it conform to prior knowledge (truth tests)? And are the recommendations feasible and action oriented (utility tests)?

60 M. S. Scriven, ed., Evaluation Ideologies, Evaluation Models: Viewpoints on Educational and Human Service Evaluation (Boston: Kluwer-Nijhoff, 1983).
61 C. H. Weiss, ed., Ideology, Interest, and Information: The Basis of Policy Decisions, Ethics, the Social Sciences, and Policy Analysis (New York: Plenum, 1993).


Figure 3.5 Weiss's implicit decision model
Interests, ideology and information, within the organizational environment, feed truth tests and utility tests that lead to the decision to use.

Wholey based his model around instrumental use, stating that evaluation should directly serve the needs of management and provide immediate, tangible use. He argues that if the potential for use of an evaluation does not exist (which he would determine from an evaluation assessment) then the evaluation should not be done. Taking into account the resource limitations of programs, Wholey recommends a process where evaluations are prioritized and designed to meet program budgets.

Figure 3.6 Wholey's resource-dependent model
Assessment of evaluation needs -> Evaluation implementation -> Change in program -> Continuous instrumental use

Cronbach talked about the need to understand in detail the processes going on in a program in order to use its findings effectively.62 He suggests that there are often multiple interactions among factors that can be captured only if the process is examined more closely. Cronbach also suggests that, when examining the process, if changes are
62 L. J. Cronbach, Designing Evaluations of Educational and Social Programs (San Francisco: Jossey-Bass, 1982).


required, they need to be communicated to the stakeholders during the evaluation rather than waiting for a final report. So in this model the evaluator is called upon to carry out an educational role.
Figure 3.7 Cronbach's process model

Analysis of background theoretical literature -> Program development -> Continuous feedback and modification of program and evaluation questions -> Long-term conceptual use

The final model is Rossi's. He suggests that to increase use, evaluators should tailor evaluation activities to local needs.63 How this is done depends on the stage and the kind of program that is being evaluated. This process of fitting evaluations to programs can be viewed as an approach to increasing evaluation use.

Figure 3.8 Rossi's process model
Review literature on similar programs -> Work with program managers to develop model -> Collect data -> Compare model with reality -> Modify program (instrumental and conceptual use)

Explicit evaluation utilization process-models

Explicit process-models are those that are constructed by researchers and directly tested on empirical data. A frequently cited explicit process-model of
63 Johnson, "Toward a Theoretical Model of Evaluation Utilization."


evaluation utilization was developed by Greene64. She suggested that stakeholder participation in evaluation planning and implementation is an effective way to promote use. Based on her findings, Greene categorized stakeholders into three groups: (1) very involved, (2) somewhat involved and (3) marginally involved. According to this participatory approach, stakeholders must be involved in the formulation and interpretation phases of the evaluation.

Figure 3.9 Greene's participatory evaluation process
Elements of the process include: diversity of stakeholder participants; a substantive decision-making role for stakeholders; iterative, ongoing communication and dialogue with stakeholders; active discussion of key program issues amidst diverse perspectives; learning more about the program and agency; learning more about evaluation; affective individual learning of worth and value; voice for the less powerful and interest and attention from the most powerful; greater understanding of results; heightened perceptions of the results as valid, credible and persuasive; greater acceptance/ownership of the results; and a greater sense of obligation to follow through on the results.

64 Greene, "Stakeholder Participation and Utilization in Program Evaluation."


Cousins and Leithwood developed an evaluation utilization model in 1986 and further expanded it in 1993 to the knowledge utilization model65. This model lists seventeen key factors (shown below) that affect use. All three of these sets of factors are shown to directly affect utilization. Additionally, the first two sets are shown to affect the third set of interactive processes.

Figure 3.10 Cousins and Leithwood utilization model
Characteristics of the source of information: Sophistication; Quality; Credibility; Relevance; Communication quality; Content; Timeliness
Characteristics of the setting: Information needs; Focus for improvement; Political climate; Competing information; User commitment; User characteristics
Interactive processes: Involvement; Social processing; Ongoing contact; Engagement; Diffusion; Information processing
Knowledge utilization: Learning; Decision; Improvement

65 Johnson, "Toward a Theoretical Model of Evaluation Utilization."


Alkin, one of the earliest researchers in the evaluation utilization literature, developed an evaluation-for-use model66. In this he includes a list of factors grouped into three categories: human (evaluator and user characteristics), context (fiscal constraints, organizational features, project characteristics) and evaluation (procedures, reporting) factors. Alkin organizes what he sees as the most important of these factors in the concept shown below.

Figure 3.11 Alkin's factor model
Setting the stage -> Identifying/organizing the participants -> Operationalizing the interactive process -> Adding the finishing touches

Patton is the founder of utilization-focused evaluation. In this approach, an evaluator is supposed to consider potential use67 at every stage of the evaluation, working closely with the primary intended users. Patton identifies organizational decision makers as the primary users, and information that is helpful in decision making is factored into the evaluation design.

66 M. C. Alkin, A Guide for Evaluation Decision Makers (Newbury Park, CA: Sage, 1985).
67 Patton, Utilization Focused Evaluations.


Figure 3.12 Patton's utilization-focused evaluation framework
Identify primary intended users and stakeholders -> Focus the evaluation on stakeholders' questions, issues and intended uses -> Collect data -> Involve users in the interpretation of findings -> Disseminate findings for indirect utilization

Drawing from the various models, Johnson (1998) concludes that evaluation utilization is a continual and diffuse process that is interdependent with local contextual, organizational and political dimensions. Participation by stakeholders is essential, and continual (multi-way) dissemination, communication and feedback of information and results to evaluators and users (during and after a program) help increase use by increasing evaluation relevance, program modification and stakeholder ownership of results. Different models refer to the nature and role of the organization, as an entity, in facilitating use. They focus on how people operate in a dynamic learning system, how they come to create and understand new ideas, how they adapt to constantly changing situations, and how new procedures and strategies are incorporated into an organization's culture. On reviewing the above literature, it seems clear that evaluation use is a continual process that evolves and changes over time, with each iteration adding new factors to the spectrum of those influencing use. Despite attempts to build a simplified framework for evaluation use, it remains clear that the utilization process is not static and linear but dynamic, open and multi-dimensional.


Program Evaluation Systems in NGOs
Definitions
What is an NGO program?

Typically, NGOs identify several overall goals which must be reached to accomplish their mission. Each of these goals often becomes a program. Nonprofit programs can be viewed as processes or methods to provide certain services to their constituents.

What is program evaluation?
Program evaluation entails the use of scientific methods to measure the implementation and outcomes of programs for decision-making purposes.68

For the purpose of this research, program evaluation is defined as the systematic study to assess the planning, implementation, and/or results of a program with the aim of improving future work. A program evaluation can be carried out for a variety of reasons, such as needs assessment, accreditation, cost/benefit analysis, effectiveness, and efficiency. It can be formative or summative; in practice, however, it is normally carried out after program completion, usually by external evaluators.

Program evaluation is sometimes used interchangeably, mistakenly, with other measures like monitoring and impact assessment. Though all of these are used to
68 Rutman, Evaluation Research Methods: A Basic Guide.


observe a program's performance, they are distinct from each other. While monitoring explains what is happening in a program, it is evaluation that attempts to explain why these things are happening and what lessons can be drawn from them. Impact assessment, on the other hand, tries to assess what has happened as a result of the program and what may have happened without it.

Monitoring is the systematic collection and analysis of information as a project progresses. It is aimed at improving the efficiency and effectiveness of a project. It helps to keep the work on track, and can let management know when things are going wrong to allow for course corrections. It also enables you to determine whether the resources are being used efficiently and assess the capacity to complete the project according to plan.

Evaluation is the comparison of actual project outcomes against the agreed plans. It looks at what you set out to do, at what you have accomplished, and how you accomplished it. It can be formative (taking place during the life of a project) or summative (drawing learning from a completed project).

Impact assessment is used to assess the long-term effects of the project. It is not just the evaluation of the process, outputs and outcomes of the project, but also of their ultimate effect on people's lives. Impact assessments go beyond documenting change to assess the effects of interventions on individual beneficiaries and their environment, relative


to what would have happened without them, thereby establishing the counterfactual. It measures any discernible change attributable to the project. For example, consider a program that provides K-12 education to inner city children. Monitoring will indicate whether the program resources are being directed efficiently and effectively to the target population. Evaluation will indicate whether the objectives of the program were achieved, i.e. whether education was provided to the target children. Impact assessment will determine whether the strategy to provide education was successful: did it enable the children to then secure higher paying jobs? Were they able to break the cycle of poverty?

Types of program evaluation
Program evaluation types differ in their primary objectives, their subjects, timing and orientation. Listed below are the three most common types of evaluations in NGOs.69

Goals-based evaluations assess the extent to which programs are meeting predetermined goals or objectives. Questions explored include:
o What is the status of the program's progress toward achieving the goals?
o Will the goals be achieved according to the timelines specified?
o Are there adequate resources (money, equipment, facilities, training, etc.) to achieve the goals?

69 Carter McNamara, Field Guide to Nonprofit Program Design, Marketing and Evaluation (Minneapolis: Authenticity Consulting, 2003).


Process-based evaluations assess the program's strengths and weaknesses. Questions explored include:
o How does the program produce the results that it does?
o What are the levers that make the program successful? What impedes progress?
o How are program-related decisions made? What influences them and what resulting actions are taken?

Outcomes-based evaluations assess whether the program is doing the right activities to bring about desired outcomes for clients. Questions explored include:
o How closely is the program aligned with the organization's mission?
o How does the program compare to similar activities that address the same issue?
o How close were the achieved outcomes to the planned results?
o What are the indicators that need to be tracked to get a comprehensive understanding of how the program has affected the clients?


Growth of the NGO Sector
Since the 1970s, a profound shift has taken place in the role of non-governmental organizations (NGOs). In the wake of fiscal crisis, the Cold War, privatization, and growing humanitarian demands, the scope and capacity of national governments have declined. The NGO sector began to fill the vacuum left by nation-states in relief and development activities, both domestically and internationally. While figures on NGO growth in the last three decades vary widely, most sources agree that since 1970 the international humanitarian and development nonprofit sector has grown substantially. Tables 3.3 and 3.4 illustrate this growth.70

The table below shows that within the United States alone, the number of internationally active NGOs and their revenues grew much faster than the U.S. gross domestic product.

Table 3.3 Changes in U.S. International NGO Sector, 1970-94 ($$ in U.S. Billions)

Year        1970        1994        Growth since 1970
NGOs        52          419         8.05 times
Revenues    $0.614      $6.839      11.3 times
US GDP      $1,010.0    $6,379.4    6.3 times

70 Marc Lindenberg and Coralie Bryant, Going Global: Transforming Relief and Development NGOs (Kumarian Press, 2001).


The table below shows that similar trends are evident in the twenty-five OECD Northern industrial countries.

Table 3.4 Growth in Revenue of Northern NGOs Involved in International Relief and Development
Flow of funds from NGOs to developing countries by source ($$ in U.S. Billions)
Year    Private    Public    Total     U.S. Share
1970    $800       $200      $1,000    50%
1997    $4,600     $2,600    $7,200    38%

Within the developing world, the number of local NGOs with a relief and development focus has mushroomed. Although estimates of the size of the NGO sector in any country are often unreliable, one source reports that in 1997 there were more than 250,000 Southern NGOs.71 This growth has been facilitated by the retreat of government provision in many developing countries, resulting in a reduced role in welfare services and thereby widening the potential for non-state initiatives. Some Southern NGOs now reach very large numbers of constituents, paralleling government activities; for example, the Grameen Bank has over 7 million borrowers.72 The 1993 Human Development Report judged that some 250 million people were being touched by NGOs, a number likely to rise considerably in the 21st century.73

71 Alliance for a Global Community, "The NGO Explosion," Communications 1, no. 7 (1997).
72 "The Grameen Bank," http://www.grameen-info.org/bank/GBdifferent.htm.
73 United Nations Development Program (UNDP), "Human Development Report" (New York: Oxford Press, 1993).


Table 3.5 - Statistics on the U.S. Nonprofit sector74
Overview of the U.S. Nonprofit Sector, 2004-2005
501(c)(3) public charities
  Public charities: 845,233
  Reporting public charities: 299,033
  Revenues: $1,050 billion
  Assets: $1,819 billion
501(c)(3) private foundations
  Private foundations: 103,880
  Reporting private foundations: 75,478
  Revenues: $61 billion
  Assets: $455 billion
Other nonprofit organizations
  Nonprofits: 464,595
  Reporting nonprofits: 112,471
  Revenues: $250 billion
  Assets: $692 billion
Giving
  Annual, from private sources: $260 billion
  From individuals and households: $199 billion
  As a % of annual income: 1.9
  Average, from households that itemize deductions: $551
  Average, from households that do not itemize deductions: $3.58
Volunteering
  Volunteers: 65 million

"The Nonprofit Sector in Brief - Facts and Figures from the Nonprofit Almanac 2007," (2006), http://www.urban.org/UploadedPDF/311373_nonprofit_sector.pdf.

74


With this growth, however, have come several challenges for the NGO community, both within and outside the organization. First, new waves of complex emergencies have overwhelmed global institutional-response capacity and heightened risks to those the NGOs assist and to their own staff. Second, the declining capacity of national governments has forced many agencies to take on responsibilities they are not trained or equipped to hold. Often agencies face a dilemma of deciding whether to function as a substitute for state services or to pressure the state to play a stronger role again. Third, as resources become tighter, NGOs face new pressures for greater accountability for program impact and quality. These pressures come from donors, private and public, who want to know if their resources were used effectively; from NGO staff, who want to know if their programs matter; and from beneficiaries, who demand greater participation in program design and implementation.

As the demand for NGO services seems only likely to increase in the future, there is immense pressure on the NGO sector to engage in efforts to alleviate some of these challenges. Interviews conducted by Hudson and Bielefeld75 and Fisher76 show that one solution most NGO leaders believe in is that they should transform their increasingly bureaucratic organizations into dynamic, live organizations with strong learning cultures. Lindenberg and Bryant (2001), based on their work with large international NGOs, conclude that NGOs must increasingly develop learning cultures

75 Bryant Hudson and Wolfgang Bielefeld, "Structures of Multinational Nonprofit Organizations," Nonprofit Management and Leadership 9, no. 1 (1997).
76 Julie Fisher, Nongovernments: NGOs and Political Development of the Third World (Connecticut: Kumarian Press, 1998).


in which evaluation is not thought of as cause for punishment but rather as a process of partnership among all interested parties for organizational learning and improvement.


Current Use of Evaluations in NGOs
Current practice indicates that there is weak evaluation capacity in NGOs.77 Although most agencies have monitoring and evaluation (M&E) processes to assess their programs, almost all of them are limited by budgetary constraints. Donors who demand that NGOs become more professional show little willingness to pay for increased professionalism, as it translates into increased overhead costs.78 Internally as well, NGOs face numerous problems with evaluation systems. For starters, evaluation requires an organizational commitment of budget and staff to make it happen. Another challenge is to figure out how to undertake evaluation of programs over time most efficiently as well as effectively. Finally, NGOs are constantly challenged on when and whether to share the findings from evaluations, and how to do so effectively.

There has also been a varying degree of evaluation practice across NGOs. For example, compared with the application of evaluation in development programs, its application to humanitarian action has been slower. According to ALNAP (2001),79 the first evaluations of humanitarian action were not undertaken until the second half of the 1980s, and it was not until the early 1990s that evaluations took off (Figure 3.13).

77 Michael Edwards and David Hulme, Beyond the Magic Bullet: NGO Performance and Accountability in the Post-Cold War World (Connecticut: Kumarian Press, 1996).
78 Jonathan Fox and David Brown, The Struggle for Accountability (Cambridge, MA: MIT Press, 1998).
79 ALNAP, "Humanitarian Action: Learning from Evaluation," ALNAP Annual Review Series (London: Overseas Development Institute, 2001).

Figure 3.13 Evaluations filed in ALNAP's Evaluative Reports Database,80 by year of publication
[Bar chart of the number of evaluations per year of publication, 1986-2000]

The boom undoubtedly represents a significant investment by the humanitarian system, and presents a considerable opportunity for critical reflection and learning in humanitarian operations. Similarly, Riddell81 estimated that since the 1970s some 12% of the US $420 million channeled in net aid has been subject to evaluation; by the late 1990s this figure had increased to at least 20%.82 Researchers caution that despite the growing investment in evaluations, NGOs are lagging behind in the effective use of findings.83 So far the main focus in NGOs has been on streamlining evaluation methods and design and on establishing evaluation structures within organizations and among partners. Less evident is the utilization perspective: looking at evaluation findings as a learning tool and establishing processes to identify and maximize this use. Carlsson et al. relate the problem of underutilization of evaluations to the perception of decision-making in NGOs. They
80 ALNAP's Evaluative Reports Database (ERD) was set up in 1997 to facilitate access to evaluative reports of humanitarian action and improve inter-agency and collective learning.
81 R.C. Riddell, Foreign Aid Reconsidered (Baltimore: Johns Hopkins Press, 1987).
82 Carlsson, Kohlin, and Ekbom, The Political Economy of Evaluation: International Aid Agencies and the Effectiveness of Aid.
83 Ibid.

state that organizations are perceived to make decisions according to a rational model: they define problems, generate options, search for information and alternatives and then, on the basis of the collected information, make a choice. Evaluations in this model are expected to provide careful and unbiased data on project performance. Through feedback loops, this process is expected to improve learning and thus lead to better decisions.

In reality, however, organizations behave as political systems. Political considerations enter the decision-making process in several ways. The context is political because the programs that are evaluated are defined and funded through political processes. The evaluation itself is political because it makes implicit political statements about issues such as the legitimacy of program goals and the usefulness of various implementation strategies. Carlsson et al. give an example of how the political context affects evaluations. They argue that donor agencies have an inherent pressure to give, because they commit themselves in advance to a certain amount, either through annual budget allocations (in the case of government agencies) or through capital subscriptions (from individual members). This pressure from agencies affects the NGOs that receive funds in such a way that they no longer face financial penalties for poor-quality projects; all they need to show is that a program that meets the donor's objectives is executed within budget. Alan Fowler84 concluded that an almost universal weakness of NGOs is their limited capacity to learn, adapt and continuously improve the quality of what they do. He urged NGOs to put in place systems which

84 Fowler, Striking a Balance: A Guide to Enhancing the Effectiveness of Non-Governmental Organizations in International Development.

ensure that they know and learn from what they are achieving and then apply what they learn.

While much has been written about the shortcomings of NGO evaluations and critical reviews abound, the positive news is that a growing number of NGOs are committing to improve their organizational structures and operations to facilitate change. Recent books on NGO management give specific attention to assessing performance (Fowler85; Letts86; Smillie and Hailey87) and to the management of information (Powell88). Lindenberg and Bryant89 list several accomplishments by leading international NGOs, since 2000, to build their evaluation capacity and systems. Some of these are:

o Oxfam GB produced a guide, Monitoring and Assessing Impacts, that reflects Oxfam's internal change processes in conducting assessments.
o Save the Children UK published Toolkits: A Practical Guide to Assessment, Monitoring, Review and Evaluation, a collection of tools for improving how their staff and partners conduct M&E.
o CARE USA developed their Impact Guidelines, a menu of impact indicators for use in strengthening their programming goals.

85 Ibid.
86 Christine Letts, High Performance Nonprofit Organizations: Managing Upstream for Greater Impact (New York: Wiley, 1999).
87 Ian Smillie and John Hailey, Managing for Change (London: Earthscan, 2001).
88 Mike Powell, Information Management for Development Organisations, 2nd ed., Oxfam Development Guidelines Series (Oxford: Oxfam, 2003).
89 Lindenberg and Bryant, Going Global: Transforming Relief and Development NGOs.

Additionally, networks like ALNAP have recommended the adoption of evaluation standards similar to the U.S. Program Evaluation Standards, the main set of standards in the wider evaluation field. On a smaller scale, NGOs have produced their own guides on monitoring and evaluation.90 NGOs have also used advances in technology to create centralized electronic evaluation libraries, inter- and intranet linkages, and web-based discussion boards to share findings effectively among stakeholders.91 These efforts have helped bridge the communication gap within agencies that operate globally. However, the communication style in many large NGOs has tended to be either so heavy that information and learning sink without trace or so light that they evaporate.92

An ALNAP study surveyed member agencies to assess current practice in evaluation use and follow-up.93 It concluded that two types of factors play a key role in the utilization of evaluation outcomes: (1) cultural, organizational and managerial factors within agencies; and (2) factors related to the quality of evaluations and the means of disseminating results. The following grid captures some of the responses on which factors contribute to the underutilization of evaluation findings.

90 Desai and Potter, The Companion to Development Studies.
91 Some web-based links are www.aidworkers.net; Monitoring and Evaluation News: www.mande.co.uk; www.alnap.org/discus; International NGO Training and Research Centre: www.intrac.org; DAC Evaluation Network: www.edlis.org.
92 Bruce Britton, "The Learning NGO," INTRAC Occasional Paper Series, no. 17 (1998).
93 Bert Van de Putte, "Follow-up to Evaluations of Humanitarian Programmes," (London: ALNAP, 2001).

Evaluation subject

o Security situations in complex emergencies precluding access.
o The essentially short-term nature of many interventions of this nature.
o The fact that humanitarian emergencies tend to be context-specific and that, as a result, not all lessons are replicable.

Evaluation process
o Delays in finalizing the evaluation made people lose interest; key persons were transferred and new emergencies drew attention.
o Lack of ownership and a sense of control among the main stakeholders.
o It is unclear when starting the evaluation what needs to be changed at the end and who is responsible for this.
o Quality of the evaluation, buy-in to the evaluation process beforehand, agreement with recommendations, perceived authority and competence of the evaluators, recommendations too difficult to deal with or not politically/institutionally acceptable, too many recommendations, and an evaluation that took too long to complete while stakeholders moved on to other things.

Follow-up process
o Lack of a "champion" who sees the evaluation through distribution, meetings, "after actions" and other follow-up.
o Once a report is finalized, there is not enough discussion and interaction with the staff concerned on how they intend to implement the recommendations and overcome constraints.

Organizational characteristics
o A mix of factors including organizational priorities, resources and the perceived importance of the evaluation.
o Reluctant attitude of regional offices or units.
o Lack of time among staff, as well as limited staff capacity and knowledge.
o Turnover of staff.
o Staff resistance to change.
o Lack of understanding and appreciation of the role of evaluation in improving the programming and management of humanitarian operations.

The report recommended that NGOs make evaluation follow-up an integral part of their operations and invest resources to build systems and processes that enhance use. Facilitators of utilization were linked to the presence of positive structural and cultural characteristics that predispose an organization to learning. In larger organizations, the existence of a well-resourced evaluation unit was identified as an important determinant of use. In such an environment there are dedicated resources to ensure accountability and learning. There are clear decision-making structures, mechanisms and lines of authority in place. Vertical and horizontal links between

managers, operational staff and policy-makers enable the dissemination and sharing of learning. There are permanent and opportunistic mechanisms for facilitating organization-wide involvement and learning. For smaller organizations, the report called for a scaled-back version of these characteristics but stressed their importance nevertheless.

Similarly, a survey conducted by BOND, a network of over 280 UK-based development NGOs, looked at members' views about the concept of learning as well as whether and how it happens in the context of their day-to-day work.94 Only 29% of NGOs stated that they regularly refer to lessons learnt during a project. When asked what factors inhibit their ability to use past evaluation lessons, most NGOs cited time pressure as the most important factor, followed by inadequate organizational capacity (resources and facilities) and a lack of clarity about what is available and relevant. On the question of which factor most aids utilization, 59% agreed that participation by stakeholders during planning of the evaluation increased ownership of findings and further utilization.

A study conducted by the Canadian Centre for Philanthropy95 (2003) found that NGOs that had systems in place for evaluation utilization used them in the following manner: 68% for the improvement of programs and 55% for strategic planning. The survey found that findings were least likely to be used for fundraising purposes or for

94 Jawed Ludin and Jacqueline Williams, Learning from Work: An Opportunity Missed or Taken? (London: BOND, 2003).
95 Michael H. Hall et al., "Assessing Performance: Evaluation Practices and Perspectives in Canada's Voluntary Sector," ed. Norah McClintock (Toronto: Canadian Centre for Philanthropy, 2003).

information sharing within the sector. What triggered higher use among respondents was direct involvement in the evaluation process by senior management and, in some cases, the Board.

The Swedish International Development Agency96 (SIDA) conducted a study on evaluation use and concluded that for evaluation to be useful, human factors, such as stakeholders' knowledge about evaluation, have to be considered. It also concluded that for effective use the evaluation process must allow for the involvement and effective participation of management and staff, and that the organizational context and organizational support structures (e.g., the impact of power inequalities, conflicting interests and differing views of reality among stakeholders) must be factored in while planning for evaluation use.

Drawing from development NGO literature and practice, Engel et al. (2003)97 outline three steps to increase the internalization of program evaluation results:
1. Participatory monitoring and evaluation involving stakeholders;
2. Emphasis on results-based planning and management among staff; and
3. Improved organizational learning.

SIDA, "Are Evaluations Useful? Cases from Swedish Development Co-Operation.," SIDA Studies in Evaluation (Swedish International Development Agency, 1999). 97 P. Engel, C. Carlsson, and A. van Zee, "Making Evaluation Results Count: Internalizing Evidence by Learning," in ECDPM Policy Management Brief No. 16 (Maastricht: European Centre for Development Policy Management, 2003).

96

82

Engel et al. also identify several donor agency initiatives to promote learning within the donors themselves and the agencies they support. Among these are DFID's Performance Reporting Information System (PRISM), a computer-based system that combines basic project management information with qualitative information on the nature and objectives of the program, and the World Bank's communities of practice, learning networks centered on particular themes and designed to establish trust and a culture of sharing between staff. Another significant contribution by donor agencies to promoting evaluation feedback and use was the DAC98 Working Party on Aid Evaluations workshop organized in Japan in 2000. This workshop highlighted the widespread concern of DAC members about current practices for disseminating lessons from evaluations and the need for improved evaluation use to enhance aid policies and programs.99

The RAPID (Research and Policy in Development) framework, developed by the Overseas Development Institute, Britain's leading think-tank on development issues, identifies four dimensions that influence the use of evaluation and research:100

o The political context
o The evidence and communication
o The links among stakeholders
o The influence of the external environment

98 The Development Assistance Committee (DAC) is a specialized unit within the Organisation for Economic Co-operation and Development (OECD), whose members have agreed to secure an expansion of the aggregate volume of resources made available to developing countries and to improve their effectiveness. To this end, members periodically review the amounts and nature of their contributions to aid programmes, bilateral and multilateral, and consult each other on relevant aspects of their development assistance policies.
99 Organisation for Economic Co-operation and Development, "Evaluation Feedback for Effective Learning and Accountability," in Evaluation and Effectiveness, ed. Development Assistance Committee (Paris: OECD).
100 "Research and Policy in Development (RAPID)," Overseas Development Institute, http://www.odi.org.uk/RAPID/.

Figure 3.14 The Research and Policy in Development (RAPID) Framework

Political Context
The framework views the evaluation process as itself a political process, from the initial agenda-setting exercise through to the final negotiations involved in implementing findings. Political contestation, institutional pressures and vested interests matter greatly. So too do the attitudes and incentives of stakeholders, program history, and power relations. Findings of potential use to the majority of staff in an organization may be discarded if they elicit disapproval from the leadership. The political context includes learning and knowledge-


management systems, structural proximity of evaluation units to decision-makers, political structures and institutional pressures.

Evidence and Communication
Second, the framework identifies the quality of the evaluation as essential for use. Influence is affected by the topical relevance and operational usefulness of the findings. The other key set of issues highlighted concerns communication. The sources and conveyors of information, and the way findings are packaged and targeted, can all make a big difference in how the evaluation is perceived and utilized. The key message is that communication is a demanding process and it is best to take an interactive approach: continuous interaction with users offers greater chances of successful communication than a simple, linear approach. Quality includes the evaluation design, planning, approach, timing, dissemination and the quality and credibility of the evidence.

Links
Third, the framework emphasizes the importance of links among evaluators, users, and influential stakeholders, and of relationships among stakeholders more broadly. Issues of trust, legitimacy, openness and formal and informal partnerships are identified as important. The interpersonal and conflict-management skills needed to manage defensiveness and opposition to findings are essential competencies in staff conducting evaluations. Overall, more attention needs to be paid to the


relational side of evaluation. This framework cautions that using evaluation is as much a people issue as it is a technical one and perhaps more so.

External Influences
Fourth, the framework includes the ways in which the external environment influences users, uses and the evaluation process. Key issues here include the impact of external politics and processes, as well as the impact of donor policies and funding. Trends within the issue area and relationships with peer organizations or networks also affect the extent to which evaluation findings are used. This dimension includes indirectly involved stakeholders (not direct users) whose actions can affect the use (or non-use) of an evaluation.

A recent, innovative tool developed within the NGO sector to track and measure effective use is the International Development Research Centre's (IDRC) Outcome Mapping (OM). It offers a methodology that can be used to create planning, monitoring, and evaluation mechanisms enabling organizations to document, learn from, and report on their achievements.101 OM is initiated through a participatory workshop, involving program stakeholders, led by an internal or external facilitator who is familiar with the methodology. Using a set of worksheets and questionnaires, the facilitator engages the participants to be specific about the clients the program wants to target, the changes it expects to see, and the strategies it employs to be more effective in the results it achieves. The originality of the methodology is its shift away from

101 Sarah Earl, Fred Carden, and Terry Smutylo, Outcome Mapping: Building Learning and Reflection into Development Programs (Ottawa: The International Development Research Centre, 2001).

assessing the development impact of a program (defined as changes in state: for example, policy relevance, poverty alleviation, or reduced conflict) and toward changes in the behaviors, relationships, actions or activities of the people, groups and organizations with which a program works directly. This shift significantly alters the way a program understands its goals and assesses its performance and results. The authors of this methodology claim it benefits those programs whose results and achievements cannot be measured with quantitative indicators alone.

There are three components to OM: (1) Intentional Design, which helps a program establish its vision and operational guidelines (for example, who its partners are and how the program will contribute to the overall mission of the organization); (2) Outcome and Performance Monitoring, which provides a framework for the ongoing monitoring of the program's actions toward the achievement of outcomes; and (3) Evaluation Planning, which helps the program identify evaluation priorities and develop an evaluation plan.


Figure 3.15 Outcome Mapping Framework
  Intentional Design: vision; mission; outcome challenges; progress markers; strategy maps; organizational practices
  Outcome and Performance Monitoring: monitoring priorities; outcome journals; performance journals
  Evaluation Planning: evaluation plan

The key innovation introduced by this approach, which relates to evaluation use, is that in its evaluation planning component it takes a learning-based view of evaluation guided by principles of participation and iterative learning. OM operates under the premise that the purpose of an evaluation is to encourage program decision-making to be based on data rather than on perceptions and assumptions. OM emphasizes stakeholder participation at all stages of the evaluation and identifies certain key factors that are likely to enhance utilization of evaluation findings. They are grouped into two categories: organizational factors and factors related to the evaluation.


Table 3.6 Outcome Mapping factors that enhance utilization

Organizational factors
  o Managerial support
  o Promotion of evaluation through a learning culture

Evaluation-related factors
  o Participatory approach
  o Timely findings (completion matches the organization's planning or review cycle)
  o High-quality and relevant data
  o Findings that are consistent with the organizational context
  o Skilled evaluator

Hatry and Lampkin102 suggest that NGOs use evaluation findings to make informed management decisions about how to allocate scarce resources and which methods and approaches to program delivery will help the organization improve its outcomes. NGOs must find significant value in evaluations to justify the trade-off in staff, time and funding diverted from program implementation to what can look like an administrative report. This requires a shift in mindset, in which NGOs view evaluation and evaluation use as a necessary component of providing services to their beneficiaries, change the organizational culture, and include numerous stakeholders in the process. The findings from evaluations must be transferred from a written report to the agenda of managers and decision-makers.103 Getting NGOs to view evaluation as

Hatry and Lampkin, "An Agenda for Action: Outcome Management for Nonprofit Organizations." Anthony Dibella, "The Research Manager's Role in Encouraging Evaluation Use," Evaluation Practice 11, no. 2 (1990).
103

102

89

a tool for learning instead of a mandate from a donor or an additional administrative chore can be a challenge.

Many types of decision-making models are used in NGOs. Understanding these models allows staff to make intentional choices about which model might be most appropriate for the various decisions that they confront. We will examine these models for the purposes of decision-making around the use of evaluation findings. The six models below describe how behavior can work to affect and manipulate the decision-making process, sometimes in productive ways and at times in ways detrimental to team decisions (Johnson and Johnson, 2000).104 A brief illustrative sketch of how the averaging and majority-vote rules in Methods 3 and 5 aggregate individual opinions follows Method 6.

Method 1: Decision made by authority without group discussion
The designated leader makes all decisions without consulting group members. Appropriate for simple, routine, administrative decisions; little time available to make the decision; team commitment required to implement the decision is low.
Strengths: Takes minimal time to make a decision; commonly used in organizations (so we are familiar with the method); high on the assertiveness scale.
Weaknesses: No group interaction; team may not understand the decision or be unable to implement it; low on the cooperation scale.

104 D.W. Johnson and F.P. Johnson, Joining Together: Group Theory and Group Skills (Boston: Allyn and Bacon, 2000).

Method 2: Decision by expert
An expert is selected from the group. The expert considers the issues and makes the decision. Appropriate when the result is highly dependent on specific expertise and the team commitment required to implement the decision is low.
Strengths: Useful when one person on the team has the overwhelming expertise.
Weaknesses: Unclear how to determine who the expert is (team members may have different opinions); no group interaction; may become a popularity or power issue.

Method 3: Decision by averaging individuals' opinions
Each team member provides his/her opinion and the results are averaged. Appropriate when time available for the decision is limited; team participation is required, but lengthy interaction is undesirable; team commitment required to implement the decision is low.
Strengths: Extreme opinions cancelled out; errors typically cancelled out; group members consulted; urgent decisions can be made.
Weaknesses: No group interaction, team members are not truly involved in the decision; opinions of least and most knowledgeable members may cancel; commitment to the decision may not be strong; unresolved conflict may exist or escalate; may damage future team effectiveness.

Method 4: Decision made by authority after group discussion
The team creates ideas and has discussions, but a designated leader makes the final decision. Appropriate when available time allows team interaction but not agreement; there is clear consensus on authority; team commitment required to implement the decision is moderately low.
Strengths: Team used more than in methods 1-3; listening to the team increases the accuracy of the decision.
Weaknesses: Team is not part of the decision; team may compete for the leader's attention; team members may tell the leader what he/she wants to hear; still may not have commitment from the team to the decision.

Method 5: Decision by majority vote
Discussion occurs until 51% or more of the team members make the decision. Appropriate when time constraints require a decision; there is group consensus supporting the voting process; team commitment required to implement the decision is moderately high.


Strengths: Useful when there is insufficient time to make the decision by consensus; useful when complete team-member commitment is unnecessary for implementing the decision.
Weaknesses: Taken for granted as the natural, or only, way for teams to make a decision; the team is viewed as winners and losers, which reduces the quality of the decision; minority opinion not discussed and may not be valued; may have unresolved and unaddressed conflict; full group interaction is not obtained.

Method 6: Decision by consensus
A collective decision arrived at through an effective and fair communication process (all team members spoke and listened, and all were valued). Appropriate when the available time allows a consensus to be reached; the team is sufficiently skilled to reach a consensus; the team commitment required to implement the decision is high; and all team members are good communicators.
Strengths: Most effective method of team decision making; all team members express their thoughts and feelings; team members feel understood; active listening used.
Weaknesses: Takes more time than methods 1-5; takes psychological energy and a high degree of team-member skill (can be negative if individual team members are not committed to the process).
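The sketch below is not part of the dissertation; it is a minimal illustration of how the aggregation rules in Method 3 (averaging opinions) and Method 5 (majority vote) behave, assuming opinions are expressed as numeric ratings for averaging and as yes/no votes for voting. All names and numbers are hypothetical.

from statistics import mean

def decide_by_averaging(ratings):
    # Method 3: each member rates the option (e.g., on a 1-5 scale); the mean is the decision score.
    return mean(ratings)

def decide_by_majority(votes):
    # Method 5: the option passes when 51% or more of members vote in favor.
    in_favor = sum(1 for v in votes if v)
    return in_favor / len(votes) >= 0.51

# Hypothetical six-person team weighing whether to act on an evaluation recommendation.
ratings = [4, 2, 5, 3, 4, 1]                       # 1-5 support ratings
votes = [True, True, False, True, False, True]     # yes/no votes

print(decide_by_averaging(ratings))   # about 3.17: moderate average support
print(decide_by_majority(votes))      # True: 4 of 6 (67%) vote in favor

As the weaknesses listed above suggest, both rules produce a decision quickly while hiding disagreement: the averaged score conceals the two dissenting ratings, and the majority vote records winners and losers without resolving the underlying conflict.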


Barriers to Evaluation Use in NGOs

Beyond the limitations of qualitative and quantitative data, evaluation use faces challenges from political, social and organizational forces105: for example, disagreement among staff about priority issues, conflicts over resources, staff turnover, inflexible organizational procedures and changes in external conditions (donors, issue area). Shadish, Cook, and Leviton106 grouped the obstacles into the following categories: (1) findings can threaten one's self-interests; (2) fear that the program will be eliminated; (3) program staff are not motivated by seeking efficacy; (4) the slow and incremental nature of change; and (5) stakeholders often have limited influence on policies and programs.

Evaluation findings often address mixed objectives and multiple stakeholders, without prioritizing or considering what this may mean in terms of approach. Multiple purposes may unintentionally undermine one another, where use by one set of stakeholders counters the intended learning use for others. For example, from the point of view of those whose work is being evaluated, the knowledge that judgments

105 Weiss, "Have We Learned Anything New About the Use of Evaluation?"
106 Shadish, Cook, and Leviton, Foundations of Program Evaluation: Theories of Practice.

will be made and communicated in writing can create defensiveness. What is often lacking is clarity and agreement about the purpose of the evaluation.107 NGOs are also challenged by the lack of robust mechanisms for recalling and making available to decision-makers the findings from past evaluations. The process of storing and recalling knowledge is complex, and its translation into action is a highly individual and personal process that can be difficult to track. Information tends to be supply-driven, with evaluations pumping out findings on the assumption that they will be automatically picked up. The challenge is to point staff to relevant evaluation lessons as and when they need the information.

Other impediments described in the literature include an organizational culture that does not value learning, staff members who do not understand evaluation, bureaucratic imperatives such as the pressure to spend regardless of quality, and the lack of real incentives to change. The unequal nature of the aid relationship is also a significant barrier: why and by whom an evaluation is commissioned affects ownership and hence use, for example when evaluations are viewed in the field as serving only headquarters' needs, not the needs of the program. Performance issues can also inhibit use. Just as utilization is enhanced by motivated individuals willing to champion the findings and promote use, it is constrained by individuals who block or fail to act. Some organizations have a culture where accountability tends to be associated with blame. Evaluation reports can present a risk to an organization's

107 Kevin Williams, Bastiaan de Laat, and Elliot Stern, "The Use of Evaluation in the European Commission Services - Final Report," (Paris: Technopolis France, 2002).

reputation. The perceived risk may lead staff members to suppress and reject findings in the interests of protecting their survival.108

Beyond these practical problems of using evaluation findings, there are a few philosophical challenges as well. Decision-making in organizations is never linear, and is often determined by a group of decision-makers. While evaluation findings can change perceptions, they are unlikely to bring all parties to agree on which facts are relevant or even on what the facts are. Another problem for NGOs is that their motivation for conducting an evaluation may differ from that of the funder. Carson109 notes that if a major motivation is to direct funding to projects with proven results, there is little evidence that this happens with any frequency. A continuing source of tension between donors and NGOs is that there is seldom agreement beforehand about which benchmarks are important to measure and how the results will be used. Vic Murray110 says that regular and systematic use of evaluation findings is still relatively uncommon, partly because utilization efforts do not produce value for the money and are quickly abandoned.

The lack of adequate planning to time evaluations to inform key decision dates, such as funding cycles and annual program planning, is also identified as a barrier. A study of Doctors Without Borders' use of evaluations indicates that evaluations were not

Barb Wigley, "The State of Unhcr's Organization Culture: What Now?," http://www.unhcr.org/publ/RESEARCH/43eb6a862.pdf 109 Emmet D. Carson, "Foundations and Outcome Evaluation," Nonprofit and Voluntary Sector Quarterly 29, no. 3 (2000). 110 Murray, "The State of Evaluation Tools and Systems for Nonprofit Organiations."

108

97

used because they took place too late: the decisions they should have influenced had already been made111.

Rosenbaum112 acknowledges that there are costs associated with evaluation follow-up and use, but suggests that NGOs view these as opportunity costs. She offers a few suggestions on how these costs can be managed: (a) NGOs should allocate a percentage of their general operating budget for learning and evaluation use that fits within the organization's strategic plan; and (b) evaluation follow-up costs should be built into the program's budget as a fixed-cost line item. Brett, Hill-Mead and Wu's (2000)113 examination of evaluation use in NGOs demonstrated the complexities and challenges NGOs face. While their findings mirrored the resource constraints mentioned above, some organizations with global operations also struggled with managing information across their various locations. Not having dedicated staff for evaluation and use hampered organizations' attempts to incorporate learning into planning. The authors suggest that establishing a culture of evaluation use must be a gradual process that allows staff to find uses for data in their daily work and that is simple enough to allow them to embrace the process that led them to the data. Andrew Mott114 suggests that strengthening the internal learning capacity of NGOs must be a critical priority for donors. A strong, increasingly knowledgeable and effective organization can

Putte, "Follow-up to Evaluations of Humanitarian Programmes." Nancy Rosenbaum, "An Evaluation Myth: Evaluation Is Too Expensive," National Foundation for Teaching Entrepreneurship (NFTE), http://www.supportctr.org/images/evaluation_myth.pdf. 113 Belle Brett, Lynnae Hill-Mead, and Stephanie Wu, "Perspectives on Evaluation Use and Demand by Users: The Case of City Year," New Directions for Program Evaluation, no. 88 (2000). 114 Andrew Mott, "Evaluation: The Good News for Funders," (Washington, DC: Neighborhood Funders Group, 2003).
112

111

98

maximize grantee funding and lead to desired impact. He recommends that funders incorporate a utilization and learning component into evaluations.

Barriers to evaluation use can be summarized into the following categories:

Political
Political activity is inextricably linked to effective use. Programs are the result of political decisions, so evaluations implicitly judge those decisions. Evaluations also feed decision-making and compete with other perspectives within the organization (Green, 1990; King, 1988; Weiss, 1997; Mowbray, 1992; Carlsson et al., 1994). NGO practice shows that political considerations enter the evaluation process from start to finish, from what gets evaluated to how data get interpreted. Findings from any evaluation are only partly logical and deductive; they rely equally on the perspectives and interests of stakeholders. Organizations face a challenge in navigating political interests to promote use, because use is likely to result in actions and decisions that shift power, status and resources.

Procedural
Throughout the lifecycle of an evaluation, NGOs face challenges to use. The lack of resources, time and staff capacity and knowledge to conduct evaluations and follow up emerged repeatedly as a barrier to use. This constraint was reported at both the organizational and the program level, where intended users and intended uses were not identified during evaluation planning. On completion of evaluations, NGOs were challenged to get the right information to the right people, people who are open to and know how to use findings. Impeding factors were the timing of the evaluation, levels of


bureaucracy within an organization, the lines of communication within and across these levels, and the degree of decision-making autonomy within program units. Poor-quality reports also surfaced as affecting use, as stakeholders were either unclear on how to translate findings into instrumental use or did not see the information as credible.

Social
The enthusiasm and engagement of staff are critical to the success of evaluation utilization. The barriers to staff engagement highlighted in the research range from lack of ownership of the process and resistance to change to low motivation to seek efficacy and excessive control by a few stakeholders. When potential user involvement was driven by symbolic use, to meet donor requirements or a management directive, it resulted in minimal use. In larger organizations with multiple programs and competing agendas, the reluctance of teams to partner in evaluations resulted in limited or no conceptual use. Personal resistance to use can be attributed to situations in which findings could threaten individual self-interests. Finally, in some organizations there is a lack of incentive to use and learn; this is particularly the case when staff rotate and are no longer motivated to observe the consequences of their decisions.

Organizational
Absent or inflexible systems and structures were identified as barriers to use. NGOs lack the infrastructure to effectively disseminate and retrieve evaluation results to inform decisions in time. Information was often stored locally and in inaccessible formats that inhibit sharing, resulting in a poor understanding of what is available and relevant. Even when information was available and shared, organizational decision-making models limited potential users from using the findings.


NGOs were unable to engage in conceptual and strategic use in environments that did not provide an overarching framework for evaluation use and organizational learning. Staff remained focused on individual program evaluations but missed the larger utilization opportunities. Finally, staff turnover emerged as a major barrier, especially among primary intended users, since utilization processes depend on their active engagement throughout the evaluation cycle. New users who join the process midstream seldom come with the same interests and agenda as those originally involved; this also leads to a loss of institutional memory.

All of the studies on evaluations indicate that NGOs have become much more aware of the need for evaluation within their operations, and have moved a step closer to using evaluation as a mechanism to develop a wider perspective on NGO effectiveness, looking beyond individual projects, across sectors and country programs.

If evaluation is to continue to receive its current levels of attention and resources in NGOs, and be embraced by all, whether at policy or operational level, it needs to demonstrate clearly its contribution to improved performance. - ALNAP 2001


Organizational Learning

Definitions
Table 3.7 Organizational Learning Definitions

Chris Argyris and Donald Schön115: OL occurs when members of the organization act as learning agents for the organization, responding to changes in the internal and external environments of the organization by detecting and correcting errors in the organizational theory-in-use, and embedding the results of their inquiry in private images and shared maps of the organization.

Marlene Fiol and Marjorie Lyles116: OL refers to the process of improving actions through the development and interpretation of the environment, through which cognitive systems and memories result. Observable organizational actions are a key criterion for learning.

George P. Huber117: OL is a consequence of discussion and shared interpretations, changing assumptions and trial-and-error activities. Increasing the range of potential organizational behaviors is both necessary and sufficient as the minimal condition for learning.

Peter Senge118: OL is where people continually expand their capacity to create the results they truly desire, where new expansive patterns of thinking are nurtured, where collective aspiration is set free and where people are continually learning how to learn together.

115 Chris Argyris and Donald Schön, Organizational Learning: A Theory of Action Perspectives (Reading, MA: Addison-Wesley, 1978).
116 C. M. Fiol and M. A. Lyles, "Organizational Learning," The Academy of Management Review 10, no. 4 (1985).

For the purposes of this research the definition of OL is summarized as: learning which serves a collective purpose, is developed through experience and reflection, is shared by a significant number of organizational members, is stored through institutional memory, and is used to modify organizational practices.

George P. Huber, "Organizational Learning: The Contributing Processes and the Literatures," Organization Science 2, no. 1 (1991). 118 Peter Senge, The Fifth Discipline: The Art and Practice of the Learning Organization (New York: Doubleday, 1990).

117

103

Types of Learning
One of Argyris and Schön's most influential ideas, theories of action are the routines and practices that embody knowledge.119 They are theories about the link between actions and outcomes, and they include strategies for action, values that determine the choice among strategies, and the assumptions upon which strategies are based. The practices of every organization reflect the organization's answers to a set of questions; in other words, a set of theories of action. For example, a relief agency embodies in its practices particular answers to questions of how to access and assist vulnerable populations. The particular set of questions and answers (e.g., to assist populations by providing supplementary feeding centers) are the agency's theories of action. Once theories of action are established, the process of learning involves changes in these theories, either by refining them (single-loop learning) or by questioning underlying assumptions, norms, or strategies so that new theories-in-use emerge (double-loop learning).

Single-loop learning occurs within the prevailing organizational frames of reference. It is concerned primarily with effectiveness: how best to achieve existing goals and objectives.120 Single-loop learning is usually related to the routine, immediate task. According to Dodgson (1993), single-loop learning can be equated with activities that add to the knowledge base or organizational routines without altering the fundamental nature of the activities. This is often referred to as Lower-level

119 Chris Argyris and Donald Schön, Organizational Learning II: Theory, Method and Practice (Reading, MA: Addison-Wesley, 1996).
120 Argyris and Schön, Organizational Learning: A Theory of Action Perspectives.

Learning (Fiol and Lyles 1985); Adaptive Learning (Senge 1990); and Non-Strategic Learning (Mason 1993).

Double-loop learning changes organizational frames of reference. It occurs when, in addition to detecting and correcting errors, the organization questions and modifies its existing norms, procedures, policies and objectives. Double-loop learning is related to the non-routine, the long-range outcome. This type of learning is considered non-incremental because the organizational response occurs with a newly formulated mental map (Levitt and March, 1988; Senge 1994). The resulting learning reflects fundamental change in the culture of the organization itself (Simon, 1991). Double-loop learning is also called Higher-Level Learning (Fiol and Lyles 1985), Generative Learning (Senge 1990) and Strategic Learning (Mason 1993).

Deutero-learning occurs when organizations carry out both single- and double-loop learning. This is considered by theorists to be the most important level, as it is the organization's ability to learn how to learn. This awareness leads the organization to create the appropriate environment and processes for learning.121

121 E. C. Nevis, A. J. DiBella, and J. M. Gould, "Understanding Organizations as Learning Systems," Sloan Management Review 36, no. 2 (1995).

Levels of Learning

Several authors noted that learning can occur at three levels: individual, group and organizational.

Individual Level
Watkins et al.122 described individual learning as a natural process in which individuals discover discrepancies in their environment, select strategies based on a cognitive and affective understanding of these discrepancies, implement these strategies, evaluate their effectiveness, and eventually begin the cycle again. Argyris and Schön123 commented that individual learning is a necessary but insufficient condition for organizational learning. Senge124 argued that organizations learn only through individuals who learn: individual learning does not guarantee organizational learning, but without it no organizational learning occurs.

Group Level
Senge noted that group learning is vital because groups, not individuals, are the fundamental learning unit in organizations. This is where the rubber meets the road: unless groups can learn, the organization cannot learn. Argyris and Schön125 noted that group learning occurs when team members take part in dialogue and the exchange of

122 K. Watkins, V. Marsick, and J. Johnson, eds., Making Learning Count! Diagnosing the Learning Culture in Organizations (Newbury Park, CA: Sage, 2003).
123 Argyris and Schön, Organizational Learning: A Theory of Action Perspectives.
124 Senge, The Fifth Discipline: The Art and Practice of the Learning Organization.
125 Argyris and Schön, Organizational Learning II: Theory, Method and Practice.

ideas and information. This allows underlying assumptions and beliefs to be revealed and thereby allows for the creation and sharing of knowledge.

Organizational Level
Organizational-level learning is not merely the sum of individual learning.126 Learning at the individual level may not result in OL unless the newly created knowledge is shared and communicated among the individuals who constitute an organization-level interpretation and learning system.127 Organizations develop mechanisms, such as policies, strategies and explicit models, to capture and retain knowledge despite the turnover of staff.128

Fiol and Lyles, "Organizational Learning." R. L. Daft and K. E. Weick, "Toward a Model of Organizations as Interpretation Systems," The Academy of Management Review 9, no. 2 (1984). 128 B. S. Levitt and J. G. March, eds., Organizational Learning, Organizational Learning (Thousand Oaks, CA: Sage,1996).
127

126

107

Leading Theorists
During the past 30 years, and especially during the past decade, organizational learning has emerged as a fundamental concept in organizational theory (Arthur & Aiman-Smith, 2002, p. 738). By the early 21st century, the learning organization and the concept of organizational learning had become indispensable core ideas for managers, consultants and researchers. With its popularity and the proliferation of literature on the subject, organizational learning has acquired a multitude of constructs and principles that define it. For the purposes of this research, the focus will remain on the key thought-leaders who have contributed to the advancement of the field and on concepts that relate to evaluation utilization. Despite the explosive growth in publications on organizational learning, the literature has been plagued by widely varying theoretical and operational definitions and a lack of empirical study (Lant, 2000, p. 622). A major factor in this fragmentation is that organizational learning has acted as a kind of conceptual magnet, attracting scholars from many different disciplines to focus on the same phenomenon (Berthoin-Antal, Dierkes, et al., 2001). The learning metaphor has offered fertile ground in which each discipline could stake its claim, generating its own terminology, assumptions, concepts, methods, and research. For example, the Handbook of Organizational Learning and Knowledge (Dierkes, Berthoin-Antal, Child, & Nonaka, 2001) included separate chapters for each of the following disciplinary perspectives on organizational learning: psychology, sociology, management science, economics, anthropology, political science, and history.


In 1978, Argyris and Schön wrote what is now considered by many to be the first serious exploration of organizational learning. Their seminal book, Organizational Learning, provided a foundation for the field and defined the explicit or implicit approaches taken by different social science disciplines to learning and to organization structures. Over the years, the more organizational learning and related phenomena have been observed and studied, the more conceptually complex and ambiguous they have become (e.g., Argyris, 1980; Barnett, 2001; Castillo, 2002; Ortenblad, 2002). Recognizing that only individuals can act as agents of learning, Argyris and Schön (1978) suggested that organizational learning occurs when individual members reflect on behalf of the organization. Individual learning is guided by theories of action: complex systems of goals, norms, action strategies, and assumptions governing task performance (Argyris & Schön, 1978, pp. 14-15). Theories of action are not directly observable but can be inferred from what people say and do. To account for organizational learning, Argyris and Schön129 simply extended the concept of individual-level theories of action to organizational-level theories of action. Organizational learning may be said to occur when the results of inquiry on behalf of the organization are embedded in explicit organizational, so-called maps (e.g., rules, strategies, structures). For learning to become organizational, there must be roles, functions, and procedures that enable organizational members to systematically collect, analyze, store, disseminate, and use information relevant to their own and other members' performance.

129 Argyris and Schön, Organizational Learning: A Theory of Action Perspectives.

March and Olsen130 asked what organizations could actually learn in the face of barriers such as superstitious learning and the ambiguity of history. Argyris and Schön also focused on the limits to learning, but argued that these limits could be overcome if people or organizations replace Model I (single-loop learning) with Model II (double-loop learning). Their approach implied a fundamental change in thinking and behavior that could be created only through new kinds of consulting, teaching, and research.131 More than a decade later, Huber's evaluation of the literature still focused on the obstacles to organizational learning from experience and evaluations.132

Without a doubt, organizational learning received its greatest thrust from Senge's The Fifth Discipline (1990). Senge's book synthesized a number of innovative streams of social science (e.g., action science, system dynamics, dialogue) into a vision of the learning organization: "where people continually expand their capacity to create the results that they truly desire, where new and expansive patterns of thinking are nurtured, where collective aspiration is set free, and where people are continually learning how to learn together" (p. 3). The field of organizational learning has injected a rich new terminology into the language of researchers and practitioners alike, including concepts such as double-loop learning, systems thinking, mental models, organizational memory, competency traps, dialogue, tacit knowledge, reflection, defensive routines, absorptive capacity, and knowledge creation. Once
130 J. G. March and J. P. Olsen, Ambiguity and Choice in Organizations (Bergen: Universitetsforlaget, 1976).
131 Chris Argyris, Robert Putnam, and Diane McLain Smith, Action Science: Concepts, Methods and Skills for Research and Intervention (San Francisco: Jossey-Bass, 1985).
132 Huber, "Organizational Learning: The Contributing Processes and the Literatures."

again, given the widespread adoption of OL, these terms have come into wide usage without necessarily conveying consistent meanings.

An important turning point in the literature on organizational learning occurred when Senge reframed organizational learning as the art and practice of the learning organization.133 Senge writes that learning organizations embody five major disciplines; by incorporating these disciplines, organizations can transform themselves into learning organizations, able to overcome obstacles and thrive in today's and tomorrow's markets. Senge's first discipline, systems thinking, involves being able to see the big picture and understanding the interconnectedness of the people, functions and goals of the organization. The second is personal mastery: the idea that individuals within the organization can help it by first becoming clear about their own personal visions, and then focusing on helping the organization succeed. The third is mental models: the differences between individuals' understandings of reality. This discipline is practiced by recognizing that people see the world through their own mental models and then attempting to build shared models within the organization. The fourth discipline, shared vision, builds on the shared mental models theme: involving members of an organization in contributing to and developing the vision will lead to its success. The final discipline is team learning, where individual mastery is shared for the collective learning of the organization; learning for the organization as a whole is greater than the sum of the individual learning of its staff.134

133 Senge, The Fifth Discipline: The Art and Practice of the Learning Organization.
134 Fiol and Lyles, "Organizational Learning."

Individual learning and organizational learning are similar in that they involve the same phases of information processing: collection, analysis and retention. They are dissimilar in two respects: information processing is carried out at different system levels by different structures, and organizational learning involves an additional phase of dissemination.135 One framework that attempts to relate individual- and organization-level learning is the Organizational Learning Mechanism (OLM). OLMs are institutionalized structural and procedural arrangements that allow organizations to systematically collect, analyze, store, disseminate and use relevant information.136 OLMs link learning in organizations to learning by organizations in a concrete, directly observable fashion: they are organizational-level processes that are operated by individuals. The most frequently discussed OLM in the literature is the post-project review, which examines the role of evaluations in informing learning.

The field of organizational learning presents both a challenge and an opportunity, demanding creative research designs conducted by multidisciplinary teams that take into account multiple views of reality137. Multidisciplinary approaches are easy to espouse but difficult to actually produce. The existence of interdisciplinary teams does not necessarily enable social scientists to overcome deeply entrenched

135 P. M. Senge et al., The Dance of Change: The Challenges of Sustaining Momentum in Learning Organizations (New York: Currency/Doubleday, 1999).
136 M. Popper and R. Liptshitz, "Organizational Learning Mechanisms: A Cultural and Structural Approach to Organizational Learning," Journal of Applied Behavioral Science 34 (1998).
137 Ariane Berthoin-Antal et al., Handbook of Organizational Learning and Knowledge (Oxford University Press, 2001).

paradigmatic differences. As Berthoin-Antal et al. pointed out, researchers themselves "need to learn how to learn better . . . they need to apply some of the lessons from the study of organizational learning to their own research practice" (p. 936). In considering the different views of OL highlighted above, several important points of agreement emerge among the different perspectives. There is considerable agreement among the above-mentioned theorists that OL:

Involves multilevel learning: OL needs to consider the individual, group and organization levels of knowledge. Sharing ideas, insights and innovations within these levels is a key component of learning.138

Requires inquiry: Inquiry is a necessary and sufficient condition for OL. Whether inquiry is formal or informal, the cyclical process of questioning, data collection, reflection, and action may lead to generating alternative solutions to problems.139

Results in shared understandings: OL involves shared understanding that integrates lessons about the relationship between actions and outcomes that underlie organizational practices.140

P. Shrivastava, "A Typology of Organizational Learning Systems," Journal of Management Studies 20, no. 1 (1983). 139 J. Dewey, How We Think: A Restatement of the Relation of Reflective Thinking to Educative Process (Lexington, MA: D.C. Heath, 1960). 140 Argyris and Schn, Organizational Learning: A Theory of Action Perspectives.

138

113

Main Constructs
Huber (1991) frames OL through the following constructs:

1. Knowledge acquisition: the process by which knowledge is obtained either directly or indirectly.
2. Information distribution: the process by which an organization shares information among its members.
3. Information interpretation: the process by which distributed information is given one or more commonly understood interpretations.
4. Organizational memory: the means by which knowledge is stored for future use.

Knowledge Acquisition

Organizations engage in many activities that acquire information. These can be formal activities (such as evaluations, research and development, and market analysis) or informal activities (such as reading articles or having conversations). These activities can further be grouped into two distinct learning processes that guide them:

1. Trial-and-error experimentation. According to Argyris and Schön, learning occurs when there is a discrepancy between the expected and the actual outcome. This error detection is considered to be a triggering event for learning.

2. Organizational search. An organization draws from a pool of alternative routines, adopting better ones when they are discovered. Since the rate of discovery is a function both of the richness of the pool and of the intensity and direction of search, it depends on the history of success and failure of the organization.

In simple discussions of experiential learning based on trial-and-error learning or organizational search, organizations are described as gradually adopting those routines, procedures, or strategies that lead to favorable outcomes; each routine is itself a collection of routines, and learning takes place at several nested levels. In such multilevel learning, organizations learn simultaneously both to discriminate among routines and to refine the routines by learning within them.

A familiar contemporary example is the way in which organizations learn to use some software systems rather than others and simultaneously learn to refine their skills on the systems that they use. As a result of such learning, efficiency with any particular procedure increases with use, and differences in success with different procedures reflect not only differences in the performance potentials of the procedures but also an organization's current competence with them. Multilevel learning typically leads to specialization. By improving competencies within frequently used procedures, it increases the frequency with which those procedures result in successful outcomes and thereby increases their use. Provided this process leads the organization both to improve the efficiency and to increase the use of the procedure with the highest potential, specialization is advantageous. However, a competency trap can occur when favorable performance with an inferior procedure leads an organization to accumulate more experience with it, thus keeping experience with a superior procedure inadequate to make it rewarding to use.
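This competency-trap dynamic can be made concrete with a small simulation. The sketch below is a hypothetical illustration in Python; the success probabilities, competence gains and belief-updating rule are assumptions chosen for the example, not parameters drawn from this study.

import random

# Hypothetical illustration of a competency trap. Procedure A has the higher
# intrinsic potential; B is inferior, but the organization starts with some
# favorable early experience (and an inflated belief) about B.
BASE = {"A": 0.60, "B": 0.45}          # assumed intrinsic success probabilities
GAIN_PER_USE, MAX_GAIN = 0.02, 0.30    # assumed competence gained per use, capped

def effective_rate(proc, uses):
    # Effective success rate rises with accumulated competence, up to a cap.
    return BASE[proc] + min(MAX_GAIN, GAIN_PER_USE * uses[proc])

def simulate(trials=300, seed=7):
    random.seed(seed)
    uses = {"A": 0, "B": 10}           # early favorable experience with B
    belief = {"A": 0.50, "B": 0.70}    # beliefs about how well each routine works
    for _ in range(trials):
        choice = max(belief, key=belief.get)   # adopt the routine believed best
        success = random.random() < effective_rate(choice, uses)
        uses[choice] += 1
        # Update the belief as a running average of observed outcomes.
        belief[choice] += (success - belief[choice]) / uses[choice]
    return uses

print(simulate())
# B typically accumulates nearly all the use: its competence-boosted success
# rate keeps reinforcing the belief in it, while A never gains the experience
# needed to reveal its higher long-run potential.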

Information Distribution

Information distribution is a determinant of both the occurrence and breadth of organizational learning. Organizations often do not know what they know. Except for their systems that routinely index and store "hard" information, organizations tend to have only weak systems for finding where a certain item of information is known within the organization. But when information is widely distributed in an organization, so that more, and more varied, sources for it exist, retrieval efforts are more likely to succeed and individuals and units are more likely to be able to learn141. Thus, information distribution leads to more broadly based organizational learning. Program groups with potentially synergistic information are often not aware of where such information could be of use, and so do not route it to those destinations. Similarly, senior managers who could use information synergistically often do not know of its existence or whereabouts. Linking those who possess information to those who need it is what promotes organization-wide learning.

141 K. J. Krone, F. M. Jablin, and L. L. Putnam, eds., Communication Theory and Organizational Communication: Multiple Perspectives, Handbook of Organizational Communication (Newbury Park, CA: Sage, 1987).

Combining information from different programs leads not only to new information but also to new understanding. This highlights the role of information distribution as a precursor to the aspects of organizational learning that involve information interpretation. In addition to traditional forms of information distribution such as telephone, facsimile, face-to-face meetings and memorandums, computer-mediated communication systems such as electronic mail, bulletin boards, computerized conferencing systems, electronic meeting systems, document delivery systems and workflow management systems can facilitate the sharing of information. Studies have shown that such systems increase participation and result in better quality program decisions, since decisions are made by consensus and not by domination142. The development of such information-systems-enabled communities results in better interpretation of information and greater group understanding. More importantly, it enables equal participation at all levels and supports staff learning from each other simultaneously (unlike traditional learning systems, which are usually top-down and time-consuming).

Information Interpretation

Huber143 stated that organizational learning occurs when organizations undertake sense-making and information interpretation activities. The lessons of experience are drawn from a relatively small number of observations in a complex, changing ecology of routines. What has happened is not always obvious, and the causality of events is difficult to untangle.
142 Senge et al., The Dance of Change: The Challenges of Sustaining Momentum in Learning Organizations.
143 Huber, "Organizational Learning: The Contributing Processes and the Literatures."

Nevertheless, people in organizations form interpretations of events and come to classify outcomes as good or bad. Certain properties of this interpretation of experience stem from features of individual inference and judgment144. Individuals make systematic errors in recording the events of history and in making inferences from them. They use simple linear and functional rules, associate causality with spatial and temporal contiguity, and assume that big effects must have big causes. These attributes of individuals lead to systematic biases in interpretation145. Organizations devote considerable energy to developing collective understandings of history. These understandings are translated into, and developed through, story lines that come to be broadly, but not universally, shared146. Some of the more powerful phenomena in organizational change surround the transformation of the status quo and the redefinition of concepts through consciousness raising, culture building, double-loop learning, or paradigm shifts147. Within the evaluation context, interpretation of findings is strongly influenced by the political nature of the organization148. Different groups in an organization often have different targets related to a program and therefore evaluate the same outcome differently. As a result, evaluation findings are likely to be perceived as more negative or more mixed by organizations than by individuals.

144 D. Kahneman, P. Slovic, and A. Tversky, Judgment under Uncertainty: Heuristics and Biases (New York: Cambridge University Press, 1982).
145 Ibid.
146 Daft and Weick, "Toward a Model of Organizations as Interpretation Systems."
147 Argyris and Schön, Organizational Learning: A Theory of Action Perspectives.
148 Levitt and March, eds., Organizational Learning.

Huber149 identifies five factors that affect shared interpretation of information: (1) the uniformity of the prior cognitive maps possessed by the organizational units; (2) the uniformity of the framing of the information as it is communicated, since uniform framing is likely to lead to uniform interpretation; (3) the richness of the media used to convey the information, where communications that can overcome different frames of reference and clarify ambiguous issues in a timely manner are considered richer than those that take longer to convey understanding; (4) the information load on the interpreting units, since interpretation is less effective if the information exceeds the receiving unit's capacity to process it adequately; and (5) the amount of unlearning that might be necessary before a new interpretation can be generated, that is, the process through which learners discard obsolete and misleading knowledge to facilitate the learning of new knowledge.

Organizational Memory

Despite staff turnover, organizational memory is built and sustained through routines like rules, procedures, technologies and cultures. Such routines not only record organizational history but also shape its future path, and the details of that path depend significantly on the processes by which the memory is maintained and consulted. Organizations process vast amounts of information, but not everything is built into memory. The transformation of experience into routines and the recording of those routines involve costs. A good deal of experience goes unrecorded, either because the costs are too great or because the organization assesses the experience as having low value for future actions and outcomes. Examples are when certain experiences are deemed to be exceptions to a rule and are not viewed as precedents for the future.
149 Huber, "Organizational Learning: The Contributing Processes and the Literatures."

Organizations vary in the emphasis placed on formal routines. Innovation-driven organizations rely more heavily on tacit knowledge than do bureaucracies150. Organizations facing complex uncertainties rely on informally shared understandings more than do organizations dealing with simpler, more stable environments. There is also variation within organizations: higher-level managers rely more on ambiguous information (relative to formal rules) than do lower-level managers151. Despite these differences, experiential knowledge, whether in tacit form or in formal rules, is recorded in an organization's memory. However, it will exhibit inconsistencies and ambiguities. Some of the contradictions are a consequence of the inherent challenges of maintaining consistency in inferences drawn sequentially from a changing experience. Others reflect differences in experience, the confusions of history, and conflicting interpretations of that history. These latter inconsistencies are likely to be organized into deviant memories, maintained by subcultures, subgroups, and subunits152. With a change in the fortunes of the dominant coalition, the deviant memories become more salient to action.

150 W. G. Ouchi, "Markets, Bureaucracies, and Clans," Administrative Science Quarterly, no. 25 (1980).
151 R. L. Daft and R. H. Lengel, eds., Information Richness: A New Approach to Managerial Behavior and Organizational Design, Research in Organizational Behavior (Homewood, IL: JAI Press, 1984).
152 J. Martin, Cultures in Organizations: Three Perspectives (New York: Oxford University Press, 1992).

Retrieval of memory depends on the frequency of use of a routine and its organizational proximity. Recently and frequently used routines are more easily evoked than those that have been used infrequently153. The effects of organizational proximity stem from the ways the memory is linked to responsibility. As routines that record lessons of experience are structured around organizational responsibilities, they can be retrieved more easily when referenced through those structures, which act as advocates for those routines154. Availability is also partly a matter of the direct costs of finding and using what is stored in memory. Information technology has reduced those costs and made relatively complex organizational behavior economically feasible, for example in the preparation of reports or presentations or the analysis of financial statements155.

153 Linda Argote, Organizational Learning: Creating, Retaining and Transferring Knowledge (New York: Springer-Verlag, 1999).
154 Ibid.
155 Daft and Lengel, eds., Information Richness: A New Approach to Managerial Behavior and Organizational Design.

Evaluation Use and Organizational Learning


Several authors argue that evaluation findings can have impact not only when stakeholders adopt their conclusions directly, but also when they reflect on their potential and possibilities. Reflecting on the types of evaluation uses, Cousins and Leithwood156 opined that instrumental use results in single-loop learning whereas conceptual use can bring about major shifts in understanding by promoting double-loop learning. Caracelli and Preskill157 hypothesized that evaluation utilization has significant potential for contributing to organizational learning and systematic change. They suggest that including stakeholders in the planning and implementation of the evaluation gives them opportunities to be reflective, share and build interpretations (conceptual use) and finally put findings into action (instrumental use). Levitt and March158 framed three premises about organizational behavior that shape how learning occurs:

1. Behavior in an organization is based on routines. Actions are driven by matching existing procedures to situations rather than by intention-driven choices.

2. Organizational actions are history-dependent. Routines are based on interpretations of the past more than anticipations of the future.

3. Organizations are oriented to targets: their behavior depends on the relation between the outcomes they observe and the aspirations they have for those outcomes. Sharper distinctions are made between success and failure than among gradations of either.
156 J. Bradley Cousins and Kenneth A. Leithwood, "Current Empirical Research on Evaluation Utilization," Review of Educational Research 56, no. 3 (1986).
157 Valerie J. Caracelli and Hallie Preskill, "The Expanding Scope of Evaluation Use," New Directions for Evaluation, no. 88 (2000).
158 B. Levitt and J. G. March, "Organizational Learning," Annual Review of Sociology, no. 14 (1988).

Within such a framework, organizations are seen as learning by encoding inferences from history into routines that guide behavior. The generic term "routines" includes the forms, rules, procedures, conventions, strategies, and technologies around which organizations are constructed and through which they operate. Routines are independent of the individual actors who execute them and are capable of surviving considerable turnover in individual actors. Routines are transmitted through socialization, education, imitation, professionalization and personnel movement. Evaluation is a key mechanism that allows an organization to assess these routines and provide feedback for improvements. Levitt and March recognized that even though routines are independent of individuals, to bring about changes in routines the organization needs to involve not only the individuals who directly perform the routines but also those who rely on them indirectly. The general expectation is that evaluation utilization will become common when it leads to favorable routines.

Learning occurs best among individuals who regard the information they are reviewing (i.e., evaluation findings) as credible and relevant to their needs. Involving stakeholders in designing and conducting an evaluation helps assure their ownership of, and interest in, its findings. Learning also occurs best among individuals who have an opportunity to ask questions about evaluation methods, consider other sources of information about the topic in question (including their own direct experiences), and at the same time hear others' perspectives.159

A learning approach to evaluation is contextually sensitive and ongoing, and supports dialogue, reflection, and decision making based on evaluation findings160. The authors conclude that the primary purpose of an evaluation is to support learning that can ultimately lead to effective decision making and improvement in departmental, programmatic, and organization-wide practices. They argue that to achieve learning, the evaluation planning must:

- Consider the organizational context (stakeholders' needs, political realities, etc.)
- Be conducted often enough to become an organizational routine
- Actively engage stakeholder participation in planning and interpretation

A learning approach can be taken with any kind of evaluation. The key considerations in learning from an evaluation are that the findings remain relevant and credible to potential users and that there are processes to facilitate action-oriented use. This means establishing a balance between accountability and learning roles for evaluation.161

159 Rosalie T. Torres, "What Is a Learning Approach to Evaluation?," The Evaluation Exchange VIII, no. 2 (2002).
160 R. T. Torres and H. Preskill, "Evaluation and Organizational Learning: Past, Present and Future," American Journal of Evaluation 22, no. 3 (2001).
161 R. T. Torres, H. Preskill, and M. E. Piontek, Evaluation Strategies for Communicating and Reporting: Enhancing Learning in Organizations (Thousand Oaks, CA: Sage, 1996).

Chapter 3 provided a review of evaluation use and organizational learning theories. The chapter also discussed the challenges in evaluation utilization specific to the NGO sector and addressed several of the research questions that guided this study. It also highlighted themes that were built into the practitioner survey, summarized below: (1) the different types of uses and their relative importance; (2) the human factors that influence use, such as the role of stakeholders and user biases and interests; (3) the evaluation factors that influence use, namely the quality, structure, content and timing of evaluations; and (4) the organizational factors that influence use, including decision-making models, organizational learning frames, and systems and tools to enable use.

Chapter 4 presents the responses from the survey, which add to the knowledge gathered in the literature review. Together they informed the utility model presented in Chapter 5.

Chapter 4: Presentation of Survey Results

The purpose of this chapter is to present the results from the survey of 111 staff from 40 NGOs. The data is presented in three sections corresponding to the different stages of an evaluation: planning, execution and follow-up. The survey was used to collect information about how organizations use evaluations; how the factors that trigger evaluations are applied throughout the lifecycle of an evaluation; and what systems and processes currently support use and how they can be improved.

Note: the number at the beginning of each table represents the corresponding question number on the survey.

Stage 1: Evaluation Planning

The questions below attempt to understand how the concept of utilization is incorporated in the planning of an evaluation. They explore how respondents define intended users and intended uses, and the involvement of users in planning.

Table 4.1 and the corresponding Chart 4.1 show how respondents grouped intended users. All selected Program Staff, highlighting the importance of those working at the program level as essential users of findings. Respondents also indicated senior management as an important user group, at 81%. Donors came in third at around 66%. Fewer cited Board members (27%) and beneficiaries (27%).

Table 4.1 Intended users grouping
#7: Who do you consider as a potential user of program evaluations? You can make multiple selections.

Answer Options                              Response Percent   Response Count
Program Beneficiaries                       27.0%              30
Program Staff                               100.0%             111
Senior Management                           81.1%              90
Board                                       27.0%              30
Donors                                      65.8%              73
Issue Experts (outside the organization)    40.5%              45
Others (please specify)                     0.0%               0

Chart 4.1 Intended users grouping

(Bar chart displaying the response percentages from Table 4.1 by user group.)

The survey sought to identify each stakeholder group's involvement during the evaluation planning phase. Respondents were asked to select only one response per group, the one that most closely represented the average. As seen in Table 4.2, program staff were involved more than 50% of the time, while senior management was close behind. What stands out is the involvement of donors in planning an evaluation: they were in the lower range (54% of respondents indicated less than 20% of the time). This is interesting, as in the previous question they were identified among the top three potential user groups of evaluations.

Table 4.2 Involvement of potential users in planning an evaluation


#8: How often are potential users involved in planning an evaluation?

Answer Options                              < 20% of the time   Between 20%-50%   Between 50%-80%   > 80% of the time
Program Beneficiaries                       63%                 23%               10%               5%
Program Staff                               0%                  18%               55%               27%
Senior Management                           9%                  48%               32%               12%
Board                                       81%                 16%               3%                0%
Donors                                      54%                 24%               16%               5%
Issue Experts (outside the organization)    84%                 14%               3%                0%

In the next question, over three-quarters of respondents (77%) agreed on the importance of involving potential users in planning an evaluation.

Table 4.3 Importance of involving potential users
#9: What do you think of the following statement: "Evaluations get used only if potential users are involved in the planning of the evaluation"

Answer Options       Response Percent   Response Count
Strongly Agree       46.0%              51
Somewhat Agree       31.0%              34
Somewhat Disagree    19.0%              21
Strongly Disagree    4.0%               4
Total                100%               111

In the question on use, respondents anticipated using the evaluation results in a variety of ways, the most common being for program improvement. Although addressing donor needs was cited in the literature as a major reason for evaluations, the responses reveal that respondents rank it on par with or lower than program improvement and assessing the impact of the organization. The survey also asked respondents to provide an example of how the evaluation results were used. A total of 72 examples were provided by the 111 respondents. Analysis reveals that 54% of the examples can be classified as using results as a basis for direct action, 34% as using them to influence people's thinking about an issue, 7% as using them for donor compliance, and 5% as pertaining to understanding program measurement in general. Of the examples grouped under direct action, 65% were to improve program processes, 21% to inform strategic and program planning, 13% to make funding allocations and 1% to reorganize staff. The influence examples consisted primarily of ways that the organization used results to obtain or justify funding from donors. A few used them to inform the field, such as through conferences and publications.
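The tabulation behind these percentages can be illustrated with a short script. This is a sketch only; the category codes and counts below are hypothetical stand-ins for the manual content analysis that was actually performed, chosen to roughly reproduce the reported shares.

from collections import Counter

# Hypothetical stand-in for the 72 coded open-ended examples, tallied into
# the broad use categories described above.
coded_examples = (["direct action"] * 39 + ["influence thinking"] * 24 +
                  ["donor compliance"] * 5 + ["program measurement"] * 4)

counts = Counter(coded_examples)
total = len(coded_examples)
for category, n in counts.most_common():
    print(f"{category}: {n} of {total} ({n / total:.0%})")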

Table 4.4 Uses of program evaluations
#10: What are program evaluations mostly used for? Rank the following with 1 being most important and 4 being least important.

Answer Options                               1     2     3     4     Response Count
Program course correction                    63    26    16    6     111
Report to funder/donor                       11    42    58    0     111
Inform beneficiaries                         11    0     5     95    111
Understand overall impact of organization    21    42    35    13    111

Chart 4.2 Uses of program evaluations


(Stacked bar chart of the ranking distribution from Table 4.4, by answer option and rank.)

The next question provides insight into the factors that influence use. Responses indicate that involvement of senior management (80%), whom we could interpret as key internal decision makers, and donors (91%) significantly increases use, while a lack of interest among staff (58%) and poor quality of the evaluation (48%) lead to low use. A majority of respondents (62%) expressed the view that in the absence of a policy or process to guide use, findings are not utilized, which might signify an opportunity to increase use if such a policy or process were in place. Also, resource constraints, often cited in the literature as a reason for non-use, do not factor strongly in the respondents' view (85% consider them neutral).

Table 4.5 Criteria that impact evaluation use
Question #11: How do you think the following criteria impact evaluation use? (A selection is required for each item on the list.)

Answer Options                                              Eval not used   Neutral - no impact on use   Eval used   Response Count
Evaluation findings that are too critical of the program   11              90                           10          111
Low quality of the evaluation content and report           48              42                           21          111
Recommendations are unclear or articulated badly           53              48                           10          111
Time and budget constraints within the organization        21              85                           5           111
Staff's lack of interest in the program or evaluation      58              21                           32          111
Involvement of senior management in the evaluation         5               26                           80          111
Involvement of program donors in the evaluation            0               20                           91          111
There is no process/policy to guide evaluation use         62              27                           22          111

Chart 4.3 Criteria that impact evaluation use

(Grouped bar chart of the Table 4.5 criteria across the categories Not used, No impact on use, and Used.)

Stage 2: Evaluation Implementation

The questions below attempt to understand how users and uses are factored in during the course of the evaluation.

On the question of how often respondents have participated in planning their program evaluations, respondents indicated a high level of participation in the planning and finalizing stages of an evaluation and relatively lower involvement in implementation. Notably, while 48% of respondents indicated in the previous question that low quality of the evaluation results in non-use, the table below suggests that very few of them are involved in designing the methodology of the evaluation, which influences the quality and rigor of the study.

Table 4.6 Participation in evaluation planning
#13: How often have you participated in each of the following planning activities of an evaluation?

Answer Options                    Never or very little   Around 25%   Around 50%   Around 75%   Almost all the time
Setting evaluation objectives     0%                     0%           19%          30%          51%
Selecting the evaluator           5%                     10%          15%          29%          41%
Designing of methodology          10%                    26%          38%          12%          14%
Conducting the evaluation         14%                    9%           7%           51%          18%
Analyzing/interpreting the data   0%                     4%           23%          30%          43%
Designing the report              10%                    23%          41%          19%          7%

The next question explores the relative importance of the various components of an evaluation report. 83 of the 111 respondents (75%) ranked the analysis and recommendations as the most important aspect of a report, and 67 (60%) ranked the follow-up steps targeting use as the next most important.

Table 4.7 Evaluation report interests
#17: Rank the following in their order of importance (1 being the most important - you are required to assign a unique rank to each line): "In the evaluation report of your program, you are interested in...."

Answer Options                                   1     2     3     Response Count
The research methods                             17    33    61    111
The analysis and recommendations                 83    11    17    111
The follow-up of how you can use the findings    11    67    33    111

Chart 4.4 Evaluation report interests


(Bar chart of the rankings in Table 4.7, by answer option and rank.)

The responses below support the literature that says periodic and consistent evaluations of programs promote use. This question was framed to gauge the respondents' consideration of timing as a factor that influences use. 57% indicated that evaluations are currently done only at the completion of projects, while a nearly identical number (55%) indicated that mapping evaluations to key program decision-making cycles is the ideal. While there is always a decision to be made at the end of a project on whether to continue funding, the responses on the ideal model reflect a need to also schedule interim evaluations around program decision points.

Table 4.8 Program evaluation timing
#18: When should a program be evaluated to promote use of findings?

Answer Options               Ideal Model   Current Practice
More than once a year        3%            13%
Annually                     10%           9%
At key program milestones    55%           22%
At the end of the program    32%           57%

While there is strong support for tailoring evaluation recommendations to users, respondents seem more balanced when it comes to formatting multiple reports. This relates well to the literature, which suggests that use is promoted if there is information relevance and specificity. As for tailoring reports, if an organization has an active internal information exchange then evaluation findings can be extracted into specific formats to meet user requirements when the need arises. As respondents' preferences indicate, maintaining a minimal number of reports customized to users can ensure uniform interpretation of findings.

Table 4.9 Evaluation reports expectations
#16: How often do evaluation reports meet your expectations?

Answer Options               Response Percent   Response Count
Less than 20% of the time    9.5%               11
Between 20% - 50%            47.6%              53
Between 50% - 80%            33.3%              37
More than 80% of the time    9.5%               11
Total                        100%               111

Table 4.10 Evaluation recommendations specificity #1
#14: "Evaluation recommendations must come with specific recommendations for specific users"

Answer Options       Response Percent   Response Count
Strongly Agree       65.0%              72
Somewhat Agree       30.0%              33
Somewhat Disagree    5.0%               6
Strongly Disagree    0.0%               0
Total                100%               111

Table 4.11 Evaluation recommendations specificity #2
#15: "In order to promote use, there needs to be multiple versions of the evaluation report - matching findings with user interests/needs"

Answer Options       Response Percent   Response Count
Strongly Agree       25.0%              28
Somewhat Agree       45.0%              50
Somewhat Disagree    30.0%              33
Strongly Disagree    0.0%               0
Total                100%               111

Stage 3: Evaluation Follow-Up

The questions below attempt to understand the organizational context and extract the contextual barriers to using evaluation findings. The responses highlight the value stakeholders, in this case potential users, place on evaluation follow-up and the importance of allocating resources towards that activity. This response is also in sync with what respondents expressed in Question #17 on the importance of follow-up actions towards use within the report. Comments underscored the significance of evaluation as a strategic management tool. Respondents reflected that when used effectively, evaluations promote a culture of organizational learning and enhance accountability for results. Some specific comments called for the organization's management to give careful consideration to evaluation findings, recommendations and lessons learned.

Table 4.12 Evaluation follow-up
#12: "The costs of investing in an evaluation follow-up process outweigh the benefits"

Answer Options       Response Percent   Response Count
Strongly Agree       46.0%              51
Somewhat Agree       31.0%              34
Somewhat Disagree    19.0%              21
Strongly Disagree    4.0%               4

Responses related to decision-making models indicate a strong preference for a model that allows the team to generate ideas and have discussions, but then has a designated leader make the final decision (66%). However, in practice there seems to be a strong split between this and a model where the designated leader makes all decisions without consulting group members (40%). The literature informs us that such a model is not conducive to group ownership of evaluation results and/or learning from them.

Table 4.13 Decision-making models
#19: Of the decision-making models below, which do you think promotes evaluation use? And which model is practiced within your program? (You can select the same model for both questions.)

Answer Options                                          Ideal Model   Current Practice
Decision by averaging team members' opinions            0.0%          0.0%
Decision by majority vote                                4.0%          12.0%
Decision by team consensus                               22.0%         12.0%
Decision made by authority after group discussion       66.0%         32.0%
Decision made by authority without group discussion     2.0%          40.0%
Decision made by evaluation expert / evaluator           6.0%          4.0%
Other (please specify)                                   0.0%          0.0%

The literature often cites the power and influence of donors with respect to evaluations and program decision making. Respondents support this finding. However, they ranked changes in the organizational mandate as the lead driver of program changes. What this question does not reveal is the role of evaluation findings in influencing a change in the organization's mandate. Is there a link between evaluations and organization-level learning?

Table 4.14 Drivers of program change
#20: What drives program changes? Rank the following with 1 being the most important.

Answer Options                      1     2     3     4     Response Count
Change in organizational mandate    63    18    12    18    111
Donor requests                      23    53    12    23    111
Client/beneficiary requests         23    23    24    41    111
Evaluation findings                 0     18    64    29    111
Other (please specify)              0     0     0     0     0

Chart 4.5 Drivers of program change


(Stacked bar chart of the ranking distribution from Table 4.14, by driver and rank.)

The responses below indicate the opportunity this research presents to NGOs: even if it does not lead to an organization-wide approach to evaluation use, there might be ways in which current practices and policies can be enhanced. The next set of questions indicates respondents' strong preference for a model that links evaluation use to the organization level, going beyond program-level engagement. An observation here is that despite the long history of evaluation use theory, pressures from donors and the resources spent, few organizations appear committed to a formal evaluation system that maximizes use.

Table 4.15 Prevalence of evaluation use process
#21: Is there a process to evaluation use in your organization?

Answer Options (Response Percent / Response Count):
- Yes, we have a formal process where evaluation reports are shared, reviewed, analyzed and findings applied, where applicable: 16.0% (18)
- No, it's up to the individual staff members to do as they please with the evaluation report: 24.0% (27)
- Some departments have a formal process, some don't; there is no organization-wide policy: 56.0% (62)
- Other (please specify): 4.0% (4)
Total: 100% (111)

Other (please specify) responses:
- We have a formal policy but implementation is not as systematic as it should be
- We are establishing formal processes
- No formal process (i.e. policy) but all items in point 1 still apply
- Program is stand-alone, so our use of evaluation reports is very localized

The following questions were open-ended, to capture respondent feedback on organizational processes that influence evaluation use.

#22: Please answer the following in the space provided using your own words: What is the most effective tool or method to keep evaluation findings current in organization memory?

All respondents answered this question. The comments are grouped into the following five categories: Policy, Systems, Relevance, Accountability and Transparency.

Figure 4.1 Tools to keep evaluation findings current in organization memory

Policy: 43% of comments fall under this category. Suggestions included formal processes or organizational policy offering structured guidance on how to incorporate evaluation findings into future program planning, and the creation of a process that continuously reviews findings and adapts them into the next planning cycle.

Systems: 27% commented on the need for tools and technology that allow for storage and easy retrieval of learning, and stressed the importance of investing in supporting systems that build efficiency into staff processes.

Accountability: 13% recommended building utilization into individual staff work plans. Responses called for clear structures of responsibility within the organization to successfully track compliance. Some proposed a carrot-and-stick approach: they felt that improved quality and targeted dissemination of findings may be insufficient to promote use, and that incentive structures need to be built into the system, or penalties established for not considering use (recognizing that there could be legitimate reasons for non-use).

Relevance: 7% suggested linking the role of evaluation and its findings to the overall mission of the organization. While evaluations address specific program issues, tying the findings to the strategic questions posed at the organization-level can increase the relevance and acceptance of findings among staff.

Transparency: 5% suggested that sharing findings throughout the organization could lead to informal learning and cross-checking that would keep findings in memory. The comments ranged from simply making the findings available on an easily accessible platform to calls for structured and targeted dissemination that guides individual staff in their planning.

#23: Please complete this sentence: Any process or model that is adopted to increase evaluation use MUST consider the following...

Respondents were asked to describe what process can increase use in their own words. 96 comments were offered. The results are grouped into the following categories: People, Systems and Organization.

Figure 4.2 Processes that can increase use

People: Representation of ALL stakeholders; Buy-in from decision-makers
Systems: Simple and practical; Flexibility; Quality
Organization: Commitment to use; Resource allocation; Ongoing learning

People: Within this category, 56% of the comments identified stakeholder involvement as critical. 32% emphasized that buy-in from decision-makers will ensure that required resources are allocated for follow-up activities. The rest of the comments included clarity of staff roles in use, access to evaluation experts and the role of senior management and leadership as champions of evaluation use.

Systems: 42% of comments called for a system that is easy to use and simple to implement throughout the organization. 38% focused on the quality of the system and 12% reflected on a model that would be easy to maintain. Comments included the need to manage information overload and to consult users to identify preferred communication styles and desired content.

Organization: The majority of comments (82%) related to the importance of the organization's commitment to using findings and to learning overall. Some comments also related to the organization's commitment of resources for evaluation follow-up activities, i.e. dissemination of findings, building shared interpretation and tracking utilization. Disclosure of negative or controversial evaluation findings can obviously create difficulties for the organization; however, a few respondents took the view that the long-term benefits of disclosure outweigh the short-term setbacks. Greater disclosure can enhance the credibility of the organization and boost the validity of favorable findings. Respondents called for evaluation use to become more reliable and systematic. As one respondent put it, organizations need to emphasize that "learning is not optional."

#24: Please provide ONE reason why you would or would not refer to a past evaluation during program planning.

There were 83 responses to the "would refer" question and 91 to the "would not refer" question. The responses are once again grouped, based on content analysis, into categories that reflect a significant percentage of the responses.

Figure 4.3 Reasons why evaluations get referred or not

WOULD REFER: Ongoing Program; Organizational Practice; Increased Issue Knowledge; High Quality Results
WOULD NOT REFER: Concluded Program; Irrelevant Findings; Capacity Constraints; Lack of Guidance

Would refer when:
(1) The evaluation was conducted mid-stream of the program. (63%)
(2) The organization has a process and practice to use findings. (24%)
(3) The findings from the previous evaluation increased issue/program knowledge. (6%)
(4) The quality and content of the past evaluation is good. (7%)

Would not refer when:
(1) The program has concluded or is in its final stage. (19%)
(2) The findings are of poor quality and recommendations are not practical. (52%)
(3) There is no policy or process around evaluation follow-up and/or learning. (23%)
(4) There are time and resource constraints. (6%)

Chapter 5: The Utility Model

The purpose of this chapter is to present the evaluation use model that was developed using the literature review and the data gathered. The chapter begins with an explanation of the utility model. This is followed by a list of practical steps on how NGOs can implement this model. Building on the review of literature, past practice and the survey of practitioners, this model is innovative in that it incorporates realities external to the project that influence evaluations. Traditionally, NGOs approach evaluation and utilization within the purview of the specific program. Questions revolve around what the evaluation seeks to accomplish, the data collection methods, the qualifications of the evaluator and the publication of the findings. While these are all necessary steps to aid utilization, this study has found them to be insufficient. The utility model provides a unique insight into NGO evaluation practice by weaving two key links into the current thinking: human and organizational factors. Incorporating the intended users' interests and capabilities increases how evaluations get used and re-used. And linking evaluation to the organization level shifts the view of evaluation utilization from a narrow, program-restricted lens to one that impacts the effectiveness of the entire organization. While some organizations may already be doing this, the findings from this research point to a marked lack of understanding in the NGO community of how to increase evaluation use. The model provides a list of factors that influence use and the practical steps that can be implemented to increase use.

Explanation of Model

Figure 5.1: The Utility Model
(Diagram: at the core is USE (conceptual, instrumental, process and strategic), surrounded by the three categories of factors that increase use: Human Factors (Intended Users, Interests/Biases, Professional Capabilities); Evaluation Factors (Evaluation Procedures, Substance of Information, Reporting); and Organizational Factors (Organizational Culture, Routines and Processes).)

The Core is USE

Understanding how evaluations are used is the focal point of this model. Evaluation findings serve three primary purposes: rendering judgments, facilitating improvements and generating knowledge. These need not be conflicting purposes; there can be overlap among them and in some cases an evaluation can strive to achieve all three. What becomes important is to understand the purpose of the evaluation in order to determine intended uses.

Evaluations that seek judgment are summative in nature and ask questions that lead to instrumental use. Did the program work? Were the desired client outcomes achieved? Was implementation in compliance with funding mandates? Should the program be continued or ended? In such evaluations, primary intended users are donors, program staff and decision-makers closely related to the program, who can use findings for direct course corrections. Improvement-oriented evaluations, on the other hand, are formative in nature. Instead of offering judgments they seek to facilitate learning and make things better. The questions tend to be more open ended and lead to process and strategic use. What are the program's strengths and weaknesses? What are the implementation challenges? What's happening that wasn't expected? How are stakeholders interacting? How is the program's external environment affecting internal operations? Where are efficiencies realized? Intended users of improvement-oriented evaluations tend to be donors, program managers, senior management and the Board.

Where evaluation findings contribute to increasing knowledge, they invoke conceptual use. This can be to clarify a model, prove a theory, generate patterns of success or explore policy options. Conceptual use enlightens users often beyond the program team, including the Board, donors and the larger issue-area community. The knowledge generated is applied beyond the effectiveness of a particular program to policy formulation in general, in the form of sharing best practices. Studies of use also indicate that individuals and organizations can learn through the process of an evaluation, irrespective of the findings. An increasing prevalence and recognition of such process use, like participants' increased ownership of evaluation findings, greater confidence in the evaluation process and in applying the results, and evidence of personal learning, combine to produce a net subsequent effect on program practice.

Expanding out, the model frames three categories that influence use: (1) Human (2) Evaluation (3) Organizational

As seen from the literature and survey, there are several factors that play a role in increasing use. The eight framed in this model were drawn from these and further developed to capture the key characteristics that influence use. The survey of practitioners validated the importance of these factors and in particular contributed to the development of the organizational culture factor. Irrespective of the size or complexity of the NGO, this research puts forth the notion that if these eight factors were triggered the organization would observe a significant increase in evaluation use. A question arises as to what happens if any one of the factors is not present. The eight factors were identified and developed as capturing unique aspects of influencing use, so this research proposes that all of the factors must be present to maximize use. That said, it is logical to conclude that implementing the processes and procedures that enable these factors takes time and resources, and organizations have to balance this need with other competing priorities. The depth and breadth of engaging these factors depends entirely on the complexity of the NGO's programs, the magnitude of its operational and organizational structure, and the availability of resources (staff, time and funding). However, this model proposes that until all of the factors have been engaged, at their fullest level within the context, the NGO is not maximizing its evaluation utilization. Organizations can measure their progress by conducting a stocktaking of the current state of these factors in the evaluation process and tracking their growth over time to see if they track increased use. For example, at baseline, identifying intended users could be occurring in 60% of evaluations while an organizational culture towards learning might be non-existent. Let us say, hypothetically, that continuing to strengthen the involvement of users while building a learning organization could in a year increase these to 80% and 40% respectively, while all other factors are held constant. Then the organization would be able to observe an increase in its evaluation utilization. Similarly, when all of the factors are engaged at the highest level within the organization it would have reached its maximum utilization potential.
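As a sketch of what such a stocktaking might look like in practice, the snippet below records the share of evaluations in which each factor is engaged at baseline and at a later review and compares a simple average across the eight factors. The scoring scheme is an assumption for illustration; the model itself does not prescribe a formula.

# The eight factors of the utility model, grouped by category.
FACTORS = [
    "intended users", "interests/biases", "professional capabilities",      # human
    "evaluation procedures", "substance of information", "reporting",       # evaluation
    "organizational culture", "routines and processes",                     # organizational
]

# Share of evaluations in which each factor was engaged (hypothetical figures
# echoing the example above); factors not yet assessed default to 0.
baseline  = {"intended users": 0.60, "organizational culture": 0.00}
follow_up = {"intended users": 0.80, "organizational culture": 0.40}

def engagement_index(snapshot):
    """Unweighted average engagement across the eight factors (an assumed
    scoring rule; a weighted index would be equally plausible)."""
    return sum(snapshot.get(f, 0.0) for f in FACTORS) / len(FACTORS)

print(f"Baseline index:  {engagement_index(baseline):.2f}")
print(f"Follow-up index: {engagement_index(follow_up):.2f}")
# Tracking this index alongside observed evaluation use over time is one way
# to check whether rising factor engagement coincides with increased use.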

HUMAN FACTORS That Increase Use

Intended Users

Involving intended users in the evaluation process is a key factor in increasing use. This was cited throughout the literature and also validated in the practitioner survey. Given a range of potential stakeholders, use can be maximized when they represent all levels of the program decision-making hierarchy, as each group uses evaluation findings differently. Excluding program staff could potentially affect instrumental use; excluding leadership/senior management could affect strategic and conceptual use. Another important user group identified was the donor. While most evaluations are conducted at the behest of the donor, actively involving them in the evaluation process suggests an increase in the use of findings within the organization, for example to support ongoing fundraising efforts. Engaging donors also expands use outside the organization to influence the larger issue area. Depending on the desired level of use, intended users need to be identified, prioritized and included in the evaluation planning.

The practitioner survey identified the following as the top three users of evaluation findings: program staff, senior management and donors. The graphic below illustrates examples of how different decision-making groups can use evaluation findings.

Figure 5.2 Evaluation use and decision-making groups

Evaluations assist NGO leadership and senior management in deciding the future of programs: whether they are meeting objectives and advancing the mission of the organization, and whether the resources (staff and money) allocated to programs are proportional to the impact of those programs. Decisions about program and strategic expansion are also informed by evaluations. On a programmatic level, evaluations help staff track program progress and effectiveness. They can also highlight opportunities to realign resources and leverage opportunities to maximize results. For funders, the key use of evaluation is to determine the continued support of a program. But on a larger level it can also inform and educate donors on best practices and effective interventions within their issue area of interest.

Interests and Biases

With the involvement of multiple users comes the challenge of balancing individual and group interests in promoting use. The politics of use must be recognized and managed during all phases of the evaluation. The survey respondents indicated that a lack of interest in the evaluation by users results in the findings not being used. Irrespective of the size of the program or the depth of the evaluation, user interests may be divergent: some focusing on efficient use of resources, others on the impact of the program and actual results, still others on the process of evaluation and organizational learning. Capturing and communicating intended uses while planning for the evaluation can promote shared understanding and manage disappointments in the final deliverable. It is important to pay special attention to negative predispositions among intended users. Some reasons for such a reaction could be past critical evaluations, non-involvement or cursory participation, earlier findings not being applied, or time and cost constraints. Acknowledging these during initial user consultation and planning can lead to a realistic framework for use. Finally, no matter what the program size or complexity, if the potential for use of an evaluation among stakeholders does not exist then organizations should seriously consider not conducting the evaluation.

Professional Capabilities

The extent of administrative and organizational skills present in users influences evaluation use. Some users may be organizers, some procrastinators, and some unable to get tasks finished. Additionally, the alignment of user capabilities with types of uses can yield different results. For example, conceptual use requires an ability to grasp and develop a new idea or method. Strategic use occurs when users are open to new ideas or change. User ability and inclination to receive and process information also affect use, for example whether findings are shared electronically (impersonal) or in face-to-face or group meetings. While this aspect was not included in the practitioner survey, the literature strongly supports the view that understanding user capabilities can lead to better utilization planning. Organizations can focus staff training around frequently used procedures that influence use.

EVALUATION FACTORS That Increase Use

Evaluation Procedures

Once intended uses are identified, the evaluation plan needs to reconcile these with the evaluation objectives. Active participation by intended users is essential, along with continual (multi-way) dissemination, communication and feedback of information during and after the evaluation. By involving users in key evaluation decision-making there is an increase in user ownership of results and application of uses. Participation in the formulation and interpretation phases of the evaluation helps increase use by increasing evaluation relevance and user ownership of results. Individuals and organizations are more disposed to change if they are familiar with the information and mentally prepared. The involvement of senior managers and decision-makers has traditionally come only at the final reporting stage; in some cases, the sudden exposure to proposed changes that are complex and politically challenging increases the risk of rejection. The quality of research methods and the application of rigor also influence intended users, as evident from the survey. Within this context, however, it is important during evaluation planning to be mindful of the difference between theoretical perceptions of rigor and those of the user. For example, a user might be more concerned with how beneficiaries were interviewed than with whether the answers were statistically analyzed.

Substance of Information

Survey respondents strongly agreed that use is promoted if there is information relevance and specificity. This becomes important when there are multiple groups of intended users for one evaluation. A single evaluation report may not promote use at all levels, so matching findings to users emerged as important. Building consistency among users, while at the same time sharing relevant and pertinent information, can present a challenge to organizations. Linking those who possess information to those who need it is what promotes organization-wide use. Combining findings from different programs leads not only to new information but also to new uses. Timing of evaluations also emerges as an influencing factor: use increases when the release of findings coincides with key decision-making cycles. If recommendations are made after the next project cycle has begun, they may have very little instrumental use; however, there is always an opportunity for conceptual use that links to overall learning within the organization.

Reporting

When it comes to reporting evaluation findings, the survey indicates that besides targeted content the style and presentation of findings must also be targeted to users. Time and again, excessive length and inaccessible language, particularly evaluation jargon, are cited as reasons for non-use. Reports need to strike a balance between building credibility for the process and conveying messages for action. Program-level users might value detailed statistical data to inform instrumental use, while senior management, Board and donors may seek a balanced mix of quantitative and qualitative information to guide conceptual use. A balanced mix of graphics (tables, charts, figures), technical presentation and non-technical narrative enhances the use potential of reports. If an organization has an active information exchange with peers or issue networks then evaluation findings need to be presented in a specific format that supports this shared use. Successful organizations are able to strike a balance between user needs and the uniform interpretation of findings.

ORGANIZATIONAL FACTORS That Increase Use

Organizational Culture

As evident from the responses in the survey, policy (to promote evaluation use), systems (to enable use), relevance (linking findings to the organization's mission), accountability (making use part of staff work plans) and transparency (acceptance of findings) highlight the close links between organizational context and evaluation use. These include processes that enable use; inclusive and participatory models of decision-making; facilitated conceptual and strategic use (beyond programmatic use); and organization-wide commitment to use. Evaluation findings might jeopardize the funding and future of the programs being evaluated. The extent of the organization's tolerance for failure and focus on learning will affect the extent of use. In an environment where learning is encouraged and facilitated, utilization of evaluations flourishes. An open commitment to use within the organization can shift potential users to become actual users.

Routines and Processes

In any organization, over time, program procedures and expectations get institutionalized as routines, making them work habits. By reviewing and restructuring these routines NGOs can build an environment conducive to use. For example, instituting processes to capture and retrieve memory contributes to periodic reinforcement of findings and promotes cycles of use. Investing in systems can facilitate the sharing and interpretation of information, which in turn increases intended user participation and results in higher utilization. The routine of conducting stakeholder analysis prior to any evaluation planning is another example where routines and processes can help reinforce the other factors that influence use.

Summarizing the model, the evidence unearthed in this study indicates the need to consider external realities that play a significant role in influencing evaluation use. For decades, organizations have focused on streamlining and refining the evaluation process and methodology within a program context to increase use; those efforts have resulted in only marginal success. Without actively incorporating the human and organizational factors outlined above, NGOs will continue to struggle to maximize evaluation use, and the gap between the resources expended in conducting evaluations and the value realized from those efforts will persist.


Steps to Implement the Model

This section explains how the utility model described above can be made operational. It identifies the practical steps that organizations can take to trigger each of the eight factors in the model. The table below presents an overview of these steps, mapping them to the factors. The columns represent the eight factors grouped within the three categories: human, evaluation and organizational. The rows list the practical steps. An X marks the factors that are triggered through a particular action step. Following the table, there is a description of each step.

While analyzing the steps and mapping them to the factors, it became clear that they could be divided into two groups: action steps that happen at the program level and those that happen at the organizational level. For example, conducting stakeholder analysis is done by staff who work closely at the individual program level, as this action will be unique to each program. On the other hand, investing in technology and tools is an action that benefits all programs and is implemented at the organizational level. Grouping the action steps at the program and organization levels brings to the forefront the types of staff, the level of engagement and the depth of resources that need to be involved in implementing them. Some actions can be implemented immediately and do not incur significant additional costs (for example, defining ongoing user engagement in the evaluation), while others have to be factored into the organization's long-term operations and budgetary planning (for example, staff training). Therefore this grouping can assist the NGO to prioritize and customize the implementation of the action steps according to its needs and resources.

In mapping which action triggers which factor, the table identifies a particular pattern. It allows decision-makers to better capture (and plan for) the action steps that lead from one to another, cascading forth to increase the utilization of evaluations. In other words, a particular action step may trigger more than one of the factors in Table 5.1, with one step stimulating another. To take a relatively simple, linear example:

1. mapping intended uses to intended users captures a user's interests and biases about a program, which might
2. result in presenting reports with specificity that make her attitude toward the program more positive, which might in turn
3. facilitate reuse and lead her to take on the interpersonal role of a change agent within her organization, which might
4. link to organizational learning and result eventually in reconsideration of organizational policy.

There could be several alternate interpretations of how these actions trigger the factors, resulting in different patterns. So it is important to note that this table shows a set of relationships that is neither finite nor linear. The larger interest is to identify steps that trigger the eight factors that influence use. The action steps in this study provide one approach; it is by no means the only method to achieve the desired objectives.


Table 5.1 Mapping practical steps to the factors that influence evaluation use
Factors that influence evaluation use (the table columns), grouped into three categories:

Human: intended users; interests and biases; professional capabilities.
Evaluation: evaluation procedures; substance of information; reporting.
Organizational: organizational culture; routines and processes.

Practical steps to trigger the factors that influence use (the table rows), with an X marking each factor a given step triggers:

At program level: conduct stakeholder analysis to identify users; map intended uses to intended users; get user buy-in on use and methods; time evaluations to decision-cycles; define ongoing user engagement; present reports with specificity; distribute to secondary users; facilitate reuse; link to organization learning.

At organizational level: explicit commitment to use; staff training; invest in technology and tools; allocate program-level resources; provide incentives to use.


Practical Steps at the Program Level

At the program level, evaluation activity can be split into two phases: (a) planning and execution and (b) follow-up. The steps below are grouped into these two phases to assist organizations in deciding when to engage in these activities.

Figure 5.3 Practical Steps at the Planning and Execution Phase

Conduct stakeholder analysis to identify users

This is a fundamental step to utilization. The process of identifying users involves taking into account the varied and multiple interests, information needs, abilities to process the evaluation findings and political sensitivities within the organization. By involving multiple users, organizations can overcome the effect of staff turnover, so the departure of some will not affect utilization. Also, in the event of a large-scale turnover of intended users, the process of identifying a new group of users needs to be revisited. Although this might delay the evaluation process, it will pay off in eventual use. Starting with identifying users will allow for providing specificity and relevant information at the reporting stage.

Map intended uses to intended users

Depending on the type of use that is desired, the corresponding user group must be involved. For example, to derive instrumental use, involvement of key program decision-makers becomes essential. Similarly, conceptual use may require senior management participation. Focusing on intended uses also helps balance the reality of resource constraints, as it is impossible for any evaluation to address all the needs of each user. In this context it becomes imperative to make deliberate and informed choices on how each user will use the findings, and to prioritize the users and their uses. Mapping these during the planning stage of an evaluation allows for negotiations leading to commitment and buy-in from stakeholders ahead of time.

Get user buy-in on use and methods

Intended users' interests can be nurtured and enhanced by actively involving them in making significant decisions about the evaluation. Use can only occur when the findings are credible. Understanding and meeting user expectations on quality and rigor is essential. Involvement increases relevance, understanding and ownership, all of which facilitate informed and appropriate use. Actively engaging users in the planning and implementation of the evaluation also gives them opportunities to be reflective, to share and build interpretations (conceptual use) and finally to place findings into action (instrumental use). However, within this context, the focus must still remain on quality and not quantity: involving multiple users and identifying multiple uses does not necessarily result in higher utilization. Conversations on use and evaluation methods can also help identify training needs users may have in order to participate actively in evaluations (for example, statistical analysis to interpret quantitative data). At the organization level, acknowledging user bias and engaging in open conversation on conflicting interests builds a healthy practice towards collective learning.

Time evaluations to decision-cycles

In projects with multiple donors, decision-making milestones may be varied. Conducting multiple evaluations to correspond with these milestones might not be practical. Focusing on the objective, which is to maximize use, will allow organizations to structure evaluations at intervals that are meaningful to multiple users and tied to critical decision-making cycles. Intended uses can also guide when evaluations are conducted. Mid-term evaluations might be necessary for programs that allow for course corrections, whereas an evaluation at the end of the program might be used to feed directly into subsequent planning.


Define ongoing user engagement

Engagement of users must be factored into the entire cycle of the evaluation. While users have an active role during planning, it might be critical to keep some of them regularly apprised of progress. In complex evaluations, maintaining such engagement can ensure that users start to gain visibility into emerging key learning that could significantly shift the future of the program. Once again, the evaluation planners must seek to balance this need with ensuring the evaluation does not get bogged down in conflicts among multiple user needs. Prioritizing intended users and uses during planning can help guide this engagement. Prioritization also helps focus on users who may have specialized skills or capabilities to engage in certain aspects of the evaluation. Also critical is to ensure that the engagement of users, outlined during the planning phase, is adhered to during the execution. Specifically called out here is the step of involving users in the interpretation of findings. Just as getting user buy-in on the research methods during the planning stage was important, how data is interpreted and presented also benefits from user engagement.

Present reports with specificity

The process of sharing and targeted dissemination of findings plays a key role in ensuring intended uses are facilitated. Whether it is through information technology or in a meeting, engaging users immediately following the evaluation allows for discussion and decisions on use. Allowing for this conversation and debrief creates a learning loop for the users who were involved in the planning, execution and follow-up of the evaluation. While writing multiple reports is not reasonable, presenting findings in a way that allows different groups of users to absorb the recommendations and take action is invaluable. This can be captured in the Terms of Reference of the evaluation to ensure that the variety of reporting needs is clearly identified ahead of time.

The evaluation follow-up phase begins once the evaluation is completed and the reports are shared with primary users. The steps below explain what needs to happen subsequently to expand the reach of the evaluation findings and keep the learning current to facilitate reuse.


Figure 5.4 Practical Steps at the Follow-up Phase

Distribute to secondary users

Once an evaluation is completed and the findings are shared with intended users, there remains a window of opportunity to expand the learning to a new set of users: those not directly connected with the program but who can benefit from the recommendations. These users can also be external to the organization, such as partners in the issue area, academics and, through targeted messaging, the general public. This action can assist NGOs in converting program learning into a marketing and fundraising tool. In the current global economic crisis, with NGOs facing unprecedented financial challenges, it becomes imperative that they use evaluations to allocate resources effectively and maximize impact. NGOs can also find ways to share evaluation findings within their sector to leverage opportunities. Given that the funding environment is highly competitive, complete transparency by NGOs may not always be rewarded by donors. However, NGOs can share evaluation findings with peer networks that can collectively leverage resources for the issue or strengthen the movement for their cause. Publishing reports on websites, presenting findings through workshops and conferences, and targeted media communications can reach a wider set of users.

Facilitate reuse

Although evaluations provide a snapshot in time, the findings and learning can continue to inform program managers. Putting in place processes and routines that encourage review of past evaluations and reuse of findings where applicable extends the return on investment of an evaluation. One step is requiring the review of the most recent evaluation while planning any changes to the program cycle. This creates a formal process for staff to reconnect with the findings.

Link to organization learning

Organizations are seen as learning by encoding inferences from history into routines that guide behavior. Evaluation is a key mechanism that allows an organization to assess these routines and provide feedback for improvements. By extracting key learning at the program level and linking it to the higher-level objectives of the organization, an NGO can track how its numerous programs are contributing to accomplishing the mission. Involving users who do not work directly with the program allows the findings to be expanded beyond a narrow scope. Also, creating a network of users across departments or functions enables the cross-pollination of findings and creates linkages throughout the organization.

Practical Steps at the Organization Level

Commitment to use

A commitment from leadership provides an accountability framework that leads to increased trust and builds on shared values of learning. Emphasizing the value of learning, regardless of what the evaluation results show, helps staff be astute information users rather than hold on to prior positions. Users develop a long-term view of learning, improvement and knowledge use, whereby short-term negative results are less threatening when placed in a longer-term context of ongoing development. Processes that engage intended users can help to manage internal conflicts around resources through conversations on how evaluation results ultimately benefit the organization's beneficiaries.

Staff training

Training stakeholders and potential users in evaluation methods and utilization processes addresses both short-term and long-term uses. Making decision makers more sophisticated about evaluation can contribute to greater use over time. Different intended users will bring varying perspectives to the evaluation, which will affect their interpretation. Users need the skills that help them differentiate between analysis, interpretation, judgment and recommendations. By placing emphasis on organizational learning, action research, participatory evaluation and collaborative approaches, the evaluation process can defuse fear of and resistance to negative findings. Training can also be directed towards improving the quality and rigor of evaluations.

Invest in technology and tools

An almost universal weakness identified in NGOs is their limited capacity to learn, adapt and continuously improve the quality of what they do. There is an acute need for systems which ensure that they know and learn from what they are achieving and then apply what they learn (deutero-learning). The use of technology like groupware tools, Intranets, e-mail and bulletin boards can facilitate the processes of information gathering (e.g., identifying users), distribution (e.g., sharing findings) and interpretation (e.g., linking findings to intended uses). Information systems (IS) also strengthen the elements of organizational memory so evaluation findings can be shared and used over time. However, technology must not be seen as a one-stop solution to utilization. There is often a strong tendency to design IS solutions around supply-side criteria (the information available) rather than a clear understanding of the way information is actually used. IS can be a highly effective tool that allows for increased efficiency of resources (money, staff time) and can guide effective interpretation of information. However, it must not be viewed as a substitute for conventional information-sharing approaches. Technology fixes can often mask the more complex and structural barriers to evaluation use in organizations, such as political conflicts, ineffective decision-making models and limited staff competencies and skills. It is not enough to have trustworthy and accurate information; staff need to know how to use information to weigh evidence, consider contradictions and inconsistencies, articulate values and examine assumptions.

Allocate program-level resources

The lack of adequate resources emerged as the key impediment to promoting use. This includes resources for systems and technology, staff skills training and post-evaluation follow-up. Some NGOs have taken the issue of resources beyond the organization, educating and engaging donors on the opportunity and the need to support strengthening evaluation-use infrastructure. One action item can be to encourage donors to add an evaluation utilization component to program delivery costs. NGOs should see learning as an essential component of their operations and must take the necessary steps to allocate a percentage of their general operating budget for systems that support evaluation use. NGOs can increase use by ensuring that evaluation follow-up is an integral part of their operations and by investing resources to build systems and processes to support it. Dedicated follow-up individuals, the development of evaluation skills, clear allocation of responsibility and specific mechanisms for action increase the likelihood of evaluation use, particularly if follow-up was planned from the beginning of the evaluation.

Provide incentives

Incentives can encourage use. Tying evaluation use and learning to individual performance measurements encourages staff to actively participate in the process. Recognizing that only individuals can act as agents of learning, organizations must create roles, functions and procedures that enable staff to systematically collect, analyze, store, disseminate and use information relevant to their performance. Finally, it is important to cultivate evaluation as a leadership function of all managers and program directors in the organization. Then the person responsible for the evaluation plays a facilitative, resource and training function in support of managers rather than spending time actually conducting the evaluation. In this framework, evaluation becomes a leadership responsibility focused on decision-oriented use rather than a data-collection task focused on routine internal reporting. Empowering managers to identify users and uses not only nurtures accountability but also makes the evaluation process thoughtful, meaningful and credible.


This chapter presented an evaluation utility model that identified eight factors that influence use. This was followed by a list of action steps that organizations can take to trigger these factors and operationalize the model at both the program and organizational levels. While the focus was to develop a model that enhances use, the final product also succeeds in limiting the barriers to use identified in the literature review chapter. This model adds to the knowledge of evaluation use in NGOs by expanding its focus from being restricted to the program level to include the external realities at the organization level.


Chapter 6: Conclusion

This study began with the purpose of understanding the fundamentals of evaluation use. How do we know there is use? What helps and hinders use? Within the program evaluation context in the NGO sector, how do these factors manifest themselves? What can be done to improve utilization?

Based on independent research and the review of literature, an evaluation utility model was developed. This model presents a fundamental shift in how NGOs must approach program evaluation. In order to maximize use, it is no longer sufficient to focus on program-level processes. Evaluation use is a multidimensional phenomenon that is interdependent with organizational context, systems and evaluation practice. Within this context, the utilization process is not static and linear but dynamic, open and multidimensional, driven by relevance, quality and rigor. The model outlined attempts to capture this environment, focused on the central premise that whether an evaluation is formative or summative, internal or external, scientific, qualitative or participatory, the primary reason for conducting evaluations is to increase the rationality of decision-making.

Embedding the principles of use throughout the lifecycle of an evaluation enhances utilization. The responsibility of evaluation lies in identifying the strengths and weaknesses of programs, which it can do extremely well, and in facilitating utilization, which it has been doing less well. Serious participation and a far greater focus on the intended users and uses would help to expose the practice of inappropriate or ritual evaluation and prevent evaluation from further contributing to the current mistrust and saturation in the sector. It is equivalent to mapping how the constructs from organizational learning (acquisition, distribution, interpretation and memory) are applied within the lifecycle of an evaluation with utilization as a focus. The utility model revealed that influencing factors extend to include the larger context of organizational behavior and learning. The findings from evaluations must be transferred from a written report to the agenda of managers and decision-makers. The challenge within the nonprofit sector is to make evaluation utilization an essential function of its operations, similar to accounting practices. While in the past decade there has been a paradigm shift in NGOs toward dedicating resources and building their evaluation practice, they now need to complete this transformation and link findings to learning at an organization level.

At present, there might also be a sense within the sector of inertia generated by an overload of information, systems and policies. Evaluation itself may be inadvertently contributing to the workload. Given this, the decision to carry out an evaluation should itself be considered as part of an information prioritization process by all stakeholders. Extending the utilization principle to the period before an evaluation is commissioned may allow staff to absorb existing information and identify how new information will increase overall effectiveness.


Far from being a discrete operation, evaluation use must be seen as being at the heart of the organizational learning process. Developing the virtuous circle of linking evaluation use to learning to effectiveness requires an explicit commitment at all levels of the organization. What is evident is that evaluation utilization is no longer an option. It is essential if the NGO community is to deliver on the ambitious targets it has set itself. This research concludes that the utility model presented moves the dialogue on utilization further than it has been and positions organizations, wherever they are in the continuum of use, to maximize their results.


Recommendations for Future Research

Blending theory and practitioner feedback, this research provides a model that can increase evaluation use. Even so, if each step outlined is taken in isolation, there might be challenges to implementation within any specific organization. Whether the application is at big international NGOs or small ones at the community level, the model outlined in this research is less about universal application and more about what can be done, however small, to increase utilization within the existing context. The diversity among agencies includes their background, institutional context, specific priorities, target audiences and objectives. Future researchers could test this model through in-depth case studies among diverse NGOs. If the eight essential factors in the model were triggered, would there be increased utilization? How would these apply in a small, community-based NGO versus a big, international NGO? How critical is the organizational learning environment to effective use?

This research has presented information that supports the premise for evaluation utilization, acknowledging the complexity of the factors that influence use and the systems that enhance it. However, in the end, this approach to evaluation use must also be judged by its usefulness. Experimental research on whether the practical steps outlined in this research can be collectively implemented, and whether they result in increased use, can only add clarity and deeper understanding toward evaluation utilization in NGOs. This model was developed with an in-depth review of literature and a survey within the NGO sector. It remains to be seen if the model holds firm when it is tested in organizations that did not participate in this research and/or operate in contexts different from those of the survey respondents. Also, while the context of the research was NGOs, is this model a reflection of evaluation use in any sector? Can it be extrapolated, wholly, to other types of institutions? For example, how would the model work within the academic sector? Do some factors become more important in those settings? Another opportunity for further research, given the current economic crisis and the dire straits under which NGOs are operating, might be to understand how NGOs can leverage existing resources and partnerships to advance evaluation use. This research has indicated that pooling resources for evaluation could lead to increased use and promote shared learning. Research on how these networks can be created or facilitated, maintained and leveraged for the purpose of sharing evaluation resources and findings can be beneficial for the sector.


REFERENCE LIST

Alkin, M. C. A Guide for Evaluation Decision Makers. Newbury Park, CA: Sage, 1985.
Alkin, M. C., J. Kosecoff, C. Fitzgibbon, and R. Seligman. "Evaluation and Decision Making: The Title VII Experience." In CSE Monograph No. 4. Los Angeles: UCLA Center for the Study of Evaluation, 1974.
Alkin, Marvin C. Debates on Evaluation. Newbury Park, California: Sage Publications, 1990.
Alkin, Marvin, Richard Daillak, and Peter White. Using Evaluations: Does Evaluation Make a Difference? Beverly Hills: Sage Publications, 1979.
Alkin, Marvin, and Karin Coyle. "Thoughts on Evaluation Utilization, Misutilization and Non-Utilization." Studies in Educational Evaluation 14, no. 3 (1988): 331-40.
ALNAP. "Humanitarian Action: Learning from Evaluation." London: Overseas Development Institute, 2001.
"The American Journal of Evaluation." aje.sagepub.com.
Anderson, Scarvia B., and Samuel Ball. The Profession and Practice of Program Evaluation. San Francisco: Jossey-Bass, 1978.
Argote, Linda. Organizational Learning: Creating, Retaining and Transferring Knowledge. New York: Springer-Verlag, 1999.
Argyris, Chris, Robert Putnam, and Diane McLain Smith. Action Science: Concepts, Methods and Skills for Research and Intervention. San Francisco: Jossey-Bass, 1985.
Argyris, Chris, and Donald Schön. Organizational Learning II: Theory, Method and Practice. Reading, MA: Addison-Wesley, 1996.
Argyris, Chris, and Donald Schön. Organizational Learning: A Theory of Action Perspectives. Reading, MA: Addison-Wesley, 1978.
Ayers, Toby Diane. "Stakeholders as Partners in Evaluation: A Stakeholder-Collaborative Approach." Evaluation and Program Planning no. 10 (1987): 9.

Berthoin-Antal, Ariane, Meinolf Dierkes, John Child, and Ikujiro Nonaka. Handbook of Organizational Learning and Knowledge. Oxford University Press, 2001.
Brabant, Koenraad Van. "Organizational and Institutional Learning in the Humanitarian Sector: Opening the Dialogue." London: Overseas Development Institute, 1997.
Brett, Belle, Lynnae Hill-Mead, and Stephanie Wu. "Perspectives on Evaluation Use and Demand by Users: The Case of City Year." New Directions for Program Evaluation no. 88 (2000).
Britton, Bruce. "The Learning NGO." INTRAC Occasional Paper Series no. 17 (1998).
Campbell, Donald T., and Julian C. Stanley. Experimental and Quasi-Experimental Designs for Research. Chicago: Rand McNally, 1963.
Caracelli, Valerie J., and Hallie Preskill. "The Expanding Scope of Evaluation Use." New Directions for Evaluation no. 88 (2000).
Carlsson, Jerker, Gunnar Kohlin, and Anders Ekbom. The Political Economy of Evaluation: International Aid Agencies and the Effectiveness of Aid. New York: St. Martin's Press, 1994.
Carson, Emmet D. "Foundations and Outcome Evaluation." Nonprofit and Voluntary Sector Quarterly 29, no. 3 (2000): 479-81.
Community, Alliance for a Global. "The NGO Explosion." Communications 1, no. 7 (1997).
Cousins, J. Bradley, and Kenneth A. Leithwood. "Current Empirical Research on Evaluation Utilization." Review of Educational Research 56, no. 3 (1986): 331-64.
Cronbach, L. J. Designing Evaluations of Educational and Social Programs. San Francisco: Jossey-Bass, 1982.
Daft, R. L., and R. H. Lengel, eds. Information Richness: A New Approach to Managerial Behavior and Organizational Design. Edited by L. L. Cummings and B. M. Straw, Research in Organizational Behavior. Homewood, IL: JAI Press, 1984.


Daft, R. L., and K. E. Weick. "Toward a Model of Organizations as Interpretation Systems." The Academy of Management Review 9, no. 2 (1984): 284-95.
Davis, H. R., and S. E. Salasin, eds. The Utilization of Evaluation. Edited by E. L. Struening and M. Guttentag. Vol. 1, Handbook of Evaluation Research. Beverly Hills: Sage Publications, 1975.
Desai, Vandana, and Robert Potter. The Companion to Development Studies. London: Arnold, 2002.
Development, Organization for Economic Co-operation and. "Evaluation Feedback for Effective Learning and Accountability." In Evaluation and Effectiveness, edited by Development Assistance Committee. Paris: OECD.
Dewey, J. How We Think: A Restatement of the Relation of Reflective Thinking to Educative Process. Lexington, MA: D.C. Heath, 1960.
Dibella, Anthony. "The Research Manager's Role in Encouraging Evaluation Use." Evaluation Practice 11, no. 2 (1990).
Earl, Sarah, Fred Carden, and Terry Smutylo. Outcome Mapping: Building Learning and Reflection into Development Programs. Ottawa: The International Development Research Center, 2001.
Edwards, Michael, and David Hulme. Beyond the Magic Bullet: NGO Performance and Accountability in the Post-Cold War World. Connecticut: Kumarian Press, 1996.
Engel, P., C. Carlsson, and A. van Zee. "Making Evaluation Results Count: Internalizing Evidence by Learning." In ECDPM Policy Management Brief No. 16. Maastricht: European Centre for Development Policy Management, 2003.
"The Evaluation Center." www.wmich/edu/evalctr/.
Fiol, C. M., and M. A. Lyles. "Organizational Learning." The Academy of Management Review 10, no. 4 (1985): 803-13.
Fisher, Julie. Nongovernments: NGOs and Political Development of the Third World. Connecticut: Kumarian Press, 1998.


Fowler, A. Striking a Balance: A Guide to Enhancing the Effectiveness of Non-Governmental Organizations in International Development. London: Earthscan, 1997.
Fox, Jonathan, and David Brown. The Struggle for Accountability. Cambridge, MA: MIT Press, 1998.
"The Grameen Bank." http://www.grameen-info.org/bank/GBdifferent.htm.
Greene, J. C. "Technical Quality vs. User Responsiveness in Evaluation Practice." Evaluation and Program Planning 13 (1990): 267-74.
Greene, Jennifer C. "Stakeholder Participation and Utilization in Program Evaluation." Evaluation Review 12, no. 2 (1988): 91-116.
Hall, Michael H., Susan D. Phillips, Claudia Meillat, and Donna Pickering. "Assessing Performance: Evaluation Practices and Perspectives in Canada's Voluntary Sector." Edited by Norah McClintock. Toronto: Canadian Centre for Philanthropy, 2003.
Hatry, Harry P., and Linda M. Lampkin. "An Agenda for Action: Outcome Management for Nonprofit Organizations." Washington, DC: The Urban Institute, 2001.
Henry, Gary, and Melvin Mark. "Beyond Use: Understanding Evaluation's Influence on Attitudes and Actions." American Journal of Evaluation 24, no. 3 (2003): 293-314.
Howes, M. "Linking Paradigms and Practise, Key Issues in the Appraisal, Monitoring and Evaluation of British NGO Projects." Journal of International Development 4, no. 4 (1992).
Huber, George P. "Organizational Learning: The Contributing Processes and the Literatures." Organization Science 2, no. 1 (1991): 88-115.
Hudson, Bryant, and Wolfgang Bielefeld. "Structures of Multinational Nonprofit Organizations." Nonprofit Management and Leadership 9, no. 1 (1997).
"Internal Revenue Service - Charities and Non-Profits (Extract Date October 4, 2005)." http://www.irs.gov/charities/article/0,,id=96136,00.html.
Johnson, Burke R. "Toward a Theoretical Model of Evaluation Utilization." Evaluation and Program Planning 21 (1998): 93-110.

Johnson, D.W., and F.P. Johnson. Joining Together: Group Theory and Group Skills. Boston: Allyn and Bacon, 2000.
Kahneman, D., P. Slovic, and A. Tversky. Judgment under Uncertainty: Heuristics and Biases. New York: Cambridge University Press, 1982.
King, J.A. "Research on Evaluation and Its Implications for Evaluation Research and Practice." Studies in Educational Evaluation 14 (1998): 285-99.
Kirkhart, K. E. "Reconceptualizing Evaluation Use: An Integrated Theory of Influence." New Directions for Evaluation no. 88 (2000).
Krone, K. J., F. M. Jablin, and L. L. Putnam, eds. Communication Theory and Organizational Communication: Multiple Perspectives. Edited by F. M. Jablin, L. L. Putnam, K. H. Roberts and L. W. Porter, Handbook of Organizational Communication. Newbury Park, CA: Sage, 1987.
Letts, Christine. High Performance Nonprofit Organizations: Managing Upstream for Greater Impact. New York: Wiley, 1999.
Levin, B. "The Uses of Research: A Case Study in Research and Policy." The Canadian Journal of Program Evaluation 2, no. 1 (1987): 44-55.
Levitt, B., and J. G. March. "Organizational Learning." Annual Review of Sociology no. 14 (1988): 319-40.
Levitt, B. S., and J. G. March, eds. Organizational Learning. Edited by M. D. Cohen and L. S. Sproull, Organizational Learning. Thousand Oaks, CA: Sage, 1996.
Light, Paul C. Making Nonprofits Work: A Report on the Tides of Nonprofit Management Reform. Washington, DC: The Aspen Institute and Brookings Institution Press, 2000.
Lincoln, Y., and E. Guba. Naturalistic Inquiry. Thousand Oaks, CA: Sage Publications, 1985.
Lindenberg, Marc, and Coralie Bryant. Going Global: Transforming Relief and Development NGOs. Kumarian Press, 2001.
Ludin, Jawed, and Jacqueline Williams. Learning from Work: An Opportunity Missed or Taken? London: BOND, 2003.


March, J. G., and J. P. Olsen. Ambiguity and Choice in Organizations. Bergen: Universitetsforlaget, 1976.
Mark, Melvin, and Gary Henry. "The Mechanisms and Outcomes of Evaluation Influence." Evaluation 10, no. 1 (2004): 35-57.
Martin, J. Cultures in Organizations: Three Perspectives. New York: Oxford University Press, 1992.
Mathison, S. "Rethinking the Evaluator Role: Partnerships between Organizations and Evaluators." Evaluation and Program Planning 17, no. 3 (1994): 299-304.
McNamara, Carter. Field Guide to Nonprofit Program Design, Marketing and Evaluation. Minneapolis: Authenticity Consulting, 2003.
Mott, Andrew. "Evaluation: The Good News for Funders." Washington, DC: Neighborhood Funders Group, 2003.
Mowbray, C.T. "The Role of Evaluation in Restructuring of the Public Mental Health System." Evaluation and Program Planning 15 (1992): 403-15.
Murray, Vic. "The State of Evaluation Tools and Systems for Nonprofit Organizations." New Directions for Philanthropic Fundraising no. 31 (2001): 39-49.
Neuendorf, Kimberly A. "The Content Analysis Guidebook Online." <http://academic.csuohio.edu/kneuendorf/content/> (2007).
Nevis, E. C., A. J. DiBella, and J. M. Gould. "Understanding Organizations as Learning Systems." Sloan Management Review 36, no. 2 (1995): 75-85.
"The Nonprofit Sector in Brief - Facts and Figures from the Nonprofit Almanac 2007." (2006), http://www.urban.org/UploadedPDF/311373_nonprofit_sector.pdf.
Ouchi, W. G. "Markets, Bureaucracies, and Clans." Administrative Science Quarterly no. 25 (1980): 129-41.
Owen, J.M., and F.C. Lambert. "Roles for Evaluation in Learning Organizations." Evaluation 1, no. 2 (1995): 237-50.


Patton, M.Q. "Development Evaluation." Evaluation Practice 15, no. 3 (1994): 311-19.
Patton, M.Q. Utilization-Focused Evaluation. 2nd ed. Beverly Hills, CA: Sage, 1986.
Patton, Michael Quinn. Utilization-Focused Evaluation. Beverly Hills, CA: Sage Publications, 1997.
Plantz, Margaret C., Martha Taylor Greenway, and Michael Hendricks. "Outcome Measurement: Showing Results in the Nonprofit Sector." New Directions for Program Evaluation no. 75 (1997): 15-30.
Popper, M., and R. Liptshitz. "Organizational Learning Mechanisms: A Cultural and Structural Approach to Organizational Learning." Journal of Applied Behavioral Science 34 (1998): 161-78.
Powell, Mike. Information Management for Development Organisations. 2nd ed. Oxfam Development Guidelines Series. Oxford: Oxfam, 2003.
Preskill, H. "Evaluation's Role in Enhancing Organizational Learning." Evaluation and Program Planning 17, no. 3 (1994): 291-97.
Putte, Bert Van de. "Follow-up to Evaluations of Humanitarian Programmes." London: ALNAP, 2001.
"Research and Policy in Development (RAPID)." Overseas Development Institute, http://www.odi.org.uk/RAPID/.
Riddell, R. C., S. E. Kruse, T. Kyollen, S. Ojanpera, and J. L. Vielajus. "Searching for Impact and Methods: NGO Evaluation Synthesis Study." OECD/DAC Expert Group, 1997.
Riddell, R.C. Foreign Aid Reconsidered. Baltimore: Johns Hopkins Press, 1987.
Rosenbaum, Nancy. "An Evaluation Myth: Evaluation Is Too Expensive." National Foundation for Teaching Entrepreneurship (NFTE), http://www.supportctr.org/images/evaluation_myth.pdf.
Rutman, Leonard. Evaluation Research Methods: A Basic Guide. 2nd ed. Beverly Hills, CA: Sage Publications, 1984.


Scriven, M. S., ed. Evaluation Ideologies. Edited by G. F. Madaus, M. Scriven and D. L. Stufflebeam, Evaluation Models: Viewpoints on Educational and Human Service Evaluation. Boston: Kluwer-Nijhoff, 1983.
Senge, P. M., Charlotte Roberts, Rick Ross, George Roth, Bryan Smith, and Art Kleiner. The Dance of Change: The Challenges of Sustaining Momentum in Learning Organizations. New York: Currency/Doubleday, 1999.
Senge, Peter. The Fifth Discipline: The Art and Practice of the Learning Organization. New York: Doubleday, 1990.
Shadish, W.R., T.D. Cook, and L.C. Leviton. Foundations of Program Evaluation: Theories of Practice. Newbury Park, CA: Sage Publications, Inc., 1991.
Shrivastava, P. "A Typology of Organizational Learning Systems." Journal of Management Studies 20, no. 1 (1983): 7-28.
Shulha, Lyn M., and J. Bradley Cousins. "Evaluation Use: Theory, Research and Practice since 1986." American Journal of Evaluation 18, no. 1 (1997): 195-208.
SIDA. "Are Evaluations Useful? Cases from Swedish Development Co-operation." Swedish International Development Agency, 1999.
Smillie, Ian, and John Hailey. Managing for Change. London: Earthscan, 2001.
Stevens, C. L., and M. Dial, eds. What Constitutes Misuse? Edited by C. L. Stevens and M. Dial, New Directions for Program Evaluation: Guiding Principles for Evaluators. San Francisco: Jossey-Bass, 1994.
Torres, R. T., and H. Preskill. "Evaluation and Organizational Learning: Past, Present and Future." American Journal of Evaluation 22, no. 3 (2001): 387-95.
Torres, R.T., H. Preskill, and M.E. Piontek. Evaluation Strategies for Communicating and Reporting: Enhancing Learning in Organizations. Thousand Oaks, CA: Sage, 1996.
Torres, Rosalie T. "What Is a Learning Approach to Evaluation?" The Evaluation Exchange VIII, no. 2 (2002).


UNDP, United Nations Development Program. "Human Development Report." New York: Oxford Press, 1993.
Watkins, K., V. Marsick, and J. Johnson, eds. Making Learning Count! Diagnosing the Learning Culture in Organizations. Newbury Park, CA: Sage, 2003.
Weiss, C. H., ed. Ideology, Interest, and Information: The Basis of Policy Decisions. Edited by D. Callahan and B. Jennings, Ethics, the Social Sciences, and Policy Analysis. New York: Plenum, 1993.
Weiss, Carol. Evaluation. 2nd ed. Saddle River, NJ: Prentice Hall, 1997.
Weiss, Carol. "Have We Learned Anything New About the Use of Evaluation?" American Journal of Evaluation 19, no. 1 (1998): 21-33.
Weiss, Carol. Social Science Research and Decision-Making. New York: Columbia University Press, 1980.
Weiss, Carol H. Evaluation Research: Methods for Assessing Program Effectiveness. New Jersey: Prentice-Hall, 1972.
Weiss, Carol H., ed. Utilization of Evaluation: Toward Comparative Study. Edited by Carol H. Weiss, Evaluating Action Programs: Readings in Social Action and Education. Boston: Allyn and Bacon, 1972.
Wholey, J. S., H. P. Hatry, and K. E. Newcomer. Handbook of Practical Program Evaluation. 2nd ed. San Francisco, CA: Jossey-Bass, 2004.
Wigley, Barb. "The State of UNHCR's Organization Culture: What Now?" http://www.unhcr.org/publ/RESEARCH/43eb6a862.pdf.
Williams, Kevin, Bastiaan de Laat, and Elliot Stern. "The Use of Evaluation in the European Commission Services - Final Report." Paris: Technopolis France, 2002.


Appendix A Evaluation Use in Non-Governmental Organizations Survey


Appendix B Master List of US Based NGOs with an International Focus Compiled from the IRS Exempt Database registry Date of Extract: January 4, 2006
# 1 Organization Name A Jewish Voice for Peace Academy for Educational Development Action Against Hunger Action Against Hunger (USA) 4 5 6 ActionAid International USA Adventist Community Services Adventist Development and Relief Agency International Advocacy Institute Afghan Community in America 9 Africa Action 10 Africa Faith and Justice Network 11 12 Africa News Service Africa-America Institute 13 Africa-American Institute - New York AFRICALINK African Community Refugee Center 259 257 258 256 255 250 251 252 # 247 Organization Name International Center International Center for Research on Women International Center in New York International Crisis Group, Washington Office International Development Association International Diplomacy Council International Federation of Ophthalmological Societies International Forum on Globalization International Healthcare Safety Professional Certification Board International Institute for Energy Conservation International Institute of Rural Reconstruction, U.S. Chapter International Medical Corps International Orthodox Christian Charities International Pen Friends 260 261 International Relief and Development International Relief Friendship Foundation

2 3

248 249

7 8

253 254

14 15

16

262

198

17 18

African Development Foundation African Development Institute African Medical & Research Foundation, Inc. African Medical and Research Foundation Africare Aga Khan Foundation U.S.A.

263 264

International Relief Teams International Rescue Committee International Rescue Committee - USA

19

265 International Rescue Committee-San Diego International Rescue Committee-Seattle International Research & Exchanges Board International Social Service, United States of America Branch International Third World Legal Studies Association International Visitors Council of Philadelphia Interplast Interreligious and International Federation for World Peace InterServe/U.S.A. 274 275 Intervida Foundation USA Irish American Partnership 276 Irish American Unity Conference 277 Japan External Trade Organization 278 279 Japan Information Access Project Japan US Community Education and Exchange Japan-America Society of Washington,

20 21

266 267

22 Agri-Energy Roundtable 23 Aid for International Medicine 24 Aid to Artisans 25 26 Air Serv International Alliance for Communities in Action 27 Alliance for Southern African Progress Alliance of Small Island States American Association for International Aging American Association for the International Commission of Jurists American Association for World Health American Civic Association American College of International Physicians American Committee for KEEP

268

269

270

271 272

273

28 29

30

31

32 33

34 35

280 281

199

D.C. American Committee for Rescue and Resettlement of Iraqi Jews American Disaster Reserve American Ditchley Foundation American Friends Service Committee American Fund for Czechoslovak Relief American Ireland Fund 41 American Jewish Joint Distribution Committee American Jewish Philanthropic Fund American Jewish World Service American Near East Refugee Aid American Peace Society American Red Cross International Services American Red Cross National Headquarters American Red Cross Overseas Association American Red Magen David for Israel - American Friends of Magen David American Refugee Committee 51 52 53 American Rescue Dog Association American Sovereignty Task Force 297 298 299 287 Jesuit Refugee Service/U.S.A. 282 283 284 Jesuit Refugee Service/USA Jewish National Fund Just Act: Youth Action for Global Justice 285 Katalysis Partnership 286 Korean American Sharing Movement, Inc. Lalmba Association 288 Latter-day Saint Charities 289 290 291 292 Lay Mission-Helpers Association Liberty's Promise Life for Relief and Development Los Ninos 293 Lutheran Immigration and Refugee Service Lutheran Immigration and Refugee Service, North Dakota Chapter Lutheran World Relief 296 Macedonian American Friendship Association MAP International Mayor's International Cabinet

36 37 38

39

40

42

43 44 45 46

47

48

294

49

295

50

200

54 55 56

American Task Force on Palestine Americares Foundation AmeriCares Foundation Inc. America's Development Foundation AMG International Amigos de las Americas Ananda Marga Universal Relief Team Angelcare Ashoka: Innovators for the Public Asian Resources Associate Missionaries of the Assumption Association for India's Development Association for the Advancement of Dutch-American Studies Association for the Advancement of Policy, Research and Development in the Third World Association for World Travel Exchange Association of Cambodian Survivors of America Association of Concerned African Scholars Association of Third World Studies

300 301 302

Media Associates International Mennonite Central Committee Mennonite Disaster Service Mennonite Economic Development Associates Mercy Corps Meridian International Center Minnesota International Health Volunteers Mirrer Yeshiva Central Institute Mission Doctors Association Mobility International USA National Association of Catastrophe Adjusters National Association of Social Workers

57 58 59

303 304 305

60 61 62 63

306 307 308 309

64

310

65

311 National Coalition for Asian Pacific American Community Development National Coalition for Haitian Rights 313 National Committee on American Foreign Policy National Committee on United StatesChina Relations National Council for International Visitors 316 National Democratic Institute for International Affairs National Disaster Search Dog Foundation

66

312

67

68

314

69

315

70

71 Association on Third World Affairs 72

317

318

201

Austrian Cultural Forum 73 Baltimore Council on Foreign Affairs Baptist World Alliance/Baptist World Aid Board of International Ministries 76 BorderLinks 77 78 79 80 Bread for the World Brothers Brother Foundation, The Brother's Brother Foundation Business Alliance for International Economic Development Business Council for International Understanding CARE 83 84 85 CARE International USA Caribbean-Central American Action Carnegie Council on Ethics and International Affairs Carnegie Endowment for International Peace Catholic Medical Mission Board Catholic Network of Volunteer Service Catholic Relief Services Catholic Relief Services (U.S. Catholic Conference) 329 330 331 323 324 325 326 322 319

National Memorial Institute for the Prevention of Terrorism National Peace Corps Association

74

320 National Ski Patrol System 321 National Student Campaign Against Hunger and Homelessness National Voluntary Organizations Active in Disaster Need New England Foreign Affairs Coalition New Forests Project New York Association for New Americans North American Center for Emergency Communications North American Conference on Ethiopian Jewry Northwest Medical Teams Northwest Medical Teams International Open Society Institute 332 Open Voting Consortium 333 334 Operation Crossroads Africa Operation Smile 335 336 Operation U.S.A. Operation Understanding 337

75

81

327

82

328

86

87 88

89 90

91

202

92

Center for International Disaster Information Center For International Health and Cooperation Center for Migration Studies of New York Center for New National Security Center for Russian and East European Jewry Center for Taiwan International Relations Center for Third World Organizing

Operation USA 338 Opportunity International-U.S. 339 Oregon Peace Works 340 341 Organization of Chinese Americans Organization of Chinese Americans Central Virginia Organization of Chinese Americans Columbus Chapter Organization of Chinese Americans Dallas-Fort Worth Chapter Organization of Chinese Americans Delaware Organization of Chinese Americans Eastern Virginia Chapter Organization of Chinese Americans Greater Chicago Chapter Organization of Chinese Americans Greater Houston Chapter Organization of Chinese Americans Greater Los Angeles Chapter Organization of Chinese Americans Greater Washington, DC Chapter Organization of Chinese Americans Kentuckiana Chapter Organization of Chinese Americans New England Chapter Organization of Chinese Americans Orange County Organization of Chinese Americans Saint Louis Chapter

93

94 95

96

342

97

343

98 Center for War/Peace Studies 99 Central American Resource Center 100 Centre for Development and Population Activities Centre for Development and Population Activities, The Children International Headquarters Children's Corrective Surgery Society China Connection 105 China Medical Board of New York 106 Christian Childrens Fund 107 Christian Children's Fund 108

344

345

346

101

347

102

348

103

349

104

350

351

352

353

354

203

109

Christian Foundation for Children and Aging Christian Medical and Dental Associations Christian Reformed World Relief Committee Christian Relief Services Christians for Peace in El Salvador Church World Service Church World Service, Immigration and Refugee Program Citizen Diplomacy Council of San Diego Citizens Development Corps Citizens Network for Foreign Affairs Claretian Volunteers and Lay Missionaries Coalition for American Leadership Abroad Collaborating Agencies Responding to Disasters of San Mateo County Columbus Council on World Affairs

355

Organization of Chinese Americans Silicon Valley Chapter Organization of Chinese Americans Westchester Hudson Valley Chapter Our Little Brothers and Sisters

110

356

111 112 113 114

357 358 359 360 OXFAM America OXFAM International Advocacy Office Pacific Basin Development Council PACT 361 Panos Institute 362 363 364 Partners for Democratic Change Partners for Development Pathfinder International 365 Pax World Service 366 Peace Action 367 Peace Action Texas, Greater Houston Chapter People to People International 369 People-to-People Health Foundation 370 Phoenix Committee on Foreign Relations Physicians for Human Rights 372

115

116 117 118

119

120

121

122 Commission of the Churches on International Affairs Commission on International Programs Committee for Economic Development Committee for the Economic Growth of Israel

368

123

124

125

371

126

204

127

Committee on Missionary Evangelism Committee on US/Latin American Relations Concern America CONCERN Worldwide US Inc. Conflict Resolution Program Congressional Hunger Center Consultative Group on International Agricultural Research Consultative Group to Assist the Poor Consumers for World Trade Council on Foreign Relations Counterpart - United States Office Counterpart International

373

Piedmont Triad Council for International Visitors PLAN International

128 129 130 131 132

374 375 376 377 378 Planet Aid Planning Assistance Plenty International Pontifical Mission for Palestine Population Action International 379 Presbyterian Disaster Assistance and Hunger Program Presbyterian Hunger Program Project Concern International Project HOPE Rav Tov International Jewish Rescue Organization Red Sea Team International Refugee Mentoring Program Refugee Women in Development Refugees International 388 Relief International 389 Research Triangle International Visitors Council Rights Action/Guatemala Partners Sabre Foundation

133

134 135 136 137

380 381 382 383

138 139 140 141 Counterpart International, Inc. CRISTA Ministries Cuban American National Council Development Group for Alternative Policies Diplomatic and Consular Officers, Retired Direct Relief International 144 145 146 Disaster Psychiatry Outreach DOCARE International, N.F.P.

384 385 386 387

142

143

390 391 392

205

147

Doctors for Disaster Preparedness Doctors of the World, Inc.

393

Salesian Missioners Salvation Army World Service Office, The San Antonio Council for International Visitors San Diego World Affairs Council Save the Children Secretary's Open Forum Self Help International September 11 Widows and Victims' Families Association Servas-U.S.A. Seva Foundation SHARE Foundation

148 Doctors to the World
149 Doctors Without Borders
150 Doctors Worldwide
151 East Bay Peace Action
152 East Meets West Foundation
153 East West Institute
154 East-West Center
155 Edge-ucate
156 Educational Concerns for Hunger Organization
157 Egyptians Relief Association
158 Eisenhower Fellowships
159 El Rescate
160 Episcopal Church Missionary Community
161 Episcopal Relief and Development
162 Estonian Relief Committee
163 Ethiopian Community Development Council
164 Families of September 11
165 FARMS International
166 Federation for American Immigration Reform
167 Feed the Children
168 Fellowship International Mission
169 Filipino American Chamber of Commerce of Orange County
170 Financial Services Volunteer Corps
171 Floresta U.S.A.
172 Flying Doctors of America
173 Food for the Hungry
174 Food for the Poor
175 Foreign Policy Association
176 Foreign-born Information and Referral Network
177 Foundation for International Community Assistance
178 Foundation for Rational Economics and Education
179 Foundation for the Support of International Medical Training
180 Fourth Freedom Forum
181 Fourth World Documentation Project
182 Freedom from Hunger
183 Friends of Liberia
184 Friendship Ambassadors Foundation
185 Friendship Force International
186 Friendship Force of Dallas
187 Futures for Children
188 GALA: Globalization and Localization Association
189 GeoHazards International
190 Global Health Council
191 Global Interdependence Center
192 Global Options
193 Global Outreach Mission
194 Global Policy Forum
195 Global Resource Services
196 Global Studies Association North America
197 Global Teams
198 Global Volunteers
199 GOAL USA
200 God's Child Project
201 Golden Rule Foundation
202 Grand Triangle
203 Grassroots International
204 Habitat for Humanity International
205 Haitian Refugee Center
206 Healing the Children
207 Health Volunteers Overseas
208 Heartland Alliance
209 Hebrew Immigrant Aid Society
210 Heifer International
211 Heifer Project International
212 Helen Keller International
213 Henry L. Stimson Center
214 Henry M. Jackson Foundation
215 Hermandad
216 Hesperian Foundation
217 High Frontier Organization
218 Hispanic Council on International Relations
219 Holt International Children's Services
220 Hope International
221 Hospitality Committee
222 Humanitarian Law Project International Education Development
223 Humanitarian Medical Relief
224 Humanity International
225 Hungarian American Coalition
226 Idaho Volunteer Organizations Active in Disasters
227 Immigration and Refugee Services of America
228 Indian Muslim Relief Committee of ISNA
229 INMED
230 Institute for Development Anthropology
231 Institute for Intercultural Studies
232 Institute for International Cooperation and Development
233 Institute for Sustainable Communities
234 Institute for Transportation and Development Policy
235 Institute of Caribbean Studies
236 InterAction
237 Interaction/American Council for Voluntary International Action
238 Inter-American Parliamentary Group on Population and Development
239 Interchurch Medical Assistance
240 Intermed International
241 International (Telecommunications) Disaster Recovery Association
242 International Academy of Health Care Professionals
243 International Aid
244 International Bank for Reconstruction and Development
245 International Catholic Migration Commission

403 Shelter For Life International
404 Sister Cities International
405 Society for International Development USA
406 Society of African Missions
407 Society of Missionaries of Africa
408 South-East Asia Center
409 Southeast Asia Resource Action Center
410 Southeast Consortium for International Development
411 Spanish Refugee Aid
412 Student Letter Exchange
413 Student Pugwash U.S.A.
414 Survivors International
415 Task Force for Child Survival and Development
416 TechnoServe
417 Teen Missions International
418 The Hospitality and Information Service
419 The International Foundation
420 The Joan B. Kroc Institute for International Peace Studies
421 The Russian-American Center/Track Two Institute for Citizen Diplomacy
422 Third World Conference Foundation
423 Tibetan Aid Project
424 TransAfrica Forum
425 Trees for Life
426 Trickle Up Program
427 Trickle Up Program, The
428 Trilateral Commission
429 Trust for Mutual Understanding
430 Tuesday's Children
431 Tuscaloosa Red Cross
432 U.S. Association for the United Nations High Commissioner for Refugees
433 U.S. Committee for UNDP
434 U.S.A. - Business and Industry Advisory Committee to the OECD
435 U.S.A. for Africa
436 U.S.-China Peoples Friendship Association
437 U.S.-Japan Business Council
438 Unitarian Universalist Service Committee
439 United Jewish Communities
440 United Methodist Committee on Relief
441 United Nations Development Programme
442 United Nations Development Programme - Regional Bureau for Asia and the Pacific
443 United States Canada Peace Anniversary Association
444 United States Catholic Conference/Migration and Refugee Services
445 United States Committee for Refugees
446 United States-Japan Foundation
447 Uniterra Foundation
448 Upwardly Global
449 US Committee for Refugees and Immigrants
450 US Fund for UNICEF
451 USA for the United Nations High Commissioner for Refugees
452 Visions in Action
453 Voices in the Wilderness
454 Volunteer Missionary Movement - U.S. Office
455 Volunteers in Technical Assistance
456 War Child USA
457 Washington Institute of Foreign Affairs
458 Washington Office on Africa
459 Water for People
460 Weatherhead Center for International Affairs
461 Welfare Research, Inc.
462 Win Without War
463 Windows of Hope Family Relief Fund
464 Wings of Hope
465 Winrock International
466 Wisconsin/Nicaragua Partners of the Americas
467 WITNESS
468 Women for Women International
469 Womens EDGE
470 Womens Environment and Development Organization
471 World Affairs Council
472 World Affairs Council of Pittsburgh
473 World Bank Group
474 World Concern
475 World Conference of Religions for Peace
476 World Development Federation
477 World Education
478 World Emergency Relief
479 World Federation of Public Health Associations
480 World Hope International
481 World Learning
482 World Medical Mission
483 World Mercy Fund
484 World Neighbors
485 World Policy Institute
486 World Rehabilitation Fund
487 World Relief
488 World Resources Institute
489 World Vision (United States)
490 Worldwatch Institute
491 Worldwide Friendship International

Appendix C Survey Population


# Organization Name
1 Academy for Educational Development
2 Action Against Hunger (USA)
3 ActionAid International USA
4 Adventist Development and Relief Agency International
5 Advocacy Institute
6 African Methodist Episcopal Church Service and Development Agency, Inc.
7 Africare
8 Aga Khan Foundation U.S.A.
9 Air Serv International
10 Alliance for Peacebuilding
11 Alliance to End Hunger
12 American Friends Service Committee
13 American Jewish World Service
14 American Near East Refugee Aid
15 American Red Cross International Services
16 American Refugee Committee
17 AmeriCares
18 America's Development Foundation
19 Amigos de las Americas
20 Baptist World Alliance
21 BRAC USA
22 Bread for the World
23 Bread for the World Institute
24 Campaign for Innocent Victims in Conflict (CIVIC)
25 CARE
26 Catholic Medical Mission Board
27 Catholic Relief Services
28 Center for Health and Gender Equity
29 Center For International Health and Cooperation
30 Centre for Development and Population Activities
31 CHF International
32 Christian Blind Mission USA
33 Christian Childrens Fund
34 Church World Service
35 Citizens Development Corps
36 Citizens Network for Foreign Affairs, The
37 Communications Consortium Media Center
38 CONCERN Worldwide US Inc.
39 Congressional Hunger Center
40 Conservation International
41 Counterpart International, Inc.
42 Direct Relief International
43 Doctors without Borders
44 Earth Watch Institute
45 Educational Concerns for Hunger Organization (ECHO)
46 Episcopal Relief & Development
47 Family Care International
48 Florida Association of Volunteer Action in the Caribbean and the Americas
49 Food for the Hungry
50 Freedom from Hunger
51 Friends of the World Food Program
52 Gifts In Kind International
53 Giving Children Hope
54 Global Fund for Women
55 Global Resource Services
56 GOAL USA
57 Grassroots International
58 Habitat for Humanity International
59 Handicap International USA
60 Hands On Disaster Response
61 Heart to Heart International
62 Heartland Alliance
63 Hebrew Immigrant Aid Society
64 Heifer International
65 Helen Keller International
66 Hesperian Foundation
67 Holt International Childrens Services
68 Human Rights Watch
69 Hunger Project, The
70 Information Management & Mine Action Programs
71 INMED Partnerships for Children
72 Institute for Sustainable Communities
73 Institute of Cultural Affairs
74 InterAction
75 International Aid
76 International Catholic Migration Commission
77 International Center for Religion and Diplomacy
78 International Center for Research on Women
79 International Crisis Group
80 International Foundation for Electoral Systems
81 International Fund for Animal Welfare
82 International Housing Coalition
83 International Institute of Rural Reconstruction
84 International Medical Corps
85 International Orthodox Christian Charities
86 International Reading Association
87 International Relief and Development
88 International Relief Teams
89 International Rescue Committee
90 International Youth Foundation
91 Interplast
92 IPAS - USA
93 Jesuit Refugee Service/USA
94 Joint Aid Management
95 Keystone Human Services International
96 Latter-day Saint Charities
97 Life for Relief and Development
98 Lutheran World Relief
99 Management Sciences for Health
100 MAP International
101 Medical Care Development
102 Medical Teams International
103 Mental Disability Rights International
104 Mercy Corps
105 Mercy-USA for Aid and Development, Inc.
106 Mobility International USA
107 National Association of Social Workers
108 National Committee on American Foreign Policy
109 National Peace Corps Association
110 National Wildlife Federation
111 ONE Campaign
112 Open Society Institute
113 Opportunity International
114 Oxfam America
115 Pact
116 Pan American Health Organization
117 PATH
118 Pathfinder International
119 PCI-Media Impact
120 Perkins International
121 Physicians for Human Rights
122 Physicians For Peace
123 Plan USA
124 Population Action International
125 Population Services International
126 Presbyterian Disaster Assistance and Hunger Program
127 Project HOPE
128 ProLiteracy Worldwide
129 Refugees International
130 Relief International
131 Salvation Army World Service Office, The
132 Save the Children
133 SEVA Foundation
134 SHARE Foundation
135 Society for International Development
136 Stop Hunger Now
137 Support Group to Democracy
138 Teach for America
139 Transparency International - USA
140 Trickle Up Program, The
141 U.S. Committee for Refugees and Immigrants
142 U.S. Committee for UNDP
143 U.S. Fund for UNICEF
144 Unitarian Universalist Service Committee
145 United Methodist Committee on Relief
146 United States Association for UNHCR
147 United Way International
148 Water Aid America
149 Weatherhead Center for International Affairs
150 Winrock International
151 Womens Environment and Development Organization
152 Women's Commission for Refugees
153 World Concern
154 World Conference of Religions for Peace
155 World Education
156 World Emergency Relief
157 World Hope International
158 World Learning
159 World Rehabilitation Fund
160 World Relief
161 World Resources Institute
162 World Vision (United States)
163 World Wildlife Fund-US
