Navigating Effectiveness

Posted on 09 March 2010

In response to this blog’s March 7 invitation for a variety of views on Charity Navigator’s decision to change its rating system (to reflect accountability and outcome measures in addition to financial metrics like overhead), George E. Mitchell and Hans Peter Schmitz share their perspective.

George E. Mitchell and Hans Peter Schmitz

Charity Navigator, the most popular online NGO watchdog agency, is moving ahead with plans to overhaul how it evaluates nonprofits, following persistent criticism of its exclusive reliance on financial ratios like overhead to rate organizations.

Since websites like Charity Navigator influence billions of dollars in charitable giving annually, this change will have a significant impact on the not-for-profit sector in the United States. While Charity Navigator claims on its website that it is “shining lights on truly effective organizations,” the focus on overhead ratios provides little insight into the actual impact of organizations and creates incentives for nonprofits to misreport financial information rather than improve the effectiveness of their programs.

Research published by the Urban Institute and the Center for Social Innovation at Stanford University has shown how rating agencies create incentives for nonprofits to lower and hide overhead costs, which may actually reduce organizational effectiveness by starving organizations of the infrastructure they need to deliver services effectively.

Enter Ken Berger, the CEO of Charity Navigator, who has been listening attentively to his critics over the past two years. While he says financial measures will still play a significant role in the revised rating system, he also proposes to include measurements of actual outcomes as well as transparency and accountability.

Our own research at Syracuse University’s Moynihan Institute of Global Affairs confirms the need to shift away from financial measures. Based on interviews with more than 150 leaders of international nonprofits rated by Charity Navigator, we find that a majority of those leaders define effectiveness as demonstrating achievement of the goals they promised they would accomplish. In other words, effectiveness is outcome accountability.

This sounds like common sense, but it carries a demanding implication: if organizational effectiveness is outcome accountability, then any meaningful rating system used to evaluate nonprofits must somehow measure the extent to which organizations achieve their goals.

As Lowell, Trelstad, and Meehan put it, a more meaningful rating system would provide, in addition to financial data: (1) a qualitative evaluation of an organization’s transparency and governance; (2) an assessment of program effectiveness; and (3) an evaluation of feedback mechanisms designed for donors and beneficiaries. Such a rating system would also allow rated organizations to respond to an evaluation done by a rating agency. More generally, the popular discourse of nonprofit evaluation should move away from financial notions of organizational effectiveness and toward more substantive understandings of programmatic impact.

Charity Navigator is unquestionably heading in the right direction, but it may not be able to go far enough. A crucial piece currently missing is reliable information about outcomes provided by nonprofits themselves. That kind of information remains hard to come by.

According to Ken Berger, fewer than 10 percent of nonprofits actually monitor whether they are achieving anything in a meaningful way. If nonprofits resent being held accountable to financial measures, then they need to step up and provide the information the public really needs to make good giving decisions. Nonprofits need to evaluate their programs and credibly disclose the results in a simple and standardized format.

Nonprofit organizations need to take more responsibility for demonstrating results to stakeholders. If a nonprofit is really accomplishing something, it should be able to show it; to the extent that it can, it can be understood to be effective.

Kudos to Ken Berger for making a bold decision to move Charity Navigator in the right direction.

Now we just need the same boldness of vision among nonprofits.

Nonprofits need to implement serious program evaluations on a continuous basis and, like the evaluators they often criticize, publicly admit when they get it wrong, such as when a program fails to deliver the promised results. Donors also need to understand that achieving progress often requires experimentation. Part of finding out what works is identifying what doesn’t, and innovative nonprofits shouldn’t be punished for attempting a new approach that doesn’t pan out, as long as they learn from their mistakes.

Only then will nonprofits be able to stop responding to irrelevant criticisms based on financial ratios and instead start educating stakeholders about what it really takes to achieve meaningful impact.

George E. Mitchell is a PhD candidate in political science at the Maxwell School of Citizenship and Public Affairs at Syracuse University. Hans Peter Schmitz is associate professor of political science at the Maxwell School and Director for Research in the Transnational NGO Initiative at Syracuse University.

