Editors’ Note: Maoz Brown details the history of outcome evaluation in the human services, summarizing an argument he recently made in the December 2019 issue of Social Service Review. The entire issue, on social work history, is worthy of attention from historians of philanthropy. It contains, for instance, important contributions on the Russell Sage Foundation-funded Pittsburgh Survey of 1907-1908, as well as on the United Foundation of Detroit during the War on Poverty.
The increased pressure to measure outcomes features prominently in accounts of the nonprofit sector’s development over recent decades, and funders have been key catalysts of this trend. As numerous authors have noted, foundations, corporations, government agencies, and individual contributors are increasingly looking for hard data showing that their contributions are paying off, thus compelling nonprofit organizations to measure, document, and publicize the social impact they claim to generate.
Novelty is a pronounced theme in the literature on outcome evaluation in the nonprofit sector, with many authors attributing the rise of evaluative practices to changes in philanthropic grantmaking and government contracting during the late twentieth century. Among the most cited examples are the outcome measurement movement in the United Way, the Government Performance and Results Act of 1993, and the rise of venture philanthropy (with its commitment to quantifying “social return on investment”).
I provide additional historical context for these accounts in an article titled “Constructing Accountability: The Development and Delegation of Outcome Evaluation in American Social Work,” published in the December 2019 issue of Social Service Review. Why the focus on social work? As I explain in the piece, outcome evaluation has taken root especially in the human services sector, and social work history offers a valuable vantage point on trends in this field. Based on an extensive review of social work administration textbooks, academic and trade journal articles, conference presentations, agency reports, and service program manuals, I offer a nuanced but significant revision to current accounts of outcome evaluation in the human services sector and of the role of funders in driving this change.
First, to be clear, historical evidence does strongly suggest that the imperative human service agencies face today to evaluate and quantify outcomes is unprecedented. To get a sense of how much things have changed since the early twentieth century, one can review a series of books published by the Social Work Publicity Council (begun as a department of the Russell Sage Foundation) on the preparation of annual reports and other publicity materials (you can view an example here). Across all of these volumes (as well as accompanying newsletters), I found no discussion of compiling statistics on client outcomes in order to demonstrate impact quantitatively to the public. Instead, fundraising and publicity guidance from the period highlights the value of sharing rich and dramatic anecdotes. Statistics, when discussed, pertain to recording and conveying the extent of need, not results.
The absence of guidance on compiling systematic outcome data is strikingly different from common practice. Open any book on fundraising published in the last twenty years, and you will find no shortage of exhortations to marshal quantitative data demonstrating impact. As a means of advancing fundraising prospects, then, the extent and intensity of quantitative outcome evaluation is indeed novel.
This is not to say that human service agencies were not attempting to quantify client outcomes in the early twentieth century. In fact, there is clear evidence not only that social work leaders understood the importance of outcome evaluation but also that they expected direct service agencies to carry out this task. Lacking academic capacity in its early years, the social work profession delegated evaluative responsibility to agency staff, charging service providers with identifying and sharing effective methods. By the 1930s, social work literature regularly presented statistical tabulations of social worker judgments on client outcomes, and by the 1940s, the Family Welfare Association of America (one of the leading institutions of professional social work) had published and disseminated templates for collecting and indexing these quantified judgments.
Why, then, do past fundraising guides and annual reports appear to ignore outcome data? The answer, as I elaborate in the article, lies in the professional framing of outcome evaluation during much of the twentieth century. The value of evaluation lay in its potential to build an empirical foundation for a nascent discipline struggling to establish its distinct value among a jumble of other helping professions. In short, the goal was to advance the profession as a whole, not to make individual agencies more attractive to funders.
Though social workers appreciated the importance of outcome evaluation, the practice was still relatively rare. Agencies generally lacked the time, money, and skill sets for comprehensive and rigorous outcome research. This is the clearest theme I encountered in my historical research: the ideal of infusing clinical record keeping with sophisticated evaluative research consistently ran aground on limited agency capacity. To the extent that systematic outcome evaluation took place, however, it appears to have reflected a professional logic of collective legitimation rather than a market logic of competition for funds. For this reason, results appeared in academic and trade journals designed for field-building, not in annual reports.
The rise of business-oriented thinking and practice in the nonprofit sector in the late twentieth century reconfigured the incentive structures underlying outcome evaluation. Government and philanthropic funders began to incorporate outcome evaluation requirements directly into contract terms and grant guidelines, repurposing outcome data as organizational performance indicators. Traditionally, outcome evaluation was conceived as research in pursuit of effective and replicable methods (what we now sometimes call “evidence-based practices”); the assessment of discrete organizations centered not on client outcomes but rather on resource inputs (e.g., building conditions, personnel qualifications, financial stability). Beginning in the late 1970s, however, outcomes joined inputs in the evaluative calculus used to judge individual organizations in an increasingly competitive market for grants and contracts.
It’s worth hedging here a bit: I am not arguing that there has been a complete shift from evaluating methods to evaluating organizations. Rather, my argument is that outcome evaluation has become increasingly associated with organizational performance. I argue further that this shift represents an ascendant belief in the importance of individual organizations (as opposed to transferable methods) as drivers of social impact. To illustrate, Peter Frumkin observed in an early analysis of venture philanthropy that this form of giving often “focuses not on building a model that is ripe for replication, but instead on building a powerful organization that has a steady revenue stream behind it that can drive internal growth.”
Arguably the clearest manifestation of the growing importance of organizations is the field of nonprofit management, which emerged in the 1980s and expanded rapidly in subsequent decades. Growing out of profession-specific fields such as social work administration, arts administration, and hospital administration, nonprofit management takes a fundamentally domain-agnostic view of organizations, leveling administrative styles across the disparate fields that make up the nonprofit sector. As Stephen Block noted in his incisive dissertation on the institutionalization of nonprofit management as a distinct discipline, nonprofit managers have tended to “discuss their role, positions, and responsibilities in terms of ‘agency practice’ rather than ‘professional practice.’”
To summarize, the novelty of outcome evaluation at the service provider level in the late twentieth century lies not only in the prevalence of the practice, but also in the cultural reconstruction of accountability and effectiveness that the practice signifies. Notions of social impact tied to professional effectiveness gave way increasingly to an organization-centric understanding of performance.
However, there is also a significant degree of historical continuity in provider-executed evaluation. The refrain about the limited ability of service providers to carry out rigorous evaluative designs persists today. Though caseworkers and administrators often make concerted efforts to collect data on their impact, their methods usually fall short of rigorous research designs. Citing this persistent concern about capacity, I conclude my article with a few recommendations.
First, funders should temper their expectations for outcome evaluation from their service-providing grantees, unless those providers are unusually well resourced or the funder is willing to invest in the capacity needed to conduct a formal evaluation. Second, funders should pay more attention to propagating transferable evidence-based practices vetted by trained evaluation experts, while also recognizing the unique role of practitioners in adapting these practices to their specific settings. Third, I argue that the human services sector should rediscover the cooperative ethos of its past. The search for evidence-based practices should be a collaborative undertaking based on the technical skill set of the social scientist and the privileged vantage point of the service provider, not a competitive effort to post better numbers than those of other potential grantees.
This research also offers an important takeaway for those interested in the history of philanthropy, which must confront the field’s frequent claims of novelty. Understanding how new a practice is requires understanding its cultural underpinnings in any given period. As mentioned above, looking only at past annual reports and other publicity materials to determine the prevalence of outcome evaluation assumes that the function of outcome evaluation has always been to advance organizational standing. The challenge (and thrill) of historical analysis is to uncover how the meaning of a given practice has shifted over time. In the case of outcome evaluation, I suggest that past meaning offers useful instruction for correcting present missteps.
-Maoz Brown
Maoz (Michael) Brown recently received his PhD in Sociology from the University of Chicago, where he conducted research on the history of social welfare policy in the United States.