“Am I even relevant? That depends how you measure it.
Take a measurement, then bag it up and give me the evidence.”
–Nate Feuerstein (NF) and Tommee Profitt, from “The Search”
“Program evaluation” can feel like something outsiders make you do to prove your worth and secure your funding. The core purpose of most arts organizations isn’t measuring, after all, but making and sharing creative work that matters. Program evaluation can feel not only like a distraction from your purpose, but also a reduction of it – like explaining a joke.
But, of course, we all evaluate our actions all the time – as individuals, in groups, or as part of organizations. If we didn’t, we would be detached from the world around us and inept at taking action within it. Art-making itself is a form of evaluation and assessment. As Jane Hirshfield (2017) names it:
What a writer or painter undertakes in each work of art is an experiment whose hoped-for outcome is an expanded knowing. Each gesture, each failed or less-than-failed attempt to create an experience by language or color and paper, is imagination reaching outward to sieve the world.
Smart organizations also realize that they have limited people, stuff, and money to achieve their goals, so they need to be intentional and inquisitive about how they apply those resources. And, yes, you do also have to prove your relative worth to potential funders, so that they have evidence of their money’s impact.
Still, it can feel daunting and disruptive to evaluate – for our internal purposes or for external constituents. The key is keeping evaluation as simple as possible, but not simpler – beginning with the basics and then ramping up intensity and complexity only when it’s necessary to do so.
The Results-Based Accountability framework offers a path from simple to complex, with its four-quadrant sorting of all forms of evaluation. Says RBA author Mark Friedman (2018):
All performance measures that have ever existed for any program in the history of the universe can be derived from thinking about the quantity and quality of effort and effect.
These four lenses – quantity, quality, effort, and effect – map out four quadrants of evaluation that interrogate three essential questions:
How much did we do? (quantity of effort)
Activity, expressed in quantities (#): such as number of events, money spent, staff time allocated, and so on.

How well did we do it? (quality of effort)
Efficiency or proficiency, expressed in averages or percentages (%): such as administrative overhead, unit cost, staff turnover, staff morale, worker safety, and so on.

Is anyone better off? (quantity and quality of effect)
Impact, outcome, or benefit to external constituents, expressed
in quantities (#) like attendance numbers, revenue earned, or contributions secured, or
in quality measures (%) like proportion of participants claiming positive impact, or degree of observable change in behavior.
The four quadrants offer a menu of possible metrics for you and your team, informed by the data you have available, the questions you want to answer, and the uncertainties you want to reduce (for yourself or your buyers/donors/supporters).
Of course, the golden zone in both value and cost is quadrant four: quality of effect. These are the indicators that your work makes a positive difference in the lives of people and communities. But the other three quadrants are important on the road to the golden zone. And sometimes, simply tracking how hard you try (quantity of effort) is a productive first step on that journey.
From the ArtsManaged Field Guide
Function of the Week: Gifts & Grants
Gifts & Grants involve attracting, securing, aligning, and retaining contributed resources (also called fundraising or development).
Framework of the Week: Value Proposition Canvas
The Value Proposition Canvas encourages you and your team to explore and understand a set of customers, audience members, or constituents from their perspective: What jobs are they trying to do? What pains do they encounter in that effort? And what gains do they experience when they succeed?
Sources
Friedman, Mark. 2018. Trying Hard Is Not Good Enough: How to Produce Measurable Improvements for Customers and Communities. 10th Anniversary edition. Parse Publishing.
Hirshfield, Jane. 2017. Ten Windows: How Great Poems Transform the World. Reprint edition. Knopf.
The key question in evaluation is "What does success look like?" Answer this question early in the process, then the follow-up, "What do I need to measure to know that I've achieved it?" Then you can build evaluation into the plan so that it doesn't turn into a scramble at the end.