July 24, 2024
Caitlin Pentifallo, Head of Metrics and Regulations at Novata, joins this episode of Sustainable Intelligence to discuss the intricacies of social impact and its connection to impact investing. Caitlin shares her journey from studying the social impacts of the Olympic Games to leading Novata’s extensive Metric Library, which now includes over 2,400 metrics. Listen as she dives into the various methodologies and tools for measuring social impact, the challenges of impact investing, and the importance of selecting context-specific metrics to ensure meaningful and accurate evaluations. The conversation underscores the value of user-centric approaches in ESG data collection and the critical role of metrics in driving informed decision-making for sustainable impact.
Sustainable Intelligence is an interview series from Novata that explores ESG and sustainability in the private markets. From carbon accounting to using data to create value, the series dives into the challenges and opportunities facing private market investors and company leaders as they integrate ESG across the business and respond to regulatory requirements. Each episode centers authentic dialogue, highlighting experts at the forefront of advancing ESG data collection and driving meaningful progress in the sustainability landscape. Listen to more episodes.
Ella Williamson: Welcome to Sustainable Intelligence, where we discuss all things ESG and sustainability for the Private Markets, brought to you by Novata. I’m Ella Williamson, and I’m thrilled to be your host.
Joining us today is Caitlin Pentifallo, Head of Metrics and Regulations at Novata. She oversees the content and direction of Novata’s Metric Library, which currently contains over 2,400 metrics and counting. Prior to joining Novata, Caitlin ran her own micro SaaS social impact platform and social impact consultancy, focused on delivering tech-enabled social impact consulting at scale for nonprofit organizations.
She, therefore, couldn’t be better placed to talk about today’s topic of social impact: the theory, the methods, and the link to impact investing. Caitlin, could I ask you to start by talking us through how you got here and what led you to social impact?
Caitlin Pentifallo: Yes, of course, and thanks for having me on today, Ella. I’m really happy to be here, and this is a timely and topical question for me. We’re a couple of weeks out from the start of the Paris Olympic Games and that’s actually where I got started about 15 years ago, studying the social impacts of the Olympic Games themselves.
So I did my PhD research in Vancouver, British Columbia, at UBC, and I went there specifically to study with my advisor, Dr. Rob VanWynsberghe, who’s an expert in sustainability, education, and social change. At the time, he was also leading the Olympic Games Impact (OGI) study of the 2010 Winter Olympic and Paralympic Games, and when I heard about this, I was like, yes, sign me up. The OGI study was the largest of its kind at the time; it was independent, third party, and hosted at the University of British Columbia. The study comprised 125 economic, environmental, and sociocultural indicators and was put together by a rather large team of experts and graduate students. You can see from this how my early beginnings in ESG started to develop. To make a really, really long story short, I spent about five years studying just two social impact indicators: the impact of the 2010 Games on social housing stock and on the enforcement of urban policing and homelessness policies. From there, after finishing my PhD, I went on to work in a number of different areas. I worked in nonprofits, for the federal government, and for a variety of different agencies in both corporate and nonprofit spaces before realizing that what I really loved to do was social impact work, specifically for nonprofit organizations.
The more I did that, the more I realized I wanted to do it in tech, because putting these two things together was where I was most passionate. Bringing together those theories, those methods, and all the tools that I was using in the tech space, and making them accessible for organizations, was something I was just so passionate about doing, and that led me down the road towards tech. The more I did that on my own, the more I realized I really wanted to focus on what I do today at Novata. So, leading a team, being that expert on the methodological side and on our metrics and regulations, is really just an awesome fit for me here at Novata and something that I get to really get my hands into every single day.
So, a fun story connecting me to the Olympics. I got to work for the IOC, did some studies for FIFA and a number of other major events around the world, and got to travel a bit for that. So, excited to see that kickoff, but I’m really excited to be here in the position that I’m in now today with Novata.
Ella: Awesome. So going from two metrics at the beginning of your career to 2,400 now, that is quite a feat. So based on all of this insight and experience that you’ve had over the years, what would you say are the main methodologies and tools that practitioners need to be aware of? And theoretically, how might those differ?
Caitlin: I think, at heart, I’m really a sociologist and did spend a lot of time in educational studies, and what you’ll hear about the most is theory of change and logic models. This is at the heart of what every evaluation is really based in and really where social impact measurement lies. So, when we think about theories of change, those are really going to be your comprehensive illustrations of how and why practitioners might expect a change to happen in a particular context. And then, what they’re going to do there is support that working backwards to identify any kind of necessary preconditions. So your resources, your inputs, those activities that would achieve those longer term outcomes. You might see this kind of put together in a sentence where you really want to illustrate that whole picture beginning to end and really have that theory in place.
Logic models go a step further. They’re still going to be visual representations, but they’re going to identify those inputs, activities, all the way into outputs, outcomes, and eventually impacts of a particular program or that intervention. This is a much more kind of systematic approach, and that’s going to really dictate that relationship between resource activities and impacts. So by investing in something in this way, we hope to see this. By seeing something, by implementing this, we hope to see that and so this really undergirds everything that you’re going to see from that theory of change and that logic model perspective. And this is really at the core of what evaluators are looking to do and hypothesize when they’re going about social impact measurement.
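As a rough illustration of that chain, a logic model can be sketched as a simple ordered structure. The program and every component listed here are hypothetical, invented purely to show the inputs-to-impacts progression Caitlin describes:

```python
# A minimal, hypothetical logic model for an illustrative youth
# literacy program. The dict's insertion order encodes the causal
# chain the evaluator hypothesizes: each stage feeds the next.
logic_model = {
    "inputs": ["funding", "trained tutors", "classroom space"],
    "activities": ["weekly tutoring sessions", "parent workshops"],
    "outputs": ["200 students tutored", "40 workshops delivered"],
    "outcomes": ["improved reading scores", "higher school engagement"],
    "impacts": ["long-term educational attainment"],
}

# Walk the chain from resources through to hypothesized impacts.
for stage, items in logic_model.items():
    print(f"{stage}: {', '.join(items)}")
```

The point of the structure is exactly what the transcript describes: "by investing in this, we hope to see that," stage by stage, from resources to long-term change.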
Going from there, methodologically, it’s not super surprising that we hear a lot about approaches that attempt to ground the intangible, these otherwise priceless outcomes, in the financial. This is something that we’ve always tried to do, and two methods that attempt it are known by their acronyms: CBA (cost-benefit analysis) and SROI (social return on investment). Those are probably the two most prominent and the ones that we hear about the most. SROI is kind of a cool one because it’s a method for measuring, in financial terms, the social value created by an intervention, and there are really interesting databases out there that attempt to quantify what those outcomes are worth.
So if you’ve introduced something that is going to have a positive impact on health outcomes, what does it cost for someone to see a primary care physician? What does it cost for someone to avoid an illness? What does it cost for someone to go through something like a physical activity regimen? There are costs and there are costs avoided to all of these things. And there are databases out there that can quantify those things. That methodology is exceedingly complex, it’s hard and it’s rigorous and it takes a lot of time and expertise to put it together. But what something like that seeks to do is to monetize the social value created by an organization’s activities and then compare it to the investment required.
So if you’re putting something in that’s going to cost $10 million, what are the outcomes that you seek to generate over the lifespan of that particular investment? And is that warranted? You’ll see that a lot of the time in public spaces, where you’re saying, okay, if we are to implement something like this playground, this facility, this hospital, what is the outcome at the other end of this equation, and can we justify that? Cost-benefit analysis is another one, super common and prevalent: a process whereby you’re calculating and comparing the benefits and costs of a project, and it’s really common in spaces like decision making and government policy.
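The arithmetic behind both methods can be sketched at its simplest as a ratio of monetized outcomes to investment. Every figure and outcome valuation below is hypothetical, invented for illustration; real SROI work relies on rigorous, context-specific valuation databases and takes months of expert effort, as described above:

```python
# Illustrative sketch of SROI and benefit-cost ratios.
# All figures are hypothetical and for demonstration only.

def sroi_ratio(monetized_outcomes: dict, investment: float) -> float:
    """Social value created per dollar invested."""
    return sum(monetized_outcomes.values()) / investment

# Hypothetical monetized outcomes over the lifespan of a
# $10M community health facility.
outcomes = {
    "avoided_primary_care_visits": 4_000_000,
    "avoided_illness_costs": 7_500_000,
    "productivity_gains": 3_500_000,
}

investment = 10_000_000
ratio = sroi_ratio(outcomes, investment)
print(f"SROI: {ratio:.2f} : 1")  # prints "SROI: 1.50 : 1"
```

A ratio above 1.0 suggests the monetized social value exceeds the investment; the hard, expertise-intensive part is not this division but defensibly assigning dollar values to each outcome in the first place.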
So, what you’re going to see in social impact spaces is how do we take things that we know are beneficial for our communities, for different projects, for different communities, outcomes, whatever it might be and assign a dollar value to it? And I think that’s something that intrinsically is super, super interesting to get our heads around that at the end of the day, we’re trying to weigh up and we’re trying to measure things that we really can’t assign true dollar values against. And that’s where social impact, I think, deserves tons and tons of attention. And I think it’s really been historically very, very difficult to capture and to measure.
Lastly, I want to talk about two methods that ground almost everything that I do and the approach that we take here. The first is called utilization-focused evaluation and the other principles-based evaluation. These are off the beaten path a little bit and come from an evaluator in the field named Michael Quinn Patton. I’ve taken some of his courses live in, kind of, a professional development setting. These are much more hands-on and applicable to real life when you’re actually trying to evaluate something in real time with real people.
Some of these other methodological interventions that I talked about, like SROI and CBA, you can see that they’re going to take a lot of time to do. They require lots of information and lots of expertise over many months. Something like UFE or principles-based evaluation is more akin to developmental evaluation, which is one you’re doing a bit more on the fly. So we’re talking about approaches that give us more real-time answers with a lighter touch, but the principles here are what I really want to talk about.
So utilization-focused evaluation is a more practical approach that wants to drive evaluation from the most user-centric place possible. It aims to make evaluations as useful and impactful as possible by aligning them to the needs and the context of their intended users. By involving those end users throughout the process, you actually increase the likelihood that the findings and the data will be of greater utility in the end. And the reason we want to incorporate end users in a process like this is that it has a greater chance of informing and improving decision making. The purpose of doing anything in evaluation is about driving and improving decision making. So as you’re conducting evaluation, as you’re collecting data, whether it’s out in the field or in ESG, it’s got to be about improving the decisions that you make once you have that data in hand. Informing what someone is going to do with that data once they have it back is core and critical to what any evaluator, any person doing an ESG-related assessment, is going to be doing, and that’s really core to what we want to do when we incorporate those principles into any kind of impact evaluation.
Ella: Wow. So a lot of methodologies and tools you went through there. I’d never heard of social return on investment before, and it just touches so many parts of society, so thank you for going through that. If we could move on, what makes impact investing so challenging when it comes to measurement compared to, say, the other frameworks and standards we know and hear about every day?
Caitlin: So, for this, I would say impact investing is challenging mostly because its aims are so varied. We’re not just looking for returns, profits, growth, margins. We might be looking for something like gender equity, increased access to education, developing new water systems. Even more challenging is that generational change isn’t measured in one annual reporting cycle. What looks like diminished or slowed growth might actually just be a part of progress. So it’s actually quite difficult to build out a series of indicators that are contextually specific to what each portfolio is aiming to achieve, while also being nuanced enough to capture change over a longer period of time. When we’re doing this in practice at Novata, we advocate for a very, very careful selection and use of something like GIIN and IRIS+ metrics. GIIN stands for the Global Impact Investing Network, an organization dedicated to the scale and effectiveness of impact investing. Second to that would be IRIS+, which often goes hand in hand with GIIN. That’s the Impact Reporting and Investment Standards, and it provides a more standardized framework for measuring and reporting the social, environmental, and financial performance of impact investments. This helps bring a bit more consistency and comparability to impact reporting.
While we are very vocal proponents and partners of both GIIN and IRIS+, we really advocate for their correct usage, because when they aren’t applied correctly, they’re not going to give you an accurate story. To give you an example of this, let’s say you were going to buy a used car and I told you that you could only look at one thing. Let’s say that one thing was number of miles. Would you do that? Would you trust that one thing? Sure, it’s a rather robust and very popular indicator, but aren’t there other things you’d also like to see or know about in order to assess whether that was a car you’d like to invest in? This is a common pitfall we see a lot of the time: wanting to tack on that one impact metric to assess one small aspect of an impact investing portfolio. We all know there are low-mileage used cars that are probably lemons. But there are also Toyota Corollas from 1992 that are still going because they work. When you apply this logic to different aspects of impact investing, you’ll also see that people apply the same metrics to very different programs. Now say I told you that you have to use that same used car metric to buy a house. That’s what’s happening when we try to standardize impact metrics across an entire portfolio. This seems pretty extreme and oversimplified, but that drama is how it feels to me working in the space of impact metrics, and this is why we, as a metrics and regulations team, work really hard to steer people away from that approach and towards a careful and unique selection of impact metrics to match their context, their geography, their program outcomes, setting, and maturity. All of this really matters in impact investing.
Ella: I love your use of analogies there. With a used car, if you only have that one piece of data about it, overreliance on that single metric can really result in misleading conclusions. So, Caitlin, grounding all of that insight in Novata’s setting in particular and working every day in ESG data collection for the private markets, how do you apply everything we have spoken about to the Metric Library and how your team selects and writes metrics for our clients?
Caitlin: Great question. First, I’d say we’re always thinking about what our end users are going to do with the data they collect and how they intend to use and interact with the metrics they’ve selected. And that really is bringing that theoretical piece from that utilization focused evaluation forward. That same logic applies to our library and the standards that we license, add, and procure. What are we seeing and hearing that we can leverage to better meet the needs of our end users?
Second, the way that we write and structure our metrics, the question prompts that come with those, they’re not part of the metrics that we receive from standards organizations. These are actually things that our team has written for each and every metric you’ll find in our library. The same thing goes with every piece of guidance, every resource that you see linked. These are all things that we really think through, knowing how our end users are going to experience the platform, and in the end, how they’re going to use and apply the data they collect.
ESG data collection is systematic and it’s intentional. For all its controversies, and whatever it’s named, it is inherently a values-based proposition. There’s an old, oft-repeated phrase that what gets measured gets managed, but the other half of that quote is, “What gets measured gets managed, even when it’s pointless to measure and manage it, and even if it harms the purpose of the organization to do so.” Measuring what matters to an organization, helping to define that, and setting a course for how it is identified is absolutely critical. I always say, as an evaluator, as someone working in the ESG space, I can evaluate anything. It might not be good or useful, but you better believe that I can do it. The other half of that same coin is that we are extremely well prepared as an organization to help with the most challenging questions in metric development.
Ella: So putting the end user first and really understanding how they will interact and utilize specific metrics is key to what you do here at Novata. Thank you so much, Caitlin, for all your insights on these complex metrics and for emphasizing the importance of measuring what truly matters to each unique organization.
Until next time, let’s keep building sustainable intelligence together. Find out more at novata.com.