What’s in a number?
The first order of business in understanding the results of your sustainability report is to understand the metrics on which they’re based. All reports consist of metrics that fall into one of three broad categories: (1) Environmental, (2) Social, (3) Governance. Together, they’re commonly referred to by the acronym “ESG.” Environmental metrics capture the issues people most often associate with sustainability reporting: carbon emissions, water efficiency measures and waste policies, for example. Social factors address the human aspect of corporate performance, such as employee training, health, compensation and labor conditions. Governance factors try to capture who, if anybody, sets your company’s long-term vision on sustainability and who is rewarded or held accountable if certain objectives are or aren’t met.
Depending on the protocol or set of guidelines on which you report, you’ll make statements or provide data on a variety of ESG metrics, which the protocols may use to assign you a score. This score may be absolute, based on a public scoring rubric, or relative, based on the performance of other reporters. You may even see both approaches used side by side. For examples of each approach, check out CDP’s scoring methodology and its Climate Performance Leadership Index. To keep it simple, know that your score will depend on both the quantity and quality of the data you disclose.
A score can be helpful in giving you a quick pulse on overall performance. If it’s a relative scoring system, it will also help you understand how you stack up against your peers. But you’ll need to go deeper than a single number to understand which metrics to take action on. Here’s my advice for getting past scores that can admittedly be somewhat arbitrary (as methodologies change year over year) or uninformative (as when scoring methodologies are not made public, or scores are not provided at all).
Apples to oranges
I remember a new client asking me “how do we compare to our peers on sustainability?” It’s a question I would go on to hear many times from new and prospective clients. Like so many others, they wanted to know where they stood—to be “benchmarked.” It’s a foundational part of any assessment exercise and, done properly, can help set strategy or ensure tactics are having the desired result. In fact, benchmarking may be the number one way companies go about understanding their sustainability performance. Unfortunately, it can also be fiendishly hard to get right, for three reasons:
- Absentee benchmarking: Companies and their consultants look at the performance of others without knowing or comparing results to the client’s current state. This means your benchmark has no baseline.
- Mixed fruit: Mistake #1 is compounded when the performance metrics used in the benchmark are not apples-to-apples comparisons. This happens when absolute and intensity targets are mixed, annual reduction targets are detached from their base or end years, or other data-type and contextual aberrations creep in.
- Red herring: Avoid #1 and #2 and you still run the risk that your comparisons are not actually relevant to you, leaving you fixated on carbon emissions when your issues are rooted in governance practices. This often occurs when we obsess over the Big Four (emissions, water, waste and energy) without thinking about their upstream drivers.
These mistakes are dangerous because companies end up setting targets or issuing policies based on the “best practices” of others rather than in relation to their own current state and capabilities, with metrics appropriate to them.
Our first mistake may be the easiest to resolve through the act of reporting (hence Step 1!). Reporters to the Carbon Disclosure Project or the Global Reporting Initiative, for example, enjoy an excellent data set on both themselves and their peers with which to make meaningful comparisons, and so can control for the #1 problem of “absentee benchmarking.” Eaton, a perennial CDP top-performer, explains how it goes about benchmarking and the value of the CDP report in establishing its baseline. As Eaton rightly says, one of the primary benefits of reporting using an established sustainability protocol is a quality, consistent data set for comparison purposes. For these reporters, the risk of absentee benchmarking is mitigated.
The second matter, “mixed fruit,” is trickier to resolve. It takes a trained eye to spot when axes are mislabeled in all those fancy charts your consultant has provided. Start by double-checking the denominators, units and other modifiers of the metrics you’re analyzing. Are they consistent across data sets? Do they have all the contextual data around them? Not sure what’s contextual? Try building a story around the metric by asking “where did this metric come from and what is it telling me?” If it’s carbon emissions, for example, look at the energy data underneath it and the categorization of the physical sources contributing to it, and think about what this means in the real world.
If you spot a part of the story that sounds substantially different from how your story plays out, you may be trying to weave two incongruous narratives together and need to flag or otherwise mark that metric with an asterisk: caveat emptor. Take a look at what Broadcom says on page 5 of its sustainability highlights reel. Notice how carbon emissions are normalized by square foot. Using an intensity metric explains how emissions are “declining” despite aggregate, absolute emissions increasing. If you’re benchmarking against Broadcom, you’ll need to think hard about how to make sense of their emissions trajectory versus yours. By making sure the stories behind each metric start and end at the same place, you can help eliminate mistake #2.
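To see how the choice of denominator changes the story, here’s a minimal sketch with entirely made-up numbers (not Broadcom’s actual figures) showing how absolute emissions can rise while per-square-foot intensity falls:

```python
# Hypothetical figures only, illustrating how an intensity metric can
# "decline" even as absolute emissions increase.

def intensity(emissions_tco2e: float, floor_area_sqft: float) -> float:
    """Emissions intensity in tCO2e per square foot."""
    return emissions_tco2e / floor_area_sqft

# Year 1: 10,000 tCO2e across 1,000,000 sq ft
# Year 2: emissions grow 10%, but floor area grows 25% (e.g., via acquisitions)
y1 = intensity(10_000, 1_000_000)   # 0.0100 tCO2e/sq ft
y2 = intensity(11_000, 1_250_000)   # 0.0088 tCO2e/sq ft

print(f"Absolute change:  {11_000 - 10_000:+,} tCO2e")   # up
print(f"Intensity change: {(y2 - y1) / y1:+.1%}")        # down
```

Both statements are true at once; which one the report leads with shapes the narrative, which is why the benchmark must compare like denominators with like.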
This brings us to the “red herring” mistake: assessing irrelevant data. It’s easy to think the metrics and comparisons made by the protocols are necessarily appropriate for you. Remember, the benchmarks and data produced by sustainability protocols like CDP are not ends in themselves, but guidance on where to hunt for solutions. So having relatively high or low scope 1 emissions compared with your peer group is not justification for a knee-jerk reaction, but cause for reflection about the nature of your business, the drivers of those emissions and how they’re reported in the first place.
Consider a company whose absolute emissions are going up due to M&A activity while those of its peers fall. A problem? Not necessarily. Deeper investigation may reveal its peers are selling assets and benefiting from the associated overall reduction in emissions while the company’s use of an absolute emissions target obscures its progress reducing the overall intensity of its emissions. In this case, how the company reports is as important as what it reports. Importantly, just because a metric is in the benchmark doesn’t mean it’s inherently relevant. With hundreds of different data points in any given sustainability protocol, be judicious in including data points that matter to your company as opposed to those placed conveniently front and center.
Up to this point, we’ve focused on external benchmarking, but the concept applies equally within your organization. Internal benchmarking is the corollary to the practice of private, internal-only reporting that I espouse for first-time reporters. The difference is that where internal reporting is designed to get systems and processes in order and lower the anxiety around full-blown public disclosure, internal benchmarking is designed to help organizations identify the metrics relevant to them so those metrics can then be used in external benchmarking exercises. In this sense, internal benchmarking is another tactic for avoiding mistake #3. It’s also a great way to practice and refine your benchmarking process before applying it to external data sets. It works like this: organizations segment their operations or assets into categories for performance comparison purposes. A real estate asset manager, for example, can be benchmarked against another, comparable asset manager, but it can also segment its own portfolio by region or property type to make performance comparisons such as GHG emissions by region or water intensity by property type. This exercise is designed to help an organization identify the performance indicators most meaningful internally, as opposed to the outside world. With that accomplished, those indicators can then be compared against identical metrics from peers.
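The segmentation step above can be sketched in a few lines of Python; the portfolio records and figures below are hypothetical, standing in for whatever asset-level data your organization tracks:

```python
from collections import defaultdict

# Hypothetical portfolio: (region, property_type, emissions_tco2e, floor_area_sqft)
portfolio = [
    ("EMEA", "office",     4_200, 380_000),
    ("EMEA", "retail",     1_100, 120_000),
    ("APAC", "office",     3_600, 290_000),
    ("AMER", "office",     5_900, 610_000),
    ("AMER", "industrial", 7_400, 520_000),
]

def segment_intensity(records, key_index):
    """Aggregate emissions and floor area by a segment key (region or
    property type), then return emissions intensity per segment."""
    totals = defaultdict(lambda: [0.0, 0.0])
    for record in records:
        key = record[key_index]
        totals[key][0] += record[2]   # emissions
        totals[key][1] += record[3]   # floor area
    return {k: e / a for k, (e, a) in totals.items()}

by_region = segment_intensity(portfolio, 0)   # GHG intensity by region
by_type = segment_intensity(portfolio, 1)     # GHG intensity by property type

for region, value in sorted(by_region.items()):
    print(f"{region}: {value:.4f} tCO2e/sq ft")
```

The output tells you which segments drive performance internally; those same segment-level intensities are then the natural candidates to compare against identical metrics from peers.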
The practices of internal and external benchmarking are fundamental tools in making sense of sustainability performance. They help organizations identify the metrics meaningful to them, create accurate comparisons across data sets and ensure benchmarks are placed in context. While a sloppy benchmark is misleading and unactionable, a good benchmark is where you reap the rewards of the reporting exercise, because you will have quality, consistent data about yourself and your peers to work with. So go get your data together, keep the three mistakes in mind, and get benchmarked!
Here are a few places to start:
- Global Reporting Initiative: Sustainability Disclosure Database
- CO2 Benchmark
- GRESB: Benchmark
Measurabl’s guided, step-by-step approach to reporting lowers the barriers to sustainability disclosure so it’s something any company can do regardless of size or level of expertise. Paired with our analytics and engagement tools, we make it easy to benchmark performance, not only showing companies the areas in need of improvement but also guiding them through exactly how to implement those changes. In short, we’re building a platform to make the 1, 2, 3 of sustainability possible, affordable and effective for all organizations, democratizing sustainability and empowering change.