Part 4: Why Complexity Matters When Measuring Sustainability
By Matt Polsky and Claire Sommer
In the last two months, our atmosphere has passed the 400-ppm carbon dioxide threshold. Unlike the sound barrier, this number doesn’t mark an abrupt physical change of state. It’s a symbol of an increasingly likely future in which climate change substantially affects life on Earth.
Still, even as a negative marker, 400 ppm tells us just how urgent it is that even useful and popular tools such as metrics take a leap forward. Metrics aren’t just markers. To guide us, metrics must account for the complex world in which we actually operate, no matter how unfamiliar or uncomfortable that may sometimes seem. Systems thinking needs to be better understood, taken to a higher level and brought into the world of metrics.
We note that systems thinking is starting to come up in sustainable business conversations, but we remain concerned about persistent mindsets that ignore complexity. Without that grounding, sustainability practitioners are unequipped to grapple with a question that should come up early in metrics work: “Did we just miss something very important?”
Too many articles implicitly distill big problems into linear, simpler solutions. While the latter have an important place, over-reliance on them fails the global test — in every sense of the term — that our world’s challenges demand.
Complexity is close kin to systems thinking, but both are mostly understood only at the glazed-look level. A working definition might help: complexity is about understanding how interdependent, nonlinear systems actually behave in the wild: not as we’d like them to behave, nor as isolated parts tested in a laboratory, but as whole entities.
Here are a couple of pitfalls for sustainability metrics.
Pitfall 8: Overlooking nonlinearities
Exceedances of thresholds, sometimes called tipping points, are moments when the underlying state of a system actually changes its physical condition: it becomes something qualitatively different from what was there just before. These events may or may not be knowable ahead of time.
In a GreenBiz article, Sissel Waage describes how natural ecosystems can, under the right pressures, leap directly from forest to grassland — not the usual sequence — in abrupt, nonsynchronous jumps. This warns us not to count on the common assumption of straight-line behavior when developing metrics and projecting their results.
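To make the idea concrete, here is a toy sketch of a bistable system in which a slowly, smoothly rising pressure produces a sudden regime shift. The equation and the “forest”/“grassland” labels are our illustrative assumptions, not a model from Waage’s article:

```python
# Toy illustration (ours, not from the article): a bistable system in which
# slowly rising pressure p produces an abrupt regime shift, the hallmark of
# a tipping point. Read x < 0 as "forest" and x > 0 as "grassland";
# the labels are purely illustrative.

def simulate(p_max=0.6, steps=60_000, dt=0.01):
    """Euler-integrate dx/dt = -x^3 + x + p while p ramps from 0 to p_max."""
    x = -1.0                            # start in the lower ("forest") state
    history = []
    for i in range(steps):
        p = p_max * i / steps           # pressure rises slowly and smoothly
        x += (-(x ** 3) + x + p) * dt   # double-well dynamics plus forcing
        history.append((p, x))
    return history

traj = simulate()
tip = next(p for p, x in traj if x > 0)  # first pressure where the state flips
print(f"Smooth pressure, abrupt response: regime shift near p = {tip:.2f}")
```

Even though the pressure rises in tiny, even increments, the state barely moves for a long while and then jumps to a qualitatively different equilibrium; a straight-line extrapolation of the early trajectory would miss the shift entirely.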
Pitfall 9: Failing to embrace uncertainty, treating data as sufficient without seasoned interpretation, and succumbing to pervasive human biases
The old joke goes that everyone talks about the weather but no one does anything about it. Post-Superstorm Sandy, we’d argue that a lot more people are finally starting to act and are seeking connections, if not causes, as it relates to climate change. “Did climate change cause Superstorm Sandy?” has jumped off the weather map and into national political discussions.
Taken in this light, we should understand more about where weather forecasts and climate models are coming from. Nate Silver’s book, “The Signal and the Noise,” published a scant month before Sandy, does just that.
Let’s put three conventional “wisdoms,” or at least common practices, on the table:
- Weather forecasters are often wrong (hence the jokes).
- Uncertainty is a bad thing and, regardless, does not need to be worked into predictions or even much talked about.
- To the degree that chaos theory is popularly understood at all, it has a reputation for making prediction impossible or meaningless, since any departure in practice from what is expected can set the system on a new, unexpected path.
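That last point, sensitivity to initial conditions, can be demonstrated in a few lines. This minimal sketch uses the logistic map, a textbook chaotic system of our choosing rather than anything from Silver’s book:

```python
# A minimal sketch of sensitivity to initial conditions, using the logistic
# map with r = 4 (a standard chaotic system; our example, not Silver's).
# Two starting points differing by one part in ten million end up on
# completely different trajectories within a few dozen iterations.

def divergence(x0, eps=1e-7, r=4.0, n=50):
    """Iterate two nearby starting points and record the gap at each step."""
    x, y = x0, x0 + eps
    gaps = []
    for _ in range(n):
        x = r * x * (1 - x)
        y = r * y * (1 - y)
        gaps.append(abs(x - y))
    return gaps

gaps = divergence(0.2)
print(f"gap after 1 step: {gaps[0]:.1e}; largest gap in 50 steps: {max(gaps):.2f}")
```

A measurement error far below any instrument’s precision still overwhelms the forecast within a few dozen steps, which is exactly why meteorologists stopped pretending their point predictions were exact.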
In “The Weatherman Is Not a Moron,” Silver challenges these practices: “Perhaps because chaos theory has been a part of meteorological thinking for nearly four decades, professional weather forecasters have become comfortable treating uncertainty the way a stock trader or poker player might.” That is:
“The forecasters look at lots of different models: Euro, Canadian, our model — there [are] models all over the place, and they don’t tell the same story — which means they’re all basically wrong,” Ben Kyger, a director of operations for the National Oceanic and Atmospheric Administration, told me.
The National Weather Service forecasters who adjusted (data-derived) temperature gradients with their human-held light pens were merely interpreting what was coming out of those models and making adjustments themselves. “I’ve learned to live with it, and I know how to correct for it,” Kyger said. “My whole career might be based on how to interpret what it’s telling me.”
Silver puts his finger on the fact that chaos theory, systems thinking and a tolerance for being wrong are baked into the system for weather experts. They are part and parcel of meteorological culture. Yet this embrace of uncertainty is not necessarily seen in other fields, where projecting confidence is expected.
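The multi-model practice Kyger describes can be sketched in miniature: report the ensemble consensus as the forecast and the disagreement between models as the uncertainty. The model names echo his quote (plus two real NOAA models), but the outputs below are invented numbers for illustration only:

```python
# The ensemble practice in miniature. "Euro" and "Canadian" come from
# Kyger's quote; GFS and NAM are real NOAA models; all four outputs here
# are invented numbers (forecast highs in degrees Fahrenheit).
import statistics

model_runs = {"Euro": 68.2, "Canadian": 71.5, "GFS": 69.8, "NAM": 73.1}

forecast = statistics.mean(model_runs.values())   # the ensemble consensus
spread = statistics.stdev(model_runs.values())    # disagreement as uncertainty
print(f"Forecast: {forecast:.1f} F ± {spread:.1f} F")
```

The disagreement between models is not noise to be hidden; it is the uncertainty estimate that a poker-player-style forecaster reports alongside the number itself.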
Further, Silver makes no bones about the inadequacies of relying on data alone. Instead, wise interpretation is key, as a system is impossible to understand or forecast without it. In other words, data “does not speak for itself.”
The complexity concepts Silver describes — chaotic, nonlinear, dynamic systems — almost never come up in the context of sustainable business metrics. If you asked at a metrics forum, “How many people here have stopped to consider whether the system for which they’re proposing these metrics might be chaotic and nonlinear?” most people probably would have no idea what you were talking about.
In a review of Silver’s book, quantum physicist and bias expert Leonard Mlodinow crystallizes the various ways we can go wrong when we fail to embrace uncertainty:
“We are fooled into thinking that random patterns are meaningful; we build models that are far more sensitive to our initial assumptions than we realize; we make approximations that are cruder than we realize; we focus on what is easiest to measure rather than on what is important; we are overconfident; we build models that rely too heavily on statistics, without enough theoretical understanding; and we unconsciously let biases based on expectation or self-interest affect our analysis.”
This is not a testimonial to the power of common sense.
We doubt that there are cut-and-dried answers on how to incorporate and communicate complexity. But the challenge is there for metrics practitioners and communications professionals. As a starting point, we can at least acknowledge from the beginning when a problem is truly complex, even if we cannot yet address that complexity.