Does forecasting accuracy really matter?
By Ben Roesch on May 05, 2023
tl;dr - Yes, of course it matters. But improving accuracy continues to receive an outsized share of attention, at the expense of the other benefits crowdsourced forecasting can provide when it's used to improve decision-making.
With “forecasting” in the name, crowdsourced forecasting invites constant focus on the accuracy of its forecasts. Our first conversations with potential clients often amount to discussions of how to build a crystal ball for their organization. Some ask how they might incorporate algorithmic techniques like AI and machine learning to polish that crystal ball further. Participants suggest incentive tweaks to scoring systems, or devise entirely new ones, in hopes of promoting more accurate forecasts.
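Since scoring comes up so often, it helps to see what a scoring rule actually looks like. Below is a minimal sketch of the Brier score, one widely used accuracy measure; the function and the example probabilities are illustrative, not taken from any particular platform:

```python
def brier_score(forecast_probs, outcome_index):
    """Brier score for a single multi-outcome question.

    forecast_probs: probabilities assigned to each possible outcome (sum to 1).
    outcome_index: index of the outcome that actually occurred.
    Lower is better; 0.0 is a perfect forecast.
    """
    return sum(
        (p - (1.0 if i == outcome_index else 0.0)) ** 2
        for i, p in enumerate(forecast_probs)
    )

# A forecaster who put 80% on the outcome that occurred scores far better
# than one who put only 30% on it.
print(brier_score([0.8, 0.2], 0))  # 0.08
print(brier_score([0.3, 0.7], 0))  # 0.98
```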
These kinds of improvements are welcome, but they come with a side effect: they divert attention from the fundamental challenges that more often threaten the success of forecasting efforts. We’ve been running crowdsourced forecasting programs for years, and while we’ve had our share of successes and failures, none of our programs has failed because of inaccurate forecasts or scoring issues.
If crowdsourced forecasting is “accurate enough,” then where should you focus your attention to increase the likelihood of success of your forecasting program?
Building a culture and engagement strategy that supports the activity of forecasting
For most people, forecasting is intimidating, especially at first. The topics may be unfamiliar, or participants may be nervous about being “wrong,” especially since keeping score is so integral to forecasting. Fortunately, this is often a perception problem rather than an insurmountable one. Taking 15 minutes to read a question’s background information and a few news articles is often enough to reach an initial forecast that feels reasonably informed rather than a complete shot in the dark. From there, our reminder and alert system makes it easier to follow the course of the question, learning and updating as events unfold.
In addition, most forecasting tournaments involve an inherent tension between the questions that decision makers want to ask (often hard or esoteric) and the questions that interest forecasters. Any forecasting effort should dedicate significant attention to the question portfolio, making sure to strike a balance between “hard” questions and engaging ones.
On top of all this, participants are often asked to forecast for minimal reward. Prize and incentive pools are often small (or outright disallowed). The forecasting exercise is not part of their job description. Personal benefits, such as improving critical reasoning or illuminating biases in their thinking, are valuable but relatively intangible. All of these combine to make engagement an ongoing challenge.
Any forecasting effort that wants a reasonable chance of success needs to dedicate a sizable portion of its mindshare and resources to forecaster engagement. Over time we’ve identified a number of key elements of effective engagement but, for better or worse, there is no one-size-fits-all solution. The right engagement strategy depends on the nature of the effort and the people involved.
Senior leaders are thinking of the big picture. Forecast results need to meet them there.
Because we score all forecasts, forecasting questions need to be precisely defined so they can be judged. In contrast, senior leaders often care about big-picture, “fuzzy,” high-level questions. This can produce a mismatch between the questions being forecasted and the future these senior leaders are trying to understand. If a decision maker wants to know “will there be a major financial crisis this year?” but we hand them forecasts for “how many bank failures will there be this year?,” then we’ve created a disconnect. And when senior leaders do receive forecasts, relevant or not, there is the additional challenge of integrating them into existing decision-making processes.
Organizations are also often unable or unwilling to ask the questions they really should be asking: those questions touch the proverbial “elephants in the room,” are too politically sensitive, or are simply unwelcome given someone’s personal agenda or financial performance incentives.
Bridging this disconnect is the focus of our issue decomposition process, in which we take a big-picture question and break it down into the drivers that influence the outcome. Each driver is expressed as individual signals that can be forecasted to indicate whether the driver is materializing. We can then roll the forecasting results back up into a more direct answer to the big-picture question and meet the senior leader at their level of thinking.
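To make the roll-up step concrete, here is a deliberately simplified sketch. The driver names, crowd probabilities, and weights are all hypothetical, and a simple weighted average stands in for what is, in practice, an analytical judgment rather than a mechanical formula:

```python
# Hypothetical drivers for "Will there be a major financial crisis this year?"
# Each entry: (driver name, crowd probability, analyst-judged weight).
drivers = [
    ("Wave of regional bank failures",     0.15, 0.40),
    ("Sharp credit tightening",            0.35, 0.35),
    ("Major sovereign or corporate default", 0.10, 0.25),
]

def roll_up(drivers):
    """Weighted average of driver probabilities as a rough top-level signal."""
    total_weight = sum(w for _, _, w in drivers)
    return sum(p * w for _, p, w in drivers) / total_weight

print(f"Top-level signal: {roll_up(drivers):.0%}")  # ~21%
```

The point isn’t the arithmetic; it’s that each driver’s signal is individually forecastable and judgeable, while the roll-up speaks to the question the senior leader actually asked.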
Taking time to think through and deliver on a plan to connect forecasting outputs to decision making and decision makers is imperative. Without it, forecasting results are likely to be met with “what do I do with this?” style reactions from decision makers.
Accurate forecasts are clearly a necessary and valuable part of forecasting, but spending all of our time trying to squeeze blood from the accuracy stone is unlikely to produce significantly different results. If we want to truly institutionalize our forecasting efforts and make them as valuable as possible, time and resources are better spent ensuring we have an active, engaged community of forecasters and actionable results that are useful to decision makers.