We need to make it feel ok to be wrong sometimes
By Adam Siegel on May 31, 2019
No one enjoys being wrong. It’s an unpleasant emotional experience for any of us. But that’s exactly the risk we’re suggesting people take when we ask them to make a forecast about the future.
Further complicating the problem: in our crowd forecasting work, we are typically recruiting as forecasters the office employees of large, siloed, highly political organizations. These people have all learned the hard way that you never want to do anything that purposely makes you look bad. And yet, we’re asking someone to essentially stand up and say: “My forecast is X, you may make a decision based on my (and others’) forecast, and I could be totally wrong!”
You can argue participation should be anonymous to protect against this. Anonymity is often our approach, but we know a non-trivial number of people still hesitate to participate in an exercise where they’re being measured. Being measured may reveal something they don’t like about their own perceived value and skill, a dicey proposition if your self-worth is largely driven by your workplace title and status.
So what to do? Our clients want the accuracy and derivative insights of crowd forecasting, but a large number of people are disincentivized from participating for the reasons I’ve just described.
We’ve written in our knowledge base, and I’ve talked on this blog, about creating a baseline set of incentives, but this is a special case. How do you transform people who don’t want to know into people who do?
Unsurprisingly, it turns out different methods work for different kinds of people. And while we can use technology as a nudging mechanism and make sure our user experience has minimal friction, success ends up being more about messaging, tone, and creating an aura of credibility, integrity, and safety.
For example, for several clients we’ve taken the simple step of hiding all our accuracy leaderboards to completely remove the sense of competition. We wanted to drive home the message that this is a collective exercise where we need everyone’s contributions, not just those of the “top performers.” We’ve also changed how we think about feedback to individual participants. Outside of finance and other use cases where accuracy is all that matters, we focus more on metrics that reflect contribution to the collective: writing valuable forecast rationales, submitting forecast questions for the rest of the participants, and so on. And perhaps most critically, we try to create a pull for input based on a sense of mission. The internal monologue we want to encourage goes something like: “I’m uncomfortable expressing an opinion I’ll be held accountable to, but I need to share it, because I do have a perspective, and they’re using us as a critical signal amongst all the noise to make decisions.”
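To make that shift in feedback concrete, here’s a minimal sketch of how a platform might blend accuracy with contribution metrics when scoring the feedback a participant sees. Everything here, the metric names, the weights, the caps, is an illustrative assumption, not our (or any platform’s) actual scoring code.

```python
# Hypothetical sketch: blending accuracy with collective-contribution metrics.
# All names, weights, and caps below are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class ParticipantActivity:
    brier_score: float        # mean Brier score on a 0-1 scale; lower is better
    rationales_written: int   # forecast rationales shared with the crowd
    questions_submitted: int  # forecast questions proposed for others to answer


def feedback_score(a: ParticipantActivity, accuracy_weight: float = 0.3) -> float:
    """Blend accuracy with contribution metrics into one feedback number.

    Lowering accuracy_weight lets rationales and question submissions
    dominate the feedback a participant sees, which suits use cases
    outside finance where accuracy isn't the only thing that matters.
    """
    accuracy = 1.0 - a.brier_score  # invert so higher is better
    contribution = min(1.0, 0.10 * a.rationales_written
                            + 0.05 * a.questions_submitted)
    return accuracy_weight * accuracy + (1.0 - accuracy_weight) * contribution


SHOW_LEADERBOARD = False  # hidden for clients where competition deters participation

if __name__ == "__main__":
    alice = ParticipantActivity(brier_score=0.4, rationales_written=6, questions_submitted=2)
    print(f"Feedback score: {feedback_score(alice):.2f}")  # prints 0.67
```

The point of the low default accuracy weight is exactly the messaging above: the number a participant sees moves most when they contribute to the collective, not when they win.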
—
When I first went through Y Combinator and started Inkling Markets back in 2006, the overriding question I was asked was “why crowd forecasting?” Thirteen years later, the question has largely answered itself: case studies have proliferated and concepts like “Superforecaster” have entered the management zeitgeist.
And yet, crowd forecasting is not something you just “turn on.” It requires the commitment of senior management and the cultivation of a culture of humility. Our programs succeed only when people feel safe, when they know that even if they’re wrong, they’re making a valuable contribution to the greater good.
Some organizations just aren’t ready for this. It’s so much easier not to know, to play the game, to preserve the status quo. But for organizations ready to walk the talk of “diversity and inclusion,” “engagement,” “cultural transformation,” and all the rest, a program like crowd forecasting can be incredibly powerful.
Tags: change management, disruptive leadership, crowdsourced forecasting