In general, people are terrible at forecasting. Each day we forecast a wide range of things, and we get many of them wrong. In particular, companies and individuals are notoriously bad at judging the likelihood of uncertain events, and numerous studies in judgment and decision making bear this out.

But improving a firm’s forecasting competence even a little can yield a competitive advantage. A company that is right three times out of five on its judgment calls is going to have an ever-increasing edge on a competitor that gets them right only two times out of five. It doesn’t take an expert forecaster to predict that!
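To see why even a small edge compounds, here is a hypothetical sketch (all numbers illustrative, not from the GJP data): two firms face the same stream of yes/no judgment calls, one right three times out of five, the other two times out of five.

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

def correct_calls(p_right, n_calls):
    """Simulate n_calls independent judgment calls, each right with
    probability p_right, and return how many landed correctly."""
    return sum(random.random() < p_right for _ in range(n_calls))

n = 1000  # hypothetical number of judgment calls
firm_a = correct_calls(0.6, n)  # right three times out of five
firm_b = correct_calls(0.4, n)  # right two times out of five
print(firm_a - firm_b)  # firm A's lead grows roughly linearly with n
```

Over n calls the expected gap is 0.2 × n correct decisions, so the better forecaster's advantage widens as the calls pile up.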

Most predictions made in companies, whether they concern project budgets, sales forecasts, or the performance of potential hires or acquisitions, are not the result of statistical analysis or data-driven calculation. They are coloured by the forecaster’s understanding of basic statistical arguments, susceptibility to cognitive biases, desire to influence others’ thinking, and concerns about reputation. Indeed, predictions are often intentionally vague to maximize wiggle room should they prove wrong. The good news is that training in reasoning and debiasing can reliably strengthen a firm’s forecasting competence. The Good Judgment Project demonstrated that as little as one hour of training improved forecasting accuracy by about 14% over the course of a year.

About the Good Judgment Project

In 2011, Philip Tetlock teamed up with Barbara Mellers, of the Wharton School, to launch the Good Judgment Project. The goal was to determine whether some people are naturally better than others at prediction and whether prediction performance could be enhanced. The GJP was one of five academic research teams that competed in an innovative tournament funded by the Intelligence Advanced Research Projects Activity (IARPA), in which forecasters were challenged to answer the types of geopolitical and economic questions that U.S. intelligence agencies pose to their analysts.

The IARPA initiative ran from 2011 to 2015 and recruited more than 25,000 forecasters who made well over a million predictions on topics ranging from whether Greece would exit the eurozone to the likelihood of a leadership turnover in Russia to the risk of a financial panic in China. The GJP decisively won the tournament—besting even the intelligence community’s own analysts.
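For context on how "winning" such a tournament is judged: forecaster accuracy in the GJP was scored with Brier scores, the mean squared error between probability forecasts and actual 0/1 outcomes (lower is better). A minimal sketch of the common binary form, using made-up forecasts:

```python
def brier_score(forecast, outcome):
    """Binary Brier score: squared gap between a probability forecast
    and the actual 0/1 outcome. 0.0 is perfect; always saying 50%
    scores 0.25; a confident wrong call approaches 1.0."""
    return (forecast - outcome) ** 2

# Made-up history: (predicted probability, what actually happened)
history = [(0.8, 1), (0.3, 0), (0.9, 0)]
avg_brier = sum(brier_score(f, o) for f, o in history) / len(history)
print(avg_brier)  # ≈ 0.313, dragged up by the confident miss at 0.9
```

Note how a single confident wrong call (the 0.9 forecast) dominates the average, which is why the best forecasters hedge their probabilities rather than round them to 0% or 100%.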

The Good Judgment Project identified the traits shared by the best-performing forecasters in the IARPA tournament. I have written here previously on innovation archetypes, and there is certainly some parallel between the thoughts shared in that article (based on my TEDx talk earlier in 2016) and the research coming out of the GJP. Here are the GJP’s findings on what makes the best-performing forecasters:

Characteristic 1: Philosophical Approach and Outlook

Cautious: They understand that few things are certain
Humble: They appreciate their limits
Nondeterministic: They don’t assume that what happens is meant to be

Characteristic 2: Abilities and Thinking Style

Open-minded: They see beliefs as hypotheses to be tested
Inquiring: They are intellectually curious and enjoy mental challenges
Reflective: They are introspective and self-critical
Numerate: They are comfortable with numbers

Characteristic 3: Methods of Forecasting

Pragmatic: They are not wedded to any one idea or agenda
Analytical: They consider other views
Synthesizing: They blend diverse views into their own
Probability-focused: They judge the probability of events not as certain or uncertain but as more or less likely
Thoughtful updaters: They change their minds when new facts warrant it
Intuitive shrinks: They are aware of their cognitive and emotional biases

Characteristic 4: Work Ethic

Improvement-minded: They strive to get better
Tenacious: They stick with a problem for as long as needed

As I explored the research and insights emerging from the IARPA GJP, I noted two things: many of the traits above are very similar to those I discussed in my earlier article, though presented in a very different way; and forecasting is itself a key part of the innovation process, arguably the key part. If we are unable to forecast our innovation “successes”, what are we using to set our innovation priorities and investments? Are our leaders and managers equipped with the right skills, insights, tools and support to make truly informed (and accurate) decisions? What cognitive biases are influencing these decisions? I do not have the answers at this time, but this is a topic I wanted to start a conversation about and will continue to research.

A public GJP tournament is ongoing; join to see if you have what it takes. In the meantime, let me know your thoughts on this, and I will share more as I research this topic further.