
Strategy, forecasting and human factors.

March 1, 2016

There’s not a lot of room for a fixed mindset in problem solving or innovation.

With that in mind, we on the LT team tend to view ourselves as specialist generalists, and over the last nine months one of the things that has continually surprised me is the similarity of the processes we apply across our various specialties. I’ve found documents we’ve worked on where I could substitute ways-of-working terms for human factors terms and not need to change anything else. It’s validated my belief that in times of change and innovation, what matters is not the content of your knowledge, but your ability to process and apply it to novel opportunities.

Scientific research is in a similar transition, and it’s exciting to see robust science being published that supports this approach, and the methods we’ve been using in projects, which I discussed a little back here.

Philip Tetlock’s recently published research on how individuals and teams predict real-world, high-stakes events illustrates this vividly. Tetlock, funded by the United States Government’s IARPA (the intelligence counterpart of DARPA), ran a multi-year forecasting competition. The rules were simple: make better predictions, consistently, than the other competitors. If you possessed a supercomputer that could assess masses of data, use it. If you worked for the CIA and had classified information, use it. If you had a search engine and a subscription to the New York Times, use ’em.

After realising that much of the poor prediction amongst specialist analysts stemmed from how questions were structured, the researchers gathered over one million precise, assessable predictions from competitors over several years, covering ForEx, geopolitics and scientific progress, amongst other topics. Fascinating trends emerged, not least that a distinct group of participants were over 30% more accurate in their predictions than specialist intelligence officers with access to classified information, and 60% more accurate than the competition average. It’s fair to say the ‘average’ in the competition was also slightly above that of the general population.
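For readers wondering what “more accurate” means for probabilistic predictions: forecasting tournaments like Tetlock’s typically score each forecast with a Brier score, the squared difference between the stated probability and what actually happened, averaged over all forecasts (lower is better). Here is a minimal sketch of that scoring with invented numbers, purely to illustrate the idea rather than reproduce the competition’s data:

```python
# Illustrative only: Brier scoring of probabilistic forecasts (invented numbers).
# Each forecast is (stated probability the event happens, outcome: 1 = happened, 0 = didn't).

def brier_score(forecasts):
    """Mean squared error between forecast probabilities and outcomes; 0 is perfect, lower is better."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# A confident, well-calibrated forecaster vs. a hedging one, judged on the same three events.
superforecaster = [(0.9, 1), (0.2, 0), (0.7, 1)]
average_forecaster = [(0.6, 1), (0.5, 0), (0.5, 1)]

print(brier_score(superforecaster))     # ~0.047
print(brier_score(average_forecaster))  # ~0.220
```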

When you consider that the analysts who were soundly out-predicted are the very same people who make recommendations on invading foreign countries, it’s a worry. Tetlock’s initial research in fact showed that most ‘expert forecasters’ were less accurate than dart-throwing chimps at predicting wars, elections and economic events.

There’s also a huge opportunity here, because Tetlock’s research showed that the cognitive processes used by the competition’s ‘superforecasters’ were skill-based, which is to say, trainable. Intellect plays a surprisingly small part (as Tetlock would say, if you’re clever enough to be interested in this, you’re clever enough).

It turns out that What We Do, How We Do It, and How We Think About It are all co-dependent, something I’ll explore in my next blog.

In the meantime, there is a great breakdown of the research, with videos of Tetlock discussing it with the likes of Nassim Taleb and Danny Kahneman, here.
