Boston, MA (PRWEB) November 18, 2016
Mark Robinson, co-founder and CMO of Kimble Applications, draws on his long experience of successfully managing consulting businesses to look at some of the issues that arise in forecasting, and reflects on the failure of polling to correctly predict the US election result.
The US election result was a huge shock to the world. There was anger about those poll predictions being so wrong. “Data died tonight,” Republican strategist Mike Murphy tweeted. “How Data Failed Us” ran a headline in the New York Times.
Forecasting can be difficult, and there is always a possibility of getting it wrong. Consulting business managers will likely know only too well the feeling the US Democrats had of missing the last step on the staircase. That’s a little like the unwelcome realization that a big project isn’t generating the revenue that was predicted. Or when you discover hundreds of unbilled hours against an engagement. Or when you realize, three weeks too late, that you missed last month’s forecast.
But a bad forecast is not usually caused by a failure of the number crunching. It is generally caused by the way the data is collected, input, and interpreted. So how you analyze the results you are getting is important.
The main reasons why forecasts turn out to be wrong are that the information in them isn’t current or that it is not reliable. Sometimes, with the best of intentions, people lower down in the system are introducing errors by fiddling with the data. Or, with less good intentions, people are “gaming” the system because they don’t have a vested interest in it being accurate.
Looking at trends and understanding why previous predictions did not match up against reality is a vital part of forecasting. To go back to the case of the US election, a clear global trend for electoral polling to be wrong was generally dismissed. Both for the Brexit vote and the UK general election in 2015, electoral polling failed to predict the result. In the UK “shy Brexiteers” and “shy Tories” have been breaking the models – but the possibility that there would be a large number of shy Trump voters in the US was not taken seriously enough.
Pollsters perhaps should have learned from this to be more skeptical than they were of the answers that voters were giving to their questions. Venture capitalist Peter Thiel said: “I was having dinner last week with a high-profile venture capitalist and he said, ‘I’m voting for Trump but I have to lie and tell everyone I’m voting for Gary Johnson’. He was stunningly matter-of-fact about it.”
If people are not answering honestly about how they will vote, that is a difficult issue to fix. Rather than trying to adjust the polls to take account of bad information, attention should be paid to where it is coming from. Why are people uncomfortable saying how they will vote? Where people are being measured in ways they don’t have any control over or stake in, there can be a tendency to game the system. For instance, if everyone who participated in a poll received a cash reward when the poll matched reality, it would drive very different behavior.
In the business world, people who are inputting information into your professional services automation (PSA) system may adjust it in ways that don’t help with the accuracy of the high-level view. For instance, the sales manager knows that John’s sales forecasts are usually too optimistic and adjusts the forecast to account for that. A better solution, in our view, would be to approach John and find out why this is happening. Perhaps managing John more closely – having weekly meetings and asking cogent questions – would fix the root cause and drive data accuracy from the outset.
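As a sketch of the kind of ad-hoc adjustment described above (the names and figures are hypothetical, not taken from any Kimble product), a manager might scale a salesperson’s new forecast by their historical actual-to-forecast ratio. The article’s point is that this masks the root cause rather than fixing it, but it shows how such a correction is typically made:

```python
# Hypothetical sketch: derive a crude calibration factor from a
# salesperson's past (forecast, actual) pairs and apply it to a new
# forecast. This compensates for a known bias; it does not fix it.

def bias_factor(history):
    """history is a list of (forecast, actual) pairs; return the
    mean actual/forecast ratio as a crude calibration factor."""
    ratios = [actual / forecast for forecast, actual in history if forecast]
    return sum(ratios) / len(ratios)

# John has consistently over-forecast in the past three quarters.
johns_history = [(100_000, 80_000), (50_000, 45_000), (120_000, 90_000)]
factor = bias_factor(johns_history)

# His new $110,000 forecast gets quietly scaled down.
adjusted = round(110_000 * factor)  # roughly 89,833
```

The weakness is exactly what the paragraph above argues: the adjustment lives in the manager’s head (or spreadsheet), the PSA data stays wrong, and nobody learns why John over-forecasts in the first place.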
In another scenario, a forecast can end up setting certain expectations which people then work to. Perhaps the revenue forecast is perceived as being set in stone and although staff have an opportunity to bring in more business, they don’t feel encouraged to do so, because the forecast is driving their behavior. In the run-up to the US election, Democratic party activists didn’t devote enough time to some blue states because the polls showed they were in the lead. But those were some of the states where the polls were most off.
So testing your forecast regularly against reality is vital. Used correctly, powerful IT software is a tool for augmenting human intelligence, not replacing it.
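To make the idea of regularly testing a forecast against reality concrete, here is a minimal sketch (the function, thresholds, and figures are illustrative assumptions, not part of any particular PSA system) that flags periods where actual revenue has drifted too far from the forecast:

```python
# Hypothetical sketch: compare forecast revenue to actuals per period
# and flag periods whose relative variance exceeds a tolerance, so a
# bad forecast is noticed in days rather than weeks.

def forecast_variances(forecast, actual, tolerance=0.10):
    """Return (period, variance) pairs where the relative variance
    between forecast and actual exceeds the tolerance."""
    flagged = []
    for period, predicted in forecast.items():
        observed = actual.get(period)
        if observed is None or predicted == 0:
            continue  # no actuals yet, or nothing was forecast
        variance = (observed - predicted) / predicted
        if abs(variance) > tolerance:
            flagged.append((period, variance))
    return flagged

forecast = {"Sep": 120_000, "Oct": 150_000, "Nov": 140_000}
actual = {"Sep": 118_000, "Oct": 96_000}

print(forecast_variances(forecast, actual))  # → [('Oct', -0.36)]
```

September’s small shortfall passes quietly, but October’s 36% miss is surfaced while there is still time to ask why — which is the point: the software augments the human conversation, it doesn’t replace it.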
Election expert Thomas E. Mann told the New York Times: “If we could go back to the world of reporting being about the candidates and the parties and the issues at stake instead of the incessant coverage of every little blip in the polls, we would all be better off. They are addictive, and it takes the eye off the ball.”
Forecasting is not the be-all and end-all of either politics or business. Using PSA along with a best-practice framework that constantly checks your predictions against reality enables you to concentrate on what really matters – people.
For more information about Kimble Applications and forecasting, watch this webinar.