Welcome To the Stock Synergy, Momentum & Breakout HUB On AGORACOM

Message: A little long, but a great read on the paradox of forecasting

http://www.boston.com/bostonglobe/ideas/articles/2011/01/09/that_guy_who_called_the_big_one_dont_listen_to_him/?page=full

That guy who called the big one? Don’t listen to him.

Inside the paradox of forecasting

By Joe Keohane | January 9, 2011
In 2006, a somewhat obscure economist stood before a room full of peers at the International Monetary Fund and let loose with some good old-fashioned doomsaying. The United States was about to get hit with a ghastly housing bust, he said. The price of oil was about to skyrocket, and a particularly nasty recession was on its way, bringing with it untold ruin and misery for citizens, bankers, and businesspeople all over the world. The prophecy was initially dismissed as the mutterings of a pessimistic crank. A year later, he was proved right beyond all doubt. “He sounded like a madman in 2006,” an economist who had attended the talk later told The New York Times. “He was a prophet when he returned in 2007.”

That economist was New York University’s Nouriel Roubini. And since he called the Great Recession, he has become about as close to a household name as an economist can be without writing “Freakonomics” or being Paul Krugman. He’s been called a seer, been brought in to counsel heads of state and titans of industry — the one guy who connected the dots while the rest of us were blithely taking out third mortgages and buying investment properties in Phoenix. He’s a sought-after source for journalists and a guest on talk shows, and he has even acquired a nickname: Dr. Doom. With the effects of the Great Recession still being keenly felt, Roubini is everywhere.

But here’s another thing about him: For a prophet, he’s wrong an awful lot of the time. In October 2008, he predicted that hundreds of hedge funds were on the verge of failure and that the government would have to close the markets for a week or two in the coming days to cope with the shock. That didn’t happen. In January 2009, he predicted that oil prices would stay below $40 for all of 2009, arguing that car companies should rev up production of gas-guzzling SUVs. By the end of the year, oil was a hair under $80, Hummer was on its way out, and automakers were tripping over themselves to develop electric cars. In March 2009, he predicted the S&P 500 would fall below 600 that year. It closed at over 1,115, up 23.5 percent year over year, the biggest single-year gain since 2003.

How can this be? How can someone with the insight to be so right about a major event be so wrong about so many other ones? According to a recent study, it’s simple: The people who successfully predict extreme events, and are duly garlanded with accolades, big book sales, and lucrative speaking engagements, don’t do so because their judgment is so sharp. They do it because it’s so bad.

Predicting the future is essential to modern life. When we buy a house, we’re essentially predicting that the surrounding neighborhood isn’t about to go to seed; when we start a business, we’re predicting that what we’re selling will find a buyer; when we marry, we’re predicting our mate won’t turn into an appalling, intolerable bore. Every decision, from going to a party, to voting, to professing belief in a higher power, is tightly bound to our confidence about what will happen next.

We reserve a special place in society for those who promise genuine insights into the future — who can predict what will happen in business, in sports, in politics, technology, and so on. The media landscape is rich with these experts; Wall Street pays millions of dollars every year to analysts to put a precise dollar figure on next year’s company earnings. Those who manage to get a few big calls right are rewarded handsomely, either in terms of lucrative gigs or the adoration of a species that so needs to believe that the future is in fact predictable.

But are such people really better at predicting the future than anyone else? In October of last year, Oxford economist Jerker Denrell cut directly to the heart of this question. Working with Christina Fang of New York University, Denrell dug through the data from The Wall Street Journal’s Survey of Economic Forecasts, an effort conducted every six months, in which roughly 50 economists are asked to make macroeconomic predictions about gross national product, unemployment, inflation, and so on. They wanted to see if the economists who successfully called the most unexpected events, like our Dr. Doom, had better records over the long term than those who didn’t.

To find the answer, Denrell and Fang took predictions from July 2002 to July 2005, and calculated which economists had the best record of correctly predicting “extreme” outcomes, defined for the study as either 20 percent higher or 20 percent lower than the average prediction. They compared those to figures on the economists’ overall accuracy. What they found was striking. Economists who had a better record at calling extreme events had a worse record in general. “The analyst with the largest number as well as the highest proportion of accurate and extreme forecasts,” they wrote, “had, by far, the worst forecasting record.”
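
To make the study’s yardstick concrete, here is a rough sketch in Python, using invented numbers rather than the survey’s actual data, of how a forecast can be flagged as “extreme” (20 percent above or below the consensus) and how a forecaster’s hit rate on extreme calls can be compared with his overall record; the 5 percent accuracy tolerance is an assumption made for this example, not the study’s definition.

# Illustrative sketch only: invented forecasts, not data from the WSJ survey.
def is_extreme(forecast, consensus, threshold=0.20):
    # An "extreme" forecast sits at least 20 percent above or below the consensus.
    return abs(forecast - consensus) >= threshold * abs(consensus)

def is_accurate(forecast, actual, tolerance=0.05):
    # Count a forecast as accurate if it lands within 5 percent of the outcome
    # (the tolerance is an assumption chosen for this example).
    return abs(forecast - actual) <= tolerance * abs(actual)

# Four hypothetical rounds of GDP-growth forecasts (percent):
# (consensus, actual outcome, bold forecaster, cautious forecaster)
rounds = [
    (3.0, 2.9, 4.0, 3.0),
    (3.2, 3.1, 2.2, 3.2),
    (2.8, 1.9, 1.9, 2.7),   # the bold call happens to nail the surprise
    (3.1, 3.0, 4.2, 3.0),
]

for name, col in [("bold", 2), ("cautious", 3)]:
    overall = sum(is_accurate(r[col], r[1]) for r in rounds)
    extreme_hits = sum(is_accurate(r[col], r[1]) and is_extreme(r[col], r[0]) for r in rounds)
    print(name, "- accurate overall:", overall, "of 4; accurate extreme calls:", extreme_hits)

In this made-up example, the bold forecaster ends up with the only accurate extreme call and the worse overall record — the same pattern Denrell and Fang report.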

By way of illustration, the authors cite the case of one Sung Won Sohn. Sung, a successful businessman who was then the CEO of Hanmi Financial Group, had made headlines with his forecasting prowess. After visiting a company that claimed it couldn’t meet the demand for $250 jeans, he had a hunch that “there must be money out there” and hiked his predictions on growth and inflation for 2005, even as other economists were predicting a drop in inflation and weaker growth. He was right, and his predictions won him the top spot among economists in the Journal’s survey for the year. This would have been a testament to some impressive intuitive faculties, had he not placed 43rd and 49th out of 55 in the previous two years. It wasn’t just Sung who came in for a beating. Across the board, Denrell and Fang found that poor forecasters are more likely to make bold predictions, and therefore, like the proverbial broken clock that is right twice a day, “they are also more likely to make extreme forecasts that turn out to be accurate.”

Their work is the latest in a long line of research dismantling the notion that predictions are really worth anything. The most notable work in the field is “Expert Political Judgment” by Philip Tetlock of the University of Pennsylvania. Tetlock analyzed more than 80,000 political predictions ventured by supposed experts over two decades to see how well they fared as a group. The answer: badly. The experts did about as well as chance. And the more in-demand the expert, the bolder, and thus the less accurate, the predictions. Research by a handful of others, Denrell included, suggests the same goes for economic forecasters. An accurate prediction — of an extreme event or even a series of nonextreme ones — can beget overconfidence, which can lead to making bolder and bolder bets, and thus, more and more errors.

So it has gone with Roubini. That one big call about the Great Recession gave him an unrivaled platform from which to issue ever more predictions, and a grand job title to match his prominence, but his subsequent predictions suggest that his foresight may be no better than your average man on the street. The curious nature of his fame calls to mind two of economist Edgar Fiedler’s wry rules for economic forecasters: “If you must forecast, forecast often,” he wrote. And: “If you’re ever right, never let ’em forget it.”

There’s no great, complex explanation for why people who get one big thing right get most everything else wrong, argues Denrell. It’s simple: Those who correctly predict extreme events tend to have a greater tendency to make extreme predictions; and those who make extreme predictions tend to spend most of the time being wrong — on account of most of their predictions being, well, pretty extreme. There are few occurrences so out of the ordinary that someone, somewhere won’t have seen them coming, even if that person has seldom been right about anything else.

But that leads to a more disconcerting question: If this is true, why do we put so much stock in expert forecasters? In a saner world than ours, those who listen to forecasters would take into account all their incorrect predictions before making a judgment. But real life doesn’t work that way. The reason is known in lab parlance as “base rate neglect.” And what it means, essentially, is that when we try to predict what’s next, or determine whether to believe a prediction, we often rely too heavily on information close at hand (a recent correct prediction, a new piece of data, a hunch) and ignore the “base rate” (the overall percentage of blown calls and failures).
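
To see how much weight the base rate ought to carry, consider a quick Bayes-rule calculation with invented numbers: suppose only 1 forecaster in 20 has genuine foresight, a genuinely skilled forecaster calls a rare crash half the time, and an unskilled but bold one gets it right 10 percent of the time by luck.

# Back-of-the-envelope Bayes calculation; every number here is an assumption
# chosen for illustration, not an estimate from the research discussed above.
p_skilled = 0.05          # base rate: 1 forecaster in 20 has genuine foresight
p_hit_if_skilled = 0.5    # chance a skilled forecaster calls the rare event
p_hit_if_unskilled = 0.1  # chance an unskilled but bold forecaster gets lucky

p_hit = p_skilled * p_hit_if_skilled + (1 - p_skilled) * p_hit_if_unskilled
p_skilled_given_hit = p_skilled * p_hit_if_skilled / p_hit
print("P(genuinely skilled | correct extreme call) =", round(p_skilled_given_hit, 2))  # about 0.21

Even after a correct call on the big one, those assumptions leave only about a one-in-five chance that the forecaster is genuinely skilled; treating the call as near-proof of skill is base rate neglect in action.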

And success, as Denrell revealed in an earlier study, is an especially bad teacher. In 2003 he published a paper arguing that when people study success stories exclusively — as many avid devourers of business self-help books do — they come away with a vastly oversimplified idea of what it takes to succeed. This is because success is what economists refer to as a “noisy signal.” It’s chancy, fickle, and composed of so many moving parts that any one is basically meaningless in the context of the real world. By studying what successful ventures have in common (persistence, for instance), people miss the invaluable lessons contained in the far more common experience of failure. They ignore the high likelihood that a company will flop — the base rate — and wind up wildly overestimating the chances of success.

To look at Denrell’s work is to realize the extent to which our judgment can be warped by our bias toward success, even when failure is statistically the default setting for human endeavor. We want to believe success is more probable than it is, that it’s the result of a process we can wrap our heads around. That’s why we’re drawn to prophets, especially the ones who get one big thing right. We want to believe that someone, somewhere can foresee surprising and disruptive change. It means that there is a method to the madness of not just business, but human existence, and that it’s perceptible if you look at it from the right angle. It’s why we take lucky rabbits’ feet into casinos instead of putting our money in a CD, why we quit steady jobs to start risky small businesses. On paper, these too may indeed resemble sucker bets placed by people with bad judgment. But cast in a certain light, they begin to look a lot like hope.

Joe Keohane is a writer in New York City.
