Saturday, 2 April 2011
Why are expert forecasts so often worthless?
The fake forecasts of this Greenpeace activist should be the focus of study
Do you remember who, on January 25, said that “our assessment is that the Egyptian government is stable and is looking for ways to respond to the legitimate needs and interests of the Egyptian people”?
It was, of course, US Secretary of State Hillary Clinton, who based her statement on the assessments of the vast US intelligence community, including the CIA and the US embassy network.
This and a number of other intelligence failures have prompted the little-known US agency, the Intelligence Advanced Research Projects Activity (IARPA), to sponsor a vast project - involving thousands of participants - to improve forecasting, Reuters' World Affairs columnist Bernd Debusmann reports:
The idea is to raise five large competing teams of people of diverse backgrounds who will be asked to make predictions on fields that range from politics and global security to business and economics, public health, social and cultural change and science and technology. The project is expected to run for four years and stems from the recognition that expert forecasts are very often wrong.
One of the teams is being put together by University of Pennsylvania professor Philip Tetlock, whose ground-breaking 2005 book (Expert Political Judgment: How Good is It? How Can We Know?) analysed 27,450 predictions from a variety of experts and found they were no more accurate than random guesses or, as he put it, “a dart-throwing chimpanzee”.
“To test various hypotheses,” Tetlock said in an interview, “we want a large number on my team, 2,500 or so, which would make it almost ten times bigger than the number I analysed in my book.” There are no firm numbers yet on how big the other four teams will be. But Dan Gardner, the author of a just-published book that also highlights the shortcomings of expert predictions, believes the IARPA-sponsored project will be the biggest of its kind. It is expected to start in mid-2011.
The title of Gardner’s book, “Future Babble. Why expert predictions are next to worthless and you can do better,” leaves no doubts over his conclusion. The book is an entertaining, well researched guide to decades of totally wrong predictions from eminent figures. There was the British writer H.N. Norman, for example, who, in the peaceful early days of 1914, predicted there would be no more wars between the big powers of the time. World War I started a few months later.
There was the Stanford biologist Paul Ehrlich, whose best-selling 1968 book The Population Bomb predicted that hundreds of millions of people would starve to death in famines in the 1970s. There was an entire library of books in the 1980s that predicted Japan would overtake the United States as the world’s leading economic power.
Not to forget the U.S. Defense Intelligence Agency’s September 1978 prediction that the Shah of Iran “is expected to remain actively involved in power over the next ten years.” The Shah fled into exile three months later, forced out by increasingly violent demonstrations against his autocratic rule.
In a similar vein, U.S. Secretary of State Hillary Clinton said on January 25 that “our assessment is that the Egyptian government is stable and is looking for ways to respond to the legitimate needs and interests of the Egyptian people.”
Seventeen days later, the leader of that stable government, Hosni Mubarak, stepped down in the face of mass protests.
“We are not clairvoyant,” America’s intelligence czar, James Clapper, told a hearing of the House Intelligence Committee where criticism of the sprawling U.S. intelligence community was aired. “Specific triggers for how and when instability would lead to the collapse of various regimes cannot always be known or predicted.”
IARPA should add a team to analyze why fake "experts" like NASA's James Hansen and a great number of his warmist academic colleagues - who receive vast sums of public money - are still allowed to spread their bogus "science". Hansen and his followers are no better than the man mentioned in Debusmann's article, Stanford's Paul Ehrlich, who is famous for always being wrong: