In “An Inconvenient Truth”, Al Gore claimed there would be an increase in global hurricane numbers and intensity.
This has not occurred and may not even have been predicted at the time. Weather Underground has a write-up covering both sides of the question. Researchers note that the strength of a hurricane is a very complex variable to measure. The Dvorak tropical cyclone intensity estimation method was only developed in 1972, so comparisons with data from before that method existed will not give an accurate picture of hurricane intensity. As with tornadoes, intensity is rarely measured directly, and there is a tendency to accept indirect measurements as correct. That is not necessarily warranted. Estimates are often the best measurement available, but they can certainly be wrong. The statistics used to calculate the values can and do vary by researcher. Ideally, researchers would start from the same set of raw data and work through the calculations independently, but time constraints probably do not allow this. Keep in mind that any error in the estimates can change the final answer, and it is these values that are often the point of contention between scientists.
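To make concrete what "estimated, not measured" means here: the Dvorak technique assigns a T-number from satellite cloud patterns, and the T-number is then converted to a wind speed through a published lookup table. The sketch below uses approximate Atlantic-basin values in knots (my own rounding of commonly cited figures, not the official table), purely to illustrate that the reported intensity is the output of a conversion, not a direct observation.

```python
# Illustrative only: approximate Atlantic-basin Dvorak T-number to
# maximum-sustained-wind (knots) conversion. The exact official values
# differ slightly; the point is that intensity is a table lookup from a
# satellite-derived pattern, not a direct measurement.
DVORAK_WIND_KT = {
    1.0: 25, 1.5: 25, 2.0: 30, 2.5: 35, 3.0: 45, 3.5: 55,
    4.0: 65, 4.5: 77, 5.0: 90, 5.5: 102, 6.0: 115,
    6.5: 127, 7.0: 140, 7.5: 155, 8.0: 170,
}

def estimated_wind_kt(t_number: float) -> int:
    """Return the tabulated wind estimate for a Dvorak T-number."""
    if t_number not in DVORAK_WIND_KT:
        raise ValueError(f"no table entry for T{t_number}")
    return DVORAK_WIND_KT[t_number]

print(estimated_wind_kt(4.0))  # a T4.0 pattern implies roughly 65 kt
```

Any error in assigning the T-number propagates directly into the archived intensity, which is one reason pre-1972 storms are hard to compare with modern ones.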
USGCRP (usgcrp.gov) says: “There is no clear trend in the annual numbers of tropical cyclones.” However, as for the number, they note: “There is less confidence in projections of a global decrease in numbers of tropical cyclones. The apparent increase in the proportion of very intense storms since 1970 in some regions is much larger than simulated by current models for that period.”
I interpret this as follows: the projected global decrease in cyclone numbers is not based on observed trends but on other variables, and the observed proportion of very intense storms is larger than the models simulate for that period, which calls the models into question. If the models are not accurate, then conclusions based on them rest on inaccurate information. There is reason for skepticism about the accuracy of hurricane modeling.
Al Gore may or may not have been wrong about intensity (given the newness of the measurement methods), but he was certainly wrong about the number of hurricanes, and this was known by 2006 when the movie came out.
The Missing Hotspot
The issue of the missing hotspot in the troposphere has been making the rounds on the skeptic blogs. Does the absence of the hotspot prove there is no human-caused climate change? No, not necessarily (the theory can survive without the hot spot), but it does call the theory into question. If something the models predicted is not there, we need to know why.
I have read that the hotspot may not be a long-term feature (Skeptical Science) and that the hot spot is not the “fingerprint” of AGW; rather, stratospheric cooling is the fingerprint. The IPCC addresses both phenomena, though I was not sure whether it labels stratospheric cooling as the fingerprint. There is considerable discussion of other, natural factors in the mix.
There is also debate about whether the radiosondes are accurate and whether using proxies (such as the thermal wind equation) is appropriate, especially when the proxies more closely match the models. The radiosondes are what the skeptics are mostly attacking. My concern is that the proxies may be accepted as better than the radiosondes simply because they match the models more closely.
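For readers unfamiliar with it, the thermal wind relation mentioned above ties the vertical shear of the geostrophic wind to the horizontal temperature gradient, which is why wind observations can stand in as a temperature proxy. A standard pressure-coordinate form is:

```latex
\frac{\partial \mathbf{V}_g}{\partial \ln p}
  = -\frac{R}{f}\,\mathbf{k}\times\nabla_p T
```

Here $\mathbf{V}_g$ is the geostrophic wind, $p$ is pressure, $R$ is the gas constant for dry air, $f$ is the Coriolis parameter, and $\nabla_p T$ is the temperature gradient on a pressure surface. Inverting this to infer temperature trends from wind shear is exactly the kind of indirect estimate at issue.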
This may be a prime example of why global warming is not well accepted. People constantly hear that “the science is settled,” yet when it comes to light that reality and the models do not seem to match, skeptics claim the models are wrong and therefore the theory is wrong, while climate scientists respond that the models were misunderstood or need adjustment. Much of climate change theory rests on models and proxies, which do not seem to match reality as well as predicted. Rightly or wrongly, this leads people to believe the science has no validity.
The Marcott Hockey Stick
Where do I start? This appears to be outright fraud. First, Marcott admits the data for the last 100 years is not statistically robust. Why, then, did it appear in the Science article? What was the reason for including data that was not robust?
If you remove the “hockey stick” at the end of the graph, we are “not screwed,” as some news outlets seemed to like for a headline.
There are claims that the dates of the data points were rearranged so the graph would give the desired results. Only 73 proxies were used, two-thirds of them in the Northern Hemisphere, and some of the graphs show no warming in the proxies.
These are some of the graphs circulating in connection with Marcott. If I had written up a science paper with an uptick like Marcott’s, I am certain the paper would have been torn up and I would have been told to try again. Such an increase appears beyond any plausible interpretation of the data; the uptick does not follow any trend in the data, and Marcott admits there are problems with it. The paper should not have included this data, if for no other reason than that it is questionable science. The end of the graph, and probably the rest of it as well, should have been verified by someone outside the paper’s authors, or the authors should have found another way to analyze the data and confirmed that the uptick was real. As it stands, it looks like a case of falsified data and very poor scientific method.
Graph One: found on multiple sites, attributed to the Science article
Graph Two: Skeptical Science
As an aside, you might want to check out:
An interesting study. If it is supported by other studies and data, it would mean that we do not understand climate as well as we are told, and that something essential is missing from the models that would explain the change.