Roger Pielke Jr has a great posting commenting on the testimony of Michael Oppenheimer of Princeton University, a coordinating lead author of the IPCC special report Managing the Risks of Extreme Events and Disasters to Advance Climate Change Adaptation, before the Select Committee on Energy Independence and Global Warming of the US House of Representatives. He takes issue with Oppenheimer’s assertion that “assignment of cause for the damaging outcomes of such extremes” is a relatively new field.
In suggesting that this is a “new” field he notably avoids discussing a large body of literature such as on tropical cyclones (in the US, Australia, China, India, Latin America, etc.), floods, European storms, Australian bushfires, etc. where peer reviewed work has explained damage trends solely in terms of increasing societal vulnerability. Why is it so hard for IPCC authors to acknowledge any of this literature?
I like that. Good question.
I’m going to dig out my textbooks to find out when this field of extreme events became “mature”, but in the 1970s we learned all about environmental “extreme” events like flooding in my undergraduate and graduate courses in Hydrology. We learned that the probability distribution of many extreme events could be reliably modelled using the Gumbel Distribution. For homework, we gathered data on “extreme” floods, wave heights, etc. and plotted those events on Gumbel Distribution probability paper (we didn’t have Excel in those days). The data usually lined up in a straight line, which enabled us to extrapolate to longer periods of time than we had data for. For example, we could take fifty years of real data on maximum river levels and use that data to extrapolate to the 100, 500, or even 1000 year flood. That would then provide a basis upon which to design flood control structures. I learned at my first corporate job that the Gumbel distribution also worked well with maximum ocean wave heights.
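That probability-paper exercise translates directly into a few lines of modern code. Here is a minimal sketch of the workflow using SciPy: fit a Gumbel distribution to a series of annual maxima, then extrapolate to longer return periods. The data below are synthetic stand-ins for observed gauge records, and the distribution parameters are purely illustrative.

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for 50 years of annual maximum river levels (metres);
# a real analysis would use observed gauge records instead.
annual_maxima = stats.gumbel_r.rvs(loc=5.0, scale=1.2, size=50,
                                   random_state=42)

# Fit the Gumbel (Type I extreme value) distribution by maximum likelihood --
# the computational equivalent of the straight line on Gumbel probability paper.
loc, scale = stats.gumbel_r.fit(annual_maxima)

def return_level(T):
    """Level exceeded with probability 1/T in any given year (T-year event)."""
    return stats.gumbel_r.ppf(1 - 1 / T, loc=loc, scale=scale)

for T in (100, 500, 1000):
    print(f"{T}-year flood level: {return_level(T):.2f} m")
```

The extrapolation from 50 years of data to a 1000-year event works because the fitted line (here, the fitted parameters) is extended beyond the observed range, exactly as was done by hand on probability paper.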
All that before the IPCC existed to tell us what we don’t know.