Instrumental Record « RealClimate



Don’t climate bet against the house

Decades ago (it seems), when perhaps it was still possible to have good faith disagreements about the attribution of current climate trends, James Annan wrote a post here summarizing the thinking and practice of Climate Betting. That led to a spate of wagers on continued global warming (a summary of his bets through 2005 and attempts to set up others is here).

There were earlier bets; the best known was perhaps the $100 wager between Hugh Ellsaesser and Jim Hansen in 1989 on whether there would be a new temperature record within three years. There was (1990), and Ellsaesser paid up in January 1991 (Kerr, 1991). But the more recent bets were more extensive.


References

  1. R.A. Kerr, "Global Temperature Hits Record Again", Science, vol. 251, p. 274, 1991. http://dx.doi.org/10.1126/science.251.4991.274

Update day 2021

Filed under: — gavin @ 22 January 2021

As is now traditional, every year around this time we update the model-observation comparison page with an additional annual observational point, and upgrade any observational products to their latest versions.

A couple of notable issues arose this year. HadCRUT has now been updated to version 5, which includes polar infilling, making the Cowtan and Way dataset (which was designed to address that issue in HadCRUT4) a little superfluous. Going forward it is unlikely to be maintained, so in a couple of figures I have replaced it with the new HadCRUT5. The GISTEMP version is now v4.

For the comparison with Hansen et al. (1988), we previously only had the projected output up to 2019 (taken from fig. 3a in the original paper). However, it turns out that fuller results were archived at NCAR, and they have now been added to our data file (and yes, I realise this is ironic). This extends Scenario B to 2030 and Scenario A to 2060.

Nothing substantive has changed with respect to the satellite data products, so the only change is the addition of 2020 in the figures and trends.

So what do we see? The early Hansen models have done very well considering the uncertainty in total forcings (as we’ve discussed (Hausfather et al., 2019)). The CMIP3 models’ estimates of SAT, forecast from ~2000, continue to be astoundingly on point. This must be due (in part) to luck, since the spread in forcings and sensitivity in the GCMs is somewhat ad hoc (given that the CMIP simulations are ensembles of opportunity), but it is nonetheless impressive.

CMIP3 (circa 2004) model hindcast and forecast estimates of SAT.

The forcings spread in CMIP5 was more constrained, but had some small systematic biases, as we’ve discussed (Schmidt et al., 2014). The systematic issue associated with the forcings and the more general issue of the target diagnostic (whether we use SAT or a blended SST/SAT product from the models) give rise to small effects (roughly 0.1ºC and 0.05ºC respectively) that are independent and additive.
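For readers who want to see what the "blended" diagnostic means in practice, here is a minimal sketch (my own illustration, not the code used in the published comparisons): near-surface air temperature is used over land and sea ice, sea surface temperature over open ocean, and the result is area-averaged. The function name, field names and grid are placeholders.

```python
import numpy as np

def blended_global_mean(tas, tos, land_frac, ice_frac, lat):
    """Area-weighted global mean of a 'blended' temperature anomaly field.

    tas       : near-surface air temperature anomalies, shape (nlat, nlon) [K]
    tos       : sea surface temperature anomalies, same shape [K]
    land_frac : land fraction of each grid cell (0..1)
    ice_frac  : sea-ice fraction of the ocean part of each cell (0..1)
    lat       : 1D latitudes of the grid-cell centres [degrees]
    """
    # SST over open ocean, air temperature over land and sea ice
    open_ocean = (1.0 - land_frac) * (1.0 - ice_frac)
    blended = open_ocean * tos + (1.0 - open_ocean) * tas

    # cos(latitude) approximates the relative area of each grid cell
    w = np.cos(np.deg2rad(lat))[:, None] * np.ones_like(blended)
    return np.sum(w * blended) / np.sum(w)
```

Comparing a blended mean like this with a straight SAT mean from the same model output is the kind of calculation that gives the ~0.05ºC-scale effect mentioned above.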

The discrepancies between the CMIP5 ensemble and the lower atmospheric MSU/AMSU products are still noticeable, but remember that we still do not have a ‘forcings-adjusted’ estimate of the CMIP5 simulations for TMT, though work with the CMIP6 models and forcings to address this is ongoing. Nonetheless, the observed TMT trends are very much on the low side of what the models projected, even while stratospheric and surface trends are much closer to the ensemble mean. There is still more to be done here. Stay tuned!

The results from CMIP6 (which are still being rolled out) are too recent to be usefully added to this assessment of forecasts right now, though some compilations have now appeared:

CMIP6 model SAT (observed forcings to 2014, SSP2-45 scenario subsequently) (Zeke Hausfather)

The issues in CMIP6 related to the excessive spread in climate sensitivity will need to be looked at in more detail moving forward. In my opinion, ‘official’ projections will need to weight the models to screen out those with ECS values outside of the constrained range. We’ll see if others agree when the IPCC report is released later this year.
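Purely as an illustration of what weighting or screening an ensemble might look like (this is not an IPCC method, and the ECS values and projections below are invented), one could simply zero out members whose ECS falls outside an assessed range before averaging:

```python
import numpy as np

# Hypothetical ensemble: each member has an ECS [K] and a projected warming [K]
ecs        = np.array([2.1, 2.8, 3.1, 3.9, 4.6, 5.6])
projection = np.array([1.6, 1.9, 2.1, 2.5, 2.9, 3.4])

# An assessed 'likely' ECS range used as a screen (values are illustrative)
lo, hi = 2.3, 4.5
weights = np.where((ecs >= lo) & (ecs <= hi), 1.0, 0.0)

print(f"unweighted ensemble mean: {projection.mean():.2f} K")
print(f"screened ensemble mean:   {np.average(projection, weights=weights):.2f} K")
```

Smoother schemes that weight each member by a likelihood for its ECS, rather than a hard cutoff, are also possible; the point is only that high-ECS outliers would no longer pull the mean up.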

Please let us know in the comments if you have suggestions for improvements to these figures/analyses, or suggestions for additions.

References

  1. Z. Hausfather, H.F. Drake, T. Abbott, and G.A. Schmidt, "Evaluating the Performance of Past Climate Model Projections", Geophysical Research Letters, vol. 47, 2020. http://dx.doi.org/10.1029/2019GL085378
  2. G.A. Schmidt, D.T. Shindell, and K. Tsigaridis, "Reconciling warming trends", Nature Geoscience, vol. 7, pp. 158-160, 2014. http://dx.doi.org/10.1038/ngeo2105

2020 Hindsight

Yesterday was the day that NASA, NOAA, the Hadley Centre and Berkeley Earth delivered their final assessments for temperatures in Dec 2020, and thus their annual summaries. The headline results have received a fair bit of attention in the media (NYT, WaPo, BBC, The Guardian etc.) and the conclusion that 2020 was pretty much tied with 2016 for the warmest year in the instrumental record is robust.


An ever more perfect dataset?

Filed under: — gavin @ 15 December 2020

Do you remember when global warming was small enough for people to care about the details of how climate scientists put together records of global temperature history? Seems like a long time ago…

Nonetheless, it’s worth a quick post to discuss the latest updates in HadCRUT (the data product put together by the UK’s Hadley Centre and the Climatic Research Unit at the University of East Anglia). They have recently released HadCRUT5 (Morice et al., 2020), which marks a big increase in the amount of source data used (similar to the upgrades from GHCN3 to GHCN4 used by NASA GISS and NOAA NCEI, and comparable to the data sources used by Berkeley Earth). Additionally, they have improved their analysis of the sea surface temperature anomalies (a perennial issue), which leads to an increase in the recent trends. Finally, they have started to produce an infilled dataset which uses extrapolation to fill in data-poor areas (like the Arctic – first analysed by us in 2008…) that were left blank in HadCRUT4 (similar to GISTEMP, Berkeley Earth and the work by Cowtan and Way). Because the Arctic is warming faster than the global mean, the new procedure corrects a bias that existed in the previous global means (by about 0.16ºC in 2018 using a 1951-1980 baseline). Combined, the new changes give a result that is much closer to the other products:

Differences persist around 1940, or in earlier decades, mostly due to the treatment of ocean temperatures in HadSST4 vs. ERSST5.
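As a toy illustration of why leaving data-poor regions blank biases the global mean low when the Arctic is warming fastest (the numbers below are made up, not HadCRUT values):

```python
import numpy as np

# Made-up zonal-mean anomalies [K] on a coarse latitude grid for one year
lat  = np.arange(-87.5, 90.0, 5.0)              # grid-cell centre latitudes
anom = 0.8 + 0.02 * np.maximum(lat, 0.0)        # fake field with amplified Arctic warming

sampled = lat < 70.0                            # pretend cells poleward of 70N are unobserved
w = np.cos(np.deg2rad(lat))                     # area weights

full_mean    = np.sum(w * anom) / np.sum(w)
sampled_mean = np.sum(w[sampled] * anom[sampled]) / np.sum(w[sampled])

print(f"global mean with the Arctic included: {full_mean:.3f} K")
print(f"mean over the sampled area only:      {sampled_mean:.3f} K (biased low)")
```

Infilling by extrapolation (as in HadCRUT5, GISTEMP, Berkeley Earth or Cowtan & Way) is a way of estimating the missing cells rather than implicitly assuming they warm at the average rate of the rest of the world.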

In conclusion, this update further solidifies the robustness of the surface temperature record, though there are still questions to be addressed, and there remain mountains of old paper records to be digitized.

The implications of these updates for anything important (such as the climate sensitivity or the carbon budget) will, however, be minor, because all sensible analyses would have been using a range of surface temperature products already.

With 2020 drawing to a close, the next annual update and intense comparison of all these records, including the various satellite-derived global products (UAH, RSS, AIRS) will occur in January. Hopefully, HadCRUT5 will be extended beyond 2018 by then.

In writing this post, I noticed that we had written up a detailed post on the last HadCRUT update (in 2012). Oddly enough the issues raised were more or less the same, and the most important conclusion remains true today:

First and foremost is the realisation that data synthesis is a continuous process. Single measurements are generally a one-time deal. Something is measured, and the measurement is recorded. However, comparing multiple measurements requires more work – were the measuring devices calibrated to the same standard? Were there biases in the devices? Did the result get recorded correctly? Over what time and space scales were the measurements representative? These questions are continually being revisited – as new data come in, as old data is digitized, as new issues are explored, and as old issues are reconsidered. Thus for any data synthesis – whether it is for the global mean temperature anomaly, ocean heat content or a paleo-reconstruction – revisions over time are both inevitable and necessary.

References

  1. C.P. Morice, J.J. Kennedy, N.A. Rayner, et al., "An Updated Assessment of Near-Surface Temperature Change From 1850: The HadCRUT5 Data Set", Journal of Geophysical Research: Atmospheres, vol. 126, 2021. https://www.metoffice.gov.uk/hadobs/hadcrut5/HadCRUT5_accepted.pdf

Climate Sensitivity: A new assessment

Filed under: — gavin @ 22 July 2020

Not small enough to ignore, nor big enough to despair.

There is a new review paper on climate sensitivity published today (Sherwood et al., 2020; preprint) that provides the most thorough and coherent picture yet of what we can infer about the sensitivity of climate to increasing CO2. The paper is exhaustive (and exhausting – coming in at 166 preprint pages!) and concludes that equilibrium climate sensitivity is likely between 2.3 and 4.5 K, and very likely to be between 2.0 and 5.7 K.


References

  1. S.C. Sherwood, M.J. Webb, J.D. Annan, K.C. Armour, P.M. Forster, J.C. Hargreaves, G. Hegerl, S.A. Klein, K.D. Marvel, E.J. Rohling, M. Watanabe, T. Andrews, P. Braconnot, C.S. Bretherton, G.L. Foster, Z. Hausfather, A.S. Heydt, R. Knutti, T. Mauritsen, J.R. Norris, C. Proistosescu, M. Rugenstein, G.A. Schmidt, K.B. Tokarska, and M.D. Zelinka, "An Assessment of Earth's Climate Sensitivity Using Multiple Lines of Evidence", Reviews of Geophysics, vol. 58, 2020. http://dx.doi.org/10.1029/2019RG000678

Nenana Ice Classic 2020

Filed under: — gavin @ 27 April 2020

Readers may recall my interest in phenological indicators of climate change, and ones on which $300K rests are a particular favorite. The Nenana Ice Classic has been an annual tradition since 1917, and provides an interesting glimpse into climate change in Alaska.

This year’s break-up of the ice has just happened (unofficially, Apr 27, 12:56pm AKST), and, as in years past, it’s time to assess what the trends are. Last year was a record early break-up (on April 14th), and while this year was not as warm, the break-up is still earlier than the linear trend (of ~8 days per century) would have predicted, and was still in the top 20 earliest break-ups.

Nenana Ice Classic ice break up dates
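For anyone wanting to reproduce the trend line, an ordinary least-squares fit of break-up day-of-year against year is all that is involved; the sketch below uses made-up dates rather than the real Nenana series.

```python
import numpy as np

# Hypothetical (year, break-up day-of-year) pairs; the real record starts in 1917
years = np.array([1917, 1950, 1980, 2000, 2013, 2019, 2020])
doy   = np.array([ 126,  124,  122,  119,  140,  104,  118])

# OLS linear fit: day-of-year as a function of year (np.polyfit returns slope first)
slope, intercept = np.polyfit(years, doy, deg=1)

print(f"trend: {slope * 100:+.1f} days per century")
print(f"break-up day expected from the trend in 2020: {np.polyval([slope, intercept], 2020):.0f}")
```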

A little side bet I have going is whether any of the contrarians mention this. They were all very excited in 2013 when the record for the latest break-up was set, but unsurprisingly not at all interested in any subsequent years (with one exception in 2018). This year, they could try something like ‘it’s cooling because the break up was two weeks later than last year (a record hot year)’, but that would be lame, even by their standards.

Update day 2020!

Following more than a decade of tradition (at least), I’ve now updated the model-observation comparison page to include observed data through to the end of 2019.

As we discussed a couple of weeks ago, 2019 was the second warmest year in the surface datasets (with the exception of HadCRUT4), and 1st, 2nd or 3rd in the satellite datasets (depending on which one). Since 2019 was slightly above the linear trends through 2018, including it slightly increases the trends through 2019. There is an increasing difference in trend among the surface datasets because of the treatment of the polar regions. A slightly longer trend period additionally reduces the uncertainty in the linear trend in the climate models.
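The trend numbers in these comparisons come from simple linear fits to the annual means; a sketch of the calculation, with placeholder data rather than any particular product, shows how adding one more warm year nudges both the trend and its uncertainty:

```python
import numpy as np
from scipy import stats

# Placeholder annual global-mean anomalies [K] for 2000-2018
rng   = np.random.default_rng(0)
years = np.arange(2000, 2019)
anom  = 0.4 + 0.02 * (years - 2000) + rng.normal(0.0, 0.08, years.size)

fit = stats.linregress(years, anom)
print(f"trend through 2018: {fit.slope*10:.3f} ± {fit.stderr*10:.3f} K/decade (1-sigma, OLS)")

# Append a warm 2019 (assumed near the top of the series) and refit
years2 = np.append(years, 2019)
anom2  = np.append(anom, anom.max() + 0.02)
fit2   = stats.linregress(years2, anom2)
print(f"trend through 2019: {fit2.slope*10:.3f} ± {fit2.stderr*10:.3f} K/decade")
```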

To summarize, the 1981 prediction from Hansen et al. (1981) continues to underpredict the temperature trends due to an underestimate of the transient climate response. The projections in Hansen et al. (1988) bracket the actual changes, with the slight overestimate in scenario B due to the excessive anticipated growth rates of CFCs and CH4, which did not materialize. The CMIP3 simulations continue to be spot on (remarkably), with the trend in the multi-model ensemble mean effectively indistinguishable from the trends in the observations. Note that this doesn’t mean that the CMIP3 ensemble means are perfect – far from it. For Arctic trends (incl. sea ice) they grossly underestimated the changes, and they overestimated them in the tropics.

CMIP3 for the win!

The CMIP5 ensemble mean global surface temperature trends slightly overestimate the observed trend, mainly because of a short-term overestimate of solar and volcanic forcings that was built into the design of the simulations around 2009/2010 (see Schmidt et al., 2014). This is also apparent in the MSU TMT trends, where the observed trends (which themselves have a large spread) are at the edge of the modeled histogram.

A number of people have remarked over time on the reduction of the spread in the model projections in CMIP5 compared to CMIP3 (by about 20%). This is due to a wider spread in forcings used in CMIP3 – models varied enormously on whether they included aerosol indirect effects, ozone depletion and what kind of land surface forcing they had. In CMIP5, most of these elements had been standardized. This reduced the spread, but at the cost of underestimating the uncertainty in the forcings. In CMIP6, there will be a more controlled exploration of the forcing uncertainty (but given the greater spread of the climate sensitivities, it might be a minor issue).

Over the years, the model-observation comparison page has regularly been among the top ten most-viewed pages on RealClimate, and so it obviously fills a need. We’ll therefore continue to keep it updated, and perhaps expand it over time. Please leave suggestions for changes in the comments below.

References

  1. J. Hansen, D. Johnson, A. Lacis, S. Lebedeff, P. Lee, D. Rind, and G. Russell, "Climate Impact of Increasing Atmospheric Carbon Dioxide", Science, vol. 213, pp. 957-966, 1981. http://dx.doi.org/10.1126/science.213.4511.957
  2. J. Hansen, I. Fung, A. Lacis, D. Rind, S. Lebedeff, R. Ruedy, G. Russell, and P. Stone, "Global climate changes as forecast by Goddard Institute for Space Studies three-dimensional model", Journal of Geophysical Research, vol. 93, p. 9341, 1988. http://dx.doi.org/10.1029/JD093iD08p09341
  3. G.A. Schmidt, D.T. Shindell, and K. Tsigaridis, "Reconciling warming trends", Nature Geoscience, vol. 7, pp. 158-160, 2014. http://dx.doi.org/10.1038/ngeo2105

One more data point

Filed under: — gavin @ 15 January 2020

The climate summaries for 2019 are all now out. None of this will be a surprise to anyone who’s been paying attention, but the results are stark.

  • 2019 was the second warmest year in analyses from GISTEMP, NOAA NCEI, ERA5, JRA55, Berkeley Earth, Cowtan & Way, and RSS TLT; it was third warmest in the standard HadCRUT4 product and in UAH TLT, and the warmest year in the AIRS Ts product.
  • For ocean heat content, it was the warmest year, though in terms of just the sea surface temperature (HadSST3), it was the third warmest.
  • The top five years in all surface temperature series are the last five years. [Update: this isn’t true for the MSU TLT data, which still have 2010 (RSS) and 1998 (UAH) in the mix.]
  • The decade was the first with temperatures more than 1ºC above the late 19th C in almost all products.

This year there are two new additions to the discussion, notably the ERA5 reanalysis product (1979-2019), which is independent of the surface weather stations, and the AIRS Ts product (2003-2019), which, again, is totally independent of the surface data. Remarkably, they line up almost exactly. [Update: the ERA5 system assimilates the SYNOP reports from weather stations, so it is not independent of the source data for the surface temperature products. However, the interpolation is based on the model physics and many other sources of observed data.]

The two MSU lower troposphere products are distinct from the surface record (showing notably more warming in the 1998 and 2010 El Niño years – though this wasn’t as clear in 2016), but have similar trends. The biggest outlier is (as usual) the UAH record, indicating that the structural uncertainty in the MSU TLT trends remains significant.

One of the most interesting comparisons this year has been the coherence of the AIRS results, which come from an IR sensor on board EOS Aqua that has been producing surface temperature estimates from 2003 onwards. The rate and patterns of warming of this and GISTEMP for the overlap period are remarkably close, and where they differ, they suggest potential issues in the weather station network.

The trends over that period in the global mean are very close (0.24ºC/dec vs. 0.25ºC/dec), with AIRS showing slightly more warming in the Arctic. Interestingly, AIRS 2019 slightly beats 2016 in their ranking.

I will be updating the model/observation comparisons over the next few days.

How good have climate models been at truly predicting the future?

A new paper from Hausfather and colleagues (incl. me) has just been published with the most comprehensive assessment of climate model projections since the 1970s. Bottom line? Once you correct for small errors in the projected forcings, they did remarkably well.


10 years on

Filed under: — gavin @ 17 November 2019

I woke up on Tuesday, 17 Nov 2009 completely unaware of what was about to unfold. I tried to log in to RealClimate, but for some reason my login did not work. Neither did the admin login. I logged in to the back-end via ssh, only to be inexplicably logged out again. I did it again. No dice. I then called the hosting company and told them to take us offline until I could see what was going on. When I did get control back from the hacker (and hacker it was), there was a large uploaded file on our server, and a draft post ready to go announcing the theft of the CRU emails. And so it began.

From “One year later”, 2010.

Many people are weighing in on the 10 year anniversary of ‘Climategate’ – the Observer, a documentary on BBC4 (where I was interviewed), Mike at Newsweek – but I’ve struggled to think of something actually interesting to say.

It’s hard because, even in ten years, almost everything and yet nothing has changed. The social media landscape has changed beyond recognition, but the fever swamps of dueling blogs and comment threads have just been replaced by troll farms and noise-generating disinformation machines on Facebook and Twitter. The nominally serious ‘issues’ touched on by the email theft – how robust are estimates of global temperature over the instrumental period, what does the proxy record show, etc. – have all been settled in favor of the mainstream by scientists plodding along in normal-science mode, incrementally improving the analyses, and yet they are still the most repeated denier talking points.
