Cleaning Berkeley Earth's BEST Gridded Daily Temperature Data
You may have recently seen air quality maps produced by the Berkeley Earth group, especially in the wake of the horrific Camp Fire, whose death toll now exceeds 80. For example, here’s their real-time visualization of PM2.5 concentrations.
For several years before that, though, the Berkeley Earth team primarily worked on producing gridded weather datasets with solid historical coverage, going back to 1850 in some cases. I’ve been particularly interested in the global daily temperature datasets they have on tap – which are publicly available! This is all very exciting, given that many of the global datasets out there, like Willmott and Matsuura (UDEL) and CRU, are only at monthly resolution. However, their offerings leave something to be desired when it comes to being truly accessible to researchers:
- Their netCDF files are packaged with “temperature anomaly” and “climatology” layers, but no “temperature” layer. This forces a bit of wrangling on your part, working around some inherent dimensional consistency issues, to create a temperature object. It would have made more sense to me to provide the temperature values and let researchers decide which climatology period they want to construct, not the reverse.
- For example, the climatology is a 365-day stack. Great, except that the anomalies data is indexed by true dates and includes leap days. There’s no immediate way to sum the two into temperature estimates. If you ignore the mismatch, your alignment drifts by a day every leap year, and in an analysis spanning many decades you’ll be off by several weeks by the end of your time series (amateurish!).
- The .nc’s are stripped of an informative time axis (see the figure below). Instead, the files carry date_number, day, month, and year variables. Why couldn’t they have provided an out-of-the-box date object, and left it to researchers to parse out the month and year when necessary? (The sketch after this list shows how to rebuild the dates yourself.)
- As a result, you wouldn’t even know whether leap dates are included unless you manually inspected the layer count for a decadal file.
- And lastly, the absence of that time axis means you can’t discern dates when visually inspecting the data in a viewer like Panoply. If you wanted to see what the temperature [anomalies] were for Jan 12, 2013, you’d have to manually count how many days had passed since the Jan 1, 2010 start date of that decadal extract.
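To see this for yourself, here’s a quick inspection sketch using ncdf4 (the package raster relies on for netCDF reading, so you likely already have it). The file name and variable names are assumptions based on the BEST daily TAVG layout described above – confirm them against your own download with `names(nc$var)`.

```r
library(ncdf4)

# Open a decadal extract; the file name here is illustrative.
nc <- nc_open("Complete_TAVG_Daily_LatLong1_2010.nc")
names(nc$var)  # should list "temperature", "climatology", "year", "month", "day", etc.

# Rebuild the missing date axis from the year/month/day vectors.
dates <- as.Date(paste(ncvar_get(nc, "year"),
                       ncvar_get(nc, "month"),
                       ncvar_get(nc, "day"), sep = "-"))
nc_close(nc)

range(dates)                            # span of the decadal extract
any(format(dates, "%m-%d") == "02-29")  # TRUE means leap dates are in there
```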
I’ve spent a fair amount of time trying to address these issues, and wanted to share R code so that you don’t have to recreate the process. The only packages you’ll need to run this are `raster` and `tidyverse`.
In brief, this function (1) reads in a BEST .nc file, which you may or may not have previously subsetted spatially; (2) creates a stacked climatology that properly accounts for leap years by duplicating the Feb 28 climatology values for Feb 29; and (3) outputs temperature estimates with an explicit date axis.
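Here’s a condensed sketch of that approach (my full script has more guardrails). The varnames passed to `brick()` and the default origin are assumptions based on the BEST daily layout and the 2010s decadal extract, and `best_temperature` is just an illustrative name – verify all of it against your own files. lubridate ships with the tidyverse install.

```r
library(raster)
library(tidyverse)
library(lubridate)

# Sketch of the cleaning function described above. The varnames and the
# hard-coded origin are assumptions based on the BEST daily layout and the
# 2010s decadal extract -- verify both against your own file.
best_temperature <- function(nc_path,
                             origin = as.Date("2010-01-01"),
                             drop_partial_years = TRUE) {
  anoms <- brick(nc_path, varname = "temperature")  # daily anomalies on true dates
  clim  <- brick(nc_path, varname = "climatology")  # 365-layer day-of-year climatology

  # (1) Rebuild an explicit date axis: one anomaly layer per day from the origin.
  dates <- origin + 0:(nlayers(anoms) - 1)

  # Optionally drop the trailing partial year (BEST updates continuously, so
  # the most recent decadal extract always ends mid-year).
  if (drop_partial_years) {
    counts <- table(year(dates))
    full   <- as.integer(names(counts))[counts >= 365]
    keep   <- which(year(dates) %in% full)
    anoms  <- anoms[[keep]]
    dates  <- dates[keep]
  }

  # (2) Map every true date onto the 365-day climatology, pointing Feb 29 at
  # the Feb 28 layer (day 59) and shifting later leap-year days back by one.
  doy        <- yday(dates)
  clim_index <- ifelse(leap_year(dates) & doy >= 60, doy - 1, doy)

  # (3) Sum anomaly + climatology into temperature, then attach the dates as
  # a proper z-axis so the layers are self-describing.
  temp        <- anoms + clim[[clim_index]]
  temp        <- setZ(temp, dates, name = "date")
  names(temp) <- format(dates, "X%Y.%m.%d")  # raster wants syntactically valid names
  temp
}
```

The leap-year mapping is the crux: in a leap year, Feb 29 (day-of-year 60) points back at the Feb 28 climatology layer, and every later day shifts back by one so Mar 1 lands on climatology layer 60 as it should.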
You’ll notice some hard-coded elements, such as the origin date and whether to drop partial years (since BEST is continuously updating their data, a portion of the current year will always be included in the most recent decadal extract). I’ve also included some tests to convince myself that the summed product mirrors the values obtained by manually pairing a climatology layer with the anomalies layer for a specified date.
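For flavor, here’s one such spot-check in the spirit of those tests, reusing the `anoms` and `clim` bricks and the `best_temperature` sketch from above (all hypothetical names from my sketches, not the library’s API): pull the anomaly and climatology layers for a single date by hand and compare against the summed output.

```r
target <- as.Date("2013-01-12")
i      <- as.integer(target - as.Date("2010-01-01")) + 1  # layer position in the 2010s extract

# Jan 12 falls before Feb 29, so the climatology index needs no leap shift.
manual <- anoms[[i]] + clim[[yday(target)]]

temp <- best_temperature("Complete_TAVG_Daily_LatLong1_2010.nc",
                         drop_partial_years = FALSE)
all.equal(values(manual), values(temp[[i]]))  # should be TRUE
```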