DAYMET ancillary data conversion

I recently posted tools to extract DAYMET data for point locations. However, the product as distributed by the DAYMET team is gridded data. If you want to run models driven by DAYMET data spatially, you need access to these gridded (spatial) data.

Sadly, access to the data is rather convoluted. If you want to download this gridded data, or rather its tiles, you first have to figure out which tiles you need. This means going to the DAYMET website and clicking on each tile you want to download to retrieve its individual tile number. Once you have established all these tile numbers you can download the tiles from the THREDDS server.

Obviously this routine is less than ideal if, for example, you want to do regional- to state-scale research and have to select a large number of tiles. I hope to automate this process. The first step is reprojecting the tile grid to latitude and longitude coordinates. Oddly enough, the format of the netCDF file was not standard and rather confusing, and it took me a while to figure out how to accomplish this. Below is a bash script that does so; it works on all ancillary netCDF grids as provided by DAYMET. From the tile grid generated with this code you can either determine the tile covering any given point location (see the sketch after the script) or, alternatively, the tiles covering a region of interest. The latter is my goal and in the works, as the former is covered by previous code.

#!/bin/bash

# get filename with no extension
no_extension=$(basename "$1" | cut -d'.' -f1)

# convert the netCDF file to an ascii file 
gdal_translate -of AAIGrid "$1" original.asc

# extract the data with no header
tail -n +7 original.asc > ascii_data.asc

# paste everything together again with a correct header
echo "ncols        8011" 	>  final_ascii_data.asc
echo "nrows        8220"	>> final_ascii_data.asc
echo "xllcorner    -4659000.0" 	>> final_ascii_data.asc
echo "yllcorner    -3135000.0" 	>> final_ascii_data.asc
echo "cellsize     1000" 	>> final_ascii_data.asc
echo "NODATA_value 0"    	>> final_ascii_data.asc 

# append flipped data
tac ascii_data.asc >> final_ascii_data.asc

# translate the data into Lambert Conformal Conic GTiff
gdal_translate -of GTiff -a_srs "+proj=lcc +datum=WGS84 +lat_1=25 +lat_2=60 +lat_0=42.5 +lon_0=-100" final_ascii_data.asc tmp.tif

# convert to latitude / longitude
gdalwarp -of GTiff -overwrite -t_srs "EPSG:4326" tmp.tif tmp_lat_lon.tif

# crop to reduce file size, only cover DAYMET data areas
gdal_translate -a_nodata -9999 -projwin -131.487784581 52.5568285568 -51.8801911189 13.9151864748 tmp_lat_lon.tif $no_extension.tif

# clean up
rm original.asc
rm ascii_data.asc
rm final_ascii_data.asc
rm tmp.tif
rm tmp_lat_lon.tif
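
Assuming the script above is saved as, say, convert_ancillary.sh and run as "bash convert_ancillary.sh tileid.nc" on the tile-ID ancillary grid (these file names are hypothetical), the resulting GeoTIFF can then be queried for the tile covering a point location. A minimal R sketch using the raster package, assuming the output file is called tileid.tif:

# minimal sketch: look up the Daymet tile number for a point location
# assumes the script above was run on the tile-ID ancillary grid and
# wrote its output to "tileid.tif" (names depend on your input file)
library(raster)

# read the reprojected tile grid (EPSG:4326, latitude / longitude)
tiles <- raster("tileid.tif")

# point of interest as a longitude / latitude pair (x, y order)
point <- cbind(-71.1, 42.5)

# extract the tile number at that location
tile_nr <- extract(tiles, point)
print(tile_nr)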

 

You are what you eat

Going through my literature I came across an interesting but slightly macabre paper with the innocent title "Seasonal water availability predicts the relative abundance of C3 and C4 grasses in Australia" (Murphy and Bowman, 2007). In short, the paper discusses the species distribution and optimal growing conditions of C3/C4 grasses. Differences in the photosynthetic pathway of C4 grasses allow them to function at temperatures exceeding those optimal for C3 grasses. However, temperature is not the only factor limiting C3 or C4 grass growth; other factors, such as the timing of precipitation, influence the growth and relative abundance of either group at a given location. Disentangling this relative abundance is required to eliminate potential biases when estimating carbon uptake by the terrestrial biosphere.

The authors tackled the question of relative C3/C4 abundance and diversity across a large geographical extent by using a common technique in ecology, namely transects of quadrats scattered across the Australian continent. Sadly, establishing and measuring these quadrats is time intensive. However, a macabre twist to this story helps scale these measurements: dead kangaroos, to be precise.

Kangaroos eat a ton of grass. These grasses, depending on their photosynthetic pathway (C3 or C4), will show differences in their 13C stable isotope composition.

Isotopes are atoms of the same element but with a different atomic mass, due to additional neutrons in the nucleus. When these atoms do not radioactively decay we call them stable isotopes. Carbon (C) in its most abundant form has 6 neutrons and 6 protons in its nucleus and is hence called carbon-12 (12C). Several isotopic forms of carbon exist, with carbon-13 (13C) having one and carbon-14 (14C) having two extra neutrons in their nuclei. Of these, 14C is unstable, decaying into more stable components. The 13C isotope is stable and will therefore persist.

And, since you are what you eat (on a molecular level, and on an organismic level as well, I guess), these differences will be reflected in the tissues of the kangaroo, assuming that kangaroos don't prefer a particular kind of grass. The authors used collagen tissue samples of 779 road-killed individuals to calculate the relative abundance of C3/C4 grasses around the road-kill locations. Combining these measurements with climate data allowed them to scale the relative abundance of grasses across the whole of Australia. Hence, they potentially removed some bias in estimates of terrestrial biosphere carbon uptake by these grasslands.
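
For those curious how such a calculation works: the standard approach is a two-endmember mixing model between typical C3 and C4 13C signatures, corrected for the enrichment between diet and collagen. The R sketch below uses rough, illustrative numbers of my own choosing, not the calibration used by Murphy and Bowman:

# two-endmember isotope mixing model sketch (illustrative values only,
# not the calibration used in the paper)
d13C_C3 <- -26.5      # approximate d13C of C3 grasses (per mil)
d13C_C4 <- -12.5      # approximate d13C of C4 grasses (per mil)
enrichment <- 5       # assumed diet-to-collagen enrichment (per mil)

# measured collagen value of one (hypothetical) road-killed individual
d13C_collagen <- -14

# back-calculate the diet signature and the C4 fraction of the diet
d13C_diet <- d13C_collagen - enrichment
frac_C4 <- (d13C_diet - d13C_C3) / (d13C_C4 - d13C_C3)
frac_C4 <- min(max(frac_C4, 0), 1)  # keep the fraction within [0, 1]
print(frac_C4)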

Although my main interest in this paper was the relation between climate and the abundance of C3/C4 grasses, I think the paper teaches a valuable lesson in science outside its original scope, namely the power of the proxy measurement (and a rather macabre one at that). Furthermore, it can be used as a fun example to show people the unexpected side of science. I assume few Australians would have imagined that a lot of road-killed kangaroos could tell a story about the kind of grasses they once ate and where those grasses grow!

Fall colours

Last week I helped clean the PhenoCam database. This means hunting for out-of-place images and shifts in the time series. Going through all these images I found this gem of a fall colour image. Although New England is known for its fall colours and leaf peeping, this image is from Downer Woods near Lake Michigan.

Downer Woods during fall

Bad Science

I just finished reading "Bad Science" by Ben Goldacre. It's an entertaining read discussing the various tricks employed by "alternative" medicine circles and the pharmaceutical industry to peddle drugs, miracle cures and therapies (you will never buy vitamins again).

In particular, the chapter "How the Media Promote the Public Misunderstanding of Science" stayed with me. This chapter discusses the role of the media in actively promoting false ideas, mostly driven by profit or a lack of understanding. Most of the issues presented in this chapter apply to science in general and not only to medicine-related topics. It also stresses the need for scientists to communicate science better and take a more active role in how their results are presented. It's often all too clear that climate deniers exploit the weaknesses of the media to push their agenda. Media want easy (positive) headlines with an easy-to-understand storyline - e.g. miracle cures. Sadly, the world and science are often more complicated. But this does not mean we should not try to word things differently so as to reach more people with our research.

I can only recommend reading this book and I wholeheartedly support the author’s crusade against bad science in general.

DaymetR, a Daymet single pixel subset tool for R

Daymet is a collection of algorithms and computer software designed to interpolate and extrapolate from daily meteorological observations to produce gridded estimates of daily weather parameters - as concisely described on the Daymet website.

As I'm extensively using daily meteorological data to drive my grassland model, quick and easy access to these data is key. However, accessing single-pixel values through the provided Java tool was a bit cumbersome and did not fit my workflow. As such, I wrote my own tool which queries the website and allows you to subset time series of a single pixel location (given a latitude/longitude position), all within R. Data is either downloaded to the current working directory or imported as a structured array into your R workspace for further analysis or formatting.

You can find a link to the DaymetR code on my software page or follow this link to my GitHub.
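
As a rough illustration of the workflow, the call below shows what a single-pixel subset looks like from R. It follows the current daymetr package on CRAN, so the function and argument names may differ slightly from the original DaymetR version described in this post; the site label and coordinates are placeholders:

# sketch of a single-pixel Daymet subset in R (based on the current
# daymetr package; names may differ in older DaymetR versions)
library(daymetr)

# download daily data for one latitude / longitude location and load it
# directly into the R workspace (internal = TRUE)
df <- download_daymet(site = "my_site",   # placeholder site label
                      lat = 36.0133,
                      lon = -84.2625,
                      start = 1990,
                      end = 2000,
                      internal = TRUE)

str(df)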

PS: since writing this post I have added a Python version of the same code. A link can be found in the software section of my website.
