A long read on the Yangambi research station, past and future

After the nice write-up on my Jungle Rhythms project in The Guardian, a more lengthy piece on other projects in and around the Yangambi research station, both past and ongoing, is now online on ENSIA.com. It is good to once more bring some attention to safeguarding local historical collections and to capacity building within this context in DR Congo.

Want to get published? Show me your code.

All too often one is still confronted with a statement at the end of a manuscript reading: "Code is available from the authors upon reasonable request".

Over the last few years there has been a strong focus on open data and open-access journals. This has been stimulated in part by the reproducibility crisis in science, most visibly in the biomedical sciences. However, a strong focus on data and journal access alone is misplaced.

Jungle Rhythms made it into The Guardian

A cache of decaying notebooks found in a crumbling Congo research station has provided unexpected evidence with which to help solve a crucial puzzle – predicting how vegetation will respond to climate change. . . . (by Dan Grossman)

My Jungle Rhythms project has made some waves of late. It sparked the interest of Dr. Dan Grossman, a science journalist, and his nice summary of all the Jungle Rhythms work was published in The Guardian. As a result, IFLScience picked it up as well. The response, especially in the comments section of The Guardian, was really positive. I’m happy to see some global exposure for the project, and for the larger context and importance of similar work. I also hope that this exposure might bring about more funding to safeguard historical collections and to build capacity within this context in DR Congo.

Google Earth Engine time series subset tool

Google Earth Engine (GEE) has made it possible to massively scale a lot of remote sensing analyses. However, more often than not, time series analyses are carried out on a site-by-site basis, and scaling to a continental or global level is not required. Furthermore, some applications are hard to implement on GEE, and prototyping does not always benefit from immediate spatial scaling. In short, working locally on a handful of reference pixels is often still faster than running on Google’s servers. Here I sidestep the handling of large amounts of data (although sometimes helpful) and get at single-location time series subsets with a GEE hack.
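The underlying idea is simple enough to sketch directly against the Earth Engine Python API. The snippet below is a minimal illustration of that approach, not the gee_subset.py code itself: it assumes an installed and authenticated earthengine-api client, and the coordinates, collection ID, and band names are placeholders.

import ee

ee.Initialize()

# Point of interest (lon, lat); these coordinates are hypothetical.
point = ee.Geometry.Point([24.45, 0.77])

# Two years of Landsat 8 Tier 1 TOA data, red (B4) and NIR (B5) only.
collection = (
    ee.ImageCollection("LANDSAT/LC08/C01/T1_TOA")
    .filterDate("2016-01-01", "2017-12-31")
    .select(["B4", "B5"])
)

# getRegion() returns a table ([id, lon, lat, time, B4, B5]) with one
# row per image intersecting the point, here at a 30 m native scale.
data = collection.getRegion(point, 30).getInfo()
header, rows = data[0], data[1:]
print(header)
print(rows[:5])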

I wrote a simple python script / library called gee_subset.py which allows you to extract time series for a particular location or its neighbourhood. This tool is similar to my MODIS subset and daymetr tools, which facilitate the extraction of time series of remote sensing and climatological data respectively.

git clone https://github.com/khufkens/gee_subset

My python script expands this functionality to all available GEE products, which include high-resolution Landsat and Sentinel data, climatological data such as Daymet, and even representative concentration pathway (RCP) CMIP5 model runs.

Compared to the ORNL DAAC MODIS subset tool, performance is blazing fast (thank you, Google). An example query, calling the python script from R, downloaded two years (~100 data points) of Landsat 8 Tier 1 data for two bands (red, NIR) in ~8 seconds flat. Querying a larger footprint (1 x 1 km) only creates a small overhead (a 13-second query). The resulting figure for the point location, with the derived NDVI values, is shown below. The demo script to recreate this figure is included in the example folder of the github repository.

Figure: NDVI values from Landsat 8 Tier 1 scenes. The black line depicts a loess fit to the data, with the gray envelope representing the standard error.
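For completeness, the NDVI in the figure is just the normalized difference of the two downloaded bands: NDVI = (NIR - red) / (NIR + red). The demo script in the repository does this in R and adds the loess fit; below is a minimal, self-contained python sketch with made-up reflectance values for illustration.

import pandas as pd

# Hypothetical reflectance values for a single pixel; for Landsat 8
# the red band is B4 and the NIR band is B5.
df = pd.DataFrame({
    "date": pd.to_datetime(["2016-01-08", "2016-01-24", "2016-02-09"]),
    "B4": [0.061, 0.055, 0.049],  # red
    "B5": [0.289, 0.301, 0.334],  # NIR
})

# NDVI = (NIR - red) / (NIR + red)
df["ndvi"] = (df["B5"] - df["B4"]) / (df["B5"] + df["B4"])
print(df[["date", "ndvi"]])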
