After a nice write-up on my Jungle Rhythms project in The Guardian, a lengthier piece on other projects in and around the Yangambi research station, both past and ongoing, is now online on ENSIA.com. It is good to once more draw attention to safeguarding local historical collections and to capacity building within this context in DR Congo.
All too often one is still confronted with a statement at the end of a manuscript reading: "Code is available from the authors upon reasonable request".
In recent years there has been a strong focus on open data and open access journals, stimulated in part by a reproducibility crisis in science, most prominently in the biomedical sciences. However, a strong focus on data and journal access alone is misplaced.
A cache of decaying notebooks found in a crumbling Congo research station has provided unexpected evidence with which to help solve a crucial puzzle – predicting how vegetation will respond to climate change. … (by Dan Grossman)
My Jungle Rhythms project has made some waves as of late. The project sparked the interest of Dr. Dan Grossman, a science journalist, whose nice summary of all the Jungle Rhythms work was published in The Guardian. As a result, IFLScience picked it up as well. Especially in the comments section of The Guardian, the response was really positive. I'm happy to see some global exposure for the project, and for the larger context and importance of similar work. I also hope that this exposure might bring about more funding to safeguard historical collections and support capacity building within this context in DR Congo.
A few months ago I was flooded with review requests, and I figured that it might be time to look around for solutions and code something up to let me annotate peer-review PDFs easily and generate a review report with the click of a button (as proposed years ago).
Google Earth Engine (GEE) has made it possible to massively scale many remote sensing analyses. However, more often than not, time series analyses are carried out on a site-by-site basis, and scaling to a continental or global level is not required. Furthermore, some applications are hard to implement on GEE, or prototyping does not benefit from immediate spatial scaling. In short, working on a handful of reference pixels locally is often still faster than using Google's servers. Here I sidestep the handling of large amounts of data (although sometimes helpful) and get straight to single-location time series subsets with a GEE hack.
My Python script expands this functionality to all available GEE products, which include high-resolution Landsat and Sentinel data, climatological data such as Daymet, and even representative concentration pathway (RCP) CMIP5 model runs.
Compared to the ORNL DAAC MODIS subset tool, performance is blazing fast (thank you, Google). An example query, calling the Python script from R, downloaded two years (~100 data points) of Landsat 8 Tier 1 data for two bands (red, NIR) in ~8 seconds flat. Querying a larger footprint (1 x 1 km) only adds a small overhead (a 13-second query). The resulting figure for the point location, with the derived NDVI values, is shown below. The demo script to recreate this figure is included in the example folder of the GitHub repository.
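The NDVI values plotted in the figure are derived from the red and NIR bands returned by the query. As a minimal sketch of that derivation (the function name and the example reflectance values are illustrative, not taken from the actual script), the index is the normalized difference of the two bands:

```python
def ndvi(red, nir):
    """Normalized Difference Vegetation Index from red and NIR reflectance."""
    return (nir - red) / (nir + red)

# Illustrative surface reflectance values for a densely vegetated pixel:
# high NIR and low red reflectance yield an NDVI close to 1.
print(round(ndvi(red=0.05, nir=0.40), 2))  # -> 0.78
```

The same formula applies per pixel when averaging over a larger footprint, such as the 1 x 1 km query mentioned above.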
[caption id="attachment_1614" align="aligncenter" width="880"] NDVI values from Landsat 8 Tier 1 scenes. The black line depicts a loess fit to the data, with the gray envelope representing the standard error.[/caption]