in Environment / R / Research / Science / Software on Climate data, Data, R, Research, Science
I recently created the MCD10A1 product, a combination of the MODIS MOD10A1 (Terra) and MYD10A1 (Aqua) products which alleviates some of the low bias introduced by either single overpass through a maximum value approach. This approach has been used by Gascoin et al. (2013), but I wanted some additional validation of the retrieved values.
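As a minimal sketch of this maximum value approach (the file names and the use of the raster package are illustrative, not the actual processing code):
library(raster)
# read matching daily snow cover layers from both overpasses
# (hypothetical file names)
terra_snow <- raster("MOD10A1_snow_cover.tif")
aqua_snow <- raster("MYD10A1_snow_cover.tif")
# keep the per-pixel maximum of the Terra and Aqua retrievals,
# compensating for the low bias of either single overpass
mcd10a1 <- overlay(terra_snow, aqua_snow, fun = pmax)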
As such, I looked at the SNOTEL network, which "… is composed of over 800 automated data collection sites located in remote, high-elevation mountain watersheds in the western U.S. They are used to monitor snowpack, precipitation, temperature, and other climatic conditions. The data collected at SNOTEL sites are transmitted to a central database, called the Water and Climate Information System, where they are used for water supply forecasting, maps, and reports." Here, the snowpack metrics could provide the needed validation data for my MCD10A1 product.
Although the SNOTEL website offers plenty of plotting options for casual exploration and the occasional report, the interface remains rather clumsy with respect to full automation. As such, and similar to my amerifluxr package (both in spirit and execution), I created the snotelr R package. Below you find a brief description of the package and its functions.
Installation
You can quickly install the package by installing the following dependencies
install.packages("devtools")
and downloading the package from the GitHub repository (assumed below to be khufkens/snotelr):
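# install the development version from GitHub
# (assuming the package lives in the khufkens/snotelr repository)
devtools::install_github("khufkens/snotelr")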
Most people will prefer the GUI to explore data on the fly. To invoke the GUI, use the following command:
library(snotelr)
snotel.explorer()
This will start a Shiny application with an R backend in your default browser. The first window displays all site locations and allows for subsetting of the data based upon state or a bounding box. The bounding box can be selected by clicking its top-left and bottom-right corners.
The plot data tab allows for interactive viewing of the snow water equivalent (SWE) data together with a covariate (temperature, precipitation). The SWE time series will also mark key snow phenology statistics, such as the days on which snow accumulation starts and the snowpack melts.
For in-depth analysis the above statistics can be retrieved using the snow.phenology() function:
# with df a SNOTEL file or data frame in your R workspace
snow.phenology(df)
To access the full list of SNOTEL sites and associated metadata, use the snotel.info() function:
# returns the site info as snotel_metadata.txt in the current working directory
snotel.info(path = ".")
# export to data frame
df <- snotel.info(path = NULL)
To query data for e.g. site 924, as shown in the image above, use:
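A minimal sketch, assuming the download function follows the package's dot-based naming convention (check the package documentation for the exact name and arguments):
# download the data for SNOTEL site 924
# (function and argument names are assumed)
download.snotel(site = 924)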
in Jungle rhythms / Research / Science on Citizen science, Jungle rhythms, Research, Science
In the Jungle Rhythms project volunteers tag observations with #hashtags on the online forum. One observation in particular is not only informative for post-processing of the annotations but also has scientific value in its own right. Namely, the cause of death of a tree observed within the Jungle Rhythms project holds information on the ecology of the tree and the human and natural stresses it experienced, which led to its demise.
Within this context I ran some quick statistics on the hashtags of the online forum of the Jungle Rhythms project.
Overall, several sources of tree death exist, as nicely summarized by @itsmestephanie, whom I quote:
“Abattu as in abattoir. Felled, cut down. / Coupé as in coupon. Cut, presumably down. / Passants - passers-by. Coupé par les passants - cut down by passers-by. / Sec as in desiccation. Dry. / Brûlé as in crème brûlée. Burnt. / Cassé: Broken. / Tombé: Fallen. / Vent as in ventilation. Wind. Tombé par le vent = Fallen (rather pushed down) by a really big vent. / Mort: Mortician, mortality, mortuary. Morbidity, moribund, morbid. Jack Mort. Lord Voldemort.”
I counted all instances of the hashtags on subjects in the forum and summed them by natural or human cause. Double mentions were excluded, so as not to count hashtags multiple times within the same forum post.
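As a minimal sketch of this tally (the data frame df and its text column are hypothetical stand-ins for the forum export):
# df: hypothetical data frame with one row per forum post
# and a 'text' column holding the post body
tags <- regmatches(df$text, gregexpr("#[[:alpha:]]+", df$text))
# keep each hashtag only once per forum post
tags <- lapply(tags, unique)
# tally hashtag occurrences across all posts
sort(table(unlist(tags)), decreasing = TRUE)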
The largest class is the “coupé” class, with a total of 257 occurrences. Second on the list is the “cassé” class with 74 mentions, followed by “mort” (72) and “tombé” (46). All other classes list smaller numbers.
Summing all human-caused events results in a total of 264 deaths, while natural causes account for roughly half that number (133). These values represent about 10% and 5% of the total number of observed trees, respectively. With the project currently at 90% completion and no incentive for volunteers to report these events, I will still have to validate the true numbers.
However, a few conclusions can be drawn from these simple statistics. First, the human influence on the experiment was substantial, with twice as many deaths as from natural causes. As far as I could tell most trees were located along forest paths, which increased the likelihood of a tree being cut down due to easy accessibility (hence the more elaborate descriptions such as coupé par les passants). Within the natural causes the classes "cassé" and "tombé" account for a large fraction of the deaths. In other words, over 50% of the deaths are related to physical instability and tree fall, of either the whole tree (tombé) or the bole supporting the canopy (cassé).
Tree fall is an important process in forest regeneration; statistics derived from the Jungle Rhythms project therefore not only give insight into the seasonal processes of the observed trees but also provide mortality rates and causes.
in Op-ed / Politics / Science on Op-ed, Publishing, Science
Science, and climate science in particular, has always been at the center of what, post US election, is being described as fake news. Fake news, or "post-truth" (more honestly, plain lies), has been shaping the discussion around climate change for years. Over the past years the scale of fake news has grown, and with it mainstream media outlets have lost authority and trust.
This flood of fake news is at its core a form of obfuscation. Obfuscation aims to hide a true message or signal by increasing the noise fed into the same channel. It clutters the news sphere through a false equivalency in which all information sources, regardless of quality, merit equal weight. These tactics, which dominated science discussions fed by fake news and fought in the public news sphere, are slowly shifting to the formal academic world of scientific publishing, as fake (science) open access journals become more common.
The past few years have seen a push for open access journals. Open access journals rely on academics to pay for the final publishing of a journal article, rather than asking exorbitant access fees post publication. Although promising in terms of free access to scientific work, the push for open access has led to a flourishing business of shady journals, facilitated by the publish-or-perish culture in academia. As with fake news, fake academic journals and fake science obfuscate valid research results by increasing the number of low quality publications one has to wade through.
For example, the journal "Expert Opinion on Environmental Biology" seems like a respectable, if not high-flying, journal with an impact factor of 4.22 (above average in ecology). However, the devil is in the details, as the attached footnote reads:
*Unofficial 2015 Journal Impact Factor was established by dividing the number of articles published in 2013 and 2014 with the number of times they are cited in 2015 based on Google search and the Scholar Citation Index database. If ‘X’ is the total number of articles published in 2013 and 2014, and ‘Y’ is the number of times these articles were cited in indexed journals during 2015 than, impact factor = Y/X
Journals generally use citation indices, or impact factors, to indicate their visibility within the academic community. Proper journals are mostly listed by the Institute for Scientific Information (currently ISI Web of Knowledge) and summarized in a yearly Science Citation Index report. Most fake journals can't establish these credentials and therefore trick scientists by publishing fake numbers (when searching the web for ISI one easily comes across imposters as well: the service International Scientific Indexing (or ISIndexing.com, the name is well chosen) claims "… to increase the visibility and ease of use of open access scientific and scholarly journals."). Although such a journal might still contain valid and good research, the tactics used do not instill trust.
More alarming than the profiteering from desperate, metric-chasing scientists and the resulting obfuscation is a recent trend of acquisitions of more respected journals by fake academic publishers. Here the tactic is to buy small legitimate journals and intersperse them with their lesser variety, borrowing trust. Not only will these mergers make it harder to distinguish good from bad journals, they will also increase the chances of low quality peer review, as solid science was never the motive of these predatory publishers. If this is a new trend, the question remains how to safeguard the scientific legitimacy of open access journals and science in general, and what format to use.
I would argue that to solve the issue of shady open access journals we need even more radical openness in science. If one is forced to publish data and code (if not links to how to obtain the data from third-party sources), it becomes easier to separate quality research from work containing nothing but random noise.
The time invested in a fake research article then becomes significantly larger, discouraging abuse. In addition, it would force people into good data management, as ugly code and data structures reflect badly on the scientist as well. Furthermore, since all pieces of the research are available, it would also address issues regarding reproducibility and inter-comparison of research results. Finally, I would argue that similar practices could be used in conventional journalism: reporting all raw data used, sources (if not endangering lives) and statistics (if applicable). Transparency is the only way forward in an age of fake news and fake science; a lack of it should be regarded as suspicious.
in Research / Science / Software on Remote sensing, Research, Science, Software
Recently I needed to convert swath data to gridded data. Most MODIS products come as gridded products which are properly geo-referenced and rectified. However, some low level products are provided as "swath" data, the "raw" form when it comes to geo-referencing. Luckily, most of these swath products provide ground control point information to convert them from wobbly sensor output to a gridded, geo-referenced image.
This conversion from swath to gridded data is normally done with the MODIS Reprojection Tool (MRT) software. Here I provide a few lines of code which do just the same using the community driven Geospatial Data Abstraction Library (GDAL). I would argue that the four lines of true code beat installing the MRT tool any day.
The code is a mashup based on a Stack Exchange post and converts MODIS L1B data (or similar) to gridded data; you specify the file name and the requested scientific data set (SDS). You can find the available SDS using the gdalinfo command, or in the product information sheet. The data is output as a GeoTIFF.
#!/bin/bash
#
# swath to grid conversion for
# MOD04 data, but will work on other
# MODIS L1 / L2 data as well (I think)
# get the reprojection information, stuff it into
# a virtual file (.vrt)
gdal_translate -of VRT HDF4_EOS:EOS_SWATH:"$1":mod04:$2 modis.vrt
# delete the bad ground control points
sed -i '/X="-9.990000000000E+02"/d' modis.vrt
# grab the filename without an extension
filename=$(basename "$1")
filename="${filename%.*}"
# reproject the data using the cleaned up VRT file
gdalwarp -overwrite -of GTiff -tps modis.vrt "${filename}_$2.tif"
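Usage then boils down to a single call with the HDF file and the SDS of interest (the script name and granule file name below are hypothetical):
bash swath2grid.sh MOD04_L2.A2016001.0000.006.hdf Optical_Depth_Land_And_Ocean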
in Climate / Op-ed / Politics / Science on Climate, Environment, Politics, Science
After this past week's election of Donald Trump I've decided to break what I consider an unspoken rule in science. Namely, as a young scientist one does not discuss politics openly, nor take a hard line on issues, as doing so potentially jeopardizes your academic career.
However, the US elections will have dire consequences for both domestic and global environmental policy and science in general, due to the proposed dismantling of the EPA, overt climate change denial by the president-elect and his anti-science stance. Surprisingly, during the past week the language regarding Mr. Trump's victory moved to one of reconciliation, to "acceptance", giving Mr. Trump "a chance".
The ramifications of an unchallenged president-elect and of reconciliation will reverberate throughout the globe. Hence, as an ecologist, environmentalist and climate scientist I cannot in good conscience stand by and not take a very strong political, but scientifically backed, position.
Science and scientists have the obligation to challenge old, and dangerous, ideas. I admit that I've failed by not doing as much as I could outside the academic sphere or various social media echo chambers. In this spirit, I will contest misinformation and lies about climate change and environmental issues. As climate change has morphed into an inherently social problem, it is also my duty to openly support minorities, the poor and the disenfranchised, who are the first to suffer from climate change.
I will not concede by giving "a chance" to a man who has shown a lack of scientific knowledge, integrity and transparency, and who is fueled by misogyny and racism. I will not lower my voice in the years to come, silently accepting the new status quo.