The past few weeks I’ve been toying with GPU computing and Deep Learning. This is a brief summary of what I’ve learned while setting things up (hardware and software).
Let’s talk about hardware! I’m currently running everything on an MSI NVIDIA GTX 960 4GB Gaming card. Given the cost of the card (~$250) and the fact that for a lot of Deep Learning applications memory, rather than CUDA cores, will be the limiting factor (Google: “Check failed: error == cudaSuccess (2 vs. 0) out of memory”), this is a really good card. Most faster cards (with more CUDA cores) have the same amount of memory. In short, you might save some time, but it does not allow you to train more complex (larger) models. Unless you upgrade to an NVIDIA GTX Titan X card with 12GB of memory (costing 5x more) or a dedicated compute unit such as a Tesla K40, you won’t see a substantial memory increase within the product line.
On the software end, the documentation on the installation of the Caffe framework is sufficient to get you started (on Ubuntu at least). The documentation is good and the community seems to be responsive, judging from the forum posts and GitHub issues. However, I’ve not engaged in any real troubleshooting requests (as things went well). The only issue I encountered when setting up the Caffe-SegNet implementation was the above-mentioned “out of memory” error. Although people mentioned that they got things running on 4GB cards, I just couldn’t get it to work. In the end I realized that the memory being used to drive my displays (~500MB) might just tip the scale.
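You can check this for yourself with the nvidia-smi tool that ships with the NVIDIA driver; the commands below report the memory headroom on the card and which processes (e.g. Xorg) are holding memory:
# report total, used and free memory on the card
nvidia-smi --query-gpu=memory.total,memory.used,memory.free --format=csv
# list the processes currently holding memory on the card
nvidia-smi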
Indeed, offloading graphics duties from the dedicated card onto the motherboard’s integrated graphics did free up enough memory to make things work. An important note here is that you need to install the CUDA Toolkit and drivers without their OpenGL libraries, as the CUDA installer otherwise overwrites the Ubuntu originals and breaks the graphics capabilities of your integrated graphics in the process.
To install the CUDA Toolkit correctly, with full memory available for compute, first select the integrated graphics (iGPU) as the preferred GPU in your system BIOS. Then download the CUDA Toolkit runfile installer and run the following command:
sudo ./cuda_x.x.x_linux.run --no-opengl-libs
This should install all CUDA libraries and drivers while preventing the installer from overwriting your iGPU OpenGL libraries. In addition, you might want to make sure the NVIDIA devices are initialized on boot (the devices should be accessible but not drive any display) by placing the script below in your /etc/init.d/ folder.
#!/bin/bash
# create the NVIDIA device nodes at boot without starting an X session

# list all NVIDIA devices on the PCI bus and count the 3D / VGA controllers
NVDEVS=`lspci | grep -i NVIDIA`
N3D=`echo "$NVDEVS" | grep "3D controller" | wc -l`
NVGA=`echo "$NVDEVS" | grep "VGA compatible controller" | wc -l`

# create a device node for every card found
N=`expr $N3D + $NVGA - 1`
for i in `seq 0 $N`; do
  mknod -m 666 /dev/nvidia$i c 195 $i
done

# create the control device node
mknod -m 666 /dev/nvidiactl c 195 255
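For the script to actually run at boot it still needs to be made executable and registered with the init system; something along these lines should do (the script name is just an example):
# make the script executable and register it to run at boot (script name is illustrative)
sudo chmod +x /etc/init.d/nvidia-devices
sudo update-rc.d nvidia-devices defaults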
(I’m not quite sure the latter script has any use, as the devices might already be initialized anyway. However, I’d rather list it here for future reference.)
In the end, both the MIT scene recognition model and the Caffe-SegNet implementation work. Below you see the output of the SegNet CamVid demo, which takes webcam input and segments it on the fly into 12 classes. The output is garbage, as the scene does not represent a street view (the original intent of the classifier), but it shows that the system runs at a solid 210 ms per classification.
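For reference, the demo itself is a small Python script in the SegNet tutorial repository; I launched it roughly as below (argument names, paths and model files are illustrative and depend on the tutorial version you have checked out):
# run the SegNet webcam demo (model and weight paths are illustrative)
python Scripts/webcam_demo.py --model Example_Models/segnet_model_driving_webdemo.prototxt --weights Example_Models/segnet_weights_driving_webdemo.caffemodel --colours Scripts/camvid12.png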
This past week I started to play with the Caffe deep learning framework. I initially planned on using the SegNet branch of the Caffe framework to classify snow in PhenoCam images. However, given that this is a rather binary classification, I don’t need to segment the picture (I do not care where the snow in the image is, only whether it is present). As such, a more semantic, scene-level approach could be used.
Luckily, people at MIT had already trained a classifier, the Places-CNN, which deals with exactly this problem: characterizing an image scene. So, instead of training my own classifier, I gave theirs a try. Depending on the image type, and mostly the viewing angle, the results are very encouraging (even with their stock model).
For example, the image below was classified as: mountain snowy, ski slope, snowfield, valley, ski_resort. This all seems very reasonable indeed. Classifying a year’s worth of images at this site yielded an accuracy of 89% (compared to human observations).
However, when the vantage point changes, so does the accuracy of the classification, mainly, I presume, due to the lack of images of this sort in the original training data set. The image below was classified as: rainforest, tree farm, snowy mountain, mountain, cultivated field. As expected, the classification accuracy dropped to a mere 13%. There is still room for improvement using PhenoCam-based training data, but building upon the work by the group at MIT should make these improvements easier.
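As an aside, batch-classifying images with the stock model does not require any custom code; the classify.py script that ships with Caffe will push a whole directory of images through a network. A minimal sketch, assuming the Places-CNN deploy prototxt and caffemodel have been downloaded (file and directory names are illustrative):
# classify a directory of PhenoCam images with the stock Places-CNN model
python python/classify.py ./phenocam_images/ ./phenocam_predictions.npy --model_def places205CNN_deploy.prototxt --pretrained_model places205CNN_iter_300000.caffemodel --gpu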
Here I provide a simple set of bash commands and settings to get started with the caffe-SegNet tutorial on the Harvard Odyssey cluster and its NVIDIA CUDA capabilities. If the setup below works, you can move on and start processing your own data.
First, load all the necessary modules through your ~/.bashrc file; a sketch is given below.
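The exact module names and versions differ between cluster configurations, so treat the line below as an illustration of which dependencies need to be on your path rather than something to copy verbatim:
# load the toolchain and all caffe dependencies (module names/versions are illustrative)
module load cmake cuda cudnn boost gflags glog hdf5 protobuf snappy lmdb leveldb atlas python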
# clone the tutorial data and rename the directory
git clone https://github.com/alexgkendall/SegNet-Tutorial.git
mv SegNet-Tutorial Segnet
# move into the new directory
cd Segnet
# clone the caffe-segnet code
git clone https://github.com/alexgkendall/caffe-segnet.git
# download the Odyssey specific cmake settings
wget -q https://www.dropbox.com/s/hbhzl2bwm19vtd0/FindAtlas.cmake?dl=0 -O ./caffe-segnet/cmake/Modules/FindAtlas.cmake
# create the build directory
mkdir ./caffe-segnet/build
# move into the build directory
cd ./caffe-segnet/build
# create compilation instructions
cmake -DCMAKE_INCLUDE_PATH:STRING="$CUDNN_INCLUDE;$BOOST_INCLUDE;$GFLAGS_INCLUDE;$HDF5_INCLUDE;$GLOG_INCLUDE;$PROTOBUF_INCLUDE;$SNAPPY_INCLUDE;$LMDB_INCLUDE;$LEVELDB_INCLUDE;$ATLAS_INCLUDE;$PYTHON_INCLUDE" -DCMAKE_LIBRARY_PATH:STRING="$BOOST_LIB;$GFLAGS_LIB;$HDF5_LIB;$GLOG_LIB;$PROTOBUF_LIB;$SNAPPY_LIB;$LMDB_LIB;$LEVELDB_LIB;$ATLAS_LIB;$CUDNN_LIB;$PYTHON_LIB" ..
# compile and test all code
make all
make runtest
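Keep in mind that the GPU-based tests will only pass on a node with a GPU attached; on Odyssey you can request an interactive GPU session through SLURM, roughly as follows (partition name and resource requests are illustrative):
# request an interactive session on a GPU node (partition and resources are illustrative)
srun --pty -p gpu --gres=gpu:1 --mem=8000 -t 0-02:00 /bin/bash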
Jungle Rhythms has been live for a little over a month and has accumulated an impressive 40,000 classifications. With a substantial amount of data classified, I’ll be transforming these classifications into actual dates of seasonal growth patterns (instead of lines on paper) in the next few weeks or so.
In the meantime, using the same classification data, I made a visualization of user contributions over the past month. In the graph below you see rectangles whose relative size is scaled to the number of contributions by each user (listed in the rectangle). Grey rectangles are contributions made by non-registered users.
Currently, ElizabethB is leading the pack with a hard-to-beat 11,758 classifications. Rainbobrite is runner-up with 6,467 classifications. Although a few large contributors account for more than 50% of the classifications, the remaining classifications are made in smaller numbers by many more people. For example, 7% of all classifications were made by unregistered users classifying ~3 images per session. This illustrates the power of numbers that drives a lot of citizen science: all contributions matter, even a few classifications now and again!
The brief history of agricultural research in Congo starts after 1908, when the Belgian state took control of Congo, ending the rule of Leopold II due to international outcry over the atrocities committed.
In subsequent years the Belgian state, under the guidance of Edmond Leplae and informed by agricultural engineer Jean Claessens, created a government institution (Service de l’Agriculture) focused on agricultural development, mirroring research facilities in other tropical colonies. Although policy was focused on boosting export crops, in part through an increased focus on research, the period up until Leplae’s retirement was dominated by his much-hated policy of mandated cultivation (e.g. cotton and rubber).
After 1930 there was a shift in policy away from mandated cultivation and towards research-driven agricultural development, with a stronger focus on supporting local farmers. As such, the Institut National pour l’Étude Agronomique du Congo belge (INÉAC, the National Institute for Agronomic Study of the Belgian Congo) was created in 1933, with headquarters in Yangambi.
All data collected and digitized in the Jungle Rhythms project were gathered during this latter period at the Yangambi research station. Although there was some ongoing research before this period, INÉAC created a major shift towards basic research in addition to the applied agricultural research. These basic research efforts were often well coordinated and documented, covering topics such as plant diseases, botany, geology and genetics. Most surprisingly, INÉAC was run by scientists with minimal intervention from the Belgian administration (either local or from afar). However, INÉAC received support from the government, which makes complete autonomy questionable, especially as WWII set part of the research agenda.
It is clear that the Congo agricultural research stations (and Yangambi in particular) have a long and winding history. On the eve of independence the research station had built up a solid international reputation, running large autonomous experiments and data collection campaigns throughout the Congo basin. The data on the seasonal dynamics of tree species digitized within the Jungle Rhythms project are part of this historical research effort. However, even after more than 70 years these data still retain their scientific value and could contribute to solving some of today’s research questions and problems.