Basic Local Alignment – DNA fiddling

My flatmate was doing manual queries of DNA sequences using the National Center for Biotechnology Information (NCBI) online Basic Local Alignment Search Tool (BLAST). Doing this manually seemed like a waste of time, so I looked around a bit for an automated solution. After trying several R and Perl solutions, nothing seemed stable enough to be of any use. In the end I settled on Biopython and the online query routine in its Blast set of tools. Of all the options this seemed the most mature and, especially, the most stable for querying the NCBI server.
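
A minimal sketch of such a query (the sequence below is a made-up placeholder; substitute your own, and adjust the program and database to your needs):

from Bio.Blast import NCBIWWW, NCBIXML

# Placeholder sequence; substitute your own DNA string or a record
# read from a FASTA file.
my_sequence = "AGCTTAGCTAGCTACGGAGCTTATCGGCATCGA"

# Query the NCBI server online: blastn against the nt database.
result_handle = NCBIWWW.qblast("blastn", "nt", my_sequence)

# Parse the XML result and print the top hits with their e-values.
blast_record = NCBIXML.read(result_handle)
for alignment in blast_record.alignments[:5]:
    print(alignment.title, alignment.hsps[0].expect)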

PhenoPi: beta software installer

UPDATE: the installer is currently offline as I broke the code while rewriting routines and haven’t had time to fix this yet. Check in later.

I’m releasing a first version of my PhenoPi software installation package onto the world. The goal of this set of scripts is to minimize the amount of time spent setting up a PhenoPi camera. The current set of scripts should be run on a clean raspberry pi to ensure a proper setup. Any volunteers to test the code are welcome (but do this on a non-operational pi as it might mess with your stuff). Currently no images will be uploaded to the PhenoPi server, so everything is stored in the phenopi_images folder.

A new addition to the code is a privacy filter, which removes up to the bottom half of the image from the copies uploaded for scientific research. This avoids privacy issues when the camera overlooks gardens, since I’m primarily interested in the vegetation higher up or in the distance.
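
A minimal sketch of how such a filter could work (illustrative only, not the actual PhenoPi implementation; the function name and masking approach are my own):

from PIL import Image

def apply_privacy_filter(path, privacy_value=0):
    # privacy_value: percentage of the image height to mask (0, 25 or 50)
    img = Image.open(path)
    w, h = img.size
    cut = int(h * privacy_value / 100)
    if cut > 0:
        # paste a black rectangle over the bottom strip before upload
        img.paste((0, 0, 0), (0, h - cut, w, h))
    return img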

Installation

In your raspberry pi home directory (/home/pi), clone the project using the following command (with git installed):

git clone https://khufkens@bitbucket.org/khufkens/phenopi.git

(this location will be updated to github in the near future; if this command doesn’t work, check my github page)

All files will be cloned into a directory called phenopi.

Use

To run the basic install, use the following command:

sh /home/pi/phenopi/install_phenopi.sh site_name privacy_value

or

./install_phenopi.sh site_name privacy_value

in the /home/pi/phenopi directory

Here “site_name” is the name of the site (no spaces allowed) and “privacy_value” is the percentage of the bottom of the image you want removed (0, 25 and 50 are accepted values; the default is 0).
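
For example, for a hypothetical site called backyard where the bottom quarter of each uploaded image should be removed:

sh /home/pi/phenopi/install_phenopi.sh backyard 25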

After the installation your camera should be up and running, and you should be able to find a website displaying a constantly updating image at

http://IP:8080

This website will look like the screenshot below, showing the current image stream as well as the latest uploaded image (at night the colours are strange due to reflections of status LEDs and long exposures).

[Screenshot of the PhenoPi status page]

Notes

Make sure that your raspberry pi camera is enabled; a description of how to enable the camera is provided on the raspberry pi site.

TetraPi: a well characterized multispectral camera

Motivation and goals

Many of the projects on the Public Lab deal with infragrams, NDVI or NRG images to measure vegetation health. Although the images produced do discriminate between healthy and diseased or stressed vegetation, the imaging pipeline is not well characterized. Furthermore, commercial companies offering well defined multispectral cameras (> 2 channels) do so at a steep price (~$5000).

Both the lack of a well characterized image sensor, which makes quantitative research impossible (e.g. inverse modelling using, amongst others, PROSAIL or DART), and the high price of an inherently simple device which should be accessible to everyone made me start this project.

For my concept two conditions need to be met:

  1. Design a multispectral camera based upon a raspberry pi and a raspberry pi multiplexer, and write the necessary software.
  2. Characterize the raspberry pi imaging sensor: extract the spectral response curves and set them free on the web.

1. TetraPi: a multispectral raspberry pi

Hardware

Designing the housing, either as a sled to be mounted in a standard outdoor security camera housing or as a fully independent camera, has been fairly straightforward. Below you see the design of a camera sled which fits a VITEK security camera housing, as well as the stand-alone / mobile version.

[Figure: TetraPi, mobile version]

[Figure: TetraPi, static version]

Both designs are largely finished and might need some cosmetic updates, but by and large they work. Features include a provision for four raspberry pi cameras (NOIR or RGB) and a filter holder which accommodates 1" dielectric filters as produced by, for example, Thorlabs. This modular design allows for simultaneous acquisition of, for example, a photochemical reflectance index (PRI), a true normalized difference vegetation index (NDVI), as well as a standard RGB image. Multiplexing the cameras is taken care of by an IVMECH raspberry pi shield multiplexer.

envisioned mobile camera features
  • four channel camera [finished]
  • adaptable filters (1" dielectric filters) [finished]
  • 12-24VDC battery operated [ubec ordered]
  • local and remote (cable) trigger [3 mm jack connection in place, push button installed]
  • ad-hoc wifi access to upload images directly to a laptop [software issue]
envisioned static camera features
  • four channel camera [finished]
  • adaptable filters (1" dielectric filters) [finished]
  • 12-24VDC  Power-over-Ethernet (PoE) or battery operated [ubec ordered]
  • UPS feature [awaiting the pijuice shield]
  • ad-hoc wifi access to upload images directly to a laptop for remote sites [software issue]
  • wired and wireless time lapse image acquisition and uploads, for on-grid locations [todo]

Operation of the mobile version still depends on a tripod to ensure a proper exposure. The mobile version will use a ubec to step down an external 12-24VDC source to the 5V needed by the pi; alternatively it can be hooked up directly to the 5V output of, for example, a drone. The plywood design should keep the whole setup from getting overly heavy. There is also still room to shrink the design, but for convenience I keep it larger as it makes it easier to work on the system.

CAD designs

The drawings of both cameras can be found here:

All designs are based upon 3 and 9 mm plywood or acrylic. For environmental reasons I try to minimize plastics as much as possible. The mobile version has no fixed power socket yet as I still have to decide on the size.

Budget
mobile version
  • Raspberry pi B+ ($30)
  • 4 x raspberry pi camera ($120)
  • 16GB micro SD card ($10)
  • USB wifi adapter ($10)
  • 5V ubec ($10)
  • camera multiplexer ($96)
  • real time clock ($10)
  • small electronics ($4)
  • plywood housing ($5)
  • screws / stand offs (< $4)

Total: ~$300

static version
  • Raspberry pi B+ ($30)
  • pijuice battery pack UPS ($41)
  • 4 x raspberry pi camera ($120)
  • 16GB micro SD card ($10)
  • USB wifi adapter ($10)
  • 5V ubec ($10)
  • camera multiplexer ($96)
  • outdoor housing ($25)
  • plywood housing ($5)

Total: ~$350

Note:

I do not include the dielectric filters in the price as these are somewhat optional and run at $100 a piece. The prices listed above also describe a full system, meaning four cameras, which might exceed the needs of some people.

A complete system would run for less than $800 (or a fifth of the price of a commercial system, in hardware cost). A standard NDVI system would set you back $400. Alternatively, a photochemical reflectance index (PRI) system (2 filters) will cost you a little south of $500.

Software

Little progress has been made on this part. I have some scripts which I can recycle from my PhenoPi project, but I’m still waiting on the multiplexer, and until its arrival little can be done from a software development point of view.

2. Characterizing the OmniVision OV5647 imaging sensor

Inverse modelling of either the standard RGB signal or derived spectral features is based upon a thorough knowledge of the spectral response of a camera. The spectral response of a camera is defined as the wavelength dependent sensitivity of a given sensor channel (RGB).
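
Written out explicitly (notation mine, for illustration): for each channel $c \in \{R, G, B\}$ with signal $S_c(\lambda)$ under incident light of intensity $I(\lambda)$, the spectral response is

$R_c(\lambda) = \frac{S_c(\lambda)}{I(\lambda)}$

i.e. the fraction of the incoming light at wavelength $\lambda$ that the channel actually registers.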

For reasons unknown most imaging sensors do not come with this information. However, this spectral response can be measured using a monochromator (a rare piece of equipment) or, alternatively, if one knows the spectral response of the source light and the wavelength dependent characteristics of a diffraction grating.

Using both a well characterized light source (measured with a spectrometer) and a well characterized grating, it should be possible to back out the spectral response of the raspberry pi camera. I refer to my raspberry pi spectrometer research notes for all the details regarding measuring the source light, and further calibrating the spectra.

More updates soon…

raspberry pi camera: spectral response curves (grating properties)

UPDATE: Since writing this, I found the spectral response curves (in the visible spectrum) of both v1 and v2 raspberry pi cameras. You can find the digital response curves on my projects page.

As described previously, a diffraction grating splits light into its wavelength components (intensities). However, the angle of the diffracted light as it ‘exits’ the diffraction grating depends on the number of slits (grooves) in the grating. To correctly align any sensor (parallel) with the grating and register the diffracted light, a little math is required.

For a diffraction order $m$, given an incident beam of light at angle $\theta_i$, a given wavelength $\lambda$ and a slit spacing $d$ (the inverse of the groove density), a diffracted beam will exit at an angle $\theta_d$ according to:

$d \, [\, \sin\theta_i - \sin\theta_d \,] = m \lambda$

or as a function of the ‘exit’ angle:

$\theta_d = \arcsin\left( \sin\theta_i - \frac{m \lambda}{d} \right)$

Using this relationship we can calculate the incident angle at which the exiting light at a given wavelength will be orthogonal to the grating (or parallel to a sensor), or alternatively the angle at which the detector should be placed.

Using the above equation I ran an analysis across a set of incident angles and wavelengths to extract the overall diffraction properties of a 300 lines/mm and a 1351 lines/mm grating (a professional Thorlabs grating and DVD grooves, respectively). The optimal grating angles, where the diffracted light exits orthogonal to the grating (going straight into a sensor), are calculated to be ~12 and ~67 degrees (for 300 and 1351 lines/mm, respectively). I’ll be using a 300 lines/mm grating at a 12 degree angle.
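
A minimal sketch of this calculation (my own re-derivation, not the original analysis script): for an exit beam orthogonal to the grating ($\theta_d = 0$) the grating equation reduces to $\theta_i = \arcsin(m \lambda / d)$, which can be evaluated across the visible range:

import numpy as np

def orthogonal_incident_angle(wavelength_nm, lines_per_mm, m=1):
    # incident angle (degrees) at which order m exits orthogonal
    # to the grating, i.e. theta_d = 0 in the grating equation
    d_nm = 1e6 / lines_per_mm  # slit spacing in nm
    return np.degrees(np.arcsin(m * wavelength_nm / d_nm))

wavelengths = np.arange(400, 701, 50)  # visible range, nm
for lines in (300, 1351):
    angles = orthogonal_incident_angle(wavelengths, lines)
    print(lines, "lines/mm:", np.round(angles, 1))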

[Figure: 300 lines/mm grating]

[Figure: 1351 lines/mm grating]

raspberry pi camera: spectral response curves (intro)

UPDATE: Since writing this, I found the spectral response curves (in the visible spectrum) of both v1 and v2 raspberry pi cameras. You can find the digital response curves on my projects page.

In the previous post I described my project to democratize phenology monitoring. From a purely scientific point of view, adding citizen science cameras to the PhenoCam network would increase its coverage. However, to truly replace the current StarDot cameras in more than a citizen science project, I need to characterize the spectral response of the raspberry pi camera’s imaging sensor.

The spectral response of any imaging sensor (or most of them anyway) is determined by the dye formulation used on the microlenses in the Bayer filter. In practice every imaging sensor is monochrome; it’s only by adding this Bayer filter, a checkerboard of tiny red/green/blue filters alternately overlaying all pixels, that you can extract colour from your imaging sensor.

Sadly, the spectral responses of most imaging sensors are corporate secrets. I’m unsure why, but I assume that knowing the spectral response of the filters says something about which process is used and how. That being said, this doesn’t mean you can’t measure it!

Measuring the spectral response of a sensor is generally done using a monochromator, a light source which emits light of a particular wavelength, and a spectrometer, a device which measures the intensity of a light source as a function of wavelength. Here the monochromator emits light of a known wavelength which is simultaneously measured by the spectrometer and the imaging sensor. The spectrometer provides a true intensity measurement at this wavelength, while the imaging sensor provides an intensity measurement for every Bayer filter colour at this particular wavelength. If we cycle through all wavelengths, the output of such an analysis is a set of spectral response curves, showing the sensitivity of each Bayer filter colour across all wavelengths. Although this methodology is sound, finding a monochromator is rather hard. Yet an alternative approach exists.
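
In code the bookkeeping is trivial (a sketch with placeholder data; in a real run the arrays would come from the spectrometer and from the camera’s raw Bayer channels):

import numpy as np

# one measurement per monochromator step
wavelengths = np.arange(400, 701, 10)               # nm
true_intensity = np.ones(len(wavelengths))          # spectrometer readings
rgb_counts = np.random.rand(len(wavelengths), 3)    # mean R, G, B sensor values

# normalize the sensor signal by the true intensity at each wavelength
response = rgb_counts / true_intensity[:, None]
response /= response.max()   # express as a relative (0-1) response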

A monochromator uses a diffraction grating to split a known light source into its component wavelengths. The diffraction grating itself is not selective: at any given time it outputs all light components, each at a slightly different angle. The monochromator only passes the desired wavelength, as shown below (left image).

[Figure: Basic function of a monochromator (left) and a spectrometer (right). The monochromator passes only light of a certain wavelength, while the spectrometer measures the intensity of the light at a given wavelength (as reflected off an object).]

So, in theory we could use a diffraction grating to do all the work for us, without the intermediary and elusive monochromator! However, the transmission properties of a grating are wavelength dependent. This is the reason why, in an ordinary (monochromator / spectrometer) setup, you need to measure the true intensity as well as the image sensor response simultaneously. The only way to calculate the spectral response curve of the sensor is to factor in the wavelength dependent transmission properties of the grating. For most classroom gratings these properties are not described, but when ordering from an optical instrument builder they are!

In short, given a known light source (characterized using a spectrometer) and a cheap but characterized grating, it is possible to get a crude approximation of the spectral response of any imaging sensor using one image (well, two actually, as you need to calibrate the relative location of the spectrum, using a CFL light for example)!
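
The correction then amounts to dividing out both known quantities (again a sketch with placeholder inputs; the transmission curve would come from the grating manufacturer’s datasheet):

import numpy as np

# per-channel counts sampled along the dispersed spectrum in the image,
# after wavelength calibration against e.g. the CFL reference image
wavelengths = np.arange(400, 701, 5)                  # nm
raw_counts = np.random.rand(len(wavelengths), 3)      # R, G, B counts
source_spectrum = np.ones(len(wavelengths))           # measured source light
grating_transmission = np.full(len(wavelengths), 0.7) # from the datasheet

# factor out the source and the grating, leaving the sensor's own response
response = raw_counts / (source_spectrum * grating_transmission)[:, None]
response /= response.max()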

Next step, designing a grating housing (a cheap spectrometer) for this task.
