Beevision

Adam

Repository for the HoneyBee Vision Darwin Project (2013-2014)

This project is maintained by Robert Chisholm & Adam Petterson

2014/02/06

Further blog posts on my personal blog

In order to make it easier to keep this blog up to date and searchable, I have decided to move all posts from here to my personal blog, where all future posts can also be found.

These can be found by using the Darwin Project tag.

2013/11/10

More Capture Card Work and Presentation

While I have not had much chance to work on the project this week due to other commitments, several advances have been made.

Firstly, I decided to drop the idea of using OpenCV for the recording, as it turned out to be too slow on most computers to record even a single camera at an acceptable framerate, never mind two. This is due to the conversion to RGB and the other per-frame processing OpenCV performs when saving each frame.

While this was a setback, a temporary workaround for capturing test data has been found in ffmpeg. Combined with Video4Linux2, this can capture video from a simple shell command. A short Bash script has therefore been written and pushed to the Git repository; it requests information about the cameras and the recording length before starting the two recordings and pushing them into the background to run simultaneously.
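The core of the script can be sketched as follows. This is a minimal Python illustration of the same idea, not the actual Bash script from the repository; the device paths, duration and output filenames are placeholder assumptions.

```python
import subprocess

def build_ffmpeg_cmd(device, seconds, outfile):
    """Build an ffmpeg command that records from a V4L2 device."""
    return [
        "ffmpeg",
        "-f", "video4linux2",  # use the Video4Linux2 input driver
        "-i", device,          # e.g. /dev/video0
        "-t", str(seconds),    # recording length in seconds
        outfile,
    ]

def record_simultaneously(cmds):
    """Launch all recordings at once (mirroring the bash script, which
    pushes each ffmpeg call into the background), then wait for them."""
    procs = [subprocess.Popen(cmd) for cmd in cmds]
    for p in procs:
        p.wait()

# Example usage (assumed device paths and filenames):
# record_simultaneously([
#     build_ffmpeg_cmd("/dev/video0", 60, "left.avi"),
#     build_ffmpeg_cmd("/dev/video1", 60, "right.avi"),
# ])
```

Launching both processes before waiting on either is what keeps the two recordings simultaneous rather than sequential.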

There was a slight issue at one point where I could not get the cameras working simultaneously, due to a mysterious error message stating that "the device is full". I initially thought this could be an issue with the driver being flagged as busy while one camera was running, as I had heard mention of similar problems. It was, however, a problem with the laptop I was using: the only two USB sockets that would fit both cards were on the same host controller. Together, the two cards exceeded the bandwidth of that controller, so only the first card actually started recording. On my netbook this was not an issue. This means that, provided the machine we end up running on has multiple host controllers, we should not run into any more hardware issues of this kind.
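A rough back-of-the-envelope check makes the bandwidth problem plausible. The resolution, pixel format and framerate below are assumptions (PAL-resolution YUYV at 25 fps), not measured values from our cards:

```python
def stream_bandwidth_mb(width, height, bytes_per_pixel, fps):
    """Uncompressed bandwidth of one video stream in MB/s."""
    return width * height * bytes_per_pixel * fps / 1e6

# One PAL-resolution stream in the YUYV format (2 bytes per pixel):
one = stream_bandwidth_mb(720, 576, 2, 25)  # ~20.7 MB/s
# Two cards sharing one host controller need roughly double that:
two = 2 * one                               # ~41.5 MB/s

# USB 2.0 signals at 480 Mbit/s (~60 MB/s), shared across the whole
# host controller, and real isochronous throughput is noticeably
# lower, so two uncompressed streams can easily exceed what a single
# controller actually delivers.
```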

Another significant event of this week was the presentation on Thursday. For this I had to discuss the Green Brain project as a whole, our specific roles in it, and the work done so far and still to be done. When writing this I wanted to be sure I was getting my facts right, so I spent some time cementing the basics of what we will be doing on the project, which involved re-reading several of the initial papers.

While working on the presentation and looking for facts, I stumbled across a paper that is incredibly relevant to our project: "Mimicking honeybee eyes with a 280° field of view catadioptric imaging system" by W. Stürzl, N. Boeddeker, L. Dittmar and M. Egelhaaf. This paper presents a method of simulating a bee's full field of view using only a single camera. While much of the hardware used in that project is beyond the scope of ours, it provides an insight into how others in the field are attempting similar things, and contains some equations for filtering and resampling that could come in useful.

From this point we aim to record some test data relatively soon to start experimenting with, and more reading needs to be done on the specifics of the field of view, the overlap of the eyes, and the filtering and subsampling, so we can confidently move forward with the rest of the project.

2013/10/29

Vision reading and experimenting with the capture card

This week I have been working on increasing my knowledge of bee vision as a whole by reading the 1929 paper by Hecht and Wolf on visual acuity. This covers the level of detail bees can resolve and compares it to human vision, discussing it in terms of the separation of the ommatidia. While this likely goes into more detail than we will need to simulate the bee's vision, it is nevertheless useful to know how the system works in the original case.

I have also begun reading "Colour Receptors in the Bee Eye-Morphology and Spectral Sensitivity" by Menzel and Blakers, and started researching some camera calibration, though I have not gotten far enough into either of these to have anything significant to report back on yet.

This paper contains some very relevant sections, such as how the varying densities of ommatidia in different regions of the eye provide varying levels of acuity. While talking to Robert we had the idea that this could perhaps be modelled with a blurring filter, though depending on the filter chosen this could prove computationally expensive. More research will have to be done in this area to decide whether this avenue is worth pursuing.
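The varying-acuity idea could be sketched roughly as below. This is a hypothetical illustration only (a 1-D box filter on plain Python lists, with blur radius growing linearly away from the centre); the actual filter, falloff and 2-D implementation are exactly the open questions mentioned above.

```python
def box_blur_1d(row, radius):
    """Blur one row with a box filter of the given radius (edge-clamped)."""
    n = len(row)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(row[lo:hi]) / (hi - lo))
    return out

def variable_acuity_blur(row, max_radius):
    """Blur more strongly towards the edges of the row, mimicking the
    lower ommatidial density away from the frontal region of the eye."""
    n = len(row)
    centre = (n - 1) / 2
    out = []
    for i in range(n):
        # blur radius grows linearly with distance from the centre,
        # so the "fovea" stays sharp and the periphery is smoothed
        radius = round(abs(i - centre) / centre * max_radius) if centre else 0
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(row[lo:hi]) / (hi - lo))
    return out
```

A per-pixel variable kernel like this is exactly where the computational cost concern comes from: a naive implementation is O(n × max_radius) per row, which is why the choice of filter matters.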

I also started work on getting the capture card that will be used to view the images from the bee's cameras working. While the drivers for the capture card are included in the latest Linux kernel, not every device I tried worked (a PlayStation 3 worked but an Xbox 360 did not). This could possibly be due to variances in the regional encoding, though changing this through UCView did not seem to help; that may, however, be down to my installation of it.

The card mounts in Linux under /dev/video# (where # is the incremental number of video devices attached). This is the same way a webcam mounts, which is useful for finding compatible C++ libraries for accessing video devices. Because of this I started experimenting with OpenCV, a computer vision library for Python and C++ that I have past experience with. To ensure it worked, I first ran an old Python program of mine that simply displays the webcam feed. Once I saw that this worked, I ported the program to C++.
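Because the card appears as an ordinary video device node, enumerating it is straightforward. A small sketch (the OpenCV usage is shown only as a comment, since it needs a real camera attached):

```python
import glob

def list_video_devices():
    """List V4L2 capture devices; each one mounts as /dev/video0,
    /dev/video1, and so on, exactly as a webcam would."""
    return sorted(glob.glob("/dev/video*"))

# With OpenCV, the numeric suffix of the device node is the index
# passed to VideoCapture:
#
#   import cv2
#   cap = cv2.VideoCapture(0)   # opens /dev/video0
#   ok, frame = cap.read()      # grab one frame as a BGR image
```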

Once I had this working I decided to try extending the program to record video feeds from two cameras, which could be used to capture the test data. While I did get this working, I was unfortunately unable to regulate the framerate correctly with a simple single-threaded program using the clock method: frames were not being added to the buffer quickly enough, resulting in a faster-than-real-time video. I plan to look into using the Boost C++ libraries to multithread this and keep track of the time in a separate thread.
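For reference, the pacing logic that the single-threaded version needs can be expressed as follows. This is a Python sketch of the idea rather than the C++ program itself: each frame gets a fixed deadline derived from the start time, so timing errors do not accumulate the way they do when sleeping a fixed interval between grabs.

```python
import time

def frame_deadlines(start, fps, count):
    """Timestamps at which each frame should be grabbed for
    real-time pacing: frame i is due at start + i / fps."""
    return [start + i / fps for i in range(count)]

def paced_capture(grab_frame, fps, count):
    """Grab `count` frames, sleeping until each frame's deadline so
    the recording matches real time instead of playing back sped up."""
    frames = []
    start = time.monotonic()
    for deadline in frame_deadlines(start, fps, count):
        delay = deadline - time.monotonic()
        if delay > 0:
            time.sleep(delay)
        frames.append(grab_frame())
    return frames
```

If `grab_frame` itself is slow, sleeping cannot fix the shortfall, which is where moving the grabbing into a separate thread (as planned with Boost) comes in.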