Optical Flow For Live Cell Imaging

Motivation

Progress in the staining of living cells, together with advances in confocal microscopy, has enabled detailed studies of the behavior of intracellular components, including structures inside the cell nucleus. The typical number of investigated cells in one study varies from tens to hundreds in order to achieve reasonable statistical significance of the results. The output of the microscope is a time-lapse series of two- or three-dimensional images. Manual analysis of such large data sets is tedious, especially for 3D series, and offers no guarantee of accuracy. There is therefore a natural demand for computer vision methods that can help with the analysis of time-lapse image series. Estimation or correction of global as well as local motion belongs among the main tasks in this field.

We have implemented the latest optical flow methods, which we use for local motion estimation in live-cell image series. To the best of our knowledge, the application of these state-of-the-art methods in the field of live-cell imaging has not been investigated before.


Project Objectives

  • The first goal is to implement a wide range of state-of-the-art optical flow methods suitable for motion estimation in 2D and 3D live-cell image series.

  • The second goal is to rigorously compare the accuracy of the tested methods.

  • The third goal is to implement a motion tracking application which will use selected optical flow methods for motion estimation.


Availability

The library and tools are available under the GNU GPL from the CBIA web pages.

Optical Flow methods

Let two consecutive frames of an image sequence be given. Optical flow methods compute the displacement vector field which maps all voxels of the first frame onto their new positions in the second frame. We study and implement three-dimensional variants of the following methods:

Variational Optical Flow Methods

Variational optical flow methods determine the desired displacement as the minimizer of a suitable energy functional. Such a functional consists of a data term and a smoothness term. The data term ensures that certain image properties (e.g. grey value, gradient magnitude) remain constant in time, while the smoothness term regularises the otherwise non-unique solution by a smoothness constraint.
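
For illustration, the classical Horn-Schunck method (the first in the list below) uses grey-value constancy in the data term and a homogeneous smoothness term; in 2D its energy functional reads

$$E(u,v) = \int_{\Omega} \left( I_x u + I_y v + I_t \right)^2 + \alpha \left( |\nabla u|^2 + |\nabla v|^2 \right) \,\mathrm{d}x\,\mathrm{d}y,$$

where (u, v) is the sought displacement field, I_x, I_y, I_t denote the spatial and temporal image derivatives, and alpha > 0 weights the smoothness term. The 3D variants add a third flow component and the derivative I_z.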

We study common as well as state-of-the-art variational optical flow methods published recently by Andrés Bruhn. We focus on multiscale warping-based methods, which can handle motion larger than one pixel; such motion often occurs in live-cell image series. We have implemented, and extended to three dimensions, several variational optical flow methods, namely the following:

  • Classical Horn-Schunck variational optical flow (VOF) method
  • VOF method with isotropic image driven regularization (Charbonnier regularization)
  • VOF method with anisotropic image driven regularization (Nagel-Enkelmann regularization)
  • VOF method with isotropic flow driven regularization
  • VOF method with nonlinear data term and flow driven regularization
  • VOF method with nonlinear data term and anisotropic image driven regularization
  • Multiscale warping based variants of these methods (these can handle large motions).
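
To make the classical scheme concrete, the following is a minimal, self-contained 2D sketch of the Horn-Schunck iteration. It is an illustration only and does not reflect the API of our OpticalFlow library; the derivative and averaging stencils are the simplest possible choices.

```cpp
#include <algorithm>
#include <vector>

// Minimal 2D Horn-Schunck sketch (grey-value constancy + homogeneous
// smoothness). Images are row-major W x H float buffers. This illustrates
// the classical scheme only; it is not the API of our OpticalFlow library.
void hornSchunck(const std::vector<float>& I1, const std::vector<float>& I2,
                 int W, int H, float alpha, int iters,
                 std::vector<float>& u, std::vector<float>& v)
{
    const int N = W * H;
    u.assign(N, 0.f);  v.assign(N, 0.f);
    std::vector<float> Ix(N), Iy(N), It(N), un(N), vn(N);

    auto at = [&](const std::vector<float>& I, int x, int y) {
        x = std::min(std::max(x, 0), W - 1);   // clamp to the image border
        y = std::min(std::max(y, 0), H - 1);
        return I[y * W + x];
    };

    // Central-difference spatial derivatives; It is the frame difference.
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x) {
            const int i = y * W + x;
            Ix[i] = 0.5f * (at(I1, x + 1, y) - at(I1, x - 1, y));
            Iy[i] = 0.5f * (at(I1, x, y + 1) - at(I1, x, y - 1));
            It[i] = I2[i] - I1[i];
        }

    // Jacobi sweeps of the classical Horn-Schunck update.
    for (int k = 0; k < iters; ++k) {
        for (int y = 0; y < H; ++y)
            for (int x = 0; x < W; ++x) {
                const int i = y * W + x;
                // 4-neighbour average of the current flow estimate.
                const float ub = 0.25f * (at(u, x - 1, y) + at(u, x + 1, y)
                                        + at(u, x, y - 1) + at(u, x, y + 1));
                const float vb = 0.25f * (at(v, x - 1, y) + at(v, x + 1, y)
                                        + at(v, x, y - 1) + at(v, x, y + 1));
                const float num = Ix[i] * ub + Iy[i] * vb + It[i];
                const float den = alpha * alpha + Ix[i] * Ix[i] + Iy[i] * Iy[i];
                un[i] = ub - Ix[i] * num / den;
                vn[i] = vb - Iy[i] * num / den;
            }
        u.swap(un);  v.swap(vn);
    }
}
```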


Here we present some results and screenshots from our programs. The first example shows two 2D frames of an HL-60 cell nucleus together with the output of our optical flow demo program. The flow is visualised both in a colour representation (top frame: colour codes the direction, intensity the length of the flow vector) and as vectors (bottom frame). The size of the input frames is 400x400 pixels. The flow was computed with the warping-based method for large displacements.

First frame of the HL-60 sequence. Second frame of the HL-60 sequence. Results for the two HL-60 frames visualised in our 2D demonstration program.
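
The colour coding used in the pictures can be sketched as follows. This is one common convention (direction mapped to hue, magnitude to intensity); the exact palette of our demo program may differ in details.

```cpp
#include <algorithm>
#include <cmath>

// Map a flow vector (u,v) to an RGB colour: direction -> hue,
// magnitude -> intensity. One common convention; the palette of our
// demo program may differ in details. maxMag > 0 is assumed.
void flowToRGB(float u, float v, float maxMag,
               unsigned char& r, unsigned char& g, unsigned char& b)
{
    const float pi  = 3.14159265f;
    const float mag = std::sqrt(u * u + v * v);
    const float hue = (std::atan2(v, u) + pi) / (2.f * pi) * 6.f; // in [0,6]
    const float val = std::min(mag / maxMag, 1.f);                // in [0,1]

    const int   i = static_cast<int>(hue) % 6;
    const float f = hue - static_cast<int>(hue);
    // Fully saturated HSV -> RGB with V = val.
    const float p = 0.f, q = val * (1.f - f), t = val * f;
    float rf, gf, bf;
    switch (i) {
        case 0:  rf = val; gf = t;   bf = p;   break;
        case 1:  rf = q;   gf = val; bf = p;   break;
        case 2:  rf = p;   gf = val; bf = t;   break;
        case 3:  rf = p;   gf = q;   bf = val; break;
        case 4:  rf = t;   gf = p;   bf = val; break;
        default: rf = val; gf = p;   bf = q;   break;
    }
    r = static_cast<unsigned char>(255.f * rf);
    g = static_cast<unsigned char>(255.f * gf);
    b = static_cast<unsigned char>(255.f * bf);
}
```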

The second example demonstrates the capability of our software to handle 3D image sequences. The first two pictures show the input 3D frames (size 276 x 286 x 106 voxels). The third picture shows the first frame in the red channel overlaid on the second frame in the green channel. The fourth picture shows the second frame backward-registered onto the first using the computed flow. We again used the warping-based method.

First 3D frame of the HL-60 sequence. Second 3D frame of the HL-60 sequence.

First frame (red) overlaid on the second frame (green).

Result: the second frame backward-registered by the computed 3D flow, overlaid on the first frame (first frame in red, registered second frame in green).
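
The backward registration used in the last picture can be sketched as follows: every pixel of the output samples the second frame at its own position shifted by the computed flow. The fragment below is a 2D illustration with bilinear interpolation; our tools implement the 3D (trilinear) analogue.

```cpp
#include <algorithm>
#include <vector>

// Backward registration sketch in 2D: out(x,y) = I2(x + u, y + v),
// sampled with bilinear interpolation. Illustration only; our tools
// implement the 3D (trilinear) analogue.
std::vector<float> backwardRegister(const std::vector<float>& I2,
                                    const std::vector<float>& u,
                                    const std::vector<float>& v,
                                    int W, int H)
{
    std::vector<float> out(W * H, 0.f);

    auto sample = [&](float x, float y) {
        x = std::min(std::max(x, 0.f), float(W - 1));  // clamp to the image
        y = std::min(std::max(y, 0.f), float(H - 1));
        const int x0 = int(x), y0 = int(y);
        const int x1 = std::min(x0 + 1, W - 1), y1 = std::min(y0 + 1, H - 1);
        const float fx = x - x0, fy = y - y0;          // bilinear weights
        return (1 - fy) * ((1 - fx) * I2[y0 * W + x0] + fx * I2[y0 * W + x1])
             +      fy  * ((1 - fx) * I2[y1 * W + x0] + fx * I2[y1 * W + x1]);
    };

    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x) {
            const int i = y * W + x;
            out[i] = sample(x + u[i], y + v[i]);       // follow the flow
        }
    return out;
}
```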


The previous two examples come from a publication presented at the VISAPP 2007 conference [1], where you can read which variational optical flow method performed best for local motion estimation in live-cell imaging.

We have also applied our software to the registration of other biomedical images, successfully registering 3D CT brain images as well as PET lung images using variational optical flow. We cannot display these results here due to copyright restrictions.

Currently, these methods can easily be tested with ofd, our GUI demo program for the computation of optical flow. The program is part of our collection of libraries and tools related to the computation of optical flow. Once one is happy with the tuning of the chosen optical flow method, one can easily start batch processing. The screenshot shows how to supply the ofcl tool with the parameters used in the GUI program.

Phase-based and Energy-based Optical Flow Methods

We are currently working on these methods. In particular, we have implemented the traditional Heeger's method and modified it to work better on biomedical images [4]. This optical flow computation method is based on processing motion energies obtained by spatio-temporal filtering.
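
The core idea of such energies can be illustrated on a 1D signal: the squared responses of a quadrature (even/odd) Gabor filter pair are summed, giving a phase-independent energy. Heeger's method applies the same principle with banks of 3D spatio-temporal Gabor filters; the fragment below only illustrates the concept.

```cpp
#include <cmath>
#include <vector>

// Phase-independent "motion energy" of a 1D signal at one frequency:
// the squared responses of a quadrature (even/odd) Gabor pair are summed.
// Heeger's method applies the same idea with 3D spatio-temporal filters;
// this 1D fragment only illustrates the energy concept.
float gaborEnergy(const std::vector<float>& s, int center,
                  float freq, float sigma, int radius)
{
    float even = 0.f, odd = 0.f;
    for (int k = -radius; k <= radius; ++k) {
        const int i = center + k;
        if (i < 0 || i >= int(s.size())) continue;      // stay inside signal
        const float g = std::exp(-0.5f * k * k / (sigma * sigma));
        even += s[i] * g * std::cos(freq * k);          // even Gabor response
        odd  += s[i] * g * std::sin(freq * k);          // odd  Gabor response
    }
    return even * even + odd * odd;   // invariant to the phase of the signal
}
```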

Unfortunately, our implementation still suffers from a major limitation: the input image cannot be scaled, so larger velocities cannot be detected reliably. It also computes subpixel velocities only in multiples of 0.1 pixel.

The improvements over the original method are described in [4].

Nevertheless, we present some preliminary results on generated images, first with foreground motions only and then with a global motion plus additional local motions. In both cases, all velocities were up to 3 px/frame.


Only local motions: the first frame, the second frame, both frames overlaid (the first in red, the second in green), the visualised ground-truth flow field, and the computed flow field. Legend: colour and intensity give the direction and length of a vector, respectively.
Global with additional local motions: the first frame, the second frame, both frames overlaid (the first in red, the second in green), the visualised ground-truth flow field, and the computed flow field.


Note that the method, like any other method based on spatio-temporal filtering, requires more than just two frames. However, only two frames from each tested time-lapse sequence are presented above.

We are still working on this method. An implementation of the Fleet and Jepson method is planned as well.

We have implemented all these methods in the OpticalFlow library. The library is written in C++ and is multiplatform (tested on Linux and Windows). Moreover, we use a sophisticated multigrid framework for solving the optical flow problems, which yields reasonable computation times even for 3D image sequences (seconds for 2D frames, minutes for 3D frames). The library is still under development, but its source code is available under the GNU license (download the library). The authors are Jan Hubený and Vladimír Ulman.
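
For readers unfamiliar with multigrid, the fragment below sketches the idea on the simplest possible example, a 1D Poisson equation: relax on the fine grid, solve for the remaining error on a coarser grid recursively, and correct. It is a generic illustration, not the solver of our library, which works in 2D/3D and with more elaborate operators.

```cpp
#include <vector>

// A tiny 1D multigrid V-cycle for -u'' = f with zero boundary values and
// grid sizes of the form 2^k + 1. Illustration of the smooth-restrict-
// correct pattern only; not the solver of our OpticalFlow library.
using Vec = std::vector<double>;

static void smooth(Vec& u, const Vec& f, int sweeps)       // Gauss-Seidel
{
    const int n = static_cast<int>(u.size());
    for (int s = 0; s < sweeps; ++s)
        for (int i = 1; i + 1 < n; ++i)
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + f[i]);
}

static Vec residual(const Vec& u, const Vec& f)
{
    const int n = static_cast<int>(u.size());
    Vec r(n, 0.0);
    for (int i = 1; i + 1 < n; ++i)
        r[i] = f[i] - (2.0 * u[i] - u[i - 1] - u[i + 1]);
    return r;
}

void vcycle(Vec& u, const Vec& f)
{
    const int n = static_cast<int>(u.size());
    if (n <= 3) { smooth(u, f, 50); return; }              // coarsest level

    smooth(u, f, 3);                                       // pre-smoothing
    const Vec r = residual(u, f);

    const int nc = (n + 1) / 2;                            // coarser grid
    Vec rc(nc, 0.0), ec(nc, 0.0);
    for (int j = 1; j + 1 < nc; ++j)
        rc[j] = 4.0 * r[2 * j];      // restrict the residual (injection);
                                     // the factor 4 accounts for h -> 2h
    vcycle(ec, rc);                  // recursively solve for the error

    for (int j = 0; j < nc; ++j)     // prolong the correction linearly
        u[2 * j] += ec[j];
    for (int j = 0; j + 1 < nc; ++j)
        u[2 * j + 1] += 0.5 * (ec[j] + ec[j + 1]);

    smooth(u, f, 3);                                       // post-smoothing
}
```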

Ground Truth Image Series Generator

For the performance analysis of an optical flow computation method on particular image data, we developed the gtgen tool, a ground-truth generator.

It allows for the automatic generation of high-fidelity image sequences based on a real-world sample image, together with the corresponding artificial flow field sequences [3] [5]. We have emphasized fidelity to the sample image in our approach, since the performance of a given optical flow computation method may differ substantially on different sorts of images; an analysis on given data is therefore not necessarily valid for another sort of images.

The goal was to design a fully automatic, easy-to-use solution so that a large dataset of image sequences resembling the intended real application, accompanied by the correct results (what we call ground truth), can be made available quickly. By ground-truth flow fields we mean the created 2D or 3D flow fields that describe the movements displayed in the generated images.


Our tool benefits from a two-layered approach in which a user-selected foreground is locally moved and inserted into an artificially generated background. The background is visually similar to the sample real image, while the foreground is extracted from it directly, so its fidelity is guaranteed.

The tool requires the following inputs:

  • sample image; the output must look like this image
  • mask image of the background; this determines the background region where global motion occurs
  • mask image of the foreground; this determines the foreground regions where additional local motions occur
  • mask image of possible positions; this determines the regions in which foreground components are allowed to appear
  • definition of the global motion, which remains constant throughout the entire generated sequence
  • number of frames of the sequence to be created

Sample image

Background mask

Foreground mask

Mask of possible positions


Foreground motion is suggested by the gtgen tool for each foreground component and each frame individually and automatically. The determination of a new position considers the previous movement of the component as well as the mask of possible positions, within which the component must remain. A minimal sketch of this idea follows.
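
The idea can be sketched as a random walk with momentum, constrained by the mask of possible positions. The function below is hypothetical and only illustrates the principle; the rules actually implemented in gtgen may differ in details.

```cpp
#include <cstdlib>

// Hypothetical sketch of suggesting a foreground step: a random
// perturbation of the previous step (momentum keeps the path smooth),
// accepted only if the candidate position stays inside the mask of
// possible positions. The rules implemented in gtgen may differ.
struct Vec2 { int x, y; };

Vec2 suggestStep(Vec2 pos, Vec2 prevStep,
                 bool (*insideMask)(Vec2), int maxTries = 100)
{
    for (int t = 0; t < maxTries; ++t) {
        Vec2 step = { prevStep.x + (std::rand() % 3 - 1),   // -1, 0 or +1
                      prevStep.y + (std::rand() % 3 - 1) };
        Vec2 cand = { pos.x + step.x, pos.y + step.y };
        if (insideMask(cand))
            return step;            // accept: component stays in the mask
    }
    return { 0, 0 };                // give up: stand still in this frame
}
```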

Path visualization
Visualization of the positions of the foreground components (green) within the mask (blue). No global motion was present, hence only local movements are visible. The brightest intensity marks the first position.

Video sequence
Video sequence of this example.

The gtgen tool is capable of generating 2D and 3D image sequences of arbitrary length [2].

Example 1, results
Example of a mask of possible positions on the left. For this mask, an image of all positions where foreground components occurred during the time-lapse sequence is shown on the right.
Example 1, video
Video sequence of this example where no global movements were performed.

Example 2, video
Video sequence of the same example with global movement.

Example 3, results
Example of the mask of possible positions. Notice the shapes and how they control the local movements in the following video.
Example 3, video
Video sequence in which this mask of possible positions was applied.

Example 4, video
Video of an example of a generated 3D time-lapse sequence. Notice that the generator can handle overlapping foreground components.


The ground-truth generator is available under the GNU GPL as part of the OpticalFlow collection, as the tool of_gtgenb (download gtgen).

The gtgen-ng tool

We are currently developing a new generation [5] of the time-lapse ground-truth dataset generator, which employs a GUI control interface and a plug-in based scheme to allow for the simulation of general motion and events to be observed in the generated time-lapse image sequence. The CBIA CytoPacq is used for the generation of synthetic yet highly realistic image content.

An example of a synthetically generated time-lapse sequence can be downloaded here. Only the image and the ground-truth binary mask of the observed cell nuclei are displayed in the file. The method, however, produces ground-truth optical flow fields as well.

Motion Tracking Application

We are halfway towards a motion tracker now; a simple version is already implemented. Basically, points of interest are selected in the first frame of the sequence, and the coordinates of every point are then propagated from frame to frame based only on the flow information found at the particular coordinate. No additional heuristics are applied. Here we present our first results on artificially generated data.

Inputs:

Gray level image sequence ad_seq3_001_tn.png
Positions of objects of interest 3out_001_tn.png

Algorithm:

  • Compute the optical flow between consecutive frame pairs. For our data, we used the method most suitable with respect to the average angular error, with the best parameter settings (see the chapter Related Publications).
Computed flow; ground-truth flow; legend (colour codes the flow direction, intensity the vector length).

  • Track the selected objects of interest using the computed flow, starting from their initial positions. The tracking is performed as follows. We take the flow computed between the first and the second frame and consider the vector at the position of an examined object. Adding this vector to the object's coordinate gives the position of the object in the second frame, which is also the initial position for processing the movement between the second and the third frame. The process is repeated until the last frame (a minimal sketch of this loop is shown after the figures below).
    Computed tracking (left) and ground-truth tracking (right).
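
A minimal sketch of the propagation loop described above; the types and names below are illustrative and do not reflect the API of our tracker.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Sketch of the tracking loop: flowU[t]/flowV[t] hold the flow computed
// between frame t and frame t+1 as row-major W x H buffers. Each point is
// moved by the flow vector found at its current (rounded) position.
// No additional heuristics, as in our prototype.
struct Point { float x, y; };

void track(std::vector<Point>& pts,
           const std::vector<std::vector<float>>& flowU,
           const std::vector<std::vector<float>>& flowV,
           int W, int H)
{
    for (std::size_t t = 0; t < flowU.size(); ++t)   // over frame pairs
        for (Point& p : pts) {
            const int x = std::min(std::max(int(p.x + 0.5f), 0), W - 1);
            const int y = std::min(std::max(int(p.y + 0.5f), 0), H - 1);
            const int i = y * W + x;
            p.x += flowU[t][i];          // add the flow vector found at
            p.y += flowV[t][i];          // the point's current position
        }
}
```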

Outputs:

The application saves the position of each object in each frame. It also computes some basic statistics and saves the movement maps.

Computed movement map (left) and ground-truth movement map (right).
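
For reference, the average angular error mentioned in the algorithm above is, in the commonly used definition of Barron et al., the mean over all pixels of the angle between the computed flow (u, v) and the ground-truth flow (u_gt, v_gt), both extended to 3D space-time vectors:

$$AE = \arccos\frac{u\,u_{gt} + v\,v_{gt} + 1}{\sqrt{u^2 + v^2 + 1}\,\sqrt{u_{gt}^2 + v_{gt}^2 + 1}}$$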

The described prototype of the tracking application is very simple. Surprisingly, it works really well, which motivates us to develop a more sophisticated version. We expect that if we take into account the interactions between objects and add some image processing to the tracking application, we can obtain even better results. It should be emphasized that all examples on this page are 2D sequences; nevertheless, our framework is able to process 3D time-lapse image sequences as well.

Finally, we present tracking results for two real-world live-cell image sequences: tracking of HP2010 protein domains in HL-60 cells in the first example, and tracking of telomeres in the second.

Tracking of HP2010 domains in HL-60 cells (left); tracking of telomeres (right).


The tracker is available under the GNU GPL as part of the OpticalFlow collection, as the tool of_tracker (download tracker).


Related Publications

[1] Jan Hubený, Vladimír Ulman, Pavel Matula: Estimating Large Local Motion in Live-Cell Imaging Using Variational Optical Flow. In Proceedings of VISAPP 2007 (http://www.visapp.org/, 2nd International Conference on Computer Vision Theory and Applications, 8-11 March 2007, Barcelona, Spain). article.pdf, poster.pdf.

[2] Vladimír Ulman, Jan Hubený: On Generating Ground-truth Time-lapse Image Sequences and Flow Fields. In Proceedings of ICINCO 2007 (http://www.icinco.org/icinco2007/, 4th International Conference on Informatics in Control, Automation and Robotics, 9-12 May 2007, Angers, France). article.pdf, poster.pdf.

[3] Vladimír Ulman, Jan Hubený: Pseudo-real Image Sequence Generator for Optical Flow Computations. In Proceedings of SCIA 2007 (http://www.scia2007.dk/, Scandinavian Conference on Image Analysis, 10-14 June 2007, Aalborg, Denmark). The original publication is available at www.springerlink.com. poster.pdf.

[4] Vladimír Ulman: Improving Accuracy of Optical Flow of Heeger's Original Method on Biomedical Images. In Proceedings of ICIAR 2010 (http://www.iciar.uwaterloo.ca/iciar10/, International Conference on Image Analysis and Recognition, 21-23 June 2010, Povoa de Varzim, Portugal). The original publication is available at www.springerlink.com.

[5] David Svoboda, Vladimír Ulman: Generation of Synthetic Image Datasets for Time-Lapse Fluorescence Microscopy. In Proceedings of ICIAR 2012 (http://www.iciar.uwaterloo.ca/iciar12/, International Conference on Image Analysis and Recognition, 25-27 June 2012, Aveiro, Portugal). The original publication is available at www.springerlink.com.


Written by Vladimír Ulman
Last Updated (Wednesday, 19 September 2012)