Background
Manual cell localization and outlining are so onerous that automated tracking methods would seem mandatory for handling huge image sequences; nevertheless, manual tracking is, astonishingly, still widely practiced in areas such as cell biology that lie outside the influence of most image processing research. The goal of our research is to address this gap by developing automated methods of cell tracking, localization, and segmentation. Since even an optimal frame-to-frame association method cannot compensate for and recover from poor detection, the quality of cell tracking clearly depends on the quality of cell detection within each frame.
Methods
Cell detection performs poorly where the background is not uniform and includes temporal illumination variations, spatial non-uniformities, and stationary objects such as well boundaries (which confine the cells under study). To improve cell detection, the signal-to-noise ratio of the input image can be increased via accurate background estimation. In this paper we investigate background estimation for the purpose of cell detection. We propose a cell model and a method for background estimation, driven by the proposed cell model, such that well structure can be identified, and explicitly rejected, when estimating the background.
Results
The resulting background-removed images have fewer artifacts and allow cells to be localized and detected more reliably. The experimental results generated by applying the proposed method to different Hematopoietic Stem Cell (HSC) image sequences are quite promising.
Conclusion
The understanding of cell behavior relies on precise information about the temporal dynamics and spatial distribution of cells. Such information may play a key role in disease research and regenerative medicine, so automated methods for the observation and measurement of cells from microscopic images are in high demand. The method proposed in this paper is capable of localizing single cells in microwells and can be adapted to other cell types that may not have a circular shape. It can potentially be used for single-cell analysis to study the temporal dynamics of cells.
Introduction
The automated acquisition of huge numbers of digital images has been made possible by advances in, and the low cost of, digital imaging. In many video analysis applications the goal is the tracking of one or more moving objects over time, as in human tracking, traffic control, medical and biological imaging, living cell tracking, forensic imaging, and security [1-7]. This capacity for image acquisition and storage has opened new research directions in cell biology: tracking cell behaviour, growth, and stem cell differentiation. The key impediment on the data processing side is that manual methods are, astonishingly, still widely practiced in areas such as cell biology which are outside the influence of most image processing research. The goal of our research, in general, is to address this gap by developing automated methods of cell tracking.
Although most televised video involves frequent scene cuts and camera motion, a great deal of imaging, such as medical and biological imaging, is based on a fixed camera, which yields a static background and a dynamic foreground. Moreover, in most tracking problems it is the dynamic foreground that is of interest, hence an accurate estimate of the background is desired which, once removed, ideally leaves the foreground on a plain background. The estimated background may be composed of one or more of random noise, temporal illumination variations, spatial distortions caused by CCD camera pixel non-uniformities, and stationary or quasi-stationary background structures.
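As a minimal point of reference only, and not the estimator developed later in this paper: for a fixed camera, a background estimate can be as simple as the per-pixel temporal median of the sequence, subtracted from each frame. The function names and array conventions below are our own illustrative choices.

```python
# Illustrative sketch only: a simple temporal-median background estimate for a
# fixed-camera sequence; this is not the estimator proposed in this paper.
import numpy as np

def estimate_background(frames):
    """Estimate a static background as the per-pixel temporal median.

    frames: array of shape (T, H, W) from a fixed camera, where moving
    foreground objects occupy any given pixel in only a minority of frames.
    """
    return np.median(frames, axis=0)

def remove_background(frame, background):
    """Subtract the background estimate, ideally leaving only the foreground;
    negative residuals are clipped for display."""
    return np.clip(frame.astype(np.float64) - background, 0, None)

# Usage on a synthetic stand-in for a microscopy sequence:
# frames = np.random.rand(50, 128, 128)
# bg = estimate_background(frames)
# fg = remove_background(frames[0], bg)
```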
We are interested in the localization, tracking, and segmentation of Hematopoietic Stem Cells (HSCs) in culture to analyze stem-cell behavior and infer cell features. In our previous work we addressed cell detection/localization [8,9] and the association of detected cells [10]. In this paper cell detection and background estimation are studied, with an interest in their inter-relationship, so that by improving the performance of the background estimation we can improve the performance of the cell detection. The proposed approach comprises a cell model and a point-wise background estimation algorithm for cell detection. We show that point-wise background estimation can improve cell detection.
There are many methods for background modelling, each of which estimates the background based on the application at hand, specifies constraints relevant to the problem, and makes different assumptions about the image features at each pixel, processing pixel values spatially, temporally, or spatio-temporally [11-23].
There is a broad range of biomedical applications of background estimation, each of which introduces a different method to estimate the background based on assumptions specific to the problem [12-14,24]. Close and Whiting [12] introduced a technique for motion compensation in coronary angiogram images to distinguish the artery and background contributions to the intensity. They modelled the image in a region of interest as the sum of two independently moving layers, one consisting of the background structure and the other consisting of the arteries. The density of each layer varies only by rigid translation from frame to frame, and the sum of the two densities equals the image density.
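As an illustrative restatement in our own notation (not taken from [12]): writing $L_B$ and $L_A$ for the background and artery layers and $\mathbf{d}_B(t)$, $\mathbf{d}_A(t)$ for their rigid displacements in frame $t$, such a layered model asserts

$$I(\mathbf{x},t) \;=\; L_B\big(\mathbf{x}-\mathbf{d}_B(t)\big) \;+\; L_A\big(\mathbf{x}-\mathbf{d}_A(t)\big),$$

so that each layer translates rigidly between frames and the two densities sum to the observed image density.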
Boutenko et al. [13] assumed that the structures of interest are darker than the surrounding immobile background and used a velocity-based segmentation to discriminate vessels from background in X-ray cardio-angiography images, exploiting the faster motion of the vessels relative to the background.
Chen et al. [14] modelled the background of a given region of interest using the temporal dynamics of its pixels in quantitative fluorescence imaging of bulk-stained tissue. They modelled the intensity dynamics of individual pixels of a region of interest and derived a statistical algorithm that minimizes background and noise to decompose the fluorescent intensity of each pixel into background and stained-tissue contributions.
A simulation and analysis framework to study membrane trafficking in fluorescence video microscopy was proposed by Boulanger et al. [24]. They designed time-varying background models for fluorescence images and proposed statistical methods for estimating the model parameters. The method decides whether each image point belongs to the background or to a moving object.
Several segmentation and tracking methods have been proposed for a broad range of biomedical applications, each introducing a different method to segment and/or track specific biological material based on assumptions specific to the problem [25-28].
Cheng et al. used shape markers to separate clustered nuclei in fluorescence microscopy cellular images with a watershed-like algorithm [25]. Shape markers were extracted using the H-minima transform. A marking function was introduced to separate clustered nuclei, while a geometric active contour was used for the initial segmentation.
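As a rough sketch of the general technique named above (H-minima shape markers feeding a marker-controlled watershed), and not a reimplementation of the method in [25], one could proceed as follows; the function name and the depth parameter h are our own choices.

```python
# Illustrative sketch of H-minima markers + marker-controlled watershed,
# not the algorithm of [25].
import numpy as np
from scipy import ndimage as ndi
from skimage.morphology import h_minima
from skimage.segmentation import watershed

def split_clustered_nuclei(binary_mask, h=2.0):
    """Split touching nuclei in a binary mask.

    binary_mask: boolean array where True marks nuclei pixels.
    h: depth threshold for the H-minima transform; shallow minima (depth < h)
       are suppressed so that each nucleus ideally contributes one marker.
    """
    # Distance to the background; its negation has one deep minimum per nucleus.
    distance = ndi.distance_transform_edt(binary_mask)
    surface = -distance

    # H-minima transform keeps only minima deeper than h, giving shape markers.
    marker_mask = h_minima(surface, h)
    markers, _ = ndi.label(marker_mask)

    # Marker-controlled watershed on the inverted distance, restricted to the mask.
    return watershed(surface, markers=markers, mask=binary_mask)
```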
Gudla et al. proposed a region-growing method for the segmentation of clustered and isolated nuclei in fluorescence images [26]. They used a wavelet-based approach and multi-scale entropy-based thresholding for contrast enhancement. They first over-segmented the nuclei and then merged neighboring regions into single or clustered nuclei based on area, followed by automatic multistage classification.
A semi-automatic mean-shift-based method for tracking migrating cell trajectories in in vitro phase-contrast video microscopy was proposed by Debeir et al. [28]. The method relies on mean-shift principles and adaptive combinations of linked kernels, and was used to handle different grey-level configurations. It required manual initialization of the cell centroids on the first frame, did not use temporal filtering or time-dependent features, and did not provide precise information on cell boundaries and shapes.
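For illustration of plain mean-shift centroid tracking only (a single flat circular kernel applied to inverted intensities), not the adaptive linked-kernel scheme of [28]; the assumption that cells appear darker than the background, and all names and parameters below, are ours.

```python
# Illustrative sketch of a basic mean-shift update for one cell centre.
import numpy as np

def mean_shift_centroid(frame, centre, radius=10, max_iter=20, tol=0.5):
    """Shift a cell-centre estimate toward the local intensity-weighted centroid.

    frame:  2-D grayscale image; cells are assumed darker than the background,
            so weights are inverted intensities (an assumption, see lead-in).
    centre: (row, col) initial guess, e.g. a manual click on the first frame.
    """
    weights_img = frame.max() - frame.astype(np.float64)   # dark cells -> high weight
    rows, cols = np.indices(frame.shape)
    centre = np.asarray(centre, dtype=np.float64)
    for _ in range(max_iter):
        # Flat circular kernel: pixels within `radius` of the current centre.
        mask = (rows - centre[0]) ** 2 + (cols - centre[1]) ** 2 <= radius ** 2
        w = weights_img[mask]
        if w.sum() == 0:
            break
        new_centre = np.array([np.average(rows[mask], weights=w),
                               np.average(cols[mask], weights=w)])
        converged = np.linalg.norm(new_centre - centre) < tol
        centre = new_centre
        if converged:
            break
    return centre
```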
Most tracking problems have an implicit, nonparametric model of the background to avoid making assumptions regarding the foreground. By developing a model for the background it is possible to find a classifier that labels each image pixel as background/not background; i.e., the foreground is identified as that which is not background. In contrast, the more focussed context of our cell tracking problem admits an explicit model of the foreground. Because of the low SNR of our problem, where illumination is limited to minimize cell phototoxicity, it is desired to remove all deterministic non-cell variations in the image (i.e., the background) before localizing the cells.
Some earlier works have integrated foreground detection and background estimation in a joint framework; however, most previous methods classify each pixel as either foreground or background, their goal being the general segmentation of dynamic objects with no assumptions regarding the foreground. In contrast, our goal is the localization of foreground objects, given specific assumptions that are integrated in the form of a foreground model.
In our proposed method, rather than classifying each pixel as either foreground or background, we estimate a single global background and detect foreground objects as whole objects (not pixel by pixel). Our method addresses foreground detection and background estimation as inter-related processes and takes advantage of this inter-relation to improve the performance of cell detection. In the proposed algorithm, the background elements are removed from the scene frame by frame using a spatio-temporal background estimator, while a probabilistic cell model is applied to the image sequence to localize cell centers. The spatio-temporal estimator has been applied to estimate the background in phase-contrast image sequences taken from living Hematopoietic (blood) Stem Cells in culture, and leads to substantial improvements in cell localization and cell outline detection.
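To make the structure of such a pipeline concrete, the sketch below pairs a simple running spatio-temporal background estimate (temporal exponential blending followed by spatial Gaussian smoothing) with a generic blob detector standing in for the probabilistic cell model. It illustrates the shape of the pipeline only; it is not the estimator or cell model proposed in this paper, and all parameter values and the bright-cell assumption are arbitrary choices of ours.

```python
# Illustrative pipeline sketch: running spatio-temporal background estimate
# followed by blob-based cell-centre localization. Not the proposed method.
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import blob_log

def update_background(background, frame, alpha=0.05, spatial_sigma=5.0):
    """Running spatio-temporal background estimate.

    Temporal: exponential blending of successive frames (weight alpha).
    Spatial: Gaussian smoothing so that compact, cell-sized structures are
    attenuated while slowly varying illumination and well structure remain.
    """
    blended = (1.0 - alpha) * background + alpha * frame
    return ndi.gaussian_filter(blended, sigma=spatial_sigma)

def localize_cells(frame, background, cell_radius=8.0, threshold=0.05):
    """Remove the background and localize roughly circular cells.

    Assumes cells appear bright in the background-removed residual; dark cells
    would require inverting the residual first.
    """
    residual = np.clip(frame.astype(np.float64) - background, 0, None)
    residual /= residual.max() + 1e-12            # normalize for thresholding
    sigma = cell_radius / np.sqrt(2.0)            # LoG scale matched to cell radius
    blobs = blob_log(residual, min_sigma=sigma, max_sigma=sigma,
                     num_sigma=1, threshold=threshold)
    return blobs[:, :2]                            # (row, col) cell-centre estimates

# Usage on a sequence of frames:
# frames = ...                                     # (T, H, W) phase-contrast frames
# bg = frames[0].astype(np.float64)
# for frame in frames[1:]:
#     bg = update_background(bg, frame)
#     centres = localize_cells(frame, bg)
```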