Joint level-set and spatio-temporal motion detection for cell segmentation
- Fatima Boukari^{1} and
- Sokratis Makrogiannis^{1}
https://doi.org/10.1186/s12920-016-0206-5
© The Author(s) 2016
Published: 10 August 2016
Abstract
Background
Cell segmentation is a critical step for quantification and monitoring of cell cycle progression, cell migration, and growth control to investigate cellular immune response, embryonic development, tumorigenesis, and drug effects on live cells in time-lapse microscopy images.
Methods
In this study, we propose a joint spatio-temporal diffusion and region-based level-set optimization approach for moving cell segmentation. Moving regions are initially detected in each set of three consecutive sequence images by numerically solving a system of coupled spatio-temporal partial differential equations. In order to standardize intensities of each frame, we apply a histogram transformation approach to match the pixel intensities of each processed frame with an intensity distribution model learned from all frames of the sequence during the training stage. After the spatio-temporal diffusion stage is completed, we compute the edge map by nonparametric density estimation using Parzen kernels. This process is followed by watershed-based segmentation and moving cell detection. We use this result as an initial level-set function to evolve the cell boundaries, refine the delineation, and optimize the final segmentation result.
Results
We applied this method to several datasets of fluorescence microscopy images with varying levels of difficulty with respect to cell density, resolution, contrast, and signal-to-noise ratio. We compared the results with those produced by Chan and Vese segmentation, a temporally linked level-set technique, and nonlinear diffusion-based segmentation. We validated all segmentation techniques against reference masks provided by the international Cell Tracking Challenge consortium. The proposed approach delineated cells with an average Dice similarity coefficient of 89 % over a variety of simulated and real fluorescent image sequences. It yielded average improvements of 11 % in segmentation accuracy compared to both strictly spatial and temporally linked Chan-Vese techniques, and 4 % compared to the nonlinear spatio-temporal diffusion method.
Conclusions
Despite the wide variation in cell shape, density, mitotic events, and image quality among the datasets, our proposed method produced promising segmentation results. These results indicate the efficiency and robustness of this method especially for mitotic events and low SNR imaging, enabling the application of subsequent quantification tasks.
Background
Cell identification, quantification and characterization using imaging techniques are emerging research areas that are systematically integrated in biological and medical studies [1]. Recent developments in time-lapse microscopy enable the observation and quantification of cell-cycle progression, cell migration, and growth control [2]. The tasks of detecting and tracking individual cells or particles in a time series of images are key elements in this process. More importantly, the large volume of data produced by fluorescence microscopy and imaging modalities emphasizes the need for automated and robust techniques that can address the challenges in accurate detection and segmentation as well as tracking.
Cell tracking methodologies involve the tasks of preprocessing, cell segmentation, and motion tracking [3–9]. In this context, segmentation of cells is a particularly challenging task that has a direct impact on the overall quantification process. Image segmentation is a well-studied field of image analysis. More specifically, parametric [10] and nonparametric active contour models [11–14] have been widely used in the development of bio-imaging and biomedical image analysis techniques. An interesting aspect of cell analysis methods is the relation between image quality and segmentation accuracy. Many segmentation methods address certain types of datasets; however, for low-quality images and different cell types and shapes, the same methods may yield varying levels of performance.
Earlier published works propose partial differential equation (PDE) models of heat diffusion to detect motion, with applications to moving-edge detection [15] and human assistive technologies [16]. Building upon these ideas for estimating motion activity using spatio-temporal diffusion [16], in this work we develop a heat flow analogy model in the joint spatio-temporal domain and combine this process with a region-based level-set optimization approach for segmenting cells in fluorescence microscopy images. High activity regions are initially detected by numerically solving a system of coupled spatio-temporal nonlinear partial differential diffusion equations on three consecutive frames. To make the parameter choice more stable, we apply a histogram transformation that matches the reference background threshold from the intensity distribution of each set of three consecutive frames to an intensity distribution model learned from all frames of the sequence during the training stage. After this step, every frame of the video sequence is scaled and transformed so that all backgrounds have the same order of magnitude, which makes the parameter choice more robust and less sensitive. The spatial and temporal motion parameters of our model are then estimated for each dataset, and an optimal Parzen bandwidth parameter is experimentally determined for density estimation of edges and outliers in each dataset. After the spatio-temporal diffusion stage is completed, we compute the edge map by nonparametric density estimation using Parzen kernels. This process is followed by watershed-based segmentation to detect the moving cells.
Next, adjacent regions with coherent motion are merged into a single moving cell by mean-intensity thresholding of these regions; thresholding determines the boundary of each cell. Finally, the resulting delineation curve is used as the initial level set and refined by a region-based process to produce the final segmentation. We validated the joint approach, denoted ST-Diff-TCV, over a set of sequences against reference data, and compared its segmentation accuracy with results derived from Chan-Vese (CV) segmentation [17], a temporally linked level-set method that we recently presented [18], denoted TCV, and spatio-temporal diffusion-based segmentation alone (ST-Diff). Our method accurately detects fluorescent cells at an average Dice coefficient of 89 %, a clear improvement over region-based level-set segmentation with and without temporal linking, and over nonlinear diffusion-based segmentation. In addition, it can detect and segment newly appearing cells, it recovers cells that are hardly detectable by mean intensity, and it produces accurate results for both high and low cell densities. In particular, it detects cells that region-based CV segmentation cannot: because the CV optimization criterion is defined by the mean intensity inside and outside the evolving curve, cell regions with low intensity values are treated as background, and the region competition process fails to delineate them. The spatio-temporal motion detection method, by contrast, identifies these regions through their high temporal activity rather than their intensity values; the CV model then refines the detected cell boundaries.
The structure of this paper is organized as follows: In Section ‘Methods’ we introduce the frame intensity standardization approach by histogram transformation, the spatio-temporal diffusion-based technique, followed by the detection of spatio-temporal discontinuities by Parzen density estimation, cell delineation and identification from background, followed by the region-based level-set model with temporal linking, and finally the joint spatio-temporal diffusion and temporal level-set region-based method. Section ‘Results and discussion’ describes the dataset properties, image quality assessment, performance validation with comparisons between CV, TCV, ST-Diff, and ST-Diff-TCV, followed by discussion of results. Finally, in the conclusion we summarize the main observations about the advantages of this method and perspectives for future work.
Methods
Frame intensity standardization by histogram transformation
We experimentally found that the matched frames are less sensitive to the temporal, spatial diffusion parameters and Parzen kernel bandwidth values than the raw frames. In our experiments we used 256 bins for all datasets.
The following steps define the two algorithms that learn the CDF reference value for background intensity F_{AF}(I_{ref}) at the training stage (Algorithm 1) and transform every source image at the testing stage (Algorithm 2) so as to make its testing background as close as possible to the reference intensity.
We note that after the histogram transformation and scaling, all frames of the same sequence are going to have similar pixel intensity ranges.
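The two algorithms above can be sketched in numpy as follows. This is a minimal illustration, not the paper's exact procedure: the choice of the pooled median as the background proxy, the helper names, and the final scaling rule are assumptions for the sketch.

```python
import numpy as np

def learn_reference(frames, bg_quantile=0.5, bins=256):
    """Algorithm 1 (sketch): pool all training frames and record the CDF
    value F_AF(I_ref) of a chosen reference background intensity.
    Here I_ref is taken as the pooled median, a background proxy."""
    pooled = np.concatenate([f.ravel() for f in frames])
    hist, edges = np.histogram(pooled, bins=bins)
    cdf = np.cumsum(hist) / hist.sum()
    i_ref = np.quantile(pooled, bg_quantile)
    idx = np.clip(np.searchsorted(edges, i_ref, side='right') - 1, 0, bins - 1)
    return cdf[idx]

def standardize_frame(frame, f_ref, bins=256):
    """Algorithm 2 (sketch): find the intensity whose empirical CDF in this
    frame equals f_ref, then scale the frame so that this background level
    maps to a common value across the sequence."""
    hist, edges = np.histogram(frame, bins=bins)
    cdf = np.cumsum(hist) / hist.sum()
    k = np.searchsorted(cdf, f_ref)
    t_frame = edges[np.clip(k, 0, bins - 1)]   # frame-specific background level
    return frame * (1.0 / max(t_frame, 1e-12)), t_frame
```

After this transformation, frames acquired with different gains end up with comparable background magnitudes, which is what makes a single set of diffusion and bandwidth parameters usable across the sequence.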
Spatio-temporal diffusion
Perona-Malik anisotropic diffusion
The Perona-Malik model [21] evolves the image by the nonlinear diffusion equation \(\partial I/\partial t = \operatorname{div}\!\left(g(|\nabla I|)\,\nabla I\right)\), where k denotes the conductance parameter of the diffusivity g, a positive constant. The anisotropic diffusion method has been used extensively for image restoration, as it largely preserves image edge features.
Moving regions are initially detected in each set of three consecutive frames by numerically solving the spatio-temporal partial differential diffusion equation [18], where the diffusivity function is applied to the gradient magnitude of the image I. In this work we used the function (9), which is more suitable for region-oriented applications [22]. This nonlinear diffusion is bound to the gradient magnitude [23]: it applies more diffusion in uniform regions and slows down at edges, therefore preserving high-contrast edges over low-contrast ones.
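One explicit Perona-Malik iteration can be sketched as below. The rational diffusivity \(g(s)=1/(1+(s/k)^2)\) is used here as a stand-in; the paper's function (9) may differ, and the parameter values are illustrative only.

```python
import numpy as np

def g_rational(s, k):
    # Rational diffusivity: near 1 in uniform regions (strong smoothing),
    # small where |grad| >> k (edges preserved). A common region-oriented
    # choice; the paper's eq. (9) may be a different function.
    return 1.0 / (1.0 + (s / k) ** 2)

def perona_malik_step(img, k=0.1, lam=0.2):
    """One explicit Perona-Malik update using 4-neighbour differences;
    replicate padding gives a zero-gradient boundary condition."""
    p = np.pad(img, 1, mode='edge')
    dN = p[:-2, 1:-1] - img
    dS = p[2:, 1:-1] - img
    dW = p[1:-1, :-2] - img
    dE = p[1:-1, 2:] - img
    return img + lam * (g_rational(np.abs(dN), k) * dN +
                        g_rational(np.abs(dS), k) * dS +
                        g_rational(np.abs(dW), k) * dW +
                        g_rational(np.abs(dE), k) * dE)
```

Iterating this step smooths noise inside uniform regions while the large gradient across a cell boundary keeps the diffusivity, and hence the flux, small there.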
Spatio-temporal nonlinear diffusion
Partial differential equation model
Here we propose to simulate nonlinear heat flow through the processed frames in both the spatial and temporal dimensions. This operation smooths out the background regions and simultaneously preserves the spatio-temporal discontinuities corresponding to cells. More specifically, given three consecutive frames of the sequence at times {t−1,t,t+1}, we define a system of three coupled PDEs, one for each frame.
At time points τ={t−1,t,t+1}
Initial condition
Boundary condition
Numerical solution
We used the finite difference method to solve the system (10) on a 2D square grid lattice. We applied padding by replicating the pixel intensities at the image borders to obtain a zero gradient at the boundaries. This enables the detection and localization of motion within each set of three consecutive frames.
At t
At t−1
At t+1
In (13), (14), and (15), λ_s, λ_t, λ_tPF, and λ_tNF denote the numerical "time" steps for the spatial, temporal, previous-frame temporal, and next-frame temporal terms, respectively. In our implementation we set λ_t = TS_Ratio·λ_s and λ_tPF = λ_tNF, where TS_Ratio is a fixed parameter for the ratio of temporal to spatial diffusion. The diffusivity function is applied to the gradient magnitude of the image I.
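The structure of one explicit update of the coupled system can be sketched as follows. This assumes a rational diffusivity and generic temporal coupling terms; the exact terms of (13)–(15) are given by the paper's equations, so treat this only as an illustration of the scheme.

```python
import numpy as np

def g(s, k):
    # rational diffusivity used as a stand-in for the paper's function
    return 1.0 / (1.0 + (s / k) ** 2)

def spatial_flux(img, k):
    """Sum of diffusivity-weighted 4-neighbour differences
    (replicate padding = zero-gradient boundary)."""
    p = np.pad(img, 1, mode='edge')
    total = np.zeros_like(img)
    for nb in (p[:-2, 1:-1], p[2:, 1:-1], p[1:-1, :-2], p[1:-1, 2:]):
        d = nb - img
        total += g(np.abs(d), k) * d
    return total

def st_diffusion_step(prev, cur, nxt, k=0.1, lam_s=0.2, ts_ratio=0.5):
    """One explicit update of the three coupled PDEs (sketch of (13)-(15)):
    every frame diffuses spatially; the centre frame is also coupled to both
    temporal neighbours, the outer frames to the centre; lam_t = TS_Ratio*lam_s."""
    lam_t = ts_ratio * lam_s
    def t_flux(a, b):                       # temporal coupling term a <- b
        d = b - a
        return g(np.abs(d), k) * d
    cur_new  = cur  + lam_s * spatial_flux(cur, k)  \
                    + lam_t * (t_flux(cur, prev) + t_flux(cur, nxt))
    prev_new = prev + lam_s * spatial_flux(prev, k) + lam_t * t_flux(prev, cur)
    nxt_new  = nxt  + lam_s * spatial_flux(nxt, k)  + lam_t * t_flux(nxt, cur)
    return prev_new, cur_new, nxt_new
```

Where the three frames agree, the temporal terms vanish and only spatial smoothing acts; where an object has moved, the temporal coupling leaves a residual discontinuity that the subsequent edge-detection step picks up.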
Detection of spatio-temporal discontinuities by Parzen density estimation
The idea is to estimate the likelihood of mean intensity in the neighborhood of each pixel in the diffused frame. Assuming a model of unimodal probability density function (PDF) for region interiors and bimodal PDF for edges, we use the likelihood of mean intensity as an index of edge occurrence. Low values of this index correspond to a bimodal PDF indicating an edge. We estimate this likelihood by the nonparametric technique of Parzen kernels [24–26].
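A minimal numpy sketch of this index is shown below. The Gaussian Parzen kernel, window size, and bandwidth are illustrative assumptions; the paper tunes the bandwidth per dataset.

```python
import numpy as np

def parzen_edge_index(img, win=5, h=0.1):
    """Edge index (sketch): for each pixel, Parzen-estimate the density of
    the intensities in its (win x win) neighbourhood with a Gaussian kernel
    of bandwidth h, and evaluate it at the neighbourhood mean. A bimodal
    neighbourhood (an edge) yields a low density at the mean; a unimodal
    (homogeneous) neighbourhood yields a high one."""
    r = win // 2
    p = np.pad(img, r, mode='edge')
    H, W = img.shape
    out = np.empty((H, W), dtype=float)
    norm = 1.0 / (h * np.sqrt(2.0 * np.pi))
    for i in range(H):
        for j in range(W):
            nb = p[i:i + win, j:j + win].ravel()
            m = nb.mean()
            out[i, j] = norm * np.mean(np.exp(-0.5 * ((m - nb) / h) ** 2))
    return out
```

Inverting (or thresholding low values of) this likelihood map gives the edge map that feeds the watershed stage.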
Cell delineation and identification
The edge map can be interpreted as a topographic surface whose valleys correspond to spatio-temporally homogeneous areas and whose peaks denote spatio-temporal discontinuities. The next step is watershed segmentation. Watershed analysis emerged from mathematical morphology and was implemented by a series of morphological operators in its early versions; since then, several implementations have appeared, proposing iterative, sequential, arrowing, flow-line-oriented, and flooding techniques [27]. We first invert the stochastic map produced by Parzen density estimation to form regions separated by spatio-temporal discontinuities. The watershed transform then divides this map into separate regions by identifying the regional minima of the topographic relief and flooding each minimum to fill the watershed basins; each basin corresponds to one region in the resulting segmentation.
To separate the cells, we calculate the intensities and areas of the watershed regions and classify them into cells or background using area and intensity prior information and likelihoods p(area | c_i) and p(I | c_i) in Gaussian form, where c_i ∈ {background, cell}. Adjacent watershed regions with coherent motion should be merged to form one moving object. We compute the mean intensity over each watershed region and classify it as foreground or background using as threshold the standardized reference value T(I_ref) calculated by (6).
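The per-region mean-intensity decision above can be sketched with numpy label arithmetic; this assumes a watershed label image is already available and uses only the intensity threshold T(I_ref), not the Gaussian area/intensity likelihoods.

```python
import numpy as np

def classify_regions(labels, img, t_ref):
    """Merge watershed regions into cells (sketch): compute each region's
    mean intensity via bincount and keep it as foreground when the mean
    exceeds the standardized reference threshold t_ref (the paper's T(I_ref)).
    Adjacent foreground regions merge implicitly in the binary mask."""
    flat_l, flat_i = labels.ravel(), img.ravel()
    counts = np.bincount(flat_l)
    sums = np.bincount(flat_l, weights=flat_i)
    means = sums / np.maximum(counts, 1)
    is_cell = means > t_ref          # one decision per watershed label
    return is_cell[labels]           # broadcast back to a binary cell mask
```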
Region-based level-set model with temporal linking
In contrast to edge-based methods such as classical snakes [10] and early level-set methods [11], where an edge detector is used to stop the evolving curve, region-based methods tend to be less sensitive to noise. The use of region-based statistics may prove advantageous for images characterized by edge discontinuity and higher levels of noise. The Chan-Vese (CV) method [17] is a region-based active contour model for energy minimization. Here, we briefly describe the theoretical background of the Chan-Vese model and its minimization framework. This model is a special case of the Mumford-Shah functional [28] for segmentation using a piecewise constant approximation.
Temporally linked level-set segmentation
where n is the frame number in the time-lapse sequence, and i_final is the number of iterations required for convergence on frame n. We take the contour result of each frame as the initial contour for the next one; these results are used to minimize the energy functional of the next image. If the segmentation of frame n is accurate, this initialization corresponds to a point close to the global optimum of the energy functional for frame n+1. The main steps of this technique are summarized in Algorithm 3.
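The temporal linking can be illustrated with a heavily simplified Chan-Vese evolution. This is a sketch, not the paper's implementation: the curvature regularization term is replaced by cheap neighbour averaging, and all parameter values are illustrative.

```python
import numpy as np

def cv_evolve(img, phi, iters=200, dt=0.5, eps=1.0, smooth=0.2):
    """Simplified Chan-Vese evolution (sketch): region means c1 (inside)
    and c2 (outside) drive the level set phi; the curvature term of the
    full model is approximated by neighbour averaging for brevity."""
    phi = phi.astype(float).copy()
    for _ in range(iters):
        H = 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(phi / eps))  # smooth Heaviside
        c1 = (img * H).sum() / max(H.sum(), 1e-9)               # mean inside
        c2 = (img * (1.0 - H)).sum() / max((1.0 - H).sum(), 1e-9)  # mean outside
        delta = (eps / np.pi) / (eps ** 2 + phi ** 2)           # smooth Dirac
        phi += dt * delta * (-(img - c1) ** 2 + (img - c2) ** 2)
        avg = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
               np.roll(phi, 1, 1) + np.roll(phi, -1, 1)) / 4.0
        phi = (1.0 - smooth) * phi + smooth * avg               # regularisation
    return phi

def tcv_sequence(frames, phi0):
    """Temporal linking (Algorithm 3 sketch): the converged level set of
    frame n initialises the evolution on frame n+1."""
    phi, masks = phi0, []
    for f in frames:
        phi = cv_evolve(f, phi)
        masks.append(phi > 0)
    return masks
```

Because consecutive frames differ little, the warm-started evolution typically needs far fewer iterations than a fresh initialization would.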
Joint spatio-temporal diffusion and temporally linked level-set approach (ST-Diff-TCV)
We propose a joint method combining the S-T differential information with the high delineation accuracy that characterizes level-set-based segmentation [15, 17]. More specifically, we use S-T diffusion to delineate the cells first, then initialize TCV with the S-T diffusion result to refine the cell segmentation. We apply the S-T diffusion technique on every k-th frame to address cell events that may not be handled by TCV, such as mitosis, cell division, new cells entering the field of view, and other cases.
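The control flow of the joint method can be sketched as a small driver; the two stage functions are injected as parameters here, and the re-initialization period k is a per-dataset choice.

```python
def st_diff_tcv(frames, st_diff_segment, tcv_refine, k=10):
    """Joint pipeline (sketch): every k-th frame is re-segmented with the
    S-T diffusion detector to catch mitoses and newly entering cells; the
    result initialises the temporally linked level-set refinement, which
    then propagates its own output between re-initialisations."""
    masks, current = [], None
    for n, frame in enumerate(frames):
        if n % k == 0 or current is None:
            current = st_diff_segment(frame)     # motion-based (re)initialisation
        current = tcv_refine(frame, current)     # region-based refinement
        masks.append(current)
    return masks
```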
Results and discussion
Data description
Table 1 Image sequence properties and quality using Signal-to-Noise Ratio (SNR) and Contrast-to-Noise Ratio (CNR)
| Dataset name | Average SNR ± std | Average CNR ± std | Level of difficulty |
|---|---|---|---|
| N2DH-SIM01 | 21.53 ± 0.69 | 7.96 ± 0.95 | Medium: different noise levels, cell density of the initial population, and number of simulated mitotic events. |
| N2DH-SIM02 | 22.21 ± 0.65 | 8.35 ± 0.96 | |
| N2DH-SIM03 | 18.59 ± 0.47 | 4.21 ± 0.47 | |
| N2DH-SIM04 | 18.97 ± 0.47 | 4.09 ± 0.49 | |
| N2DH-SIM05 | 19.49 ± 0.54 | 4.22 ± 0.60 | |
| N2DH-SIM06 | 21.92 ± 0.54 | 7.90 ± 0.78 | |
| C2DL-MSC01 | 14.67 ± 0.67 | 2.11 ± 0.36 | High: low SNR; cell stretchings appear as discontinuous extensions of the cells. |
| C2DL-MSC02 | 15.09 ± 2.31 | 4.47 ± 1.49 | |
| N2DL-HeLa01 | 26.60 ± 3.41 | 19.23 ± 7.67 | High: high cell density, low resolution, frequent mitoses (normal and abnormal). |
| N2DL-HeLa02 | 16.02 ± 1.67 | 5.40 ± 1.08 | |
| N2DH-GOWT101 | 22.47 ± 0.49 | 12.62 ± 0.77 | Medium: heterogeneous staining, prominent nuclei, mitoses, cells entering and leaving the field of view. |
| N2DH-GOWT102 | 18.91 ± 0.92 | 8.32 ± 0.91 | |
Image quality assessment of the datasets
where \(\bar{u}_{C}\) is the average image intensity over the cell regions, \(\bar{u}_{B}\) is the average intensity over the background, and \(\sigma_{B}\) is the standard deviation of the background pixels. In Table 1 we list the average SNR (in dB) and average CNR, computed with (25) and (26) as means over all frames in each sequence, together with the corresponding standard deviations for each dataset. A comparison between the qualitative level of difficulty and the image quality metrics in Table 1 shows that the simulated sequences have higher SNR and CNR and are therefore more amenable to segmentation than the real sequences.
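Given a cell mask, the two quality metrics can be computed as below. The decibel normalization \(20\log_{10}\) and the CNR form \((\bar{u}_C-\bar{u}_B)/\sigma_B\) are the common definitions consistent with the symbols above; the paper's equations (25) and (26) may differ in detail.

```python
import numpy as np

def snr_cnr(img, cell_mask):
    """SNR (dB) and CNR under common definitions (sketch of eqs. 25-26):
    SNR = 20*log10(mean cell intensity / background std),
    CNR = (mean cell - mean background) / background std."""
    bg = img[~cell_mask]
    u_c = img[cell_mask].mean()          # mean intensity over cells
    u_b, s_b = bg.mean(), bg.std()       # background mean and std
    snr = 20.0 * np.log10(u_c / max(s_b, 1e-12))
    cnr = (u_c - u_b) / max(s_b, 1e-12)
    return snr, cnr
```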
Comparison of CV, TCV, ST-Diff, and ST-Diff-TCV methods
We applied the standard CV, TCV, ST-Diff, and ST-Diff-TCV methods to the 12 time-lapse fluorescent microscopy datasets listed in Table 1. Fluorescent microscopy imaging is often subject to a mixture of different types of noise. The main goal of a preprocessing step is to reduce the corruption caused by noise and to improve image quality [28]. To facilitate data analysis, a combination of filters and histogram enhancement is applied to the datasets to obtain better delineation accuracy.
where R_Ref is the set of all pixels that belong to cell regions in the reference image, and R_S is the set of all pixels in the binary regions delineated by the tested segmentation technique. The Dice coefficient measures the relative overlap between two binary images normalized by their cardinalities. It is frequently used for validating image segmentation; a value of 1 indicates a perfect match.
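The Dice similarity coefficient, \(\mathrm{DSC} = 2|R_{Ref}\cap R_S| / (|R_{Ref}|+|R_S|)\), is straightforward to compute on binary masks:

```python
import numpy as np

def dice(ref_mask, seg_mask):
    """Dice similarity coefficient between two binary masks:
    2 * |intersection| / (|ref| + |seg|); 1.0 for a perfect match."""
    inter = np.logical_and(ref_mask, seg_mask).sum()
    total = ref_mask.sum() + seg_mask.sum()
    return 2.0 * inter / total if total else 1.0
```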
Table 2 The mean Dice coefficient obtained from segmentation of each sequence by CV, TCV, ST-Diff, and the joint ST-Diff-TCV method
| Dataset name | Size | Frames | CV | TCV | ST-Diff | ST-Diff-TCV |
|---|---|---|---|---|---|---|
| N2DH-SIM01 | 494×534 | 56 | 0.92 ± 0.02 | 0.87 ± 0.03 | 0.92 ± 0.02 | 0.93 ± 0.02 |
| N2DH-SIM02 | 569×593 | 100 | 0.78 ± 0.31 | 0.86 ± 0.03 | 0.88 ± 0.02 | 0.94 ± 0.02 |
| N2DH-SIM03 | 606×605 | 100 | 0.94 ± 0.03 | 0.92 ± 0.02 | 0.92 ± 0.02 | 0.94 ± 0.02 |
| N2DH-SIM04 | 673×743 | 56 | 0.88 ± 0.02 | 0.86 ± 0.02 | 0.91 ± 0.01 | 0.93 ± 0.01 |
| N2DH-SIM05 | 597×525 | 76 | 0.70 ± 0.40 | 0.92 ± 0.04 | 0.88 ± 0.01 | 0.94 ± 0.01 |
| N2DH-SIM06 | 655×735 | 76 | 0.83 ± 0.27 | 0.92 ± 0.02 | 0.88 ± 0.01 | 0.96 ± 0.01 |
| C2DL-MSC01 | 992×832 | 48 | 0.74 ± 0.08 | 0.59 ± 0.02 | 0.67 ± 0.03 | 0.76 ± 0.03 |
| C2DL-MSC02 | 1200×782 | 48 | 0.58 ± 0.09 | 0.74 ± 0.08 | 0.62 ± 0.03 | 0.81 ± 0.03 |
| N2DL-HeLa01 | 1100×700 | 92 | 0.72 ± 0.01 | 0.68 ± 0.02 | 0.80 ± 0.01 | 0.82 ± 0.01 |
| N2DL-HeLa02 | 1100×700 | 92 | 0.84 ± 0.01 | 0.67 ± 0.03 | 0.87 ± 0.01 | 0.87 ± 0.01 |
| N2DH-GOWT101 | 1024×1024 | 92 | 0.77 ± 0.01 | 0.71 ± 0.03 | 0.91 ± 0.03 | 0.90 ± 0.02 |
| N2DH-GOWT102 | 1024×1024 | 92 | 0.68 ± 0.02 | 0.67 ± 0.03 | 0.91 ± 0.02 | 0.92 ± 0.02 |
| Mean DSC | | | 0.78 | 0.78 | 0.85 | 0.89 |
In some of our datasets, the fluorescent images contain low-intensity nuclei, where local absorption rather than fluorescence results in heterogeneous cell intensity levels. These darker regions can be mis-detected as background, and as a result the region competition process fails to delineate them. However, these cells are identified by the spatio-temporal motion analysis method because of their high temporal activity, which proves more effective in these cases. For example, we observed that CV was unable to detect some cells with very low intensity in both the N2DL-HeLa and N2DH-GOWT1 sequences. Conversely, those cells were delineated very well using the temporal differences between frames, i.e., by ST-Diff-TCV, significantly improving the segmentation accuracy from 0.72 to 0.82 for N2DL-HeLa01 and from 0.68 to 0.92 for N2DH-GOWT102, a DSC improvement of up to 24 %.
On the other hand, the proposed technique involves several free parameters – the ratio TS_Ratio of temporal to spatial diffusion, the temporal step λ_t, and the Parzen kernel bandwidth – which are experimentally determined for each sequence. ST-Diff-TCV performance exhibits moderate sensitivity to the parameter values. In this work we performed an exhaustive grid search in the parameter space to identify the optimal settings. Alternative parameter optimization techniques may be required to achieve more accurate segmentation in sequences with significantly different quality levels and cell types. In summary, our experiments suggest that the joint ST-Diff-TCV method improves the segmentation accuracy compared to CV, TCV, and ST-Diff, especially when applied to simulated and real microscopy images with cells characterized by wide intensity variations and undergoing mitotic events, changes in density, and low SNR.
Conclusions
In this work, we introduced a local-global co-operative approach to dynamic cell segmentation. One component of this approach performs nonlinear spatio-temporal diffusion-based motion analysis, Parzen kernel-based detection of discontinuities, and watershed-based foreground-background separation. This local-based segmentation part generates a delineation that we use as the initial level-set in a region-based temporally linked level-set model. The improvement in segmentation accuracy is mainly achieved by using both the local motion and the global statistical information for segmenting cells with heterogeneous intensity levels. We evaluated the performance of our approach denoted by ST-Diff-TCV, two level-set based methods denoted by CV and TCV, and ST-Diff methods on datasets obtained from the online Cell Tracking challenge [29]. Every dataset addresses a different type of challenge for segmentation.
In comparison to CV and TCV, both ST-Diff and ST-Diff-TCV perform more robust cell segmentation, especially for cells undergoing mitosis, cells leaving and entering the field of view, and cells with mean intensity lower than the background level. ST-Diff-TCV further improves the segmentation accuracy over ST-Diff by refining the cell delineation. Still, the method depends on some parameter retuning to optimize segmentation accuracy for different types of imaging sequences. This approach is beneficial for quantification of a wide range of image sequence types. Future goals are to compute cell features representing cell morphology for classification and tracking.
Declarations
Acknowledgements
We acknowledge the support by the Center for Research and Education in Optical Sciences and Applications (CREOSA) in Delaware State University funded by NSF CREST HRD-1242067. Research reported in this publication was also supported by the National Institute of General Medical Sciences of the National Institutes of Health under Award Number SC3GM113754. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
This article has been published as part of BMC Medical Genomics Vol 9 Suppl 2 2016: Selected articles from the IEEE International Conference on Bioinformatics and Biomedicine 2015: medical genomics. The full contents of the supplement are available online at http://bmcmedgenomics.biomedcentral.com/articles/supplements/volume-9-supplement-2.
Funding
Publication charges for this article have been funded by the National Institutes of Health.
Availability of data and materials
The time-lapse microscopy videos and the reference datasets used in this paper are publicly available by the organizers of the cell tracking challenge at http://www.codesolorzano.com/celltrackingchallenge/Cell_Tracking_Challenge/Datasets.html.
Authors’ contributions
FB: Designed and implemented the algorithm, designed and performed validation experiments, interpreted the results and wrote the manuscript. SM: Contributed to designing and implementing the algorithm, designing the validation experiments, interpreting the results, and writing the manuscript. Both authors read and approved the final manuscript.
Competing interests
The authors declare that they have no competing interests.
Consent for publication
Not applicable.
Ethics approval and consent to participate
The experimental procedures involving human subjects and animal models described in this paper were approved by the Institutional Review Board and the Institutional Animal Care and Ethics Committee of the institutions that provided the data.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
References
- Eils R, Athale C. Computational imaging in cell biology. J Cell Biol. 2003; 161(3):477–81. doi:10.1083/jcb.200302097.
- Stephens DJ, Allan VJ. Light microscopy techniques for live cell imaging. Science. 2003; 300(5616):82–6. doi:10.1126/science.1082160.
- Yang X, Li H, Zhou X. Nuclei segmentation using marker-controlled watershed, tracking using mean-shift, and Kalman filter in time-lapse microscopy. IEEE Trans Circuits Syst I: Regular Papers. 2006; 53(11):2405–14. doi:10.1109/TCSI.2006.884469.
- Zhou X, Li F, Yan J, Wong STC. A novel cell segmentation method and cell phase identification using Markov model. IEEE Trans Inf Technol Biomed. 2009; 13(2):152–7. doi:10.1109/TITB.2008.2007098.
- Li F, Zhou X, Ma J, Wong STC. Multiple nuclei tracking using integer programming for quantitative cancer cell cycle analysis. IEEE Trans Med Imaging. 2010; 29(1):96–105. doi:10.1109/TMI.2009.2027813.
- Meijering E, Dzyubachyk O, Smal I. Methods for cell and particle tracking. Methods Enzymol. 2012; 504:183–200. doi:10.1016/B978-0-12-391857-4.00009-4.
- Chen C, Wang W, Ozolek JA, Rohde GK. A flexible and robust approach for segmenting cell nuclei from 2D microscopy images using supervised learning and template matching. Cytom Part A. 2013; 83A(5):495–507. doi:10.1002/cyto.a.22280.
- Maska M, Ulman V, Svoboda D, Matula P, Matula P, Ederra C, Urbiola A, Espana T, Venkatesan S, Balak DMW, Karas P, Bolckova T, Streitova M, Carthel C, Coraluppi S, Harder N, Rohr K, Magnusson KEG, Jalden J, Blau HM, Dzyubachyk O, Kizek P, Hagen GM, Pastor-Escuredo D, Jimenez-Carretero D, Ledesma-Carbayo MJ, Munoz-Barrutia A, Meijering E, Kozubek M, Ortiz-de-Solorzano C. A benchmark for comparison of cell tracking algorithms. Bioinformatics. 2014; 30(11):1609–17. doi:10.1093/bioinformatics/btu080.
- Faure E, Savy T, Rizzi B, Melani C, Stasova O, Fabreges D, Spir R, Hammons M, Cunderlik R, Recher G, Lombardot B, Duloquin L, Colin I, Kollar J, Desnoulez S, Affaticati P, Maury B, Boyreau A, Nief JY, Calvat P, Vernier P, Frain M, Lutfalla G, Kergosien Y, Suret P, Remesikova M, Doursat R, Sarti A, Mikula K, Peyrieras N, Bourgine P. A workflow to process 3D+time microscopy images of developing organisms and reconstruct their cell lineage. Nat Commun. 2016; 7. doi:10.1038/ncomms9674.
- Kass M, Witkin A, Terzopoulos D. Snakes: active contour models. Int J Comput Vis. 1988; 1(4):321–31. doi:10.1007/BF00133570.
- Malladi R, Sethian JA, Vemuri BC. Shape modeling with front propagation: a level set approach. IEEE Trans Pattern Anal Mach Intell. 1995; 17:158–75. doi:10.1109/34.368173.
- Cremers D, Rousson M, Deriche R. A review of statistical approaches to level set segmentation: integrating color, texture, motion and shape. Int J Comput Vis. 2006; 72(2):195–215.
- Dzyubachyk O, van Cappellen WA, Essers J, Niessen WJ, Meijering EHW. Advanced level-set-based cell tracking in time-lapse fluorescence microscopy. IEEE Trans Med Imaging. 2010; 29(3):852–67. doi:10.1109/TMI.2009.2038693.
- Nath SK, Palaniappan K, Bunyak F. Cell segmentation using coupled level sets and graph-vertex coloring. In: Larsen R, Nielsen M, Sporring J, editors. Medical Image Computing and Computer-Assisted Intervention – MICCAI 2006: 9th International Conference, Copenhagen, Denmark, October 1–6, 2006. Proceedings, Part I. Berlin, Heidelberg: Springer; 2006. p. 101–8.
- Direkoglu C, Nixon MS. Moving-edge detection via heat flow analogy. Pattern Recogn Lett. 2011; 32(2):270–9. doi:10.1016/j.patrec.2010.08.012.
- Makrogiannis SK, Bourbakis NG. Motion analysis with application to assistive vision technology. In: 16th IEEE International Conference on Tools with Artificial Intelligence, 2004. Piscataway, NJ: IEEE; 2004. p. 344–52. doi:10.1109/ICTAI.2004.89.
- Chan TF, Vese LA. Active contours without edges. IEEE Trans Image Process. 2001; 10(2):266–77. doi:10.1109/83.902291.
- Boukari F, Makrogiannis S. Spatio-temporal level-set based cell segmentation in time-lapse image sequences. In: Bebis G, Boyle R, Parvin B, Koracin D, McMahan R, Jerald J, Zhang H, Drucker S, Kambhamettu C, El Choubassi M, Deng Z, Carlson M, editors. Advances in Visual Computing. Lecture Notes in Computer Science, vol. 8888. Cham, Switzerland: Springer International; 2014. p. 41–50.
- Weickert J. Anisotropic Diffusion in Image Processing. ECMI Series. Stuttgart: Teubner; 1998. http://www.mia.uni-saarland.de/weickert/book.html.
- Black MJ, Sapiro G, Marimont DH, Heeger D. Robust anisotropic diffusion. IEEE Trans Image Process. 1998; 7(3):421–32. doi:10.1109/83.661192.
- Perona P, Malik J. Scale-space and edge detection using anisotropic diffusion. IEEE Trans Pattern Anal Mach Intell. 1990; 12(7):629–39. doi:10.1109/34.56205.
- You YL, Xu W, Tannenbaum A, Kaveh M. Behavioral analysis of anisotropic diffusion in image processing. IEEE Trans Image Process. 1996; 5(11):1539–53. doi:10.1109/83.541424.
- Tsiotsios C, Petrou M. On the choice of the parameters for anisotropic diffusion in image processing. Pattern Recognit. 2013; 46(5):1369–81. doi:10.1016/j.patcog.2012.11.012.
- Parzen E. On estimation of a probability density function and mode. Ann Math Stat. 1962; 33(3):1065–76.
- Izenman A. Recent developments in nonparametric density estimation. J Am Stat Assoc. 1991; 86:205–24.
- Comaniciu D, Meer P. Mean shift: a robust approach toward feature space analysis. IEEE Trans Pattern Anal Mach Intell. 2002; 24(5):603–19. doi:10.1109/34.1000236.
- Soille P. Morphological Image Analysis: Principles and Applications, 2nd edn. Secaucus, NJ: Springer; 2003.
- Mumford D, Shah J. Optimal approximations by piecewise smooth functions and associated variational problems. Comm Pure Appl Math. 1989; 42(5):577–685. doi:10.1002/cpa.3160420503.
- Cell Tracking Challenge. 2013. http://www.grand-challenge.org/.