Segmentation module training details.
| Main Author: | |
|---|---|
| Other Authors: | , , , |
| Published: | 2025 |
| Subjects: | |
Summary:

(A) Results obtained with the Segmentation Ensemble. The two rows show two different examples of how the Segmentation Ensemble substantially reduces segmentation errors; the best results were achieved by adding the outputs of the Segmentor and the Refiner together. In the second column (BEFORE) and the last column (AFTER), the segmentation results are shown as contours plotted on top of the fluorescence images for visualization. Note that the IoU values given in the top row are not for these samples alone; they are average IoUs over the whole validation set.

(B) Comparison of training samples for the Segmentor and the Refiner networks, illustrating how the training data for the two networks was created. Both networks were trained on the same set of input patches, but the seed channels and weight maps were created differently. The seed for the Segmentor was a 40 × 40 square located at the cell of interest (marked by the red dot) in the previous frame. The seed for the Refiner was the output of the Segmentor, i.e., the segmented cell of interest (with some potential errors). The weight maps for both networks contained higher weights for the unwanted cells, i.e., the cells or cell fragments that were to be ignored, and for the pixels between close cells (visualized as regions of lighter grey pixels). In addition, the weight map for the Refiner contained a bright region corresponding to the segmentation error produced by the Segmentor.
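The constructions described in panels (A) and (B) can be sketched in code. The following is a minimal NumPy illustration, not the authors' implementation: the function names, the weight values, and the thresholds are assumptions; only the 40 × 40 seed square, the two kinds of seed, and the additive ensemble come from the caption.

```python
import numpy as np

def segmentor_seed(patch_shape, prev_centroid, size=40):
    """Seed for the Segmentor: a size x size square placed at the
    cell-of-interest position taken from the previous frame."""
    seed = np.zeros(patch_shape, dtype=np.float32)
    r, c = prev_centroid
    half = size // 2
    r0, r1 = max(0, r - half), min(patch_shape[0], r + half)
    c0, c1 = max(0, c - half), min(patch_shape[1], c + half)
    seed[r0:r1, c0:c1] = 1.0
    return seed

def refiner_seed(segmentor_prob, threshold=0.5):
    """Seed for the Refiner: the (possibly imperfect) Segmentor mask."""
    return (segmentor_prob > threshold).astype(np.float32)

def weight_map(unwanted_cells, gap_pixels, error_region=None,
               w_unwanted=3.0, w_gap=5.0, w_error=8.0):
    """Pixel-wise loss weights: higher on unwanted cells / fragments, on
    narrow gaps between touching cells, and (Refiner only) on the region
    where the Segmentor made an error. Weight values are illustrative."""
    w = np.ones(unwanted_cells.shape, dtype=np.float32)
    w[unwanted_cells > 0] = w_unwanted
    w[gap_pixels > 0] = w_gap
    if error_region is not None:
        w[error_region > 0] = w_error
    return w

def ensemble_mask(segmentor_prob, refiner_prob, threshold=1.0):
    """Panel (A): add the two soft outputs and threshold the sum
    (threshold 1.0 on the sum corresponds to an average of 0.5)."""
    return (segmentor_prob + refiner_prob) >= threshold
```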
(C) The learning curves for the Segmentor. (Left) Training and validation curves for the pixel-wise cross-entropy loss. (Right) Training and validation curves for the IoU (Intersection over Union) metric. The model with the highest validation IoU (reached at epoch 50) was saved.

(D) The learning curves for the Refiner. (Left) Training and validation curves for the pixel-wise cross-entropy loss. Note that although the validation curve lies above the training curve, the absolute difference is small (0.014 at epoch 30). (Right) Training and validation curves for the IoU (Intersection over Union) metric. The model with the highest validation IoU was saved.
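The quantities plotted in panels (C) and (D) can likewise be sketched. The weighted pixel-wise cross-entropy and the IoU below are standard definitions written in NumPy; the epsilon values and the checkpoint-selection loop are illustrative assumptions, not the training code that produced the curves.

```python
import numpy as np

def weighted_pixel_cross_entropy(prob, target, weights, eps=1e-7):
    """Binary cross-entropy averaged over pixels, scaled by the weight map."""
    prob = np.clip(prob, eps, 1.0 - eps)
    ce = -(target * np.log(prob) + (1.0 - target) * np.log(1.0 - prob))
    return float(np.mean(weights * ce))

def iou(pred_mask, true_mask, eps=1e-7):
    """Intersection over Union between a binary prediction and the ground truth."""
    pred_mask = pred_mask.astype(bool)
    true_mask = true_mask.astype(bool)
    inter = np.logical_and(pred_mask, true_mask).sum()
    union = np.logical_or(pred_mask, true_mask).sum()
    return float(inter / (union + eps))

# Checkpoint selection as described in the caption: keep the weights from the
# epoch with the highest mean validation IoU (epoch 50 for the Segmentor).
best_iou, best_epoch = -1.0, None
# for epoch, val_iou in enumerate(validation_ious):   # hypothetical loop
#     if val_iou > best_iou:
#         best_iou, best_epoch = val_iou, epoch
#         save_checkpoint(model, epoch)               # hypothetical helper
```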