Experiments - Cambridge Data Set
1. Optimizing parameters over a single image
Two parameters were optimized - Beta & k:
Beta - appears in the exponent of the neighboring-pixel weight calculation. It controls the shape of the exponential fall-off, and thereby the variance of those weights.
k - the number of components in the Gaussian mixture models. A higher k allows a more detailed description of the color distribution; however, there is no free lunch: increasing it too far may cause over-fitting.
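The role of Beta in the neighboring-pixel weights can be sketched as follows. This is a minimal illustration assuming the standard GrabCut-style pairwise term gamma * exp(-beta * ||z_m - z_n||^2) over color vectors; the gamma constant (50 here) is an assumption for the sketch, not a value taken from this report:

```python
import math

def neighbor_weight(z_m, z_n, beta, gamma=50.0):
    """Smoothness weight between neighboring pixels m and n.

    z_m, z_n are color vectors (e.g. RGB tuples). Beta sits in the
    exponent and controls how quickly the weight falls off with color
    difference: a larger beta concentrates high weights on near-identical
    neighbors, shrinking the variance of the weights.
    """
    diff2 = sum((a - b) ** 2 for a, b in zip(z_m, z_n))
    return gamma * math.exp(-beta * diff2)
```

For identical neighbors the weight equals gamma; it decays exponentially as the color difference grows, at a rate set by beta.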
To optimize these parameters, the similarity measure was chosen to be the percentage of pixels labeled incorrectly relative to the ground-truth data. A single image ("llama") was chosen, and multiple runs with different parameter values were performed.
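The similarity measure described above can be sketched as a simple mislabel-percentage function; the grid-search loop in the comment is hypothetical (run_grabcut is a placeholder name, not a function from this work):

```python
def mislabel_rate(labels, ground_truth):
    """Percentage of pixels whose label differs from the ground truth.

    Both arguments are flat sequences of per-pixel labels of equal length.
    Lower is better; 0.0 means a perfect match with the ground truth.
    """
    assert len(labels) == len(ground_truth)
    wrong = sum(1 for a, b in zip(labels, ground_truth) if a != b)
    return 100.0 * wrong / len(labels)

# Hypothetical parameter sweep over a single image ("llama"):
# scores = {(b, k): mislabel_rate(run_grabcut(img, beta=b, k=k), gt)
#           for b in betas for k in ks}
```

The (Beta, K) pair minimizing this rate is taken as the best parameter setting.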
Figure 1 - Similarity as a function of K & Beta
Figure 2 - Results for fixed K & fixed Beta
Looking at Figures 1 & 2, we see that K has less effect on result quality than Beta does (while increasing K significantly increases run time).
The best results were obtained for Beta=0.13 & K=9. These values served as the anchor point around which parameters were picked for the next experiment, as seen in the following.
2. Results over many images
Due to run-time constraints, only a few parameter samples were chosen for the multi-image experiments. The parameters and their corresponding results can be seen in Figure 3, which shows the distribution of the similarity measure over the images (histograms of results) for each chosen parameter setting. In accordance with Figure 1, we see that the best results over multiple images (45 of them) are obtained for Beta=0.13 and K=9.
Figure 3
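The per-parameter histograms of the kind shown in Figure 3 can be built by bucketing the per-image error rates; a minimal sketch (the bin width of 5 percentage points is an assumption, not the binning used in the figure):

```python
def error_histogram(errors, bin_width=5.0):
    """Bucket per-image error percentages into fixed-width bins.

    Returns a dict mapping each bin's lower edge (0.0, 5.0, ...) to the
    number of images falling in that bin, i.e. the distribution of the
    similarity measure over the image set for one parameter setting.
    """
    hist = {}
    for e in errors:
        edge = bin_width * int(e // bin_width)
        hist[edge] = hist.get(edge, 0) + 1
    return hist
```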
3. Comparison to "Normalized Cut"
To compare against currently known segmentation algorithms, "Normalized Cut" was chosen as the candidate. Figure 4 compares the multi-image similarity-measure results of our best (Beta, K) run with those of a Normalized Cut run, where the number of segments was set to 2. As can be seen, the GrabCut method clearly outperforms Normalized Cut.
Figure 4