Distribution of average recognition error rates on the benchmark dataset when the hyperparameter-tuning method using <i>n</i> labeled images is repeated 1000 times.
Published in: 2025
Abstract: <p>In each replicate, a few (1, 3, 5, 10, 20, 30, or 50) images are randomly selected from the benchmark dataset, and <i>λ</i> is tuned for each image to an appropriate value so that MCount yields the same count as the label. The average of these <i>λ</i> values is then chosen for the replicate, and the average recognition error rate on the benchmark is calculated using this <i>λ</i>. By simulating this procedure for 1000 replicates, we can plot the distribution of average recognition error rates. As expected, increasing <i>n</i> narrows the distribution of average recognition error rates, leading to more consistent performance. Note that when , all recognition errors fall in the range of 3.5% to 13% with a mean of 5.17% (median 4.77%), much lower than the recognition error rate of NICE, 16.54% (median 15.79%).</p>
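The replicate procedure described in the abstract can be sketched as a small simulation. This is a minimal illustration only: `tune_lambda` and `average_error_rate` below are synthetic placeholders (in the real pipeline, λ would be found by matching MCount's count to each image's label, and the error rate would come from evaluating MCount on the full benchmark), but the sampling-and-averaging loop mirrors the described method.

```python
import random
import statistics

random.seed(0)

def tune_lambda(image):
    # Placeholder: a real implementation would search for the λ at which
    # MCount's count on `image` equals the labeled count.
    return random.uniform(0.5, 1.5)

def average_error_rate(lam, benchmark):
    # Placeholder: a real implementation would run MCount with this λ
    # over the whole benchmark. Here a synthetic error surface (%) is used.
    return abs(lam - 1.0) * 10 + 3.5

benchmark = list(range(200))  # stand-in for the benchmark images

def one_replicate(n):
    # Randomly select n images, tune λ per image, average the λ values,
    # then score that averaged λ on the full benchmark.
    sample = random.sample(benchmark, n)
    lam = statistics.mean(tune_lambda(img) for img in sample)
    return average_error_rate(lam, benchmark)

def distribution(n, replicates=1000):
    # Repeat the procedure to obtain the distribution of error rates.
    return [one_replicate(n) for _ in range(replicates)]

for n in (1, 5, 50):
    errs = distribution(n)
    print(n, round(statistics.mean(errs), 2), round(statistics.stdev(errs), 2))
```

As in the abstract, larger <i>n</i> averages more per-image λ values, so the spread of the resulting error-rate distribution shrinks.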