2261. Ablation experiments of various blocks.
Published 2025: "…The actual accuracy and mean latency time of the model were 92.43% and 260ms, respectively. …"
2262. Kappa coefficients for different algorithms.
2263. The structure of the ASPP+ block.
2264. The structure of the attention gate block [31].
2266. DSC block and its application network structure.
2267. The structure of the multi-scale residual block [30].
2268. The structure of the IRAU and Res2Net+ block [22].
2270. Dependent and independent variables (N = 316).
Published 2025: "…Introduction: Unmet oral health needs remain a significant issue among immigrant adolescents, often exacerbated by experiences of racial discrimination. …"
2274. Dataset visualization diagram.
Published 2025: "…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting recognition speed by 21.3% per second, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …"
2275. Dataset sample images.
2276. Performance comparison of different models.
2277. C2f and BC2f module structure diagrams.
2278. YOLOv8n detection results diagram.
2279. YOLOv8n-BWG model structure diagram.
2280. BiFormer structure diagram.
Published 2025“…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting recognition speed by 21.3% per second, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …”