Search alternatives:
greater decrease » greatest decrease, greater increase, greater disease
-
2221
Repeat of the detection experiment.
Published 2025: “…The actual accuracy and mean latency time of the model were 92.43% and 260 ms, respectively. …”
-
2222
Detection network structure with IRAU [34].
Published 2025: “…The actual accuracy and mean latency time of the model were 92.43% and 260 ms, respectively. …”
-
2223
Ablation experiments of various blocks.
Published 2025: “…The actual accuracy and mean latency time of the model were 92.43% and 260 ms, respectively. …”
-
2224
Kappa coefficients for different algorithms.
Published 2025: “…The actual accuracy and mean latency time of the model were 92.43% and 260 ms, respectively. …”
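For context, the kappa coefficient compared in this table is Cohen's standard chance-corrected agreement measure. A minimal statement of the usual definition (general background, not specific to this paper):

```latex
% Cohen's kappa: observed agreement p_o corrected for the agreement
% p_e expected by chance alone. kappa = 1 is perfect agreement;
% kappa = 0 is no better than chance.
\kappa = \frac{p_o - p_e}{1 - p_e}
```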
-
2225
The structure of the ASPP+ block.
Published 2025: “…The actual accuracy and mean latency time of the model were 92.43% and 260 ms, respectively. …”
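The figure itself is not reproduced here. As rough orientation, below is a minimal PyTorch sketch of a standard ASPP block: parallel atrous (dilated) convolutions at several rates, concatenated and fused with a 1x1 convolution. The class name, dilation rates, and fusion layer are illustrative assumptions; the paper's ASPP+ variant may differ.

```python
import torch
import torch.nn as nn

class ASPP(nn.Module):
    """Minimal ASPP sketch: parallel dilated 3x3 convolutions whose
    outputs are concatenated and fused by a 1x1 convolution. The
    dilation rates below are illustrative assumptions."""
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for r in rates
        )
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

x = torch.randn(1, 64, 32, 32)
print(ASPP(64, 128)(x).shape)  # torch.Size([1, 128, 32, 32])
```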
-
2226
The structure of the attention gate block [31].
Published 2025: “…The actual accuracy and mean latency time of the model were 92.43% and 260 ms, respectively. …”
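Reference [31] points at the additive attention gate popularized by Attention U-Net. A minimal PyTorch sketch under that assumption; the channel sizes and the same-resolution gating signal are simplifications of the usual design.

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Additive attention gate sketch: a gating signal g produces
    per-pixel coefficients that re-weight skip features x. Assumes
    x and g share spatial resolution (a simplification)."""
    def __init__(self, x_ch, g_ch, inter_ch):
        super().__init__()
        self.theta_x = nn.Conv2d(x_ch, inter_ch, 1, bias=False)
        self.phi_g = nn.Conv2d(g_ch, inter_ch, 1, bias=False)
        self.psi = nn.Conv2d(inter_ch, 1, 1)

    def forward(self, x, g):
        # alpha in [0, 1]: per-pixel attention coefficients
        alpha = torch.sigmoid(self.psi(torch.relu(self.theta_x(x) + self.phi_g(g))))
        return x * alpha

x = torch.randn(1, 64, 32, 32)   # skip-connection features
g = torch.randn(1, 128, 32, 32)  # gating signal
print(AttentionGate(64, 128, 32)(x, g).shape)  # torch.Size([1, 64, 32, 32])
```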
-
2227
DSC block and its application network structure.
Published 2025: “…The actual accuracy and mean latency time of the model were 92.43% and 260 ms, respectively. …”
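Assuming DSC carries its usual meaning of depthwise separable convolution, a minimal PyTorch sketch follows: a per-channel 3x3 depthwise convolution followed by a 1x1 pointwise convolution, which is what makes the block cheaper than a standard convolution. Normalization and activation choices are assumptions.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise separable convolution sketch: a 3x3 convolution
    applied per channel (groups=in_ch), then a 1x1 pointwise
    convolution that mixes channels."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=1,
                                   groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

x = torch.randn(1, 64, 32, 32)
print(DepthwiseSeparableConv(64, 128)(x).shape)  # torch.Size([1, 128, 32, 32])
```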
-
2228
The structure of the multi-scale residual block [30].
Published 2025: “…The actual accuracy and mean latency time of the model were 92.43% and 260 ms, respectively. …”
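As orientation only, below is a PyTorch sketch of one common multi-scale residual pattern: parallel 3x3 and 5x5 branches fused by a 1x1 convolution and added back to the input. The kernel sizes and fusion are assumptions; the exact block in [30] may differ.

```python
import torch
import torch.nn as nn

class MultiScaleResidualBlock(nn.Module):
    """Multi-scale residual sketch: two parallel branches with
    different receptive fields, concatenated, fused, and combined
    with the input through a residual connection."""
    def __init__(self, ch):
        super().__init__()
        self.b3 = nn.Conv2d(ch, ch, 3, padding=1)
        self.b5 = nn.Conv2d(ch, ch, 5, padding=2)
        self.fuse = nn.Conv2d(2 * ch, ch, 1)

    def forward(self, x):
        y = torch.relu(self.fuse(torch.cat([torch.relu(self.b3(x)),
                                            torch.relu(self.b5(x))], dim=1)))
        return x + y  # residual connection

x = torch.randn(1, 64, 32, 32)
print(MultiScaleResidualBlock(64)(x).shape)  # torch.Size([1, 64, 32, 32])
```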
-
2229
The structure of the IRAU and Res2Net+ blocks [22].
Published 2025: “…The actual accuracy and mean latency time of the model were 92.43% and 260 ms, respectively. …”
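As background for the caption, a minimal PyTorch sketch of the Res2Net idea: the channels are split into groups, and each group is convolved together with the previous group's output, yielding multi-scale receptive fields inside one block. The IRAU and Res2Net+ modifications described in [22] are not reproduced here.

```python
import torch
import torch.nn as nn

class Res2NetSplit(nn.Module):
    """Res2Net-style hierarchical split sketch: s channel groups;
    each group after the first is processed jointly with the
    previous group's output."""
    def __init__(self, ch, scales=4):
        super().__init__()
        assert ch % scales == 0
        self.scales = scales
        w = ch // scales
        self.convs = nn.ModuleList(
            nn.Conv2d(w, w, 3, padding=1) for _ in range(scales - 1)
        )

    def forward(self, x):
        xs = list(torch.chunk(x, self.scales, dim=1))
        out, prev = [xs[0]], None
        for i, conv in enumerate(self.convs, start=1):
            inp = xs[i] if prev is None else xs[i] + prev
            prev = torch.relu(conv(inp))
            out.append(prev)
        return torch.cat(out, dim=1)

x = torch.randn(1, 64, 32, 32)
print(Res2NetSplit(64)(x).shape)  # torch.Size([1, 64, 32, 32])
```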
-
2233
Prediction of transition readiness.
Published 2025: “…In most transition domains, help needed did not decrease with age and was not affected by function. …”
-
2234
Dataset visualization diagram.
Published 2025: “…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting recognition speed by 21.3% per second, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …”
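For reference, the mAP quoted in this snippet is, in its usual definition, the per-class average precision (area under the precision-recall curve) averaged over the N classes:

```latex
% AP_i: area under the precision-recall curve of class i;
% mAP averages AP over all N classes.
\mathrm{AP}_i = \int_0^1 p_i(r)\,dr, \qquad
\mathrm{mAP} = \frac{1}{N}\sum_{i=1}^{N}\mathrm{AP}_i
```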
-
2235
Dataset sample images.
Published 2025: “…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting recognition speed by 21.3% per second, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …”
-
2236
Performance comparison of different models.
Published 2025: “…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting recognition speed by 21.3% per second, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …”
-
2237
C2f and BC2f module structure diagrams.
Published 2025: “…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting recognition speed by 21.3% per second, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …”
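C2f is the standard YOLOv8 backbone block: a 1x1 convolution whose output is split in two, with one half passed through stacked bottlenecks and every intermediate output densely concatenated before a final 1x1 convolution. A simplified, self-contained PyTorch sketch follows; the BC2f variant from the figure, presumably the paper's BiFormer-augmented version, is not reproduced.

```python
import torch
import torch.nn as nn

class Bottleneck(nn.Module):
    """Two 3x3 convolutions with a residual shortcut (simplified)."""
    def __init__(self, ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.SiLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.SiLU(),
        )

    def forward(self, x):
        return x + self.conv(x)

class C2f(nn.Module):
    """Simplified C2f sketch: split, n bottlenecks, dense concat, fuse."""
    def __init__(self, c_in, c_out, n=2):
        super().__init__()
        self.c = c_out // 2
        self.cv1 = nn.Conv2d(c_in, 2 * self.c, 1)
        self.cv2 = nn.Conv2d((2 + n) * self.c, c_out, 1)
        self.m = nn.ModuleList(Bottleneck(self.c) for _ in range(n))

    def forward(self, x):
        y = list(self.cv1(x).chunk(2, dim=1))
        for m in self.m:
            y.append(m(y[-1]))  # keep every intermediate output
        return self.cv2(torch.cat(y, dim=1))

x = torch.randn(1, 64, 32, 32)
print(C2f(64, 64)(x).shape)  # torch.Size([1, 64, 32, 32])
```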
-
2238
YOLOv8n detection results diagram.
Published 2025: “…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting recognition speed by 21.3% per second, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …”
-
2239
YOLOv8n-BWG model structure diagram.
Published 2025: “…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting recognition speed by 21.3% per second, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …”
-
2240
BiFormer structure diagram.
Published 2025: “…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting recognition speed by 21.3% per second, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …”