Figures 2101–2106, all from the same 2025 publication ("…The actual accuracy and mean latency time of the model were 92.43% and 260ms, respectively. …"):
2101  Kappa coefficients for different algorithms.
2102  The structure of the ASPP+ block.
2103  The structure of the attention gate block [31].
2104  The DSC block and its application network structure.
2105  The structure of the multi-scale residual block [30].
2106  The structure of the IRAU and Res2Net+ block [22].
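Entry 2101 reports Kappa coefficients for several algorithms. For orientation, below is a minimal sketch of the standard Cohen's kappa computation (chance-corrected agreement between a prediction and a reference labeling). The function name and the toy labels are illustrative, not taken from the paper.

```python
from collections import Counter

def cohens_kappa(y_true, y_pred):
    """Cohen's kappa = (p_o - p_e) / (1 - p_e), where p_o is observed
    agreement and p_e is the agreement expected by chance."""
    n = len(y_true)
    # Observed agreement: fraction of positions where the labels match.
    p_o = sum(t == p for t, p in zip(y_true, y_pred)) / n
    # Chance agreement: dot product of the two marginal label distributions.
    true_counts, pred_counts = Counter(y_true), Counter(y_pred)
    p_e = sum(true_counts[c] * pred_counts[c] for c in true_counts) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Toy example: two hypothetical algorithms scored against one ground truth.
truth = [0, 0, 1, 1, 2, 2, 2, 0]
alg_a = [0, 0, 1, 1, 2, 2, 1, 0]   # 7/8 raw agreement
alg_b = [0, 1, 1, 0, 2, 1, 2, 0]   # 5/8 raw agreement
print(f"kappa(A) = {cohens_kappa(truth, alg_a):.3f}")
print(f"kappa(B) = {cohens_kappa(truth, alg_b):.3f}")
```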
Figures 2107–2120, all from the same 2025 publication ("…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting recognition speed by 21.3% per second, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …"):
2107  Dataset visualization diagram.
2108  Dataset sample images.
2109  Performance comparison of different models.
2110  C2f and BC2f module structure diagrams.
2111  YOLOv8n detection results diagram.
2112  YOLOv8n-BWG model structure diagram.
2113  BiFormer structure diagram.
2114  YOLOv8n-BWG detection results diagram.
2115  GSConv module structure diagram.
2116  Performance comparison of three loss functions.
2117  mAP@0.5 curves of various models.
2118  Network loss function change diagram.
2119  Comparative diagrams of different indicators.
2120  YOLOv8n structure diagram.
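Entries 2109, 2117, and 2119 compare the models on mAP@0.5, the mean Average Precision at an IoU threshold of 0.5. As orientation, here is a minimal single-class AP@0.5 sketch (greedy IoU matching plus all-point interpolation of the precision-recall curve); mAP@0.5 averages this value over classes. The function names and the toy boxes are illustrative, not the paper's evaluation code.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def ap_at_50(detections, gt_boxes):
    """AP@0.5 for one class: detections are (score, box) pairs; a detection
    is a true positive if it matches an unclaimed ground-truth box with
    IoU >= 0.5, scanning detections in descending score order."""
    matched, tps = set(), []
    for score, box in sorted(detections, key=lambda d: -d[0]):
        ious = [(iou(box, gt), j) for j, gt in enumerate(gt_boxes) if j not in matched]
        best_iou, best_j = max(ious, default=(0.0, -1))
        if best_iou >= 0.5:
            matched.add(best_j)
            tps.append(1)
        else:
            tps.append(0)
    tps = np.array(tps)
    cum_tp = np.cumsum(tps)
    recall = cum_tp / max(len(gt_boxes), 1)
    precision = cum_tp / (np.arange(len(tps)) + 1)
    # Precision envelope: best precision achievable at recall >= r.
    prec_env = np.maximum.accumulate(precision[::-1])[::-1]
    # Sum the area under the envelope over the recall steps at each TP.
    ap, prev_r = 0.0, 0.0
    for r, p, tp in zip(recall, prec_env, tps):
        if tp:
            ap += (r - prev_r) * p
            prev_r = r
    return ap

# Toy example: two ground-truth boxes, three detections (one false positive).
gts = [(10, 10, 50, 50), (60, 60, 100, 100)]
dets = [(0.9, (12, 12, 48, 48)), (0.8, (61, 58, 99, 102)), (0.3, (0, 0, 20, 20))]
print(f"AP@0.5 = {ap_at_50(dets, gts):.3f}")   # 1.0 here: both GTs recalled before the FP
```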