-
7381
Income satisfaction measure.
Published 2025. “…In this paper, we apply a novel approach to try and address this issue. …”
-
7382
Marginal effect of HTD on income and SWB.
Published 2025. “…In this paper, we apply a novel approach to try and address this issue. …”
-
7383
Estimation procedure flow-chart.
Published 2025. “…In this paper, we apply a novel approach to try and address this issue. …”
-
7384
Structural model outline.
Published 2025. “…In this paper, we apply a novel approach to try and address this issue. …”
-
7385
Amino acid metabolic pathways are influenced by NC1 POM cycle overexpression.
Published 2025. “…Box and whisker plots of selected metabolites that displayed significant increase or decrease in the NC1 POM cycle producing strain (NC), red boxes, or empty vector producing strain (YC) green boxes. …”
-
7386
Prediction of transition readiness.
Published 2025. “…In most transition domains, help needed did not decrease with age and was not affected by function. …”
-
7387
Dataset visualization diagram.
Published 2025. “…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting recognition speed by 21.3% per second, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …”
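This and the following entries quote the same YOLOv8n-BWG abstract, whose headline metric is mean Average Precision (mAP). As a minimal, self-contained sketch of how AP at an IoU threshold of 0.5 can be computed for one class (mAP averages this over classes), here is an illustration in Python; the box format, function names, and greedy matching scheme are assumptions, not the authors' evaluation code.

```python
# Illustrative sketch only: AP@0.5 for a single class, not the paper's code.
import numpy as np

def iou(box, boxes):
    """IoU of one [x1, y1, x2, y2] box against an (M, 4) array of boxes."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter + 1e-9)

def ap_at_50(dets, scores, gts):
    """AP@0.5 for one class. dets: (N, 4), scores: (N,), gts: (M, 4)."""
    order = np.argsort(-scores)              # most confident detections first
    matched = np.zeros(len(gts), dtype=bool) # each GT may match one detection
    tp = np.zeros(len(dets))
    for rank, i in enumerate(order):
        if len(gts) == 0:
            break
        overlaps = iou(dets[i], gts)
        j = int(overlaps.argmax())
        if overlaps[j] >= 0.5 and not matched[j]:
            tp[rank], matched[j] = 1.0, True # first sufficient match is a TP
    recall = np.cumsum(tp) / max(len(gts), 1)
    precision = np.cumsum(tp) / np.arange(1, len(dets) + 1)
    # all-point interpolation: area under the precision envelope
    mrec = np.concatenate([[0.0], recall, [1.0]])
    mpre = np.concatenate([[0.0], precision, [0.0]])
    mpre = np.maximum.accumulate(mpre[::-1])[::-1]
    idx = np.where(mrec[1:] != mrec[:-1])[0]
    return float(np.sum((mrec[idx + 1] - mrec[idx]) * mpre[idx + 1]))
```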
-
7388
Dataset sample images.
Published 2025. “…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting recognition speed by 21.3% per second, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …”
-
7389
Performance comparison of different models.
Published 2025. “…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting recognition speed by 21.3% per second, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …”
-
7390
C2f and BC2f module structure diagrams.
Published 2025. “…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting recognition speed by 21.3% per second, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …”
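Entry 7390 shows the C2f and BC2f module structures. C2f is the standard YOLOv8 building block: a 1x1 projection split in two, a chain of bottlenecks whose intermediate outputs are all concatenated, then a fusing 1x1 convolution. The excerpt does not describe the paper's BC2f variant, so only a baseline C2f sketch is given here, with BN + SiLU layer choices assumed.

```python
# Minimal self-contained sketch of YOLOv8's C2f block; not the BC2f variant.
import torch
import torch.nn as nn

def conv_bn_silu(c_in, c_out, k=1, s=1):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, k, s, k // 2, bias=False),
        nn.BatchNorm2d(c_out),
        nn.SiLU(inplace=True),
    )

class Bottleneck(nn.Module):
    def __init__(self, c, shortcut=True):
        super().__init__()
        self.cv1 = conv_bn_silu(c, c, 3)
        self.cv2 = conv_bn_silu(c, c, 3)
        self.add = shortcut

    def forward(self, x):
        y = self.cv2(self.cv1(x))
        return x + y if self.add else y

class C2f(nn.Module):
    def __init__(self, c1, c2, n=1, shortcut=False, e=0.5):
        super().__init__()
        self.c = int(c2 * e)                            # hidden channels per branch
        self.cv1 = conv_bn_silu(c1, 2 * self.c)         # 1x1 projection, split in two
        self.cv2 = conv_bn_silu((2 + n) * self.c, c2)   # fuse all branches
        self.m = nn.ModuleList(Bottleneck(self.c, shortcut) for _ in range(n))

    def forward(self, x):
        y = list(self.cv1(x).chunk(2, dim=1))           # two c-channel halves
        y.extend(m(y[-1]) for m in self.m)              # each bottleneck feeds the next
        return self.cv2(torch.cat(y, dim=1))            # concat all intermediate maps

# x = torch.randn(1, 64, 80, 80); print(C2f(64, 64, n=2)(x).shape)
```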
-
7391
YOLOv8n detection results diagram.
Published 2025. “…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting recognition speed by 21.3% per second, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …”
-
7392
YOLOv8n-BWG model structure diagram.
Published 2025. “…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting recognition speed by 21.3% per second, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …”
-
7393
BiFormer structure diagram.
Published 2025. “…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting recognition speed by 21.3% per second, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …”
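Entry 7393 depicts the BiFormer structure. BiFormer's core idea is bi-level routing attention: a coarse region-to-region affinity first routes each region to its top-k most relevant regions, and ordinary attention then runs only over tokens gathered from those regions. The sketch below is a heavily simplified single-head illustration of that idea under assumed shapes, not the reference implementation.

```python
# Heavily simplified sketch of BiFormer-style bi-level routing attention.
import torch
import torch.nn.functional as F

def bi_level_routing_attention(q, k, v, num_regions, topk):
    """q, k, v: (B, N, C) token sequences; N must be divisible by num_regions."""
    B, N, C = q.shape
    r = N // num_regions                                  # tokens per region
    q_reg = q.view(B, num_regions, r, C)
    k_reg = k.view(B, num_regions, r, C)
    v_reg = v.view(B, num_regions, r, C)
    affinity = q_reg.mean(dim=2) @ k_reg.mean(dim=2).transpose(-2, -1)  # (B, R, R)
    idx = affinity.topk(topk, dim=-1).indices             # top-k routed regions

    out = torch.empty_like(q).view(B, num_regions, r, C)
    for b in range(B):                                    # gather + fine attention
        for i in range(num_regions):
            kg = k_reg[b, idx[b, i]].reshape(-1, C)       # (topk*r, C) gathered keys
            vg = v_reg[b, idx[b, i]].reshape(-1, C)       # (topk*r, C) gathered values
            attn = F.softmax(q_reg[b, i] @ kg.T / C ** 0.5, dim=-1)
            out[b, i] = attn @ vg                         # token-level attention
    return out.view(B, N, C)
```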
-
7394
YOLOv8n-BWG detection results diagram.
Published 2025. “…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting recognition speed by 21.3% per second, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …”
-
7395
GSConv module structure diagram.
Published 2025. “…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting recognition speed by 21.3% per second, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …”
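Entry 7395 shows the GSConv module, whose idea (from the “Slim-neck by GSConv” line of work) is to produce half the output channels with a dense convolution and the other half with a cheap depthwise convolution, then mix the two groups with a channel shuffle. The sketch below assumes BN + SiLU activations and a 5x5 depthwise kernel, which may differ from the paper's configuration.

```python
# Minimal sketch of the GSConv idea; layer choices are assumptions.
import torch
import torch.nn as nn

class GSConv(nn.Module):
    def __init__(self, c1, c2, k=1, s=1):
        super().__init__()
        c_ = c2 // 2
        self.dense = nn.Sequential(                 # standard convolution branch
            nn.Conv2d(c1, c_, k, s, k // 2, bias=False),
            nn.BatchNorm2d(c_), nn.SiLU(inplace=True))
        self.cheap = nn.Sequential(                 # depthwise convolution branch
            nn.Conv2d(c_, c_, 5, 1, 2, groups=c_, bias=False),
            nn.BatchNorm2d(c_), nn.SiLU(inplace=True))

    def forward(self, x):
        x1 = self.dense(x)
        x2 = self.cheap(x1)
        y = torch.cat((x1, x2), dim=1)              # (B, c2, H, W)
        # channel shuffle: interleave the dense and depthwise channels
        b, c, h, w = y.shape
        return y.view(b, 2, c // 2, h, w).transpose(1, 2).reshape(b, c, h, w)
```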
-
7396
Performance comparison of three loss functions.
Published 2025. “…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting recognition speed by 21.3% per second, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …”
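Entry 7396 compares three bounding-box loss functions, but the excerpt does not name them. Purely as an illustration of the IoU-based regression losses typically compared in YOLO variants, here is a GIoU loss sketch; the paper's actual three losses may differ.

```python
# Generic GIoU loss illustration; not necessarily one of the paper's losses.
import torch

def giou_loss(pred, target, eps=1e-7):
    """pred, target: (N, 4) boxes as [x1, y1, x2, y2]; returns mean 1 - GIoU."""
    x1 = torch.max(pred[:, 0], target[:, 0])
    y1 = torch.max(pred[:, 1], target[:, 1])
    x2 = torch.min(pred[:, 2], target[:, 2])
    y2 = torch.min(pred[:, 3], target[:, 3])
    inter = (x2 - x1).clamp(0) * (y2 - y1).clamp(0)
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    union = area_p + area_t - inter + eps
    iou = inter / union
    # smallest enclosing box: GIoU penalizes empty space inside it
    cx1 = torch.min(pred[:, 0], target[:, 0])
    cy1 = torch.min(pred[:, 1], target[:, 1])
    cx2 = torch.max(pred[:, 2], target[:, 2])
    cy2 = torch.max(pred[:, 3], target[:, 3])
    c_area = (cx2 - cx1) * (cy2 - cy1) + eps
    giou = iou - (c_area - union) / c_area
    return (1.0 - giou).mean()
```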
-
7397
mAP0.5 curves of various models.
Published 2025. “…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting recognition speed by 21.3% per second, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …”
-
7398
Network loss function change diagram.
Published 2025. “…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting recognition speed by 21.3% per second, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …”
-
7399
Comparative diagrams of different indicators.
Published 2025. “…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting recognition speed by 21.3% per second, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …”
-
7400
YOLOv8n structure diagram.
Published 2025. “…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting recognition speed by 21.3% per second, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …”