-
12541
YOLOv8n-BWG model structure diagram.
Published 2025: “…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting recognition speed by 21.3%, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …”
-
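The excerpt above quotes relative gains over the YOLOv8n baseline. A minimal sketch of how the baseline figures (mAP@0.5, parameter count, GFLOPs) can be reproduced with the ultralytics package; `yolov8n.pt` and the `coco8.yaml` stand-in dataset are assumptions here, since the paper's YOLOv8n-BWG definition and its specialized dataset are not part of this listing.

```python
from ultralytics import YOLO

# Baseline YOLOv8n only; YOLOv8n-BWG itself would need the authors' custom
# model definition, which is not included in this listing.
model = YOLO("yolov8n.pt")
model.info()  # prints layer count, parameters, GFLOPs

# "coco8.yaml" is a placeholder dataset; the paper evaluates on its own data.
metrics = model.val(data="coco8.yaml")
print(f"mAP@0.5 = {metrics.box.map50:.3f}")
```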
12542
BiFormer structure diagram.
Published 2025; same excerpt as item 12541.
-
12543
YOLOv8n-BWG detection results diagram.
Published 2025; same excerpt as item 12541.
-
12544
GSConv module structure diagram.
Published 2025; same excerpt as item 12541.
-
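For the GSConv figure above, a sketch of the GSConv design as commonly described in the slim-neck literature: a standard convolution produces half the output channels, a depthwise convolution of that result produces the other half, and a channel shuffle mixes the two. The layer sizes and the 5×5 depthwise kernel are assumptions, not details taken from this paper.

```python
import torch
import torch.nn as nn

class GSConv(nn.Module):
    """Half dense conv, half depthwise conv, then channel shuffle."""
    def __init__(self, c_in: int, c_out: int, k: int = 1, s: int = 1):
        super().__init__()
        c_half = c_out // 2
        self.conv = nn.Sequential(
            nn.Conv2d(c_in, c_half, k, s, k // 2, bias=False),
            nn.BatchNorm2d(c_half), nn.SiLU())
        self.dwconv = nn.Sequential(  # depthwise: groups == channels
            nn.Conv2d(c_half, c_half, 5, 1, 2, groups=c_half, bias=False),
            nn.BatchNorm2d(c_half), nn.SiLU())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x1 = self.conv(x)
        x2 = torch.cat((x1, self.dwconv(x1)), dim=1)
        b, c, h, w = x2.shape  # shuffle: interleave the two halves channel-wise
        return x2.view(b, 2, c // 2, h, w).transpose(1, 2).reshape(b, c, h, w)
```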
12545
mAP0.5 curves of various models.
Published 2025; same excerpt as item 12541.
-
12546
Network loss function change diagram.
Published 2025; same excerpt as item 12541.
-
12547
Comparative diagrams of different indicators.
Published 2025; same excerpt as item 12541.
-
12548
YOLOv8n structure diagram.
Published 2025; same excerpt as item 12541.
-
12549
Geometric model of the binocular system.
Published 2025; same excerpt as item 12541.
-
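The binocular figure above corresponds to the standard pinhole stereo geometry, where depth follows from focal length, baseline, and disparity as Z = f·B/d. A minimal sketch; the function and parameter names are illustrative, not symbols from the paper.

```python
import numpy as np

def depth_from_disparity(disparity_px: np.ndarray,
                         focal_px: float,
                         baseline_m: float) -> np.ndarray:
    """Pinhole binocular model: Z = f * B / d.
    disparity_px: disparity map in pixels; focal_px: focal length in pixels;
    baseline_m: distance between the camera centers in meters."""
    d = np.where(disparity_px > 0, disparity_px, np.nan)  # mask invalid matches
    return focal_px * baseline_m / d

# e.g. f = 700 px, B = 0.12 m, d = 14 px  ->  Z = 6.0 m
print(depth_from_disparity(np.array([14.0]), 700.0, 0.12))
```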
12550
Enhanced dataset sample images.
Published 2025; same excerpt as item 12541.
-
12551
On a steepening environmental gradient, a sharp and stable range margin forms near the expansion threshold.
Published 2018: “…(b) As the environmental gradient steepens, the frequency of limited adaptation within the metapopulation increases (black and grey), and hence neutral variation decreases (blue). The black line gives the proportion of demes with limited adaptation after 50,000 generations, when the range margin appears stable; grey gives the proportion after 40,000 generations (depicted is an average over a sliding window of 15 demes). …”
-
12552
Course of T_body of a female pup before, during, and after a thunderstorm.
Published 2013: “…The thunderstorm stopped but it continued to rain. 5:50 p.m.: T_air = 19.7°C. T_body had decreased by 0.7°C to 36.9°C. …”
-
12553
Simulation Parameter Settings.
Published 2025: “…After applying Taylor pruning to the model, its floating-point operations (FLOPs) were reduced from 40.5 M to 9.5 M, and its parameter memory was decreased from 2.6 M to 0.5 M. …”
-
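The excerpt above reports FLOPs falling from 40.5 M to 9.5 M after Taylor pruning. For context, a minimal sketch of the first-order Taylor importance criterion (Molchanov et al., 2019) that such filter-pruning pipelines typically rank channels by; this illustrates the general technique, not the paper's exact procedure.

```python
import torch
import torch.nn as nn

def taylor_filter_scores(model: nn.Module) -> dict[str, torch.Tensor]:
    """First-order Taylor importance per Conv2d output filter:
    sum over the filter's weights of (weight * gradient)^2.
    Call after loss.backward() so gradients are populated."""
    scores = {}
    for name, m in model.named_modules():
        if isinstance(m, nn.Conv2d) and m.weight.grad is not None:
            contrib = (m.weight * m.weight.grad) ** 2
            scores[name] = contrib.sum(dim=(1, 2, 3))  # one score per filter
    return scores

# Usage sketch: forward a batch, backprop the loss, rank filters by score,
# remove the lowest-scoring ones, fine-tune; repeat until the FLOPs budget is met.
```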
12554
Complexity analysis of each model.
Published 2025; same excerpt as item 12553.
-
12555
Radon transform of the constellation diagram.
Published 2025; same excerpt as item 12553.
-
12556
The process of Taylor score pruning.
Published 2025; same excerpt as item 12553.
-
12557
The principle of the Radon transform.
Published 2025; same excerpt as item 12553.
-
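Items 12555 and 12557 concern the Radon transform of a (ring) constellation diagram, i.e., line integrals of the 2-D constellation image over a sweep of projection angles. A minimal sketch using scikit-image's `radon`; the QPSK-like symbols and the 64-bin rasterization are placeholder assumptions, not the paper's setup.

```python
import numpy as np
from skimage.transform import radon

# Rasterize I/Q symbols into a 2-D constellation image (illustrative data).
rng = np.random.default_rng(0)
sym = rng.choice([-1.0, 1.0], 1000) + 1j * rng.choice([-1.0, 1.0], 1000)
sym += 0.1 * (rng.standard_normal(1000) + 1j * rng.standard_normal(1000))
img, _, _ = np.histogram2d(sym.real, sym.imag,
                           bins=64, range=[[-2, 2], [-2, 2]])

# Radon transform: one column of line-integral projections per angle.
theta = np.linspace(0.0, 180.0, 90, endpoint=False)
sinogram = radon(img, theta=theta, circle=False)
print(sinogram.shape)  # (projection length, number of angles)
```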
12558
Ring constellation diagram.
Published 2025; same excerpt as item 12553.
-
12559
-
12560