Showing 12,541 - 12,560 results of 42,560 for search '(( 50 ((we decrease) OR (((a decrease) OR (mean decrease)))) ) OR ( a point decrease ))', query time: 0.79s
  1. 12541

    YOLOv8n-BWG model structure diagram. by Yaojun Zhang (389482)

    Published 2025
    “…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting per-second recognition speed by 21.3%, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …”
  2. 12542

    BiFormer structure diagram. by Yaojun Zhang (389482)

    Published 2025
    “…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting per-second recognition speed by 21.3%, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …”
  3. 12543

    YOLOv8n-BWG detection results diagram. by Yaojun Zhang (389482)

    Published 2025
    “…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting per-second recognition speed by 21.3%, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …”
  4. 12544

    GSConv module structure diagram. by Yaojun Zhang (389482)

    Published 2025
    “…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting per-second recognition speed by 21.3%, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …”
  5. 12545

    mAP0.5 Curves of various models. by Yaojun Zhang (389482)

    Published 2025
    “…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting per-second recognition speed by 21.3%, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …”
  6. 12546

    Network loss function change diagram. by Yaojun Zhang (389482)

    Published 2025
    “…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting per-second recognition speed by 21.3%, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …”
  7. 12547

    Comparative diagrams of different indicators. by Yaojun Zhang (389482)

    Published 2025
    “…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting per-second recognition speed by 21.3%, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …”
  8. 12548

    YOLOv8n structure diagram. by Yaojun Zhang (389482)

    Published 2025
    “…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting per-second recognition speed by 21.3%, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …”
  9. 12549

    Geometric model of the binocular system. by Yaojun Zhang (389482)

    Published 2025
    “…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting per-second recognition speed by 21.3%, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …”
  10. 12550

    Enhanced dataset sample images. by Yaojun Zhang (389482)

    Published 2025
    “…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting per-second recognition speed by 21.3%, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …”
  11. 12551

    On a steepening environmental gradient, a sharp and stable range margin forms near the expansion threshold. by Jitka Polechová (5400749)

    Published 2018
    “…(b) As the environmental gradient steepens, the frequency of limited adaptation within the metapopulation increases (black and grey), and hence neutral variation decreases (blue). The black line gives the proportion of demes with limited adaptation after 50,000 generations, when the range margin appears stable; grey gives the proportion after 40,000 generations (depicted is an average over a sliding window of 15 demes). …”
  12. 12552

    Course of T_body of a female pup before, during and after a thunderstorm. by Nicola Erdsack (443390)

    Published 2013
    “…The thunderstorm stopped, but it continued to rain. 5:50 p.m.: T_air = 19.7°C. T_body had decreased by 0.7°C to 36.9°C. …”
  13. 12553

    Simulation Parameter Settings. by Jinrong Liang (3918740)

    Published 2025
    “…After applying Taylor pruning to the model, its floating-point operations (FLOPs) were reduced from 40.5 M to 9.5 M, and its parameter memory was decreased from 2.6 M to 0.5 M. …”
  14. 12554

    Complexity analysis of each model. by Jinrong Liang (3918740)

    Published 2025
    “…After applying Taylor pruning to the model, its floating-point operations (FLOPs) were reduced from 40.5 M to 9.5 M, and its parameter memory was decreased from 2.6 M to 0.5 M. …”
  15. 12555

    Radon transform of the constellation diagram. by Jinrong Liang (3918740)

    Published 2025
    “…After applying Taylor pruning to the model, its floating-point operations (FLOPs) were reduced from 40.5 M to 9.5 M, and its parameter memory was decreased from 2.6 M to 0.5 M. …”
  16. 12556

    The process of Taylor score pruning. by Jinrong Liang (3918740)

    Published 2025
    “…After applying Taylor pruning to the model, its floating-point operations (FLOPs) were reduced from 40.5 M to 9.5 M, and its parameter memory was decreased from 2.6 M to 0.5 M. …”
  17. 12557

    The principle of Radon transformation. by Jinrong Liang (3918740)

    Published 2025
    “…After applying Taylor pruning to the model, its floating-point operations (FLOPs) were reduced from 40.5 M to 9.5 M, and its parameter memory was decreased from 2.6 M to 0.5 M. …”
  18. 12558

    Ring constellation diagram. by Jinrong Liang (3918740)

    Published 2025
    “…After applying Taylor pruning to the model, its floating-point operations (FLOPs) were reduced from 40.5 M to 9.5 M, and its parameter memory was decreased from 2.6 M to 0.5 M. …”
  19. 12559
  20. 12560
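
Several of the records above quote model-compression results either as relative figures (the YOLOv8n-BWG snippets: FLOPs down 28.9%, model size down 26.3%) or as absolute before/after values (the Taylor-pruning snippets: FLOPs from 40.5 M to 9.5 M, parameter memory from 2.6 M to 0.5 M). As a minimal sketch, assuming only the numbers quoted in those snippets, the absolute values can be restated in the same percentage form; the helper function below is hypothetical and is not code from either cited work.

    # Illustrative only: restates the before/after values quoted in the
    # snippets above as percentage reductions. The function name is an
    # assumption, not taken from the cited works.

    def relative_reduction(before: float, after: float) -> float:
        """Return the percentage reduction from `before` to `after`."""
        return (before - after) / before * 100.0

    if __name__ == "__main__":
        # Values quoted in the Taylor-pruning snippets, in millions (M).
        flops_before, flops_after = 40.5, 9.5
        params_before, params_after = 2.6, 0.5

        print(f"FLOPs reduction:     {relative_reduction(flops_before, flops_after):.1f}%")   # about 76.5%
        print(f"Parameter reduction: {relative_reduction(params_before, params_after):.1f}%") # about 80.8%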