Showing 2,441 - 2,460 of 4,639 results for the search 'significantly ((((((larger decrease) OR (nn decrease))) OR (linear decrease))) OR (mean decrease))'. Query time: 0.55s
  1. 2441
  2. 2442
  3. 2443
  4. 2444

    Prediction of transition readiness. By Sharon Barak (4803966)

    Published in 2025
    "…In most transition domains, help needed did not decrease with age and was not affected by function. …"
  5. 2445
  6. 2446

    Dataset visualization diagram. By Yaojun Zhang (389482)

    Published in 2025
    "…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting recognition speed by 21.3% per second, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …"
  7. 2447

    Dataset sample images. By Yaojun Zhang (389482)

    Published in 2025
    "…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting recognition speed by 21.3% per second, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …"
  8. 2448

    Performance comparison of different models. By Yaojun Zhang (389482)

    Published in 2025
    "…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting recognition speed by 21.3% per second, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …"
  9. 2449

    C2f and BC2f module structure diagrams. By Yaojun Zhang (389482)

    Published in 2025
    "…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting recognition speed by 21.3% per second, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …"
  10. 2450

    YOLOv8n detection results diagram. By Yaojun Zhang (389482)

    Published in 2025
    "…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting recognition speed by 21.3% per second, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …"
  11. 2451

    YOLOv8n-BWG model structure diagram. By Yaojun Zhang (389482)

    Published in 2025
    "…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting recognition speed by 21.3% per second, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …"
  12. 2452

    BiFormer structure diagram. By Yaojun Zhang (389482)

    Published in 2025
    "…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting recognition speed by 21.3% per second, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …"
  13. 2453

    YOLOv8n-BWG detection results diagram. By Yaojun Zhang (389482)

    Published in 2025
    "…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting recognition speed by 21.3% per second, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …"
  14. 2454

    GSConv module structure diagram. By Yaojun Zhang (389482)

    Published in 2025
    "…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting recognition speed by 21.3% per second, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …"
  15. 2455

    Performance comparison of three loss functions. By Yaojun Zhang (389482)

    Published in 2025
    "…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting recognition speed by 21.3% per second, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …"
  16. 2456

    mAP0.5 curves of various models. By Yaojun Zhang (389482)

    Published in 2025
    "…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting recognition speed by 21.3% per second, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …"
  17. 2457

    Network loss function change diagram. By Yaojun Zhang (389482)

    Published in 2025
    "…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting recognition speed by 21.3% per second, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …"
  18. 2458

    Comparative diagrams of different indicators. By Yaojun Zhang (389482)

    Published in 2025
    "…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting recognition speed by 21.3% per second, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …"
  19. 2459

    YOLOv8n structure diagram. By Yaojun Zhang (389482)

    Published in 2025
    "…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting recognition speed by 21.3% per second, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …"
  20. 2460

    Geometric model of the binocular system. By Yaojun Zhang (389482)

    Published in 2025
    "…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting recognition speed by 21.3% per second, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …"