Showing results 121-140 of 233 for search '(( significant decrease decrease ) OR ( significantly ((mean decrease) OR (point increase)) ))~', query time: 0.44s
  6. 126

    SLC2A2 is essential for liver differentiation in developing vertebrates by Yejin Kim (740789)

    Published 2025
    “…The mRNA expression levels were normalized to β-actin. A significant increase (p-value < 0.001, ***) in igf1r expression was observed in SLC2A2 MO-injected embryos compared to the control group. …”
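The record above describes igf1r mRNA levels normalized to β-actin. As a rough illustration only, here is a minimal sketch of the standard 2^-ΔΔCt relative quantification and significance test that such a normalization typically implies; the Ct values, replicate counts, and group labels are hypothetical placeholders, not data from this record.

```python
# Minimal sketch of relative mRNA quantification normalized to beta-actin
# (standard 2^-ddCt method). The Ct values below are illustrative placeholders,
# not data from the SLC2A2 record above.
import numpy as np
from scipy import stats

# Hypothetical qPCR Ct values for igf1r and beta-actin in control and MO-injected embryos
ct = {
    "control": {"igf1r": np.array([24.1, 24.3, 24.0]), "actb": np.array([16.2, 16.1, 16.3])},
    "mo":      {"igf1r": np.array([22.0, 21.8, 22.2]), "actb": np.array([16.1, 16.2, 16.0])},
}

def delta_ct(group):
    """Ct(target) - Ct(reference): lower values mean higher relative expression."""
    return ct[group]["igf1r"] - ct[group]["actb"]

d_control = delta_ct("control")
d_mo = delta_ct("mo")

# Fold change of each MO replicate relative to the mean control delta-Ct
fold_change = 2.0 ** -(d_mo - d_control.mean())
t_stat, p_value = stats.ttest_ind(d_mo, d_control)

print(f"mean fold change (MO vs control): {fold_change.mean():.2f}")
print(f"p-value: {p_value:.4g}")
```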
  8. 128

    Amino acid metabolic pathways are influenced by the NC1 POM cycle overexpression. by Bonnie A. McNeil (22331601)

    Published 2025
    “…Box-and-whisker plots of selected metabolites that displayed a significant increase or decrease in the NC1 POM cycle-producing strain (NC; red boxes) or the empty-vector-producing strain (YC; green boxes). …”
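As an illustration of the kind of box-and-whisker comparison this record describes (NC strain in red vs. YC empty-vector strain in green), here is a minimal matplotlib sketch; the metabolite values, sample sizes, and axis label are invented placeholders, not the study's measurements.

```python
# Minimal sketch of a box-and-whisker comparison between two strains for one
# metabolite, in the spirit of the NC (red) vs. YC (green) plots described above.
# The values are random placeholders, not the study's measurements.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
nc = rng.normal(loc=1.8, scale=0.3, size=8)   # hypothetical NC strain levels
yc = rng.normal(loc=1.0, scale=0.3, size=8)   # hypothetical YC (empty vector) levels

fig, ax = plt.subplots(figsize=(3, 4))
boxes = ax.boxplot([nc, yc], patch_artist=True)
for box, color in zip(boxes["boxes"], ["red", "green"]):
    box.set_facecolor(color)
ax.set_xticklabels(["NC", "YC"])
ax.set_ylabel("Relative metabolite level (a.u.)")
plt.tight_layout()
plt.show()
```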
  9. 129

    Valve closing capability and hemolymph flow analysis. by Christian Meyer (6035)

    Published 2025
    “…Pixel intensity in ROI 1 increases when hemolymph packages pass and decreases when the valves close after each heartbeat. …”
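The caption above describes a pixel-intensity signal in a region of interest that rises and falls with each heartbeat. A minimal sketch of how such an ROI trace could be extracted from a frame stack follows; the frame array, ROI bounds, and peak-counting step are assumptions for illustration, not the authors' pipeline.

```python
# Minimal sketch of a mean-intensity trace for a rectangular ROI across video
# frames, the kind of signal described above (intensity rises as hemolymph
# passes, falls when the valves close). The frames and ROI bounds are placeholders.
import numpy as np

def roi_intensity_trace(frames: np.ndarray, y0: int, y1: int, x0: int, x1: int) -> np.ndarray:
    """Mean pixel intensity inside the ROI for each frame (frames: T x H x W)."""
    return frames[:, y0:y1, x0:x1].mean(axis=(1, 2))

# Hypothetical grayscale video: 300 frames of 128x128 pixels
frames = np.random.default_rng(1).integers(0, 256, size=(300, 128, 128)).astype(float)
trace = roi_intensity_trace(frames, y0=40, y1=60, x0=50, x1=80)  # ROI 1 (placeholder bounds)

# A simple local-maximum count on the trace would approximate the number of heartbeats
peaks = np.flatnonzero((trace[1:-1] > trace[:-2]) & (trace[1:-1] > trace[2:])) + 1
print(f"frames analyzed: {len(trace)}, local maxima found: {len(peaks)}")
```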
  13. 133

    Comparison of Mean Absolute Error (MAE) in Millimeters as a Function of Kernel Size. by Liu Liu (512237)

    Published 2025
    “…After this point, the Reduced Depth model’s MAE increases significantly, while the Full Model’s performance stabilizes before slightly increasing again.…”
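For reference, the Mean Absolute Error (MAE) compared in this record is the mean of the absolute differences between predicted and ground-truth depths, here in millimeters. A minimal sketch follows; the depth values and model names in the example are hypothetical placeholders, not the study's data.

```python
# Minimal sketch of the Mean Absolute Error (MAE) metric, in millimeters, used
# to compare the depth models above. The predictions and ground truth below are
# illustrative placeholders, not the study's data.
import numpy as np

def mae_mm(pred_mm: np.ndarray, gt_mm: np.ndarray) -> float:
    """MAE = mean(|prediction - ground truth|), both in millimeters."""
    return float(np.mean(np.abs(pred_mm - gt_mm)))

gt         = np.array([512.0, 530.0, 498.0, 505.0])  # hypothetical ground-truth depths (mm)
full_model = np.array([510.5, 532.0, 500.0, 503.5])  # hypothetical Full Model predictions
reduced    = np.array([505.0, 538.0, 507.0, 512.0])  # hypothetical Reduced Depth predictions

print(f"Full Model MAE:    {mae_mm(full_model, gt):.2f} mm")
print(f"Reduced Depth MAE: {mae_mm(reduced, gt):.2f} mm")
```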
  14. 134

    Dataset visualization diagram. by Yaojun Zhang (389482)

    Published 2025
    “…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting recognition speed by 21.3% per second, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …”
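The improvements quoted in this and the following records from the same dataset ('+4.2% mAP', '-28.9% FLOPs', '-26.3% model size') are most naturally read as changes relative to the YOLOv8n baseline, although a figure like '+4.2% mAP' can also be read as an absolute percentage-point gain. A minimal sketch of both readings of that arithmetic follows; the baseline values used are hypothetical placeholders, not numbers reported for YOLOv8n or YOLOv8n-BWG.

```python
# Minimal sketch of the two common readings of a figure like "+4.2% mAP":
# a relative change versus the baseline, or an absolute percentage-point gain.
# The baseline numbers below are hypothetical placeholders, not values reported
# for YOLOv8n or YOLOv8n-BWG.

def relative_change_pct(baseline: float, improved: float) -> float:
    """Signed change of `improved` relative to `baseline`, in percent."""
    return (improved - baseline) / baseline * 100.0

def point_change(baseline_pct: float, improved_pct: float) -> float:
    """Absolute percentage-point difference between two percentages."""
    return improved_pct - baseline_pct

baseline_flops = 8.7e9   # hypothetical FLOPs for the baseline model
improved_flops = 6.2e9   # hypothetical FLOPs for the modified model
print(f"FLOPs change: {relative_change_pct(baseline_flops, improved_flops):+.1f}%")

baseline_map = 62.0      # hypothetical baseline mAP (%)
improved_map = 66.2      # hypothetical improved mAP (%)
print(f"mAP gain: {point_change(baseline_map, improved_map):+.1f} points "
      f"({relative_change_pct(baseline_map, improved_map):+.1f}% relative)")
```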
  15. 135

    Dataset sample images. by Yaojun Zhang (389482)

    Published 2025
    “…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting recognition speed by 21.3% per second, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …”
  16. 136

    Performance comparison of different models. by Yaojun Zhang (389482)

    Published 2025
    “…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting recognition speed by 21.3% per second, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …”
  17. 137

    C2f and BC2f module structure diagrams. by Yaojun Zhang (389482)

    Published 2025
    “…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting recognition speed by 21.3% per second, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …”
  18. 138

    YOLOv8n detection results diagram. by Yaojun Zhang (389482)

    Published 2025
    “…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting recognition speed by 21.3% per second, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …”
  19. 139

    YOLOv8n-BWG model structure diagram. by Yaojun Zhang (389482)

    Published 2025
    “…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting recognition speed by 21.3% per second, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …”
  20. 140

    BiFormer structure diagram. by Yaojun Zhang (389482)

    Published 2025
    “…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting recognition speed by 21.3% per second, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …”