Showing results 1,741 - 1,760 of 2,767 for search '(( significantly ((a decrease) OR (mean decrease)) ) OR ( significant decrease decrease ))~', query time: 0.43s
  1. 1741

    The structure of attention gate block [31]. by Yingying Liu (360782)

    Published 2025
    “…The actual accuracy and mean latency time of the model were 92.43% and 260ms, respectively. …”
  2. 1742

    DSC block and its application network structure. by Yingying Liu (360782)

    Published 2025
    “…The actual accuracy and mean latency time of the model were 92.43% and 260ms, respectively. …”
  3. 1743

    The structure of multi-scale residual block [30]. by Yingying Liu (360782)

    Published 2025
    “…The actual accuracy and mean latency time of the model were 92.43% and 260ms, respectively. …”
  4. 1744

    The structure of IRAU and Res2Net+ block [22]. by Yingying Liu (360782)

    Published 2025
    “…The actual accuracy and mean latency time of the model were 92.43% and 260ms, respectively. …”
  5. 1745

    Effects of Ni²⁺ on the mitochondrial and glycolytic energy metabolism of BMDM. by Jesse Corbin (21736578)

    Published 2025
    “…A double dagger (‡) indicates a significant decrease of OCR (slope = −0.14 ± 0.04, R² = 0.40, F(1, 22) = 15.0, p < 0.001) from 0 to 72 ppm Ni²⁺. …”
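The statistics quoted in this snippet (slope ± SE, R², F with 1 and 22 degrees of freedom, p-value) are the standard outputs of a simple linear regression: with n = 24 dose points the residual df is n − 2 = 22, and the F-statistic equals the squared t-statistic of the slope (equivalently R²(n − 2)/(1 − R²)). A minimal sketch with invented dose-response data (the study's actual OCR measurements are not in the excerpt):

```python
import numpy as np
from scipy.stats import linregress

# Hypothetical dose-response data, invented for illustration only:
# OCR measured at 24 doses spanning 0-72 ppm, declining with dose.
rng = np.random.default_rng(42)
dose = np.linspace(0, 72, 24)                 # n = 24 -> F df of (1, 22)
ocr = 10.0 - 0.14 * dose + rng.normal(0, 2.5, dose.size)

fit = linregress(dose, ocr)
n = dose.size
# For simple regression, F(1, n-2) = t^2 = R^2 (n-2) / (1 - R^2)
f_stat = fit.rvalue**2 * (n - 2) / (1 - fit.rvalue**2)

print(f"slope = {fit.slope:.2f} ± {fit.stderr:.2f}")
print(f"R^2 = {fit.rvalue**2:.2f}, F(1, {n - 2}) = {f_stat:.1f}, "
      f"p = {fit.pvalue:.3g}")
```

The identity F = t² only holds for a single predictor; with more regressors the omnibus F and the per-coefficient t-tests diverge.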
  6. 1746

    Cortical and subcortical regions feature distinct early and late responses during audition. by Sarah H. McGill (22505225)

    Published 2025
    “…The mean no-stimulus trial spectrograms were subtracted from the mean maximum-intensity trial spectrograms, and cluster-based permutation testing was employed to identify significant differences between the conditions (p < 0.05, 5000 iterations). …”
  7. 1747

    Passive listening and go/no-go tasks feature comparable electrophysiological responses. by Sarah H. McGill (22505225)

    Published 2025
    “…The mean no-stimulus trial spectrograms were subtracted from the mean maximum-intensity trial spectrograms, and cluster-based permutation testing was employed to identify significant differences between the conditions (p < 0.05, 5000 iterations). …”
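The snippet above describes subtracting mean no-stimulus spectrograms from mean maximum-intensity spectrograms and running a cluster-based permutation test (p < 0.05, 5000 iterations). The study's own pipeline is not shown; below is a generic 1-D sketch of the technique on simulated paired trials, using a sign-flipping null and max-cluster-mass correction. It is an illustration of the method, not the authors' code.

```python
import numpy as np
from scipy.stats import t as t_dist

def cluster_permutation_test(diff, n_perm=5000, alpha=0.05, rng=None):
    """Cluster-based permutation test on paired condition differences.

    diff: (n_trials, n_times) array of per-trial differences.
    Returns [(start, end, cluster_mass, p_value), ...].
    """
    rng = rng if rng is not None else np.random.default_rng()
    n_trials, n_times = diff.shape
    thresh = t_dist.ppf(1 - alpha / 2, df=n_trials - 1)

    def t_stat(d):
        # one-sample t against zero at every time point
        return d.mean(0) / (d.std(0, ddof=1) / np.sqrt(n_trials))

    def clusters(t):
        # contiguous runs where |t| exceeds threshold; mass = sum of |t|
        out, start = [], None
        for i, above in enumerate(np.abs(t) > thresh):
            if above and start is None:
                start = i
            elif not above and start is not None:
                out.append((start, i, np.abs(t[start:i]).sum()))
                start = None
        if start is not None:
            out.append((start, n_times, np.abs(t[start:]).sum()))
        return out

    obs = clusters(t_stat(diff))

    # Null distribution: flip each trial's sign at random and keep the
    # largest cluster mass from each permutation.
    null_max = np.empty(n_perm)
    for k in range(n_perm):
        signs = rng.choice([-1.0, 1.0], size=(n_trials, 1))
        null_max[k] = max((m for *_, m in clusters(t_stat(diff * signs))),
                          default=0.0)

    return [(s, e, m, (null_max >= m).mean()) for s, e, m in obs]

# Simulated demo: 12 trials, 60 time points, effect confined to 20-40.
rng = np.random.default_rng(1)
stim = rng.normal(0.0, 1.0, (12, 60))
no_stim = rng.normal(0.0, 1.0, (12, 60))
stim[:, 20:40] += 1.5
res = cluster_permutation_test(stim - no_stim, n_perm=500,
                               rng=np.random.default_rng(2))
```

Real EEG/MEG analyses run this over 2-D time-frequency maps with spatial adjacency (e.g. via MNE-Python's permutation-cluster routines); the 1-D version here keeps only the core idea of thresholding, clustering, and comparing against a max-statistic null.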
  8. 1748
  9. 1749
  10. 1750

    Chemogenetic inhibition of Calcrl⁺ neurons attenuates chronic itch in multiple chronic itch models. by Huifeng Jiao (11537806)

    Published 2025
    “…(A-C) Allergic Contact Dermatitis (ACD) Model (n = 7 mice for each group): Experimental timeline showing intraspinal viral injection, ACD model induction, and behavioral testing phases (A); Quantification of mechanical itch responses revealed significant CNO-mediated suppression in hM4Di mice, with no effect in vehicle-treated controls (B); Spontaneous scratching frequency decreased following CNO administration (C). …”
  11. 1751

    Dataset visualization diagram. by Yaojun Zhang (389482)

    Published 2025
    “…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting recognition speed by 21.3% per second, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …”
  12. 1752

    Dataset sample images. by Yaojun Zhang (389482)

    Published 2025
    “…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting recognition speed by 21.3% per second, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …”
  13. 1753

    Performance comparison of different models. by Yaojun Zhang (389482)

    Published 2025
    “…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting recognition speed by 21.3% per second, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …”
  14. 1754

    C2f and BC2f module structure diagrams. by Yaojun Zhang (389482)

    Published 2025
    “…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting recognition speed by 21.3% per second, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …”
  15. 1755

    YOLOv8n detection results diagram. by Yaojun Zhang (389482)

    Published 2025
    “…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting recognition speed by 21.3% per second, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …”
  16. 1756

    YOLOv8n-BWG model structure diagram. by Yaojun Zhang (389482)

    Published 2025
    “…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting recognition speed by 21.3% per second, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …”
  17. 1757

    BiFormer structure diagram. by Yaojun Zhang (389482)

    Published 2025
    “…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting recognition speed by 21.3% per second, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …”
  18. 1758

    YOLOv8n-BWG detection results diagram. by Yaojun Zhang (389482)

    Published 2025
    “…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting recognition speed by 21.3% per second, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …”
  19. 1759

    GSConv module structure diagram. by Yaojun Zhang (389482)

    Published 2025
    “…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting recognition speed by 21.3% per second, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …”
  20. 1760

    Performance comparison of three loss functions. by Yaojun Zhang (389482)

    Published 2025
    “…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting recognition speed by 21.3% per second, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …”