Showing 12,381 - 12,400 results of 42,334 for search '(( 50 ((((ns decrease) OR (a decrease))) OR (mean decrease)) ) OR ( a point decrease ))', query time: 1.08s
  1. 12381

    Loop in the IB domain drives ParM monomer opening. by Natalie Ng (6560246)

    Published 2019
    “…Two simulations (ParM-ATP-2,3) consistently exhibited opening angles of ~102° after 50 ns and maintained that value, whereas in the other simulation (ParM-ATP-1), the opening angle increased beyond 105° after 100 ns and then eventually decreased to ~103° in the last 20 ns of the 200-ns simulation (<a href="http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1006683#pcbi.1006683.s004" target="_blank">S4A Fig</a>). …”
  2. 12382

    Data file. by Maryam Khalaji (21743611)

    Published 2025
    “…The results showed that mental fatigue significantly decreased QE duration (Mean difference = −138.75 ms, p = .0009) and fixation duration (Mean difference = 67.50 ms, p = .001), indicating a detrimental effect on sustained attention. …”
  3. 12383

    Structure-Based Design, Synthesis, and Biological Evaluation of New Triazolo[1,5‑<i>a</i>]Pyrimidine Derivatives as Highly Potent and Orally Active ABCB1 Modulators by Shuai Wang (109515)

    Published 2020
    “…In this work, we reported the structure-based design of triazolo[1,5-<i>a</i>]pyrimidines as new ABCB1 modulators, of which <b>WS-691</b> significantly increased sensitization of ABCB1-overexpressed SW620/Ad300 cells to paclitaxel (PTX) (IC<sub>50</sub> = 22.02 nM). …”
  6. 12386

    Dataset visualization diagram. by Yaojun Zhang (389482)

    Published 2025
    “…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting recognition speed by 21.3% per second, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …”
  7. 12387

    Dataset sample images. by Yaojun Zhang (389482)

    Published 2025
    “…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting recognition speed by 21.3% per second, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …”
  8. 12388

    Performance comparison of different models. by Yaojun Zhang (389482)

    Published 2025
    “…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting recognition speed by 21.3% per second, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …”
  9. 12389

    C2f and BC2f module structure diagrams. by Yaojun Zhang (389482)

    Published 2025
    “…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting recognition speed by 21.3% per second, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …”
  10. 12390

    YOLOv8n detection results diagram. by Yaojun Zhang (389482)

    Published 2025
    “…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting recognition speed by 21.3% per second, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …”
  11. 12391

    YOLOv8n-BWG model structure diagram. by Yaojun Zhang (389482)

    Published 2025
    “…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting recognition speed by 21.3% per second, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …”
  12. 12392

    BiFormer structure diagram. by Yaojun Zhang (389482)

    Published 2025
    “…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting recognition speed by 21.3% per second, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …”
  13. 12393

    YOLOv8n-BWG detection results diagram. by Yaojun Zhang (389482)

    Published 2025
    “…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting recognition speed by 21.3% per second, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …”
  14. 12394

    GSConv module structure diagram. by Yaojun Zhang (389482)

    Published 2025
    “…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting recognition speed by 21.3% per second, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …”
  15. 12395

    mAP0.5 Curves of various models. by Yaojun Zhang (389482)

    Published 2025
    “…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting recognition speed by 21.3% per second, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …”
  16. 12396

    Network loss function change diagram. by Yaojun Zhang (389482)

    Published 2025
    “…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting recognition speed by 21.3% per second, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …”
  17. 12397

    Comparative diagrams of different indicators. by Yaojun Zhang (389482)

    Published 2025
    “…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting recognition speed by 21.3% per second, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …”
  18. 12398

    YOLOv8n structure diagram. by Yaojun Zhang (389482)

    Published 2025
    “…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting recognition speed by 21.3% per second, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …”
  19. 12399

    Geometric model of the binocular system. by Yaojun Zhang (389482)

    Published 2025
    “…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting recognition speed by 21.3% per second, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …”
  20. 12400

    Enhanced dataset sample images. by Yaojun Zhang (389482)

    Published 2025
    “…Results on a specialized dataset reveal that YOLOv8n-BWG outperforms YOLOv8n by increasing the mean Average Precision (mAP) by 4.2%, boosting recognition speed by 21.3% per second, and decreasing both the number of floating-point operations (FLOPs) by 28.9% and model size by 26.3%. …”