Showing 361 - 380 results of 1,012 for search 'significantly ((better decrease) OR (teer decrease))', query time: 0.34s
  1. 361
  2. 362

    MGPC module. by Bo Tong (2138632)

    Published 2025
    “…Experimental results indicate that at a pruning level of 1.5, mAP@0.5 and mAP@0.5:0.95 improved by 3.9% and 4.6%, respectively, while computational load decreased by 21% and parameter count dropped by 53%. …”
  3. 363

    Comparative experiment. by Bo Tong (2138632)

    Published 2025
    “…Experimental results indicate that at a pruning level of 1.5, mAP@0.5 and mAP@0.5:0.95 improved by 3.9% and 4.6%, respectively, while computational load decreased by 21% and parameter count dropped by 53%. …”
  4. 364

    Pruning experiment. by Bo Tong (2138632)

    Published 2025
    “…Experimental results indicate that at a pruning level of 1.5, mAP@0.5 and mAP@0.5:0.95 improved by 3.9% and 4.6%, respectively, while computational load decreased by 21% and parameter count dropped by 53%. …”
  5. 365

    Parameter setting table. by Bo Tong (2138632)

    Published 2025
    “…Experimental results indicate that at a pruning level of 1.5, mAP@0.5 and mAP@0.5:0.95 improved by 3.9% and 4.6%, respectively, while computational load decreased by 21% and parameter count dropped by 53%. …”
  6. 366

    DTADH module. by Bo Tong (2138632)

    Published 2025
    “…Experimental results indicate that at a pruning level of 1.5, mAP@0.5 and mAP@0.5:0.95 improved by 3.9% and 4.6%, respectively, while computational load decreased by 21% and parameter count dropped by 53%. …”
  7. 367

    Ablation experiment. by Bo Tong (2138632)

    Published 2025
    “…Experimental results indicate that at a pruning level of 1.5, mAP@0.5 and mAP@0.5:0.95 improved by 3.9% and 4.6%, respectively, while computational load decreased by 21% and parameter count dropped by 53%. …”
  8. 368

    Multi-scale detection. by Bo Tong (2138632)

    Published 2025
    “…Experimental results indicate that at a pruning level of 1.5, mAP@0.5 and mAP@0.5:0.95 improved by 3.9% and 4.6%, respectively, while computational load decreased by 21% and parameter count dropped by 53%. …”
  9. 369

    MFDPN module. by Bo Tong (2138632)

    Published 2025
    “…Experimental results indicate that at a pruning level of 1.5, mAP@0.5 and mAP@0.5:0.95 improved by 3.9% and 4.6%, respectively, while computational load decreased by 21% and parameter count dropped by 53%. …”
  10. 370

    Detection effect of different sizes. by Bo Tong (2138632)

    Published 2025
    “…Experimental results indicate that at a pruning level of 1.5, mAP@0.5 and mAP@0.5:0.95 improved by 3.9% and 4.6%, respectively, while computational load decreased by 21% and parameter count dropped by 53%. …”
  11. 371

    Radar chart comparing indicators. by Bo Tong (2138632)

    Published 2025
    “…Experimental results indicate that at a pruning level of 1.5, mAP@0.5 and mAP@0.5:0.95 improved by 3.9% and 4.6%, respectively, while computational load decreased by 21% and parameter count dropped by 53%. …”
  12. 372

    MFD-YOLO structure. by Bo Tong (2138632)

    Published 2025
    “…Experimental results indicate that at a pruning level of 1.5, mAP@0.5 and mAP@0.5:0.95 improved by 3.9% and 4.6%, respectively, while computational load decreased by 21% and parameter count dropped by 53%. …”
  13. 373

    Detection results of each category. by Bo Tong (2138632)

    Published 2025
    “…Experimental results indicate that at a pruning level of 1.5, mAP@0.5 and mAP@0.5:0.95 improved by 3.9% and 4.6%, respectively, while computational load decreased by 21% and parameter count dropped by 53%. …”
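The Bo Tong entries above all excerpt the same finding: at a pruning level of 1.5, mAP@0.5 and mAP@0.5:0.95 improved by 3.9% and 4.6% while computational load fell by 21% and parameter count by 53%. The snippet does not say which pruning criterion the work uses or how "pruning level 1.5" is defined, so the sketch below only illustrates one common structured-pruning approach, L1-norm channel ranking in PyTorch; the function names and the keep ratio are illustrative assumptions, not the paper's method.

```python
# A minimal sketch of L1-norm channel ranking for structured pruning (PyTorch).
# Illustrative only: the cited work's criterion and "pruning level" are unknown.
import torch
import torch.nn as nn

def l1_channel_scores(conv: nn.Conv2d) -> torch.Tensor:
    """Score each output channel by the L1 norm of its filter weights."""
    # conv.weight shape: (out_channels, in_channels, kH, kW)
    return conv.weight.detach().abs().sum(dim=(1, 2, 3))

def channels_to_keep(conv: nn.Conv2d, keep_ratio: float) -> torch.Tensor:
    """Indices of the highest-scoring output channels to retain."""
    scores = l1_channel_scores(conv)
    k = max(1, round(keep_ratio * conv.out_channels))
    return torch.topk(scores, k).indices.sort().values

# Example: shrink a conv layer to 70% of its output channels (hypothetical ratio).
conv = nn.Conv2d(64, 128, kernel_size=3, padding=1)
keep = channels_to_keep(conv, keep_ratio=0.7)
pruned = nn.Conv2d(64, len(keep), kernel_size=3, padding=1)
pruned.weight.data = conv.weight.data[keep].clone()
pruned.bias.data = conv.bias.data[keep].clone()
```

Pruning a real detector additionally requires propagating the kept-channel indices into the next layer's input channels and fine-tuning the network afterward.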
  14. 374
  15. 375
  16. 376

    Raw data of Figs 1–6 in this study. by Qi Qi Lu (17721401)

    Published 2025
    “…When gut epithelial PGAM5 and apoptosis were inhibited with PGAM5-specific siRNA, a PGAM5 inhibitor (LFHP-1c), and an apoptosis inhibitor (Z-VAD-FMK), trans-epithelial electrical resistance (TEER) and tight-junction (TJ) expression increased markedly, and intestinal permeability decreased. …”
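Result 376 above concerns TEER, the quantity matched by the "teer decrease" arm of the query. TEER is conventionally reported per unit area: the blank-corrected resistance multiplied by the membrane area, in Ω·cm². A minimal sketch follows; the resistance readings and the 12-well Transwell growth area are illustrative values, not data from the cited study.

```python
def unit_area_teer(r_measured_ohm: float, r_blank_ohm: float, area_cm2: float) -> float:
    """Unit-area TEER in ohm*cm^2: blank-corrected resistance times membrane area."""
    return (r_measured_ohm - r_blank_ohm) * area_cm2

# Hypothetical example: a 12-well Transwell insert (1.12 cm^2 growth area),
# blank insert reading 120 ohm, cell-layer reading 450 ohm.
print(unit_area_teer(r_measured_ohm=450.0, r_blank_ohm=120.0, area_cm2=1.12))  # ~369.6
```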
  17. 377

    Table 1_Prognostic significance of early alpha fetoprotein and des-gamma carboxy prothrombin responses in unresectable hepatocellular carcinoma patients undergoing triple combinati... by Teng Zhang (457128)

    Published 2024
    “…Conclusion: AFP or DCP response at 6-8 weeks post-therapy predicts better oncological outcomes in patients with uHCC treated with triple therapy. …”
  18. 378
  19. 379

    Overall model framework. by Ke Yan (331581)

    Published 2024
    “…The results show that: (1) in the word sense disambiguation experiments, the accuracy of the proposed SMOSS-LSTM model is lowest at a context range of "3+3", rises through "5+5" to a peak at "7+7", and then declines at "10+10"; (2) the syntactic analysis accuracy of the experimental group reached 89.5%, versus only 73.2% for the control group; (3) in English text error detection, the proposed model achieved an accuracy of 94.8%, significantly better than the traditional SMOSS-based error detection method, which reached only 68.3%; (4) compared with other existing research, the proposed model is slightly inferior to Bidirectional Encoder Representations from Transformers (BERT) in word sense disambiguation, but it performs well in syntactic analysis and English text error detection, and its overall performance is excellent. …”
  20. 380

    Key parameters of LSTM training model. by Ke Yan (331581)

    Published 2024
    “…The results show that: (1) in the word sense disambiguation experiments, the accuracy of the proposed SMOSS-LSTM model is lowest at a context range of "3+3", rises through "5+5" to a peak at "7+7", and then declines at "10+10"; (2) the syntactic analysis accuracy of the experimental group reached 89.5%, versus only 73.2% for the control group; (3) in English text error detection, the proposed model achieved an accuracy of 94.8%, significantly better than the traditional SMOSS-based error detection method, which reached only 68.3%; (4) compared with other existing research, the proposed model is slightly inferior to Bidirectional Encoder Representations from Transformers (BERT) in word sense disambiguation, but it performs well in syntactic analysis and English text error detection, and its overall performance is excellent. …”
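Results 379 and 380 excerpt the same SMOSS-LSTM study, whose disambiguation accuracy peaks at a "7+7" context range. The snippet does not describe the architecture, so the sketch below only illustrates the presumed setup: a symmetric window of n tokens on each side of a target word fed to an LSTM sense classifier. The class name, the dimensions, and the reading of "n+n" as n tokens per side are all assumptions, not details from the cited work.

```python
# A minimal sketch of a symmetric-context LSTM classifier (PyTorch), assuming
# an "n+n" context range means n tokens on each side of the target word.
import torch
import torch.nn as nn

class ContextLSTM(nn.Module):
    def __init__(self, vocab_size: int, embed_dim: int = 128,
                 hidden_dim: int = 256, num_senses: int = 4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.classify = nn.Linear(hidden_dim, num_senses)

    def forward(self, window_ids: torch.Tensor) -> torch.Tensor:
        # window_ids: (batch, 2*n + 1) -- n left tokens, target word, n right tokens
        embedded = self.embed(window_ids)
        _, (h_n, _) = self.lstm(embedded)
        return self.classify(h_n[-1])  # logits over word senses

# A "7+7" context range gives windows of length 15.
model = ContextLSTM(vocab_size=30_000)
logits = model(torch.randint(0, 30_000, (8, 15)))  # batch of 8 windows
print(logits.shape)  # torch.Size([8, 4])
```

Under this reading, widening the window (e.g., "10+10") adds tokens the LSTM must carry through its hidden state, which is one plausible account of the accuracy drop the study reports beyond "7+7".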