Showing results 41 - 60 of 366 for '(( significantly better decrease ) OR ( significantly improved decrease ))~', query time: 0.46s
  41. MGPC module. by Bo Tong (2138632)
    Published in 2025
    "…Experimental results indicate that at a pruning level of 1.5, mAP@0.5 and mAP@0.5:0.95 improved by 3.9% and 4.6%, respectively, while computational load decreased by 21% and parameter count dropped by 53%. …"
  42. Comparative experiment. by Bo Tong (2138632)
    Published in 2025
    "…Experimental results indicate that at a pruning level of 1.5, mAP@0.5 and mAP@0.5:0.95 improved by 3.9% and 4.6%, respectively, while computational load decreased by 21% and parameter count dropped by 53%. …"
  43. Pruning experiment. by Bo Tong (2138632)
    Published in 2025
    "…Experimental results indicate that at a pruning level of 1.5, mAP@0.5 and mAP@0.5:0.95 improved by 3.9% and 4.6%, respectively, while computational load decreased by 21% and parameter count dropped by 53%. …"
  44. Parameter setting table. by Bo Tong (2138632)
    Published in 2025
    "…Experimental results indicate that at a pruning level of 1.5, mAP@0.5 and mAP@0.5:0.95 improved by 3.9% and 4.6%, respectively, while computational load decreased by 21% and parameter count dropped by 53%. …"
  45. DTADH module. by Bo Tong (2138632)
    Published in 2025
    "…Experimental results indicate that at a pruning level of 1.5, mAP@0.5 and mAP@0.5:0.95 improved by 3.9% and 4.6%, respectively, while computational load decreased by 21% and parameter count dropped by 53%. …"
  46. Ablation experiment. by Bo Tong (2138632)
    Published in 2025
    "…Experimental results indicate that at a pruning level of 1.5, mAP@0.5 and mAP@0.5:0.95 improved by 3.9% and 4.6%, respectively, while computational load decreased by 21% and parameter count dropped by 53%. …"
  47. Multi-scale detection. by Bo Tong (2138632)
    Published in 2025
    "…Experimental results indicate that at a pruning level of 1.5, mAP@0.5 and mAP@0.5:0.95 improved by 3.9% and 4.6%, respectively, while computational load decreased by 21% and parameter count dropped by 53%. …"
  48. MFDPN module. by Bo Tong (2138632)
    Published in 2025
    "…Experimental results indicate that at a pruning level of 1.5, mAP@0.5 and mAP@0.5:0.95 improved by 3.9% and 4.6%, respectively, while computational load decreased by 21% and parameter count dropped by 53%. …"
  49. Detection effect of different sizes. by Bo Tong (2138632)
    Published in 2025
    "…Experimental results indicate that at a pruning level of 1.5, mAP@0.5 and mAP@0.5:0.95 improved by 3.9% and 4.6%, respectively, while computational load decreased by 21% and parameter count dropped by 53%. …"
  50. Radar chart comparing indicators. by Bo Tong (2138632)
    Published in 2025
    "…Experimental results indicate that at a pruning level of 1.5, mAP@0.5 and mAP@0.5:0.95 improved by 3.9% and 4.6%, respectively, while computational load decreased by 21% and parameter count dropped by 53%. …"
  51. MFD-YOLO structure. by Bo Tong (2138632)
    Published in 2025
    "…Experimental results indicate that at a pruning level of 1.5, mAP@0.5 and mAP@0.5:0.95 improved by 3.9% and 4.6%, respectively, while computational load decreased by 21% and parameter count dropped by 53%. …"
  52. Detection results of each category. by Bo Tong (2138632)
    Published in 2025
    "…Experimental results indicate that at a pruning level of 1.5, mAP@0.5 and mAP@0.5:0.95 improved by 3.9% and 4.6%, respectively, while computational load decreased by 21% and parameter count dropped by 53%. …"
  53. Overall model framework. by Ke Yan (331581)
    Published in 2024
    "…The results show that: (1) From the experimental data of word sense disambiguation, the accuracy of the SMOSS-LSTM model proposed in this paper is the lowest when the context range is "3+3", then it rises in turn at "5+5" and "7+7", reaches the highest at "7+7", and then begins to decrease at "10+10"; (2) Compared with the control group, the accuracy of syntactic analysis in the experimental group reached 89.5%, while that in the control group was only 73.2%. (3) In the aspect of English text error detection, the detection accuracy of the proposed model in the experimental group is as high as 94.8%, which is significantly better than the traditional SMOSS-based text error detection method, and its accuracy is only 68.3%. (4) Compared with other existing researches, although it is slightly inferior to Bidirectional Encoder Representations from Transformers (BERT) in word sense disambiguation, this proposed model performs well in syntactic analysis and English text error detection, and its comprehensive performance is excellent. …"
  54. Key parameters of the LSTM training model. by Ke Yan (331581)
    Published in 2024
    "…The results show that: (1) From the experimental data of word sense disambiguation, the accuracy of the SMOSS-LSTM model proposed in this paper is the lowest when the context range is "3+3", then it rises in turn at "5+5" and "7+7", reaches the highest at "7+7", and then begins to decrease at "10+10"; (2) Compared with the control group, the accuracy of syntactic analysis in the experimental group reached 89.5%, while that in the control group was only 73.2%. (3) In the aspect of English text error detection, the detection accuracy of the proposed model in the experimental group is as high as 94.8%, which is significantly better than the traditional SMOSS-based text error detection method, and its accuracy is only 68.3%. (4) Compared with other existing researches, although it is slightly inferior to Bidirectional Encoder Representations from Transformers (BERT) in word sense disambiguation, this proposed model performs well in syntactic analysis and English text error detection, and its comprehensive performance is excellent. …"
  55. Comparison chart of model evaluation results. by Ke Yan (331581)
    Published in 2024
    "…The results show that: (1) From the experimental data of word sense disambiguation, the accuracy of the SMOSS-LSTM model proposed in this paper is the lowest when the context range is "3+3", then it rises in turn at "5+5" and "7+7", reaches the highest at "7+7", and then begins to decrease at "10+10"; (2) Compared with the control group, the accuracy of syntactic analysis in the experimental group reached 89.5%, while that in the control group was only 73.2%. (3) In the aspect of English text error detection, the detection accuracy of the proposed model in the experimental group is as high as 94.8%, which is significantly better than the traditional SMOSS-based text error detection method, and its accuracy is only 68.3%. (4) Compared with other existing researches, although it is slightly inferior to Bidirectional Encoder Representations from Transformers (BERT) in word sense disambiguation, this proposed model performs well in syntactic analysis and English text error detection, and its comprehensive performance is excellent. …"
  56. Model performance evaluation results. by Ke Yan (331581)
    Published in 2024
    "…The results show that: (1) From the experimental data of word sense disambiguation, the accuracy of the SMOSS-LSTM model proposed in this paper is the lowest when the context range is "3+3", then it rises in turn at "5+5" and "7+7", reaches the highest at "7+7", and then begins to decrease at "10+10"; (2) Compared with the control group, the accuracy of syntactic analysis in the experimental group reached 89.5%, while that in the control group was only 73.2%. (3) In the aspect of English text error detection, the detection accuracy of the proposed model in the experimental group is as high as 94.8%, which is significantly better than the traditional SMOSS-based text error detection method, and its accuracy is only 68.3%. (4) Compared with other existing researches, although it is slightly inferior to Bidirectional Encoder Representations from Transformers (BERT) in word sense disambiguation, this proposed model performs well in syntactic analysis and English text error detection, and its comprehensive performance is excellent. …"
  57. Results compared with other existing methods. by Ke Yan (331581)
    Published in 2024
    "…The results show that: (1) From the experimental data of word sense disambiguation, the accuracy of the SMOSS-LSTM model proposed in this paper is the lowest when the context range is "3+3", then it rises in turn at "5+5" and "7+7", reaches the highest at "7+7", and then begins to decrease at "10+10"; (2) Compared with the control group, the accuracy of syntactic analysis in the experimental group reached 89.5%, while that in the control group was only 73.2%. (3) In the aspect of English text error detection, the detection accuracy of the proposed model in the experimental group is as high as 94.8%, which is significantly better than the traditional SMOSS-based text error detection method, and its accuracy is only 68.3%. (4) Compared with other existing researches, although it is slightly inferior to Bidirectional Encoder Representations from Transformers (BERT) in word sense disambiguation, this proposed model performs well in syntactic analysis and English text error detection, and its comprehensive performance is excellent. …"
  58. Accuracy test results. by Yuxuan Ji (13991895)
    Published in 2025
    "…The research results show that after 600 rounds of training on the CIFAR-10 dataset, the final accuracy of the improved model reached 97.2%. The runtime memory usage on the CIFAR-100 dataset is only 44.52%, a decrease of 44.56% compared to the baseline model. …"
  59. Experiment environment and parameters. by Yuxuan Ji (13991895)
    Published in 2025
    "…The research results show that after 600 rounds of training on the CIFAR-10 dataset, the final accuracy of the improved model reached 97.2%. The runtime memory usage on the CIFAR-100 dataset is only 44.52%, a decrease of 44.56% compared to the baseline model. …"
  60. Test results for NME and FR. by Yuxuan Ji (13991895)
    Published in 2025
    "…The research results show that after 600 rounds of training on the CIFAR-10 dataset, the final accuracy of the improved model reached 97.2%. The runtime memory usage on the CIFAR-100 dataset is only 44.52%, a decrease of 44.56% compared to the baseline model. …"