Search alternatives:
significantly improved » significantly increased (expand search)
significantly better » significantly greater (expand search), significantly higher (expand search), significantly lower (expand search)
improved decrease » improved urease (expand search), marked decrease (expand search)
better decrease » greater decrease (expand search), teer decrease (expand search), between decreased (expand search)
-
41 MGPC module.
42 Comparative experiment.
43 Pruning experiment.
44 Parameter setting table.
45 DTADH module.
46 Ablation experiment.
47 Multi-scale detection.
48 MFDPN module.
49 Detection effect of different sizes.
50 Radar chart comparing indicators.
51 MFD-YOLO structure.
52 Detection results of each category.
All from the same publication (2025): "…Experimental results indicate that at a pruning level of 1.5, mAP@0.5 and mAP@0.5:0.95 improved by 3.9% and 4.6%, respectively, while computational load decreased by 21% and parameter count dropped by 53%. …"
-
53 Overall model framework.
54 Key parameters of LSTM training model.
55 Comparison chart of model evaluation results.
56 Model performance evaluation results.
57 Results compared with other existing methods.
All from the same publication (2024): "…The results show that: (1) From the experimental data of word sense disambiguation, the accuracy of the SMOSS-LSTM model proposed in this paper is the lowest when the context range is "3+3", then it rises in turn at "5+5" and "7+7", reaches the highest at "7+7", and then begins to decrease at "10+10"; (2) Compared with the control group, the accuracy of syntactic analysis in the experimental group reached 89.5%, while that in the control group was only 73.2%. (3) In the aspect of English text error detection, the detection accuracy of the proposed model in the experimental group is as high as 94.8%, which is significantly better than the traditional SMOSS-based text error detection method, and its accuracy is only 68.3%. (4) Compared with other existing researches, although it is slightly inferior to Bidirectional Encoder Representations from Transformers (BERT) in word sense disambiguation, this proposed model performs well in syntactic analysis and English text error detection, and its comprehensive performance is excellent. …"
-
58 Accuracy test results.
59 Experiment environment and parameters.
60 Test results for NME and FR.
All from the same publication (2025): "…The research results show that after 600 rounds of training on the CIFAR-10 dataset, the final accuracy of the improved model reached 97.2%. The runtime memory usage on the CIFAR-100 dataset is only 44.52%, a decrease of 44.56% compared to the baseline model. …"