Search alternatives:
better decrease » greater decrease, between decreased
teer decrease » mean decrease, greater decrease
361.
362. MGPC module.
363. Comparative experiment.
364. Pruning experiment.
365. Parameter setting table.
366. DTADH module.
367. Ablation experiment.
368. Multi scale detection.
369. MFDPN module.
370. Detection effect of different sizes.
371. Radar chart comparing indicators.
372. MFD-YOLO structure.
373. Detection results of each category.
Published 2025; entries 362–373 share the same record snippet: “…Experimental results indicate that at a pruning level of 1.5, mAP@0.5 and mAP@0.5:0.95 improved by 3.9% and 4.6%, respectively, while computational load decreased by 21% and parameter count dropped by 53%. …”
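The snippet shared by entries 362–373 quotes relative reductions (computational load down 21%, parameter count down 53%) alongside mAP gains. As a rough, purely illustrative sketch of how such percentage reductions are computed, the Python below uses hypothetical before/after model statistics; the actual baseline and pruned counts are not given in the snippet, and the numbers here are chosen only so the reductions land near the quoted 21% and 53%.

```python
# Illustrative only: relative reductions of the kind quoted in the record
# snippet ("computational load decreased by 21%", "parameter count dropped
# by 53%"). The baseline/pruned values below are hypothetical, not taken
# from the cited study.

def relative_reduction(before: float, after: float) -> float:
    """Percentage reduction going from `before` to `after`."""
    return (before - after) / before * 100.0

baseline = {"params_M": 3.00, "gflops": 8.1}   # parameters (millions), compute (GFLOPs)
pruned   = {"params_M": 1.41, "gflops": 6.4}   # after pruning at some pruning level

for key in baseline:
    drop = relative_reduction(baseline[key], pruned[key])
    print(f"{key}: {baseline[key]} -> {pruned[key]} ({drop:.0f}% reduction)")
```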
374.
375.
376. Raw data of Figs 1–6 in this study.
Published 2025: “…When gut epithelial PGAM5 receptor and apoptosis were inhibited by PGAM5-specific siRNA, inhibitor (LFHP-1c) and apoptosis inhibitor (Z-VAD-FMK), trans-epithelial electrical resistance (TEER) and TJs expression were obviously increased, and intestinal permeability was evidently decreased. …”
377. Table 1_Prognostic significance of early alpha fetoprotein and des-gamma carboxy prothrombin responses in unresectable hepatocellular carcinoma patients undergoing triple combinati...
Published 2024: “…Conclusion: AFP or DCP response at 6-8 weeks post-therapy predicts better oncological outcomes in patients with uHCC treated with triple therapy. …”
378.
379. Overall model framework.
380. Key parameters of LSTM training model.
Published 2024; entries 379–380 share the same record snippet: “…The results show that: (1) From the experimental data of word sense disambiguation, the accuracy of the SMOSS-LSTM model proposed in this paper is the lowest when the context range is "3+3", then it rises in turn at "5+5" and "7+7", reaches the highest at "7+7", and then begins to decrease at "10+10"; (2) Compared with the control group, the accuracy of syntactic analysis in the experimental group reached 89.5%, while that in the control group was only 73.2%. (3) In the aspect of English text error detection, the detection accuracy of the proposed model in the experimental group is as high as 94.8%, which is significantly better than the traditional SMOSS-based text error detection method, and its accuracy is only 68.3%. (4) Compared with other existing researches, although it is slightly inferior to Bidirectional Encoder Representations from Transformers (BERT) in word sense disambiguation, this proposed model performs well in syntactic analysis and English text error detection, and its comprehensive performance is excellent. …”
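In the snippet above, the "3+3", "5+5", "7+7" and "10+10" context ranges refer to taking that many words on each side of the target word. As a loose illustration of that idea only (the snippet does not describe the SMOSS-LSTM preprocessing itself), the sketch below extracts a symmetric n+n context window around an ambiguous token:

```python
# Loose illustration of an "n+n" context window (n words on each side of the
# target word), as referenced in the record snippet. This is a generic sketch,
# not the preprocessing pipeline of the cited SMOSS-LSTM model.

def context_window(tokens: list[str], target_index: int, n: int) -> list[str]:
    """Return up to n tokens before and after the target token (target excluded)."""
    left = tokens[max(0, target_index - n):target_index]
    right = tokens[target_index + 1:target_index + 1 + n]
    return left + right

sentence = "the bank approved the loan after surveying the river bank erosion".split()
target = sentence.index("bank")          # first, ambiguous occurrence of "bank"
for n in (3, 5, 7, 10):                  # mirrors the "3+3", "5+5", "7+7", "10+10" ranges
    print(f"{n}+{n}:", context_window(sentence, target, n))
```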