Showing results 19,601 - 19,620 of 36,050 for search '(( significantly level increased ) OR ( significant decrease decrease ))', query time: 0.55s
  1. 19601

    CNN model. by Longfei Gao (698900)

    Published 2025
    “…According to the experimental results, when the grinding depth increases to 21 μm, the average training loss of the model further decreases to 0.03622, and the surface roughness Ra value significantly decreases to 0.1624 μm. …”
  2. 19602

    Ceramic bearings. by Longfei Gao (698900)

    Published 2025
    “…According to the experimental results, when the grinding depth increases to 21 μm, the average training loss of the model further decreases to 0.03622, and the surface roughness Ra value significantly decreases to 0.1624 μm. …”
  3. 19603

    Geometric contact arc length model. by Longfei Gao (698900)

    Published 2025
    “…According to the experimental results, when the grinding depth increases to 21 μm, the average training loss of the model further decreases to 0.03622, and the surface roughness Ra value significantly decreases to 0.1624 μm. …”
  4. 19604

    Indentation fracture mechanics model. by Longfei Gao (698900)

    Published 2025
    “…According to the experimental results, when the grinding depth increases to 21 μm, the average training loss of the model further decreases to 0.03622, and the surface roughness Ra value significantly decreases to 0.1624 μm. …”
  5. 19605

    Grinding particle cutting machining model. by Longfei Gao (698900)

    Published 2025
    “…According to the experimental results, when the grinding depth increases to 21 μm, the average training loss of the model further decreases to 0.03622, and the surface roughness Ra value significantly decreases to 0.1624 μm. …”
  6. 19606

    Three stages of abrasive cutting process. by Longfei Gao (698900)

    Published 2025
    “…According to the experimental results, when the grinding depth increases to 21 μm, the average training loss of the model further decreases to 0.03622, and the surface roughness Ra value significantly decreases to 0.1624 μm. …”
  7. 19607

    CNN-LSTM action recognition process. by Longfei Gao (698900)

    Published 2025
    “…According to the experimental results, when the grinding depth increases to 21 μm, the average training loss of the model further decreases to 0.03622, and the surface roughness Ra value significantly decreases to 0.1624 μm. …”
  8. 19608
  9. 19609
  10. 19610

    Raw data. by Sura Nashwan (21485672)

    Published 2025
    “…Furthermore, both iMSC- and ADMSC-derived EVs significantly increased HDF viability at 48 and 72 hours (p ≤ 0.01, p ≤ 0.05). …”
  11. 19611

    Overall model framework. by Ke Yan (331581)

    Published 2024
    “…The results show that: (1) in the word sense disambiguation experiments, the accuracy of the proposed SMOSS-LSTM model is lowest when the context range is "3+3", rises through "5+5" to a peak at "7+7", and then begins to decrease at "10+10"; (2) in syntactic analysis, the experimental group reached an accuracy of 89.5%, compared with only 73.2% in the control group; (3) in English text error detection, the proposed model achieves a detection accuracy of 94.8%, significantly better than the traditional SMOSS-based error detection method, whose accuracy is only 68.3%; (4) compared with other existing research, although it is slightly inferior to Bidirectional Encoder Representations from Transformers (BERT) in word sense disambiguation, the proposed model performs well in syntactic analysis and English text error detection, and its overall performance is excellent. …”
  12. 19612

    Key parameters of LSTM training model. by Ke Yan (331581)

    Published 2024
    “…The results show that: (1) in the word sense disambiguation experiments, the accuracy of the proposed SMOSS-LSTM model is lowest when the context range is "3+3", rises through "5+5" to a peak at "7+7", and then begins to decrease at "10+10"; (2) in syntactic analysis, the experimental group reached an accuracy of 89.5%, compared with only 73.2% in the control group; (3) in English text error detection, the proposed model achieves a detection accuracy of 94.8%, significantly better than the traditional SMOSS-based error detection method, whose accuracy is only 68.3%; (4) compared with other existing research, although it is slightly inferior to Bidirectional Encoder Representations from Transformers (BERT) in word sense disambiguation, the proposed model performs well in syntactic analysis and English text error detection, and its overall performance is excellent. …”
  13. 19613

    Comparison chart of model evaluation results. by Ke Yan (331581)

    Published 2024
    “…The results show that: (1) in the word sense disambiguation experiments, the accuracy of the proposed SMOSS-LSTM model is lowest when the context range is "3+3", rises through "5+5" to a peak at "7+7", and then begins to decrease at "10+10"; (2) in syntactic analysis, the experimental group reached an accuracy of 89.5%, compared with only 73.2% in the control group; (3) in English text error detection, the proposed model achieves a detection accuracy of 94.8%, significantly better than the traditional SMOSS-based error detection method, whose accuracy is only 68.3%; (4) compared with other existing research, although it is slightly inferior to Bidirectional Encoder Representations from Transformers (BERT) in word sense disambiguation, the proposed model performs well in syntactic analysis and English text error detection, and its overall performance is excellent. …”
  14. 19614

    Model performance evaluation results. by Ke Yan (331581)

    Published 2024
    “…The results show that: (1) in the word sense disambiguation experiments, the accuracy of the proposed SMOSS-LSTM model is lowest when the context range is "3+3", rises through "5+5" to a peak at "7+7", and then begins to decrease at "10+10"; (2) in syntactic analysis, the experimental group reached an accuracy of 89.5%, compared with only 73.2% in the control group; (3) in English text error detection, the proposed model achieves a detection accuracy of 94.8%, significantly better than the traditional SMOSS-based error detection method, whose accuracy is only 68.3%; (4) compared with other existing research, although it is slightly inferior to Bidirectional Encoder Representations from Transformers (BERT) in word sense disambiguation, the proposed model performs well in syntactic analysis and English text error detection, and its overall performance is excellent. …”
  15. 19615

    The result compared with other existing methods. by Ke Yan (331581)

    Published 2024
    “…The results show that: (1) in the word sense disambiguation experiments, the accuracy of the proposed SMOSS-LSTM model is lowest when the context range is "3+3", rises through "5+5" to a peak at "7+7", and then begins to decrease at "10+10"; (2) in syntactic analysis, the experimental group reached an accuracy of 89.5%, compared with only 73.2% in the control group; (3) in English text error detection, the proposed model achieves a detection accuracy of 94.8%, significantly better than the traditional SMOSS-based error detection method, whose accuracy is only 68.3%; (4) compared with other existing research, although it is slightly inferior to Bidirectional Encoder Representations from Transformers (BERT) in word sense disambiguation, the proposed model performs well in syntactic analysis and English text error detection, and its overall performance is excellent. …”
  16. 19616
  17. 19617

    Hybrid Molecules of Benzothiazole and Hydroxamic Acid as Dual-Acting Biofilm Inhibitors with Antibacterial Synergistic Effect against Pseudomonas aeruginosa Infections by Zhen-Meng Zhang (20874497)

    Published 2025
    “…Moreover, JH21 significantly enhanced the efficacy of tobramycin and ciprofloxacin by 200- and 1000-fold, respectively, in a mouse wound infection model. …”
  18. 19618

    Forest plot for sex-based differences in anxiety. by Liang-Tseng Kuo (4861600)

    Published 2025
    “…The COVID-19 pandemic significantly affected elite athletes, leading to increased mental health issues such as stress, anxiety, and depression. …”
  19. 19619

    S2 Data - by Liang-Tseng Kuo (4861600)

    Published 2025
    “…The COVID-19 pandemic significantly affected elite athletes, leading to increased mental health issues such as stress, anxiety, and depression. …”
  20. 19620

    Characteristics of included studies. by Liang-Tseng Kuo (4861600)

    Published 2025
    “…The COVID-19 pandemic significantly affected elite athletes, leading to increased mental health issues such as stress, anxiety, and depression. …”