Showing 19,641 - 19,660 results of 36,050 for search '(( significant ((level increased) OR (greater decrease)) ) OR ( significant decrease decrease ))', query time: 0.70s
  1. 19641

    Key modeling details for CoVPF and controls. by Zhong-yi Lei (22552944)

    Published 2025
    “…Furthermore, we found that accounting for epistasis was critical, as ignoring epistasis led to a 43% decrease in forecasting accuracy. Case studies showed that CoVPF delivered more accurate and timely forecasts for lineage expansions and inflections such as EG.5.1 and XBB.1.5. …”
  2. 19642

    Dataset. by Yehong Zhang (21615640)

    Published 2025
    “…Results: Elevated NT-proBNP levels were significantly linked to increased all-cause (pooled OR = 2.322, 95% CI: 1.718 to 2.925) and cardiovascular mortality (pooled OR = 1.797, 95% CI: 1.161 to 2.433). …”
  3. 19643

    Risk of bias assessment of the included studies. by Yehong Zhang (21615640)

    Published 2025
    “…Results: Elevated NT-proBNP levels were significantly linked to increased all-cause (pooled OR = 2.322, 95% CI: 1.718 to 2.925) and cardiovascular mortality (pooled OR = 1.797, 95% CI: 1.161 to 2.433). …”
  4. 19644

    Funnel plot for all-cause mortality. by Yehong Zhang (21615640)

    Published 2025
    “…Results: Elevated NT-proBNP levels were significantly linked to increased all-cause (pooled OR = 2.322, 95% CI: 1.718 to 2.925) and cardiovascular mortality (pooled OR = 1.797, 95% CI: 1.161 to 2.433). …”
  5. 19645

    PRISMA flowchart. by Yehong Zhang (21615640)

    Published 2025
    “…Results: Elevated NT-proBNP levels were significantly linked to increased all-cause (pooled OR = 2.322, 95% CI: 1.718 to 2.925) and cardiovascular mortality (pooled OR = 1.797, 95% CI: 1.161 to 2.433). …”
  6. 19646

    List of excluded studies with reasons. by Yehong Zhang (21615640)

    Published 2025
    “…Results: Elevated NT-proBNP levels were significantly linked to increased all-cause (pooled OR = 2.322, 95% CI: 1.718 to 2.925) and cardiovascular mortality (pooled OR = 1.797, 95% CI: 1.161 to 2.433). …”
  7. 19647

    Funnel plot for functional outcomes. by Yehong Zhang (21615640)

    Published 2025
    “…Results: Elevated NT-proBNP levels were significantly linked to increased all-cause (pooled OR = 2.322, 95% CI: 1.718 to 2.925) and cardiovascular mortality (pooled OR = 1.797, 95% CI: 1.161 to 2.433). …”
  8. 19648

    Characteristics of the included studies. by Yehong Zhang (21615640)

    Published 2025
    “…Results: Elevated NT-proBNP levels were significantly linked to increased all-cause (pooled OR = 2.322, 95% CI: 1.718 to 2.925) and cardiovascular mortality (pooled OR = 1.797, 95% CI: 1.161 to 2.433). …”
  9. 19649

    LSTM model. by Longfei Gao (698900)

    Published 2025
    “…According to the experimental results, when the grinding depth increases to 21 μm, the average training loss of the model further decreases to 0.03622, and the surface roughness Ra value significantly decreases to 0.1624 μm. …”
  10. 19650

    CNN model. by Longfei Gao (698900)

    Published 2025
    “…According to the experimental results, when the grinding depth increases to 21 μm, the average training loss of the model further decreases to 0.03622, and the surface roughness Ra value significantly decreases to 0.1624 μm. …”
  11. 19651

    Ceramic bearings. by Longfei Gao (698900)

    Published 2025
    “…According to the experimental results, when the grinding depth increases to 21 μm, the average training loss of the model further decreases to 0.03622, and the surface roughness Ra value significantly decreases to 0.1624 μm. …”
  12. 19652

    Geometric contact arc length model. by Longfei Gao (698900)

    Published 2025
    “…According to the experimental results, when the grinding depth increases to 21 μm, the average training loss of the model further decreases to 0.03622, and the surface roughness Ra value significantly decreases to 0.1624 μm. …”
  13. 19653

    Indentation fracture mechanics model. by Longfei Gao (698900)

    Published 2025
    “…According to the experimental results, when the grinding depth increases to 21 μm, the average training loss of the model further decreases to 0.03622, and the surface roughness Ra value significantly decreases to 0.1624 μm. …”
  14. 19654

    Grinding particle cutting machining model. by Longfei Gao (698900)

    Published 2025
    “…According to the experimental results, when the grinding depth increases to 21 μm, the average training loss of the model further decreases to 0.03622, and the surface roughness Ra value significantly decreases to 0.1624 μm. …”
  15. 19655

    Three stages of abrasive cutting process. by Longfei Gao (698900)

    Published 2025
    “…According to the experimental results, when the grinding depth increases to 21 μm, the average training loss of the model further decreases to 0.03622, and the surface roughness Ra value significantly decreases to 0.1624 μm. …”
  16. 19656

    CNN-LSTM action recognition process. by Longfei Gao (698900)

    Published 2025
    “…According to the experimental results, when the grinding depth increases to 21 μm, the average training loss of the model further decreases to 0.03622, and the surface roughness Ra value significantly decreases to 0.1624 μm. …”
  17. 19657
  18. 19658

    Overall model framework. by Ke Yan (331581)

    Published 2024
    “…The results show that: (1) From the experimental data of word sense disambiguation, the accuracy of the SMOSS-LSTM model proposed in this paper is lowest when the context range is "3+3", rises in turn at "5+5" and "7+7", reaching its highest at "7+7", and then begins to decrease at "10+10"; (2) Compared with the control group, the accuracy of syntactic analysis in the experimental group reached 89.5%, while that in the control group was only 73.2%; (3) In English text error detection, the detection accuracy of the proposed model in the experimental group is as high as 94.8%, significantly better than the traditional SMOSS-based text error detection method, whose accuracy is only 68.3%; (4) Compared with other existing research, although it is slightly inferior to Bidirectional Encoder Representations from Transformers (BERT) in word sense disambiguation, the proposed model performs well in syntactic analysis and English text error detection, and its comprehensive performance is excellent. …”
  19. 19659

    Key parameters of LSTM training model. by Ke Yan (331581)

    Published 2024
    “…The results show that: (1) From the experimental data of word sense disambiguation, the accuracy of the SMOSS-LSTM model proposed in this paper is lowest when the context range is "3+3", rises in turn at "5+5" and "7+7", reaching its highest at "7+7", and then begins to decrease at "10+10"; (2) Compared with the control group, the accuracy of syntactic analysis in the experimental group reached 89.5%, while that in the control group was only 73.2%; (3) In English text error detection, the detection accuracy of the proposed model in the experimental group is as high as 94.8%, significantly better than the traditional SMOSS-based text error detection method, whose accuracy is only 68.3%; (4) Compared with other existing research, although it is slightly inferior to Bidirectional Encoder Representations from Transformers (BERT) in word sense disambiguation, the proposed model performs well in syntactic analysis and English text error detection, and its comprehensive performance is excellent. …”
  20. 19660

    Comparison chart of model evaluation results. by Ke Yan (331581)

    Published 2024
    “…The results show that: (1) From the experimental data of word sense disambiguation, the accuracy of the SMOSS-LSTM model proposed in this paper is lowest when the context range is "3+3", rises in turn at "5+5" and "7+7", reaching its highest at "7+7", and then begins to decrease at "10+10"; (2) Compared with the control group, the accuracy of syntactic analysis in the experimental group reached 89.5%, while that in the control group was only 73.2%; (3) In English text error detection, the detection accuracy of the proposed model in the experimental group is as high as 94.8%, significantly better than the traditional SMOSS-based text error detection method, whose accuracy is only 68.3%; (4) Compared with other existing research, although it is slightly inferior to Bidirectional Encoder Representations from Transformers (BERT) in word sense disambiguation, the proposed model performs well in syntactic analysis and English text error detection, and its comprehensive performance is excellent. …”