Search alternatives:
significant decrease » significant increase, significantly increased
greater decrease » greatest decrease, greater increase, greater disease
level increased » levels increased, levels decreased, gene increased
-
19641
Key modeling details for CoVPF and controls.
Published 2025. “…Furthermore, we found that accounting for epistasis was critical, as ignoring epistasis led to a 43% decrease in forecasting accuracy. Case studies showed that CoVPF delivered more accurate and timely forecasts for lineage expansions and inflections such as EG.5.1 and XBB.1.5. …”
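A brief sketch of why epistasis terms can matter this much for fitness forecasting. CoVPF's actual formulation is not given in the snippet above; the model, data, and ridge fit below are assumptions for illustration, comparing an additive-only mutation-effect design against one augmented with pairwise interaction terms.

```python
# Hypothetical illustration: lineage fitness as additive mutation effects
# plus pairwise (epistatic) interactions, fit by closed-form ridge regression.
import numpy as np

rng = np.random.default_rng(0)
n_lineages, n_muts = 200, 8

X = rng.integers(0, 2, size=(n_lineages, n_muts)).astype(float)  # mutation presence
beta = rng.normal(0, 1, n_muts)                # additive effects
gamma = np.triu(rng.normal(0, 1, (n_muts, n_muts)), k=1)  # pairwise epistasis
true_fitness = X @ beta + np.einsum("ni,ij,nj->n", X, gamma, X)
y = true_fitness + rng.normal(0, 0.3, n_lineages)

def ridge_fit_predict(features, y, lam=1e-2):
    """Closed-form ridge regression; returns in-sample predictions."""
    A = features.T @ features + lam * np.eye(features.shape[1])
    w = np.linalg.solve(A, features.T @ y)
    return features @ w

# Additive-only design vs. design augmented with pairwise product features.
pairs = np.stack([X[:, i] * X[:, j]
                  for i in range(n_muts) for j in range(i + 1, n_muts)], axis=1)
for name, F in [("additive only", X), ("with epistasis", np.hstack([X, pairs]))]:
    pred = ridge_fit_predict(F, y)
    r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
    print(f"{name}: R^2 = {r2:.3f}")
```

When the true fitness landscape contains interactions, the additive-only fit systematically underexplains the variance, which is consistent with the accuracy drop the snippet reports when epistasis is ignored.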
-
19642
Dataset.
Published 2025. “…Results: Elevated NT-proBNP levels were significantly linked to increased all-cause (pooled OR = 2.322, 95% CI: 1.718 to 2.925) and cardiovascular mortality (pooled OR = 1.797, 95% CI: 1.161 to 2.433). …”
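For readers unfamiliar with how a pooled OR and 95% CI of this kind are typically derived, here is a hedged sketch using inverse-variance weighting on the log-OR scale with a DerSimonian-Laird random-effects estimate. The per-study inputs below are hypothetical; the snippet does not reproduce the meta-analysis's actual study-level data or its exact pooling method.

```python
# Standard random-effects pooling of odds ratios (DerSimonian-Laird).
# Study inputs are hypothetical stand-ins, not data from the paper above.
import math

# Hypothetical per-study (log OR, variance of log OR) pairs.
studies = [(math.log(2.5), 0.10), (math.log(1.8), 0.08), (math.log(2.9), 0.15)]

w = [1 / v for _, v in studies]                      # fixed-effect weights
pooled_fe = sum(wi * yi for (yi, _), wi in zip(studies, w)) / sum(w)
q = sum(wi * (yi - pooled_fe) ** 2 for (yi, _), wi in zip(studies, w))
df = len(studies) - 1
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)                        # between-study variance

w_re = [1 / (v + tau2) for _, v in studies]          # random-effects weights
pooled = sum(wi * yi for (yi, _), wi in zip(studies, w_re)) / sum(w_re)
se = math.sqrt(1 / sum(w_re))
lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
print(f"pooled OR = {math.exp(pooled):.3f} "
      f"(95% CI {math.exp(lo):.3f} to {math.exp(hi):.3f})")
```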
-
19643
Risk of bias assessment of the included studies.
Published 2025. “…Results: Elevated NT-proBNP levels were significantly linked to increased all-cause (pooled OR = 2.322, 95% CI: 1.718 to 2.925) and cardiovascular mortality (pooled OR = 1.797, 95% CI: 1.161 to 2.433). …”
-
19644
Funnel plot for all-cause mortality.
Published 2025. “…Results: Elevated NT-proBNP levels were significantly linked to increased all-cause (pooled OR = 2.322, 95% CI: 1.718 to 2.925) and cardiovascular mortality (pooled OR = 1.797, 95% CI: 1.161 to 2.433). …”
-
19645
PRISMA flowchart.
Published 2025. “…Results: Elevated NT-proBNP levels were significantly linked to increased all-cause (pooled OR = 2.322, 95% CI: 1.718 to 2.925) and cardiovascular mortality (pooled OR = 1.797, 95% CI: 1.161 to 2.433). …”
-
19646
List of excluded studies with reasons.
Published 2025. “…Results: Elevated NT-proBNP levels were significantly linked to increased all-cause (pooled OR = 2.322, 95% CI: 1.718 to 2.925) and cardiovascular mortality (pooled OR = 1.797, 95% CI: 1.161 to 2.433). …”
-
19647
Funnel plot for functional outcomes.
Published 2025. “…Results: Elevated NT-proBNP levels were significantly linked to increased all-cause (pooled OR = 2.322, 95% CI: 1.718 to 2.925) and cardiovascular mortality (pooled OR = 1.797, 95% CI: 1.161 to 2.433). …”
-
19648
Characteristics of the included studies.
Published 2025. “…Results: Elevated NT-proBNP levels were significantly linked to increased all-cause (pooled OR = 2.322, 95% CI: 1.718 to 2.925) and cardiovascular mortality (pooled OR = 1.797, 95% CI: 1.161 to 2.433). …”
-
19649
LSTM model.
Published 2025. “…According to the experimental results, when the grinding depth increases to 21 μm, the average training loss of the model further decreases to 0.03622, and the surface roughness Ra value significantly decreases to 0.1624 μm. …”
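As a rough picture of the LSTM named in this record, here is a minimal PyTorch sketch of an LSTM regressor mapping a grinding-process signal sequence to a surface-roughness (Ra) value. The paper's actual inputs and architecture are not given in the snippet; the sequence length, feature set, layer sizes, and synthetic data below are assumptions.

```python
# Minimal sketch, assuming an LSTM that regresses Ra from a process-signal
# sequence; all shapes and feature choices are illustrative, not the paper's.
import torch
import torch.nn as nn

class RaLSTM(nn.Module):
    def __init__(self, n_features=3, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # regress a single Ra value

    def forward(self, x):                  # x: (batch, time, n_features)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1]).squeeze(-1)

# Synthetic stand-in data: e.g. (grinding depth, wheel speed, feed rate)
# sampled over 50 time steps per workpiece.
torch.manual_seed(0)
x = torch.randn(64, 50, 3)
y = torch.rand(64)                         # stand-in Ra targets (μm)

model = RaLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for epoch in range(5):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
    print(f"epoch {epoch}: MSE = {loss.item():.5f}")
```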
-
19650
CNN model.
Published 2025. “…According to the experimental results, when the grinding depth increases to 21 μm, the average training loss of the model further decreases to 0.03622, and the surface roughness Ra value significantly decreases to 0.1624 μm. …”
-
19651
Ceramic bearings.
Published 2025. “…According to the experimental results, when the grinding depth increases to 21 μm, the average training loss of the model further decreases to 0.03622, and the surface roughness Ra value significantly decreases to 0.1624 μm. …”
-
19652
Geometric contact arc length model.
Published 2025. “…According to the experimental results, when the grinding depth increases to 21 μm, the average training loss of the model further decreases to 0.03622, and the surface roughness Ra value significantly decreases to 0.1624 μm. …”
-
19653
Indentation fracture mechanics model.
Published 2025. “…According to the experimental results, when the grinding depth increases to 21 μm, the average training loss of the model further decreases to 0.03622, and the surface roughness Ra value significantly decreases to 0.1624 μm. …”
-
19654
Grinding particle cutting machining model.
Published 2025. “…According to the experimental results, when the grinding depth increases to 21 μm, the average training loss of the model further decreases to 0.03622, and the surface roughness Ra value significantly decreases to 0.1624 μm. …”
-
19655
Three stages of abrasive cutting process.
Published 2025. “…According to the experimental results, when the grinding depth increases to 21 μm, the average training loss of the model further decreases to 0.03622, and the surface roughness Ra value significantly decreases to 0.1624 μm. …”
-
19656
CNN-LSTM action recognition process.
Published 2025. “…According to the experimental results, when the grinding depth increases to 21 μm, the average training loss of the model further decreases to 0.03622, and the surface roughness Ra value significantly decreases to 0.1624 μm. …”
-
19657
-
19658
Overall model framework.
Published 2024. “…The results show that: (1) From the experimental data of word sense disambiguation, the accuracy of the SMOSS-LSTM model proposed in this paper is the lowest when the context range is "3+3", then it rises in turn at "5+5" and "7+7", reaches the highest at "7+7", and then begins to decrease at "10+10"; (2) Compared with the control group, the accuracy of syntactic analysis in the experimental group reached 89.5%, while that in the control group was only 73.2%. (3) In the aspect of English text error detection, the detection accuracy of the proposed model in the experimental group is as high as 94.8%, which is significantly better than the traditional SMOSS-based text error detection method, whose accuracy is only 68.3%. (4) Compared with other existing research, although it is slightly inferior to Bidirectional Encoder Representations from Transformers (BERT) in word sense disambiguation, this proposed model performs well in syntactic analysis and English text error detection, and its comprehensive performance is excellent. …”
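For clarity on the "n+n" context ranges mentioned in the snippet, here is a small sketch of the window construction: n tokens to the left and n to the right of the target word form the disambiguation context. The SMOSS-LSTM feature pipeline itself is not described in the snippet; only the window scheme is illustrated, on made-up example text.

```python
# Symmetric "n+n" context windows around a target word, as in the
# "3+3" ... "10+10" settings above; the sentence is a made-up example.
def context_window(tokens, target_index, n):
    """Return up to n tokens on each side of tokens[target_index]."""
    left = tokens[max(0, target_index - n):target_index]
    right = tokens[target_index + 1:target_index + 1 + n]
    return left, right

tokens = "the bank approved the loan before the river bank flooded".split()
for n in (3, 5, 7, 10):
    left, right = context_window(tokens, tokens.index("loan"), n)
    print(f"{n}+{n}: {left} | loan | {right}")
```

The reported peak at "7+7" fits the usual trade-off: too small a window misses disambiguating cues, while too large a window admits tokens unrelated to the target sense.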
-
19659
Key parameters of LSTM training model.
Published 2024. “…The results show that: (1) From the experimental data of word sense disambiguation, the accuracy of the SMOSS-LSTM model proposed in this paper is the lowest when the context range is "3+3", then it rises in turn at "5+5" and "7+7", reaches the highest at "7+7", and then begins to decrease at "10+10"; (2) Compared with the control group, the accuracy of syntactic analysis in the experimental group reached 89.5%, while that in the control group was only 73.2%. (3) In the aspect of English text error detection, the detection accuracy of the proposed model in the experimental group is as high as 94.8%, which is significantly better than the traditional SMOSS-based text error detection method, whose accuracy is only 68.3%. (4) Compared with other existing research, although it is slightly inferior to Bidirectional Encoder Representations from Transformers (BERT) in word sense disambiguation, this proposed model performs well in syntactic analysis and English text error detection, and its comprehensive performance is excellent. …”
-
19660
Comparison chart of model evaluation results.
Published 2024. “…The results show that: (1) From the experimental data of word sense disambiguation, the accuracy of the SMOSS-LSTM model proposed in this paper is the lowest when the context range is "3+3", then it rises in turn at "5+5" and "7+7", reaches the highest at "7+7", and then begins to decrease at "10+10"; (2) Compared with the control group, the accuracy of syntactic analysis in the experimental group reached 89.5%, while that in the control group was only 73.2%. (3) In the aspect of English text error detection, the detection accuracy of the proposed model in the experimental group is as high as 94.8%, which is significantly better than the traditional SMOSS-based text error detection method, whose accuracy is only 68.3%. (4) Compared with other existing research, although it is slightly inferior to Bidirectional Encoder Representations from Transformers (BERT) in word sense disambiguation, this proposed model performs well in syntactic analysis and English text error detection, and its comprehensive performance is excellent. …”