1. Data Sheet 1_Emotional prompting amplifies disinformation generation in AI large language models.docx
Published 2025. “…Introduction: The emergence of artificial intelligence (AI) large language models (LLMs), which can produce text that closely resembles human-written content, presents both opportunities and risks. …”
3. Data Sheet 1_Large language models for closed-library multi-document query, test generation, and evaluation.docx
Published 2025. “…Large Language Models (LLMs) provide a framework for artificial intelligence-assisted knowledge acquisition and continued learning. …”
4. Feasibility of AI-powered assessment scoring: Can large language models replace human raters?
Published 2025. “…Method: Thirty-five deidentified BICAMS protocols, including the Symbol Digit Modalities Test (SDMT), California Verbal Learning Test–II (CVLT-II), and Brief Visuospatial Memory Test–Revised (BVMT-R), were independently scored by two trained human raters and ChatGPT-4.5. …”
13. Training Data/Validation/Test.
Published 2025. “…The trials used a dataset of 162 individuals with IDC, split into training (113 images) and testing (49 images) groups. Every model was subjected to individual testing. …”
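The 113/49 split described in this abstract amounts to roughly a 70/30 partition of the 162 cases. A minimal sketch of such a split (the data here are placeholder indices, not the study's images; only the sizes come from the snippet):

```python
import random

# Stand-ins for the 162 IDC cases; real code would hold image paths or arrays.
cases = list(range(162))

random.seed(0)          # fixed seed so the split is reproducible
random.shuffle(cases)   # randomize before partitioning

# Partition into the sizes reported in the abstract: 113 training, 49 testing.
train, test = cases[:113], cases[113:]
print(len(train), len(test))  # 113 49
```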
14. Comprehensive evaluation of machine-learning models in the training cohort.
Published 2025.
17. Testing set error.
Published 2025. “…Further integration of Spearman correlation analysis and PCA dimensionality reduction created multidimensional feature sets, revealing substantial accuracy improvements: The BiLSTM model achieved an 83.6% cumulative MAE reduction from 1.65 (raw data) to 0.27 (STL-PCA), while traditional models like Prophet showed an 82.2% MAE decrease after feature engineering optimization. Finally, the Beluga Whale Optimization (BWO)-tuned STL-PCA-BWO-BiLSTM hybrid model delivered optimal performance on test sets (RMSE = 0.22, MAE = 0.16, MAPE = 0.99%), exhibiting 40.7% higher accuracy than unoptimized BiLSTM (MAE = 0.27). …”
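The percentage improvements quoted in this abstract can be reproduced from the reported MAE values, assuming the standard relative-reduction definition (before − after) / before; a quick check:

```python
def pct_reduction(before: float, after: float) -> float:
    """Percentage decrease from `before` to `after`."""
    return (before - after) / before * 100

# BiLSTM: MAE falls from 1.65 (raw data) to 0.27 (STL-PCA features).
print(f"{pct_reduction(1.65, 0.27):.1f}%")  # 83.6%

# Hybrid STL-PCA-BWO-BiLSTM (MAE 0.16) vs. unoptimized BiLSTM (MAE 0.27).
print(f"{pct_reduction(0.27, 0.16):.1f}%")  # 40.7%
```

Both figures match the abstract, which suggests the "40.7% higher accuracy" claim is a relative MAE reduction rather than an accuracy metric in the classification sense.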