Showing 1 - 20 results of 472 for search '(( learning test decrease ) OR ( ai ((large decrease) OR (marked decrease)) ))', query time: 0.47s
  1. Data Sheet 1_Emotional prompting amplifies disinformation generation in AI large language models.docx by Rasita Vinay (21006911)
     Published 2025
     “…Introduction: The emergence of artificial intelligence (AI) large language models (LLMs), which can produce text that closely resembles human-written content, presents both opportunities and risks. …”
  3. Data Sheet 1_Large language models for closed-library multi-document query, test generation, and evaluation.docx by Claire Randolph (19747105)
     Published 2025
     “…Large Language Models (LLMs) provide a framework for artificial intelligence-assisted knowledge acquisition and continued learning. …”
  4. Feasibility of AI-powered assessment scoring: Can large language models replace human raters? by Michael Jaworski III (22156096)
     Published 2025
     “…Method: Thirty-five deidentified BICAMS protocols, including the Symbol Digit Modalities Test (SDMT), California Verbal Learning Test–II (CVLT-II), and Brief Visuospatial Memory Test–Revised (BVMT-R), were independently scored by two trained human raters and ChatGPT-4.5. …”
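The design in entry 4, two human raters and a model independently scoring the same 35 protocols, is typically summarized with an agreement statistic. A minimal sketch of such a comparison using Pearson correlation; the score vectors and the choice of metric are illustrative assumptions, not the paper's actual analysis:

```python
# Hypothetical human-vs-model scoring comparison on 35 protocols.
# All score vectors below are simulated placeholders, not study data.
import numpy as np

rng = np.random.default_rng(0)
rater_a = rng.normal(50, 10, size=35)          # first human rater (placeholder)
rater_b = rater_a + rng.normal(0, 2, size=35)  # second human rater (placeholder)
model   = rater_a + rng.normal(0, 4, size=35)  # LLM-assigned scores (placeholder)

print(np.corrcoef(rater_a, rater_b)[0, 1])  # human-human agreement
print(np.corrcoef(rater_a, model)[0, 1])    # human-model agreement
```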
  13. Training Data/Validation/Test. by Mudhafar Jalil Jassim Ghrabat (22177655)
      Published 2025
      “…The trials used a dataset of 162 images from individuals with IDC, split into a training set (113 images) and a testing set (49 images). Each model was tested individually. …”
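Entry 13's 113/49 split is roughly 70/30 of the 162 images. A minimal sketch of producing such a split, assuming scikit-learn; the file names and labels are hypothetical placeholders:

```python
# Reproduces a 113/49 split of 162 items, as described in entry 13.
# `image_paths` and `labels` are hypothetical placeholders.
from sklearn.model_selection import train_test_split

image_paths = [f"idc_{i:03d}.png" for i in range(162)]  # placeholder file names
labels = [i % 2 for i in range(162)]                     # placeholder class labels

train_paths, test_paths, train_y, test_y = train_test_split(
    image_paths, labels,
    test_size=49,      # 49 test images leaves 113 for training
    stratify=labels,   # preserve class balance in both sets
    random_state=0,    # fixed seed for reproducibility
)
print(len(train_paths), len(test_paths))  # 113 49
```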
  17. Testing set error. by Xiangjuan Liu (618000)
      Published 2025
      “…Further integration of Spearman correlation analysis and PCA dimensionality reduction created multidimensional feature sets, revealing substantial accuracy improvements: the BiLSTM model achieved an 83.6% cumulative MAE reduction, from 1.65 (raw data) to 0.27 (STL-PCA), while traditional models such as Prophet showed an 82.2% MAE decrease after feature-engineering optimization. Finally, the Beluga Whale Optimization (BWO)-tuned STL-PCA-BWO-BiLSTM hybrid model delivered the best test-set performance (RMSE = 0.22, MAE = 0.16, MAPE = 0.99%), a further 40.7% MAE reduction relative to the unoptimized BiLSTM (MAE = 0.27). …”
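Read as relative MAE reductions, the percentages in entry 17 are internally consistent; a quick check using only the values quoted in the snippet:

```python
# Verifies the MAE-reduction percentages quoted in entry 17.
raw_mae, stl_pca_mae, hybrid_mae = 1.65, 0.27, 0.16

cumulative = (raw_mae - stl_pca_mae) / raw_mae         # (1.65 - 0.27) / 1.65
further    = (stl_pca_mae - hybrid_mae) / stl_pca_mae  # (0.27 - 0.16) / 0.27

print(f"{cumulative:.1%}")  # 83.6%
print(f"{further:.1%}")     # 40.7%
```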