Showing 1 - 20 of 1,073 results for search '(((( learning test decrease ) OR ( _ largest decrease ))) OR ( ai large decrease ))', query time: 0.36s
  1.

    Data Sheet 1_Emotional prompting amplifies disinformation generation in AI large language models.docx by Rasita Vinay (21006911)

    Published in 2025
    "…Introduction<p>The emergence of artificial intelligence (AI) large language models (LLMs), which can produce text that closely resembles human-written content, presents both opportunities and risks. …"
  3.

    Data Sheet 1_Large language models for closed-library multi-document query, test generation, and evaluation.docx by Claire Randolph (19747105)

    Published in 2025
    "…Large Language Models (LLMs) provide a framework for artificial intelligence-assisted knowledge acquisition and continued learning. …"
  5.

    Feasibility of AI-powered assessment scoring: Can large language models replace human raters? by Michael Jaworski III (22156096)

    Published in 2025
    "…<b>Method:</b> Thirty-five deidentified BICAMS protocols, including the Symbol Digit Modalities Test (SDMT), California Verbal Learning Test–II (CVLT-II), and Brief Visuospatial Memory Test–Revised (BVMT-R), were independently scored by two trained human raters and ChatGPT-4.5. …"
  16.

    Training Data/Validation/Test. by Mudhafar Jalil Jassim Ghrabat (22177655)

    Published in 2025
    "…The trials used a dataset of 162 individuals with IDC, split into training (113 photos) and testing (49 images) groups. Every model was subjected to individual testing. …"
  20.

    Testing set error. by Xiangjuan Liu (618000)

    Published in 2025
    "…Further integration of Spearman correlation analysis and PCA dimensionality reduction created multidimensional feature sets, revealing substantial accuracy improvements: The BiLSTM model achieved an 83.6% cumulative MAE reduction from 1.65 (raw data) to 0.27 (STL-PCA), while traditional models like Prophet showed an 82.2% MAE decrease after feature engineering optimization. Finally, the Beluga Whale Optimization (BWO)-tuned STL-PCA-BWO-BiLSTM hybrid model delivered optimal performance on test sets (RMSE = 0.22, MAE = 0.16, MAPE = 0.99%, ), exhibiting 40.7% higher accuracy than unoptimized BiLSTM (MAE = 0.27). …"