Showing 1 - 20 results of 1,417 for search '(( learning ((we decrease) OR (a decrease)) ) OR ( ai ((large decrease) OR (marked decrease)) ))', query time: 0.56s
  2.

    Data Sheet 1_Emotional prompting amplifies disinformation generation in AI large language models.docx by Rasita Vinay (21006911)

    Published 2025
    “…The emergence of artificial intelligence (AI) large language models (LLMs), which can produce text that closely resembles human-written content, presents both opportunities and risks. …”
  3.

    Data from: Colony losses of stingless bees increase in agricultural areas, but decrease in forested areas by Malena Sibaja Leyton (18400983)

    Published 2025
    “…On average, meliponiculturists lost 43.4 % of their stingless bee colonies annually, 33.3 % during the rainy season, and 22.0 % during the dry season. We found that colony losses during the rainy season decreased with higher abundance of forested areas and increased with higher abundance of agricultural area around meliponaries. …”
  4.

    Overview of the WeARTolerance program. by Ana Beato (20489933)

    Published 2024
    “…The quantitative results from Phase 1 demonstrated a decreasing trend in all primary outcomes. In phase 2, participants acknowledged the activities’ relevance, reported overall satisfaction with the program, and showed great enthusiasm and willingness to learn more. …”
  7.

    Feasibility of AI-powered assessment scoring: Can large language models replace human raters? by Michael Jaworski III (22156096)

    Published 2025
    “…Objective: To assess the feasibility, accuracy, and reliability of using ChatGPT-4.5 (early-access), a large language model (LLM), for automated scoring of Brief International Cognitive Assessment for Multiple Sclerosis (BICAMS) protocols. …”
  11.

    Supplementary file 1_Harnessing AI for aphasia: a case report on ChatGPT's role in supporting written expression.docx by Avery K. Allen (21449492)

    Published 2025
    “…While writing aids show promise, artificial intelligence (AI) tools, such as large language models (LLMs), offer new opportunities for individuals with language-based writing challenges.…”
  12.

    Evaluation of the effectiveness of double task. by Fan Yang (1413)

    Published 2025
    “…The Spatial Attention Based Dual-Branch Information Fusion Block links these branches, enabling mutual benefit. Furthermore, we present a structured pruning method grounded in channel attention to decrease parameter count, mitigate overfitting, and uphold segmentation accuracy. …”
  13.

    Evaluation of the effectiveness of pruning. by Fan Yang (1413)

    Published 2025
  14.

    The summary of ablation experiment. by Fan Yang (1413)

    Published 2025
  15.

    Schematic of SADBIFB. by Fan Yang (1413)

    Published 2025
  16.

    Schematic of the residual attention block. by Fan Yang (1413)

    Published 2025