Showing results 1 - 20 of 119 for search '(( wise mean decrease ) OR ( ai ((((large decrease) OR (larger decrease))) OR (marked decrease)) ))', query time: 0.40s
  1.

    Data Sheet 1_Emotional prompting amplifies disinformation generation in AI large language models.docx by Rasita Vinay (21006911)

    Published 2025
    “…Introduction: The emergence of artificial intelligence (AI) large language models (LLMs), which can produce text that closely resembles human-written content, presents both opportunities and risks. …”
  3.

    A novel RNN architecture to improve the precision of ship trajectory predictions by Martha Dais Ferreira (18704596)

    Published 2025
    “…To solve these challenges, Recurrent Neural Network (RNN) models have been applied to STP to allow scalability for large data sets and to capture larger regions or anomalous vessels behavior. …”
  11.

    Data Sheet 1_Changes in voxel-wise gray matter asymmetry over time.pdf by Florian Kurth (350282)

    Published 2025
    “…Here, we set out to further explore age-related changes in brain asymmetry, with a particular focus on voxel-wise gray matter asymmetry. For this purpose, we selected a sample of 2,322 participants (1,150 women/1,172 men), aged between 47 and 80 years (mean 62.3 years), from the UK Biobank. …”
  12.

    Algorithm training accuracy experiments. by Yingying Liu (360782)

    Published 2025
    “…However, after removing the integrated residual attention unit and depth-wise separable convolution, the accuracy decreased by 1.91% and the latency increased by 117ms. …”
  13.

    Repeat the detection experiment. by Yingying Liu (360782)

    Published 2025
  14.

    Detection network structure with IRAU [34]. by Yingying Liu (360782)

    Published 2025
  15.

    Ablation experiments of various block. by Yingying Liu (360782)

    Published 2025
  16.

    Kappa coefficients for different algorithms. by Yingying Liu (360782)

    Published 2025
  17.

    The structure of ASPP+ block. by Yingying Liu (360782)

    Published 2025
  18.

    The structure of attention gate block [31]. by Yingying Liu (360782)

    Published 2025
  19.

    DSC block and its application network structure. by Yingying Liu (360782)

    Published 2025
  20.

    The structure of multi-scale residual block [30]. by Yingying Liu (360782)

    Published 2025