Search alternatives:
marked decrease » marked increase
large decrease » large increases, large degree
wise mean » wide mean, wise meta, i.e. mean
1. Data Sheet 1_Emotional prompting amplifies disinformation generation in AI large language models.docx
   Published 2025: "…Introduction: The emergence of artificial intelligence (AI) large language models (LLMs), which can produce text that closely resembles human-written content, presents both opportunities and risks. …"
3. A novel RNN architecture to improve the precision of ship trajectory predictions
   Published 2025: "…To solve these challenges, Recurrent Neural Network (RNN) models have been applied to STP to allow scalability for large data sets and to capture larger regions or anomalous vessel behavior. …" (see the RNN sketch after this list)
11. Data Sheet 1_Changes in voxel-wise gray matter asymmetry over time.pdf
    Published 2025: "…Here, we set out to further explore age-related changes in brain asymmetry, with a particular focus on voxel-wise gray matter asymmetry. For this purpose, we selected a sample of 2,322 participants (1,150 women/1,172 men), aged between 47 and 80 years (mean 62.3 years), from the UK Biobank. …" (see the asymmetry-index sketch after this list)
12. Algorithm training accuracy experiments.
13. Repeat the detection experiment.
14. Detection network structure with IRAU [34].
15. Ablation experiments of various block.
16. Kappa coefficients for different algorithms.
17. The structure of ASPP+ block.
18. The structure of attention gate block [31].
19. DSC block and its application network structure.
20. The structure of multi-scale residual block [30].
    Published 2025, entries 12–20 sharing the same source excerpt: "…However, after removing the integrated residual attention unit and depth-wise separable convolution, the accuracy decreased by 1.91% and the latency increased by 117 ms. …" (see the depth-wise separable convolution sketch after this list)
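For entry 3, here is a minimal sketch of an RNN applied to ship trajectory prediction (STP), assuming AIS-style input sequences of (lat, lon, speed, course) and a plain GRU; the feature set, model, and sizes are illustrative assumptions, not the paper's novel architecture.

# Minimal sketch: RNN for ship trajectory prediction (STP).
# Assumptions (not from the paper): inputs are AIS-style sequences of
# (lat, lon, speed, course); the model predicts the next (lat, lon).
import torch
import torch.nn as nn

class TrajectoryRNN(nn.Module):
    def __init__(self, n_features=4, hidden_size=64, num_layers=2):
        super().__init__()
        # A GRU stands in for the paper's custom RNN cell.
        self.rnn = nn.GRU(n_features, hidden_size, num_layers, batch_first=True)
        self.head = nn.Linear(hidden_size, 2)  # next (lat, lon)

    def forward(self, x):             # x: (batch, seq_len, n_features)
        out, _ = self.rnn(x)          # out: (batch, seq_len, hidden_size)
        return self.head(out[:, -1])  # last time step -> (batch, 2)

# Usage: a batch of 8 tracks, each 20 time steps of 4 features.
model = TrajectoryRNN()
next_position = model(torch.randn(8, 20, 4))  # shape: (8, 2)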
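For entry 11, a short sketch of a voxel-wise gray matter asymmetry index, assuming the common (L - R) / (L + R) definition on hemispheric gray matter maps aligned to a shared space; the data sheet's actual computation is not given in the excerpt, so the formula, array shapes, and epsilon are assumptions.

# Sketch: voxel-wise gray matter asymmetry index (assumed definition).
# AI = (L - R) / (L + R): positive = leftward, negative = rightward asymmetry.
import numpy as np

def asymmetry_index(left, right, eps=1e-8):
    """Voxel-wise asymmetry of left vs. right gray matter values."""
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    return (left - right) / (left + right + eps)

# Usage: two toy three-voxel "hemisphere" maps.
L = np.array([0.60, 0.50, 0.40])
R = np.array([0.50, 0.50, 0.45])
print(asymmetry_index(L, R))  # approx. [ 0.0909  0.0000 -0.0588]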
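For entries 12–20, a minimal sketch of a depth-wise separable convolution, the component the shared excerpt ablates; this is the standard depthwise-plus-pointwise formulation, not the paper's exact DSC block, and the excerpt does not define the integrated residual attention unit (IRAU), so it is omitted here.

# Sketch: standard depth-wise separable convolution (depthwise + pointwise).
# The paper's exact DSC block and its IRAU are not specified in the excerpt.
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        # Depthwise: one filter per input channel (groups=in_ch).
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        # Pointwise: 1x1 convolution to mix channels.
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

# Usage: far fewer parameters than a dense 3x3 conv (704 vs. 4,640 here).
dsc = DepthwiseSeparableConv(16, 32)
y = dsc(torch.randn(1, 16, 64, 64))  # -> (1, 32, 64, 64)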