Search alternatives:
codon optimization » wolf optimization
binary base » binary mask, ciliary base, binary image
swarm » warm
5. Recall curve of higher-order hybrid clustering algorithm incorporating LDA model. Published 2024.
10. Comparison of clustering performance under different text vector formation methods. Published 2024.
12. Proposed architecture testing phase. Published 2025. “…The proposed architecture is trained on the selected datasets, whereas the hyperparameters are chosen using the particle swarm optimization (PSO) algorithm. The trained model is employed in the testing phase for feature extraction from the self-attention layer, and the extracted features are passed to the shallow wide neural network classifier for the final classification. …”
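The snippet outlines a concrete pipeline: PSO selects the hyperparameters, the trained network's self-attention layer supplies features, and a shallow wide neural network performs the final classification. Below is a minimal, self-contained sketch of the PSO step, assuming a hypothetical validation_loss(lr, dropout) objective standing in for a full training run; it is not the authors' implementation.

```python
import numpy as np

def validation_loss(lr, dropout):
    # Hypothetical stand-in for "train the model, return validation loss";
    # a real run would train the network with these hyperparameters.
    return (np.log10(lr) + 2.5) ** 2 + (dropout - 0.3) ** 2

def pso(objective, bounds, n_particles=20, n_iters=50,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T       # per-dimension bounds
    pos = rng.uniform(lo, hi, (n_particles, len(lo)))
    vel = np.zeros_like(pos)
    pbest = pos.copy()                             # each particle's best position
    pbest_val = np.array([objective(*p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()       # swarm's best position
    for _ in range(n_iters):
        r1 = rng.random(pos.shape)
        r2 = rng.random(pos.shape)
        # Standard PSO velocity update: inertia + cognitive + social terms.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([objective(*p) for p in pos])
        improved = vals < pbest_val
        pbest[improved] = pos[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest

best_lr, best_dropout = pso(validation_loss,
                            bounds=[(1e-4, 1e-1), (0.0, 0.9)])
print(f"PSO-selected lr={best_lr:.4g}, dropout={best_dropout:.3f}")
```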
15. Comparison with existing SOTA techniques. Published 2025.
16. Proposed inverted residual parallel block. Published 2025.
17. Inverted residual bottleneck block. Published 2025.
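Items 16 and 17 name inverted residual blocks. As a point of reference, here is a hedged sketch of a standard inverted residual bottleneck in the MobileNetV2 sense (1x1 expansion, 3x3 depthwise convolution, linear 1x1 projection, skip connection when shapes match); the paper's parallel variant may differ.

```python
import torch
import torch.nn as nn

class InvertedResidual(nn.Module):
    def __init__(self, in_ch, out_ch, stride=1, expand=6):
        super().__init__()
        hidden = in_ch * expand
        # Skip connection only when the input and output shapes match.
        self.use_skip = stride == 1 and in_ch == out_ch
        self.block = nn.Sequential(
            # 1x1 pointwise expansion
            nn.Conv2d(in_ch, hidden, 1, bias=False),
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            # 3x3 depthwise convolution
            nn.Conv2d(hidden, hidden, 3, stride, 1, groups=hidden, bias=False),
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            # 1x1 pointwise projection (linear, no activation)
            nn.Conv2d(hidden, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch),
        )

    def forward(self, x):
        y = self.block(x)
        return x + y if self.use_skip else y

x = torch.randn(1, 32, 56, 56)
print(InvertedResidual(32, 32)(x).shape)  # torch.Size([1, 32, 56, 56])
```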
18. Sample classes from the HMDB51 dataset. Published 2025.
19. Sample classes from UCF101 dataset [40]. Published 2025.
20. Self-attention module for feature learning. Published 2025.
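Item 20 and the snippet both point to a self-attention layer as the feature source. The sketch below shows a minimal single-head self-attention module followed by a "shallow wide" classifier head; the dimensions and the 101-class output (matching UCF101) are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)   # joint Q, K, V projection
        self.out = nn.Linear(dim, dim)
        self.scale = dim ** -0.5             # scaled dot-product attention

    def forward(self, x):                    # x: (batch, tokens, dim)
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        attn = F.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        return self.out(attn @ v)

# Features from the self-attention layer, as the snippet describes.
feats = SelfAttention(256)(torch.randn(2, 49, 256))

# A "shallow wide" classifier head on pooled attention features
# (assumption: one wide hidden layer; the paper's head may differ).
head = nn.Sequential(nn.Linear(256, 2048), nn.ReLU(), nn.Linear(2048, 101))
print(head(feats.mean(dim=1)).shape)  # torch.Size([2, 101])
```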