Search alternatives:
scale optimization » whale optimization, swarm optimization, phase optimization
self optimization » wolf optimization, field optimization, lead optimization
final layer » single layer
layer self » lower self
1. Self-attention module for feature learning.
   Published 2025: “…The proposed architecture is trained on the selected datasets, whereas the hyperparameters are chosen using the particle swarm optimization (PSO) algorithm. The trained model is employed in the testing phase for the feature extraction from the self-attention layer and passed to the shallow wide neural network classifier for the final classification. …”
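The snippet above describes selecting hyperparameters with particle swarm optimization before extracting self-attention features for a shallow wide classifier. Below is a minimal sketch of such a PSO search, assuming a generic `evaluate(params)` objective that stands in for a full train-and-validate run; the two-dimensional search space (learning rate, batch size) and all coefficients are illustrative assumptions, not the paper's settings:

```python
import numpy as np

# Hypothetical objective standing in for "train the network with these
# hyperparameters and return validation accuracy"; here just a smooth
# toy surface with its maximum near lr = 1e-3, batch = 64.
def evaluate(params):
    lr, batch = params
    return -(np.log10(lr) + 3.0) ** 2 - ((batch - 64.0) / 64.0) ** 2

bounds = np.array([[1e-5, 1e-1],    # learning-rate range (assumed)
                   [16.0, 256.0]])  # batch-size range (assumed)
n_particles, n_iters = 20, 50
rng = np.random.default_rng(0)

# Initialise particle positions inside the bounds, velocities at zero.
pos = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([evaluate(p) for p in pos])
gbest = pbest[pbest_val.argmax()]

w, c1, c2 = 0.7, 1.5, 1.5  # inertia and cognitive/social coefficients
for _ in range(n_iters):
    r1, r2 = rng.random((2, n_particles, 2))
    # Standard PSO velocity update: pull toward personal and global bests.
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, bounds[:, 0], bounds[:, 1])
    vals = np.array([evaluate(p) for p in pos])
    better = vals > pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[pbest_val.argmax()]

print("best (learning rate, batch size):", gbest)
```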
5. Improved support vector machine classification algorithm based on adaptive feature weight updating in the Hadoop cluster environment
   Published 2019: “…The MapReduce parallel programming model on the Hadoop platform is used to perform an adaptive fusion of hue, local binary pattern (LBP) and scale-invariant feature transform (SIFT) features extracted from images to derive optimal combinations of weights. …”
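This snippet describes an adaptive, weighted fusion of hue, LBP and SIFT features. Below is a single-machine sketch of the fusion step only (the paper runs this inside a Hadoop MapReduce job); the accuracy-proportional weight rule and all feature dimensions are assumptions for illustration:

```python
import numpy as np

def fuse(blocks, weights):
    """Concatenate per-modality feature blocks, each scaled by its weight."""
    return np.concatenate([w * b for b, w in zip(blocks, weights)], axis=1)

# Hypothetical per-image feature blocks for a batch of 100 images:
# hue histogram, uniform-LBP histogram, pooled SIFT descriptor
# (all dimensions are illustrative, not the paper's).
rng = np.random.default_rng(1)
hue  = rng.random((100, 32))
lbp  = rng.random((100, 59))
sift = rng.random((100, 128))

# Assumed update rule: weight each modality in proportion to its
# standalone validation accuracy (placeholder values below); the paper
# instead derives the optimal combination adaptively via MapReduce.
acc = np.array([0.71, 0.78, 0.85])
weights = acc / acc.sum()

fused = fuse([hue, lbp, sift], weights)
print(fused.shape, weights.round(3))  # (100, 219) and the weight vector
```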
6. Comparison with existing SOTA techniques.
   Published 2025.
7. Proposed inverted residual parallel block.
   Published 2025.
8. Inverted residual bottleneck block.
   Published 2025.
9. Proposed architecture testing phase.
   Published 2025.
10. Sample classes from the HMDB51 dataset.
    Published 2025.
11. Sample classes from the UCF101 dataset [40].
    Published 2025.
12. Residual behavior.
    Published 2025.
15. Sample image for illustration.
    Published 2024: “…The results demonstrate that CBFD achieves an average precision of 0.97 for the test image, outperforming SuperPoint, Directional Intensified Tertiary Filtering (DITF), Binary Robust Independent Elementary Features (BRIEF), Binary Robust Invariant Scalable Keypoints (BRISK), Speeded Up Robust Features (SURF), and Scale Invariant Feature Transform (SIFT), which achieve scores of 0.95, 0.92, 0.72, 0.66, 0.63 and 0.50 respectively. …”
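The snippet reports average precision over ranked descriptor matches. Below is a hand-rolled sketch of that metric, assuming binary correct/incorrect match labels ranked by confidence; the toy labels and scores are invented for illustration, not the paper's data:

```python
import numpy as np

def average_precision(labels, scores):
    """Mean of precision at each rank where a correct match occurs,
    after sorting matches by descending confidence."""
    labels = np.asarray(labels)[np.argsort(-np.asarray(scores))]
    hits = np.cumsum(labels)
    ranks = np.arange(1, labels.size + 1)
    if not labels.any():
        return 0.0
    return (hits[labels == 1] / ranks[labels == 1]).mean()

# Toy ranking: 1 marks a correct keypoint correspondence; confidences
# are placeholders.
labels = [1, 1, 0, 1, 0, 0, 1, 0]
scores = [0.95, 0.90, 0.85, 0.80, 0.70, 0.60, 0.55, 0.40]
print(round(average_precision(labels, scores), 3))  # 0.83
```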
16. Quadratic polynomial in the 2D image plane.
    Published 2024.
17. Steps in the extraction of 14 coordinates from the CT slices for the curved MPR.
    Published 2025: “…The image is then cleaned in c) using morphological filtering with an opening operation to remove small-scale noise. …”
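The snippet mentions cleaning a segmented slice with a morphological opening. Below is a minimal SciPy sketch on a synthetic binary mask; the 3×3 structuring element and the mask itself are assumptions, not the paper's parameters:

```python
import numpy as np
from scipy import ndimage

# Synthetic stand-in for a thresholded CT slice: one large foreground
# block plus scattered single-pixel speckle noise.
rng = np.random.default_rng(2)
mask = np.zeros((64, 64), dtype=bool)
mask[20:44, 20:44] = True                  # structure worth keeping
speckle = rng.random(mask.shape) > 0.99    # isolated noise pixels
noisy = mask | speckle

# Opening = erosion followed by dilation: components smaller than the
# structuring element vanish, while the large block keeps its shape.
opened = ndimage.binary_opening(noisy, structure=np.ones((3, 3)))
print(noisy.sum(), "->", opened.sum())     # speckle pixels removed
```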