Search alternatives:
process optimization » model optimization (broaden search)
wolf optimization » whale optimization (broaden search), swarm optimization (broaden search), _ optimization (broaden search)
phase process » phase proteins (broaden search), whole process (broaden search), phase protein (broaden search)
Flow chart of particle swarm algorithm.
Published in 2024: "…The third phase is the training and testing phase. Finally, the best-performing model was selected and compared with the currently established models (Alexnet, Squeezenet, Googlenet, Resnet50).…"
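Since the figure is a flow chart of the particle swarm algorithm, a minimal generic PSO sketch may help make the loop concrete. This is not the authors' implementation: the objective function, swarm size, bounds, and inertia/acceleration coefficients below are illustrative assumptions.

```python
import numpy as np

def pso(objective, dim, n_particles=20, n_iters=50,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0), seed=0):
    """Minimal particle swarm optimization: minimize `objective` over `dim` dimensions."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pos = rng.uniform(lo, hi, size=(n_particles, dim))       # particle positions
    vel = np.zeros_like(pos)                                  # particle velocities
    pbest = pos.copy()                                        # per-particle best positions
    pbest_val = np.array([objective(p) for p in pos])         # per-particle best scores
    g = pbest[np.argmin(pbest_val)].copy()                    # global best position
    g_val = pbest_val.min()

    for _ in range(n_iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Velocity update: inertia term + cognitive pull toward pbest + social pull toward g.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved] = pos[improved]
        pbest_val[improved] = vals[improved]
        if pbest_val.min() < g_val:
            g_val = pbest_val.min()
            g = pbest[np.argmin(pbest_val)].copy()
    return g, g_val

# Example: minimize the sphere function in 3 dimensions.
best_x, best_f = pso(lambda x: float(np.sum(x ** 2)), dim=3)
print(best_x, best_f)
```

In hyperparameter search, `objective` would evaluate a candidate hyperparameter vector (e.g. by training and scoring a model), but that wiring is specific to each paper.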
Proposed architecture testing phase.
Published in 2025: "…The proposed architecture is trained on the selected datasets, whereas the hyperparameters are chosen using the particle swarm optimization (PSO) algorithm. The trained model is employed in the testing phase for the feature extraction from the self-attention layer and passed to the shallow wide neural network classifier for the final classification.…"
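The snippet describes the testing phase as feature extraction from a self-attention layer of the trained model, followed by a shallow wide neural network classifier. A minimal sketch of that pattern follows, assuming a hypothetical PyTorch backbone; the actual model, layer names, and dimensions are not given in the snippet, so every name and size here is a stand-in, not the paper's code.

```python
import torch
import torch.nn as nn

class TinyBackbone(nn.Module):
    """Hypothetical trained backbone with a self-attention layer to tap."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.stem = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, feat_dim), nn.ReLU())
        self.self_attn = nn.MultiheadAttention(embed_dim=feat_dim, num_heads=4, batch_first=True)
        self.head = nn.Linear(feat_dim, 10)

    def forward(self, x):
        h = self.stem(x).unsqueeze(1)              # (B, 1, feat_dim) token sequence of length 1
        attn_out, _ = self.self_attn(h, h, h)      # self-attention layer whose output we extract
        return self.head(attn_out.squeeze(1))

class ShallowWideClassifier(nn.Module):
    """One wide hidden layer, standing in for the 'shallow wide neural network' classifier."""
    def __init__(self, in_dim=256, hidden=2048, n_classes=10):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, n_classes))

    def forward(self, x):
        return self.net(x)

backbone = TinyBackbone().eval()
classifier = ShallowWideClassifier()

# Tap the self-attention output with a forward hook during the testing phase.
features = {}
def grab(module, inputs, output):
    # nn.MultiheadAttention returns (attn_output, attn_weights); keep the output tensor.
    features["attn"] = output[0].squeeze(1).detach()
backbone.self_attn.register_forward_hook(grab)

x = torch.randn(8, 1, 28, 28)                      # dummy test batch
with torch.no_grad():
    _ = backbone(x)                                # forward pass fills `features`
    logits = classifier(features["attn"])          # final classification on extracted features
print(logits.shape)                                # torch.Size([8, 10])
```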
The structure of the Resnet50.
Published in 2024: "…The third phase is the training and testing phase. Finally, the best-performing model was selected and compared with the currently established models (Alexnet, Squeezenet, Googlenet, Resnet50).…"
The bottleneck residual block for Resnet50.
Published in 2024: "…The third phase is the training and testing phase. Finally, the best-performing model was selected and compared with the currently established models (Alexnet, Squeezenet, Googlenet, Resnet50).…"
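The bottleneck residual block named in the caption follows the standard Resnet50 design: a 1x1 convolution that reduces channels, a 3x3 convolution, a 1x1 convolution that expands channels by a factor of four, and an identity skip connection added before the final ReLU. Below is a minimal PyTorch sketch of that standard block, not the paper's exact code.

```python
import torch
import torch.nn as nn

class Bottleneck(nn.Module):
    """ResNet-50-style bottleneck: 1x1 reduce -> 3x3 -> 1x1 expand, plus a skip connection."""
    expansion = 4

    def __init__(self, in_ch, mid_ch, stride=1):
        super().__init__()
        out_ch = mid_ch * self.expansion
        self.conv1 = nn.Conv2d(in_ch, mid_ch, kernel_size=1, bias=False)
        self.bn1 = nn.BatchNorm2d(mid_ch)
        self.conv2 = nn.Conv2d(mid_ch, mid_ch, kernel_size=3, stride=stride, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(mid_ch)
        self.conv3 = nn.Conv2d(mid_ch, out_ch, kernel_size=1, bias=False)
        self.bn3 = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)
        # Project the identity path when spatial size or channel count changes.
        self.downsample = None
        if stride != 1 or in_ch != out_ch:
            self.downsample = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(out_ch),
            )

    def forward(self, x):
        identity = x if self.downsample is None else self.downsample(x)
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.relu(self.bn2(self.conv2(out)))
        out = self.bn3(self.conv3(out))
        return self.relu(out + identity)           # residual addition before the final ReLU

# Example: one block from ResNet-50's first stage (64 -> 256 channels).
block = Bottleneck(in_ch=64, mid_ch=64)
y = block(torch.randn(1, 64, 56, 56))
print(y.shape)                                     # torch.Size([1, 256, 56, 56])
```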
DeepDate model’s architecture design.
Published in 2024: "…The third phase is the training and testing phase. Finally, the best-performing model was selected and compared with the currently established models (Alexnet, Squeezenet, Googlenet, Resnet50).…"
Comparison with existing SOTA techniques.
Published in 2025: "…The proposed architecture is trained on the selected datasets, whereas the hyperparameters are chosen using the particle swarm optimization (PSO) algorithm. The trained model is employed in the testing phase for the feature extraction from the self-attention layer and passed to the shallow wide neural network classifier for the final classification.…"