Showing 21 - 40 results of 67 for search '(( binary based case optimization algorithm ) OR ( final phase process optimization algorithm ))'
  1. 21

    The bottleneck residual block for ResNet50. by Nour Eldeen Mahmoud Khalifa (19259450)

    Published 2024
    “…The third phase is the training and testing phase. Finally, the best-performing model was selected and compared with the currently established models (AlexNet, SqueezeNet, GoogLeNet, ResNet50).…”
  2. 22

    DeepDate model’s architecture design. by Nour Eldeen Mahmoud Khalifa (19259450)

    Published 2024
  6. 26

    Comparison with existing SOTA techniques. by Yasir Khan Jadoon (21433231)

    Published 2025
    “…The proposed architecture is trained on the selected datasets, whereas the hyperparameters are chosen using the particle swarm optimization (PSO) algorithm. The trained model is employed in the testing phase for the feature extraction from the self-attention layer and passed to the shallow wide neural network classifier for the final classification. …”
  7. 27

    Proposed inverted residual parallel block. by Yasir Khan Jadoon (21433231)

    Published 2025
  8. 28

    Inverted residual bottleneck block. by Yasir Khan Jadoon (21433231)

    Published 2025
  9. 29

    Sample classes from the HMDB51 dataset. by Yasir Khan Jadoon (21433231)

    Published 2025
  10. 30

    Sample classes from the UCF101 dataset [40]. by Yasir Khan Jadoon (21433231)

    Published 2025
  11. 31

    Self-attention module for feature learning. by Yasir Khan Jadoon (21433231)

    Published 2025
  12. 32

    Residual behavior. by Yasir Khan Jadoon (21433231)

    Published 2025
  16. 36

    Overall framework diagram. by Yanhua Xian (21417128)

    Published 2025
    “…Secondly, addressing the issue of weight and threshold initialization in BPNN, the Coati Optimization Algorithm (COA) was employed to optimize the network (COA-BPNN). …”
  17. 37

    Data_Sheet_1_Multiclass Classification Based on Combined Motor Imageries.pdf by Cecilia Lindig-León (7889777)

    Published 2020
    “…And we propose two new multilabel uses of the Common Spatial Pattern (CSP) algorithm to optimize the signal-to-noise ratio, namely MC2CMI and MC2SMI approaches. …”
  19. 39

    Analysis and design of algorithms for the manufacturing process of integrated circuits by Sonia Fleytas (16856403)

    Published 2023
    “…From this, we propose: (i) a new ILP model, and (ii) a new solution representation, which, unlike the reference work, guarantees that feasible solutions are obtained throughout the generation of new individuals. Based on this new representation, we propose and evaluate other approximate methods, including a greedy algorithm and a genetic algorithm, which improve the state-of-the-art results for test cases commonly used in the literature. …”
  20. 40

    Summary of LITNET-2020 dataset. by Asmaa Ahmed Awad (16726315)

    Published 2023
    “…The ILSTM was then used to build an efficient intrusion detection system for binary and multi-class classification cases. The proposed algorithm has two phases: phase one involves training a conventional LSTM network to get initial weights, and phase two involves using the hybrid swarm algorithms, CBOA and PSO, to optimize the weights of LSTM to improve the accuracy. …”