Showing 41 - 60 results of 111 for search '(( final phase model optimization algorithm ) OR ( binary three process optimization algorithm ))', query time: 0.57s
  1. 41

    Steady simulation for Fig 12. by Feng Gao (3548)

    Published 2024
    “…Differences in exhaled gas vorticity and jet penetration depth across different flow models were identified. Finally, combined with the non-iterative algorithm, the optimal strategy of human respiration simulation was proposed. …”
  2. 42

    Data logging in Fig 16. by Feng Gao (3548)

    Published 2024
    “…Differences in exhaled gas vorticity and jet penetration depth across different flow models were identified. Finally, combined with the non-iterative algorithm, the optimal strategy of human respiration simulation was proposed. …”
  3. 43

    Data in Fig 10. by Feng Gao (3548)

    Published 2024
    “…Differences in exhaled gas vorticity and jet penetration depth across different flow models were identified. Finally, combined with the non-iterative algorithm, the optimal strategy of human respiration simulation was proposed. …”
  4. 44

    N2O mass fraction in Fig 15-turbulence changed. by Feng Gao (3548)

    Published 2024
    “…Differences in exhaled gas vorticity and jet penetration depth across different flow models were identified. Finally, combined with the non-iterative algorithm, the optimal strategy of human respiration simulation was proposed. …”
  5. 45

    N2O mass fraction at 9 sampling points. by Feng Gao (3548)

    Published 2024
    “…Differences in exhaled gas vorticity and jet penetration depth across different flow models were identified. Finally, combined with the non-iterative algorithm, the optimal strategy of human respiration simulation was proposed. …”
  6. 46

    N2O mass fraction in Fig 15-only RNG. by Feng Gao (3548)

    Published 2024
    “…Differences in exhaled gas vorticity and jet penetration depth across different flow models were identified. Finally, combined with the non-iterative algorithm, the optimal strategy of human respiration simulation was proposed. …”
  7. 47
  8. 48
  9. 49

    Parameter settings. by Yang Cao (53545)

    Published 2024
    “…Differential Evolution (DE) is widely recognized as a highly effective evolutionary algorithm for global optimization. It has proven its efficacy in tackling diverse problems across various fields and real-world applications. …”
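
    The snippet above only names Differential Evolution; for context, a minimal DE/rand/1/bin sketch over a generic real-valued objective follows. The population size, F, CR, and the sphere objective are illustrative defaults, not the parameter settings tabulated in that record.

    # Minimal sketch of Differential Evolution (DE/rand/1/bin) for a generic
    # box-constrained objective; not the cited paper's configuration.
    import numpy as np

    def differential_evolution(objective, bounds, pop_size=20, F=0.8, CR=0.9,
                               generations=100, seed=0):
        """Minimize `objective` over the box `bounds` (list of (low, high) pairs)."""
        rng = np.random.default_rng(seed)
        dim = len(bounds)
        low, high = np.array(bounds).T
        pop = rng.uniform(low, high, size=(pop_size, dim))
        fitness = np.array([objective(x) for x in pop])

        for _ in range(generations):
            for i in range(pop_size):
                # Mutation: combine three distinct individuals (rand/1 strategy).
                idx = rng.choice([j for j in range(pop_size) if j != i],
                                 size=3, replace=False)
                a, b, c = pop[idx]
                mutant = np.clip(a + F * (b - c), low, high)
                # Binomial crossover between target and mutant vectors.
                cross = rng.random(dim) < CR
                cross[rng.integers(dim)] = True  # ensure at least one gene changes
                trial = np.where(cross, mutant, pop[i])
                # Greedy selection: keep the trial if it improves the objective.
                f_trial = objective(trial)
                if f_trial <= fitness[i]:
                    pop[i], fitness[i] = trial, f_trial

        best = np.argmin(fitness)
        return pop[best], fitness[best]

    # Example: minimize the sphere function in 5 dimensions.
    x_best, f_best = differential_evolution(lambda x: float(np.sum(x**2)),
                                            bounds=[(-5.0, 5.0)] * 5)
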
  10. 50

    Comparison with existing SOTA techniques. by Yasir Khan Jadoon (21433231)

    Published 2025
    “…The proposed architecture is trained on the selected datasets, whereas the hyperparameters are chosen using the particle swarm optimization (PSO) algorithm. The trained model is employed in the testing phase for the feature extraction from the self-attention layer and passed to the shallow wide neural network classifier for the final classification. …”
  11. 51

    Proposed inverted residual parallel block. by Yasir Khan Jadoon (21433231)

    Published 2025
    “…The proposed architecture is trained on the selected datasets, whereas the hyperparameters are chosen using the particle swarm optimization (PSO) algorithm. The trained model is employed in the testing phase for the feature extraction from the self-attention layer and passed to the shallow wide neural network classifier for the final classification. …”
  12. 52

    Inverted residual bottleneck block. by Yasir Khan Jadoon (21433231)

    Published 2025
    “…The proposed architecture is trained on the selected datasets, whereas the hyperparameters are chosen using the particle swarm optimization (PSO) algorithm. The trained model is employed in the testing phase for the feature extraction from the self-attention layer and passed to the shallow wide neural network classifier for the final classification. …”
  13. 53

    Sample classes from the HMDB51 dataset. by Yasir Khan Jadoon (21433231)

    Published 2025
    “…The proposed architecture is trained on the selected datasets, whereas the hyperparameters are chosen using the particle swarm optimization (PSO) algorithm. The trained model is employed in the testing phase for the feature extraction from the self-attention layer and passed to the shallow wide neural network classifier for the final classification. …”
  14. 54

    Sample classes from UCF101 dataset [40]. by Yasir Khan Jadoon (21433231)

    Published 2025
    “…The proposed architecture is trained on the selected datasets, whereas the hyperparameters are chosen using the particle swarm optimization (PSO) algorithm. The trained model is employed in the testing phase for the feature extraction from the self-attention layer and passed to the shallow wide neural network classifier for the final classification. …”
  15. 55

    Self-attention module for feature learning. by Yasir Khan Jadoon (21433231)

    Published 2025
    “…The proposed architecture is trained on the selected datasets, whereas the hyperparameters are chosen using the particle swarm optimization (PSO) algorithm. The trained model is employed in the testing phase for the feature extraction from the self-attention layer and passed to the shallow wide neural network classifier for the final classification. …”
  16. 56

    Residual behavior. by Yasir Khan Jadoon (21433231)

    Published 2025
    “…The proposed architecture is trained on the selected datasets, whereas the hyperparameters are chosen using the particle swarm optimization (PSO) algorithm. The trained model is employed in the testing phase for the feature extraction from the self-attention layer and passed to the shallow wide neural network classifier for the final classification. …”
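
    Results 50-56 all quote the same methodology: hyperparameters are chosen with particle swarm optimization before the trained model is used for feature extraction and classification. A minimal, self-contained PSO sketch over a stand-in validation-loss objective follows; the two tuned hyperparameters (learning rate and dropout) and the objective itself are illustrative assumptions, not the paper's actual search space or training setup.

    # Minimal particle swarm optimization (PSO) sketch for hyperparameter search.
    import numpy as np

    def validation_loss(params):
        # Placeholder objective: stands in for "train the network with these
        # hyperparameters and return the validation loss". Purely illustrative.
        lr, dropout = params
        return (np.log10(lr) + 3.0) ** 2 + (dropout - 0.3) ** 2

    def pso(objective, bounds, n_particles=15, iters=50, w=0.7, c1=1.5, c2=1.5,
            seed=0):
        rng = np.random.default_rng(seed)
        low, high = np.array(bounds).T
        dim = len(bounds)
        pos = rng.uniform(low, high, size=(n_particles, dim))
        vel = np.zeros_like(pos)
        pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
        gbest = pbest[np.argmin(pbest_val)].copy()

        for _ in range(iters):
            r1, r2 = rng.random((2, n_particles, dim))
            # Velocity update: inertia plus pulls toward personal and global bests.
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, low, high)
            vals = np.array([objective(p) for p in pos])
            improved = vals < pbest_val
            pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
            gbest = pbest[np.argmin(pbest_val)].copy()
        return gbest, pbest_val.min()

    # Search learning rate in [1e-4, 1e-1] and dropout in [0.0, 0.5].
    best_params, best_loss = pso(validation_loss,
                                 bounds=[(1e-4, 1e-1), (0.0, 0.5)])
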
  17. 57

    BiLSTM model structure diagram [30]. by Yuye Zou (22806476)

    Published 2025
    “…The model employs a sophisticated three-phase methodology: (1) decomposition through Variational Mode Decomposition (VMD) to extract multiple intrinsic mode functions (IMFs) from the original time series, effectively capturing its nonlinear and complex patterns; (2) optimization using a Chaotic Particle Swarm Optimization (CPSO) algorithm to fine-tune the Bi-directional Long Short-Term Memory (BiLSTM) network parameters, thereby improving both predictive accuracy and model stability; and (3) integration of predictions from both high-frequency and low-frequency components to generate comprehensive final forecasts. …”
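
    The three-phase methodology quoted above (VMD decomposition, CPSO-tuned BiLSTM forecasting per component, and integration of component predictions) can be summarised structurally as follows. The heavy components are replaced with simple stand-ins (moving-average "modes" and a naive persistence forecaster) so the sketch stays short and runnable; it is not the cited model.

    # Structural sketch of a three-phase decompose/forecast/integrate workflow.
    import numpy as np

    def decompose(series, n_modes=3):
        """Stand-in for VMD: split the series into smoothed 'modes' plus a residual."""
        modes, residual = [], series.astype(float)
        for window in (16, 4):  # coarse-to-fine moving averages
            kernel = np.ones(window) / window
            smooth = np.convolve(residual, kernel, mode="same")
            modes.append(smooth)
            residual = residual - smooth
        modes.append(residual)  # high-frequency remainder
        return modes[:n_modes]

    def forecast_component(component, horizon=10):
        """Stand-in for the CPSO-tuned BiLSTM: naive last-value persistence."""
        return np.full(horizon, component[-1])

    def three_phase_forecast(series, horizon=10):
        # Phase 1: decompose the series into intrinsic-mode-like components.
        components = decompose(series)
        # Phase 2: forecast each component separately (one tuned model per component).
        component_forecasts = [forecast_component(c, horizon) for c in components]
        # Phase 3: integrate component forecasts into the final prediction.
        return np.sum(component_forecasts, axis=0)

    t = np.linspace(0, 8 * np.pi, 400)
    series = np.sin(t) + 0.3 * np.sin(7 * t) + 0.05 * t
    print(three_phase_forecast(series, horizon=5))
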
  18. 58

    Proposed RVCNet architecture. by Fatema Binte Alam (17708708)

    Published 2023
    “…Finally, these results are compared with some recent deep-learning models. …”
  19. 59

    The ROC curves of the proposed RVCNet. by Fatema Binte Alam (17708708)

    Published 2023
    “…Finally, these results are compared with some recent deep-learning models. …”
  20. 60

    Radiography X-ray image from the dataset. by Fatema Binte Alam (17708708)

    Published 2023
    “…Finally, these results are compared with some recent deep-learning models. …”