Showing 141 - 160 results of 505 for search '(( final model weights optimization algorithm ) OR ( binary data based optimization algorithm ))', query time: 1.23s
  141. A* Path-Finding Algorithm to Determine Cell Connections by Max Weng (22327159)

    Published 2025
    “…The integration of heuristic optimization and machine learning significantly enhances both speed and precision in astrocyte data analysis. …”
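
    The record pairs A* search with heuristic optimization for tracing cell connections. For reference only, here is a minimal A* sketch on a 4-connected grid with a Manhattan heuristic; the grid, function names, and unit cost model are illustrative assumptions, not the paper's implementation.

```python
# Minimal A* on a 4-connected grid with a Manhattan-distance heuristic.
import heapq

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def astar(grid, start, goal):
    """grid: 2D list, 0 = free, 1 = blocked. Returns a path or None."""
    rows, cols = len(grid), len(grid[0])
    open_heap = [(manhattan(start, goal), 0, start)]  # (f, g, node)
    came_from, g_best = {}, {start: 0}
    while open_heap:
        _, g, node = heapq.heappop(open_heap)
        if node == goal:                      # reconstruct path backwards
            path = [node]
            while node in came_from:
                node = came_from[node]
                path.append(node)
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nbr = (node[0] + dr, node[1] + dc)
            if (0 <= nbr[0] < rows and 0 <= nbr[1] < cols
                    and grid[nbr[0]][nbr[1]] == 0):
                ng = g + 1                    # unit step cost
                if ng < g_best.get(nbr, float("inf")):
                    g_best[nbr] = ng
                    came_from[nbr] = node
                    heapq.heappush(open_heap, (ng + manhattan(nbr, goal), ng, nbr))
    return None

grid = [[0, 0, 0], [1, 1, 0], [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))  # routes around the blocked middle row
```
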
  142. Diagnosis network model flowchart. by Xianlin Ren (22783589)

    Published 2025
    “…Next, it is combined with composite multiscale permutation entropy to complete feature extraction and create feature vectors. Finally, a Sine Cosine Algorithm enhanced with inertia weights and Cauchy chaotic mutation is used to optimize the hyperparameters of the stacked denoising autoencoder network and construct a fault diagnosis model. …”
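
    The snippet names a Sine Cosine Algorithm (SCA) enhanced with inertia weights and Cauchy mutation. Below is a minimal SCA sketch including both ingredients, assuming a linearly decreasing inertia weight and a standard Cauchy perturbation of the best solution; in the paper the objective would be the autoencoder network's validation loss, whereas here it is a toy sphere function.

```python
# Sketch of a Sine Cosine Algorithm (SCA) minimizer with an inertia weight
# and Cauchy mutation of the best solution. Variant details are assumed.
import math, random

def sca(objective, dim, bounds, pop=20, iters=100, a=2.0):
    lo, hi = bounds
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    best = min(X, key=objective)[:]
    for t in range(iters):
        r1 = a * (1 - t / iters)              # amplitude decays over time
        w = 0.9 - 0.5 * t / iters             # linearly decreasing inertia weight
        for x in X:
            for j in range(dim):
                r2 = random.uniform(0, 2 * math.pi)
                r3, r4 = random.uniform(0, 2), random.random()
                step = r1 * (math.sin(r2) if r4 < 0.5 else math.cos(r2))
                x[j] = w * x[j] + step * abs(r3 * best[j] - x[j])
                x[j] = min(max(x[j], lo), hi)
        # Cauchy mutation around the best solution to escape local optima
        trial = [min(max(b + math.tan(math.pi * (random.random() - 0.5)), lo), hi)
                 for b in best]
        for cand in X + [trial]:
            if objective(cand) < objective(best):
                best = cand[:]
    return best

print(sca(lambda v: sum(x * x for x in v), dim=3, bounds=(-5, 5)))
```
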
  143. Node centrality and average weight. by Xuejiao Zhang (3089274)

    Published 2025
    “…Furthermore, to reduce the bias in attribute weight assessment caused by peer effects, a social network-based algorithm that enables precise quantification of subgroup and member weights is proposed. …”
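
    The record's weighting scheme is not spelled out in the snippet, so the following is only a generic illustration of deriving member weights from degree centrality and summing them into subgroup weights; the edge list, groups, and normalization are all hypothetical.

```python
# Illustrative sketch: member weights from degree centrality in a social
# network, aggregated to subgroup weights. Not the record's actual scheme.
from collections import defaultdict

edges = [("a", "b"), ("a", "c"), ("b", "c"), ("c", "d"), ("d", "e")]
groups = {"G1": ["a", "b", "c"], "G2": ["d", "e"]}

degree = defaultdict(int)
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

total = sum(degree.values())
member_w = {n: d / total for n, d in degree.items()}   # member weights sum to 1
group_w = {g: sum(member_w[n] for n in ns) for g, ns in groups.items()}

print(member_w)
print(group_w)  # subgroup weight = sum of its members' weights
```
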
  144. MEA-BP neural network algorithm flowchart. by Dongling Ma (1269888)

    Published 2025
    “…A multi-population genetic algorithm (MEA) was used to optimize the weights and thresholds of a backpropagation (BP) neural network for case adaptation and reuse. …”
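
    A minimal stand-in for the evolutionary idea: a single-population genetic algorithm evolving the flat weight/bias vector of a tiny 2-2-1 network (fit to XOR here). The paper's MEA is multi-population and pairs with BP training; the population size, rates, and toy task below are assumptions.

```python
# Sketch: a genetic algorithm searching the weights/thresholds of a tiny
# feedforward network, standing in for the MEA-BP idea. All details are toy.
import math, random

DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(w, x):
    # 2-2-1 net: w[0..3] hidden weights, w[4..5] hidden biases,
    # w[6..7] output weights, w[8] output bias
    h = [math.tanh(w[2 * i] * x[0] + w[2 * i + 1] * x[1] + w[4 + i])
         for i in range(2)]
    return 1 / (1 + math.exp(-(w[6] * h[0] + w[7] * h[1] + w[8])))

def mse(w):
    return sum((forward(w, x) - y) ** 2 for x, y in DATA) / len(DATA)

pop = [[random.uniform(-2, 2) for _ in range(9)] for _ in range(40)]
for gen in range(300):
    pop.sort(key=mse)
    elite, children = pop[:10], []
    while len(children) < 30:
        p, q = random.sample(elite, 2)
        child = [(a if random.random() < 0.5 else b) for a, b in zip(p, q)]
        if random.random() < 0.3:                          # Gaussian mutation
            child[random.randrange(9)] += random.gauss(0, 0.5)
        children.append(child)
    pop = elite + children
print("best MSE:", mse(pop[0]))
```
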
  145. hiPRS algorithm process flow. by Michela C. Massi (14599915)

    Published 2023
    “…(A) Input data is a list of genotype-level SNPs. (B) Focusing on the positive class only, the algorithm exploits FIM (the apriori algorithm) to build a list of candidate interactions of any desired order, retaining those whose empirical frequency exceeds a given threshold δ. …”
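
    A simplified apriori-style sketch of the FIM step described in (B): grow candidate interactions level by level and retain those whose empirical frequency meets the threshold δ. The toy transactions stand in for genotype-level SNP patterns, and the candidate-join rule is a simplification of the textbook algorithm.

```python
# Apriori-style frequent itemset mining with a frequency threshold delta.
from itertools import combinations

def apriori(transactions, delta):
    n = len(transactions)
    freq = lambda items: sum(items <= t for t in transactions) / n
    items = {i for t in transactions for i in t}
    level = [frozenset([i]) for i in items if freq(frozenset([i])) >= delta]
    result = list(level)
    while level:
        # join pairs of frequent sets that differ by exactly one item
        candidates = {a | b for a, b in combinations(level, 2)
                      if len(a | b) == len(a) + 1}
        level = [c for c in candidates if freq(c) >= delta]
        result += level
    return result

tx = [frozenset(s) for s in ({"snp1", "snp2"},
                             {"snp1", "snp2", "snp3"},
                             {"snp2", "snp3"})]
print(apriori(tx, delta=0.6))
```
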
  146. Diagnosis accuracy of models after adding noise. by Xianlin Ren (22783589)

    Published 2025
    “…Next, it is combined with composite multiscale permutation entropy to complete feature extraction and create feature vectors. Finally, a Sine Cosine Algorithm enhanced with inertia weights and Cauchy chaotic mutation is used to optimize the hyperparameters of the stacked denoising autoencoder network and construct a fault diagnosis model. …”
  147. Comparison of algorithm search curves. by Bowen Li (200859)

    Published 2023
    “…The optimal parameters, such as the width and weights of the RBF, are determined, and the optimal RDC-RBF fault diagnosis model is established. …”
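
    For context, once an RBF width and centers are fixed, the output weights follow from a linear least-squares solve, which is the quantity a search algorithm would tune. The sketch below assumes a fixed grid of centers and a hand-picked sigma; the record's RDC search procedure is not reproduced.

```python
# Fitting an RBF network for a given width (sigma) and fixed centers;
# the output-layer weights come from linear least squares.
import numpy as np

def rbf_design(X, centers, sigma):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)

centers = np.linspace(-3, 3, 10)[:, None]    # fixed grid of centers
sigma = 0.8                                  # a candidate a search would tune
Phi = rbf_design(X, centers, sigma)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # output weights by least squares

pred = Phi @ w
print("train MSE:", float(((pred - y) ** 2).mean()))
```
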
  148. Evaluation grade of comfort. by Jianjun Yang (124022)

    Published 2023
    “…To evaluate the comfort of automobile intelligent cockpits, an evaluation model based on an improved combination weighting-cloud model is established. …”
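
    The cloud model's basic primitive is the normal cloud generator defined by expectation Ex, entropy En, and hyper-entropy He: each drop receives a value and a membership degree. A one-dimensional sketch follows, with grade parameters chosen purely for illustration; the paper's combination weighting step is omitted.

```python
# One-dimensional normal cloud generator (Ex, En, He).
import math, random

def normal_cloud(Ex, En, He, n=5):
    drops = []
    for _ in range(n):
        En_i = random.gauss(En, He)           # entropy perturbed by hyper-entropy
        x = random.gauss(Ex, abs(En_i))       # drop position
        mu = math.exp(-(x - Ex) ** 2 / (2 * En_i ** 2)) if En_i else 1.0
        drops.append((x, mu))
    return drops

# e.g. a hypothetical "comfortable" grade cloud centered at score 80
for x, mu in normal_cloud(Ex=80, En=5, He=0.5):
    print(f"score={x:6.2f}  membership={mu:.3f}")
```
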
  149. Two parameters of EMCM. by Jianjun Yang (124022)

    Published 2023
    “…To evaluate the comfort of automobile intelligent cockpits, an evaluation model based on an improved combination weighting-cloud model is established. …”
  150. Standard evaluation cloud parameters. by Jianjun Yang (124022)

    Published 2023
    “…To evaluate the comfort of automobile intelligent cockpits, an evaluation model based on an improved combination weighting-cloud model is established. …”
  151. Second-class index scoring. by Jianjun Yang (124022)

    Published 2023
    “…To evaluate the comfort of automobile intelligent cockpits, an evaluation model based on an improved combination weighting-cloud model is established. …”
  152. Parameters for model construction. by Wei Wang (17594)

    Published 2024
    “…To ensure the safety of coal mine production and effectively prevent water inrush accidents, a mine water source identification model based on kernel principal component analysis (KPCA) and a kernel extreme learning machine (KELM) optimized by an improved sparrow search algorithm (ISSA) is proposed to improve the accuracy of water inrush source identification. …”
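
    A KELM reduces training to a single regularized linear solve, beta = (K + I/C)^(-1) T. The sketch below uses an RBF kernel and random stand-in data; the KPCA preprocessing and the ISSA tuning of (C, gamma) described in the record are not reproduced.

```python
# Kernel extreme learning machine (KELM) with an RBF kernel: training is
# one regularized linear solve. Data and parameters here are stand-ins.
import numpy as np

def rbf_kernel(A, B, gamma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kelm_fit(X, T, C=10.0, gamma=0.5):
    K = rbf_kernel(X, X, gamma)
    beta = np.linalg.solve(K + np.eye(len(X)) / C, T)   # (K + I/C)^-1 T
    return lambda Xq: rbf_kernel(Xq, X, gamma) @ beta

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))            # stand-in hydrochemical features
T = np.eye(3)[rng.integers(0, 3, 100)]   # one-hot labels: 3 water sources
predict = kelm_fit(X, T)
print("train accuracy:", (predict(X).argmax(1) == T.argmax(1)).mean())
```
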
  153. Comparison with existing SOTA techniques. by Yasir Khan Jadoon (21433231)

    Published 2025
    “…The proposed architecture is trained on the selected datasets, with the hyperparameters chosen using the particle swarm optimization (PSO) algorithm. In the testing phase, the trained model extracts features from the self-attention layer, which are passed to a shallow wide neural network classifier for the final classification. …”
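
    A minimal PSO sketch for reference: velocity updates blend inertia, a pull toward each particle's personal best, and a pull toward the global best. Here the objective is a toy quadratic; in the paper it would be a validation metric of the trained network, and the coefficients below are conventional defaults, not the authors' settings.

```python
# Particle swarm optimization (PSO) over a 2-D search space.
import random

def pso(objective, dim=2, bounds=(-5, 5), pop=20, iters=100,
        w=0.7, c1=1.5, c2=1.5):
    lo, hi = bounds
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    V = [[0.0] * dim for _ in range(pop)]
    P = [x[:] for x in X]                     # personal bests
    g = min(P, key=objective)[:]              # global best
    for _ in range(iters):
        for i in range(pop):
            for j in range(dim):
                V[i][j] = (w * V[i][j]
                           + c1 * random.random() * (P[i][j] - X[i][j])
                           + c2 * random.random() * (g[j] - X[i][j]))
                X[i][j] = min(max(X[i][j] + V[i][j], lo), hi)
            if objective(X[i]) < objective(P[i]):
                P[i] = X[i][:]
                if objective(P[i]) < objective(g):
                    g = P[i][:]
    return g

print(pso(lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2))  # converges near [1, -2]
```
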
  154. Proposed inverted residual parallel block. by Yasir Khan Jadoon (21433231)

    Published 2025
    “…The proposed architecture is trained on the selected datasets, with the hyperparameters chosen using the particle swarm optimization (PSO) algorithm. In the testing phase, the trained model extracts features from the self-attention layer, which are passed to a shallow wide neural network classifier for the final classification. …”
  155. Inverted residual bottleneck block. by Yasir Khan Jadoon (21433231)

    Published 2025
    “…The proposed architecture is trained on the selected datasets, with the hyperparameters chosen using the particle swarm optimization (PSO) algorithm. In the testing phase, the trained model extracts features from the self-attention layer, which are passed to a shallow wide neural network classifier for the final classification. …”
  156. Proposed architecture testing phase. by Yasir Khan Jadoon (21433231)

    Published 2025
    “…The proposed architecture is trained on the selected datasets, with the hyperparameters chosen using the particle swarm optimization (PSO) algorithm. In the testing phase, the trained model extracts features from the self-attention layer, which are passed to a shallow wide neural network classifier for the final classification. …”
  157. Sample classes from the HMDB51 dataset. by Yasir Khan Jadoon (21433231)

    Published 2025
    “…The proposed architecture is trained on the selected datasets, with the hyperparameters chosen using the particle swarm optimization (PSO) algorithm. In the testing phase, the trained model extracts features from the self-attention layer, which are passed to a shallow wide neural network classifier for the final classification. …”
  158. Sample classes from the UCF101 dataset [40]. by Yasir Khan Jadoon (21433231)

    Published 2025
    “…The proposed architecture is trained on the selected datasets, with the hyperparameters chosen using the particle swarm optimization (PSO) algorithm. In the testing phase, the trained model extracts features from the self-attention layer, which are passed to a shallow wide neural network classifier for the final classification. …”
  159. Self-attention module for feature learning. by Yasir Khan Jadoon (21433231)

    Published 2025
    “…The proposed architecture is trained on the selected datasets, with the hyperparameters chosen using the particle swarm optimization (PSO) algorithm. In the testing phase, the trained model extracts features from the self-attention layer, which are passed to a shallow wide neural network classifier for the final classification. …”
  160. Residual behavior. by Yasir Khan Jadoon (21433231)

    Published 2025
    “…The proposed architecture is trained on the selected datasets, with the hyperparameters chosen using the particle swarm optimization (PSO) algorithm. In the testing phase, the trained model extracts features from the self-attention layer, which are passed to a shallow wide neural network classifier for the final classification. …”