Search alternatives:
weights optimization » weight optimization, weights initialization, design optimization
based optimization » whale optimization
model weights » body weights
final model » animal model
binary data » primary data, dietary data
data based » data used
-
141
A* Path-Finding Algorithm to Determine Cell Connections
Published 2025 “…The integration of heuristic optimization and machine learning significantly enhances both speed and precision in astrocyte data analysis. …”
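The snippet does not show how the paper encodes the imaging field, so the following is a minimal sketch of A* over a 4-connected occupancy grid, assuming 0 = free and 1 = blocked, with an admissible Manhattan heuristic:

    import heapq, itertools

    def astar(grid, start, goal):
        # A* over a 4-connected occupancy grid; 0 = free, 1 = blocked (assumed encoding).
        rows, cols = len(grid), len(grid[0])
        h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
        tie = itertools.count()          # tie-breaker so the heap never compares cells
        open_heap = [(h(start), next(tie), start)]
        g_best, parent = {start: 0}, {start: None}
        while open_heap:
            _, _, cur = heapq.heappop(open_heap)
            if cur == goal:              # reconstruct the path by walking parents
                path = []
                while cur is not None:
                    path.append(cur)
                    cur = parent[cur]
                return path[::-1]
            r, c = cur
            for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                    ng = g_best[cur] + 1
                    if ng < g_best.get(nxt, float("inf")):
                        g_best[nxt], parent[nxt] = ng, cur
                        heapq.heappush(open_heap, (ng + h(nxt), next(tie), nxt))
        return None                      # goal unreachable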
-
142
Diagnosis network model flowchart.
Published 2025 “…Next, it combines with composite multiscale permutation entropy to finish feature extraction and create feature vectors. Finally, a Sine Cosine Algorithm enhanced with inertia weights and Cauchy chaotic mutation is utilized to optimize the hyperparameters of the stacked denoising auto-encoder network and construct a fault diagnosis model. …”
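The snippet names the enhancements but not their schedules, so this sketch uses common choices: a linearly decaying inertia weight on the current positions and a Cauchy jump around the best solution, applied to a generic objective `obj` (the 0.9/0.5 schedule and 0.1 mutation scale are assumptions, not the paper's values):

    import numpy as np

    def sca_optimize(obj, lb, ub, dim, pop=30, iters=200, a=2.0, seed=0):
        # Sine Cosine Algorithm with a decaying inertia weight and a Cauchy
        # mutation of the best solution.
        rng = np.random.default_rng(seed)
        X = rng.uniform(lb, ub, (pop, dim))
        fit = np.apply_along_axis(obj, 1, X)
        best, best_f = X[fit.argmin()].copy(), fit.min()
        for t in range(iters):
            r1 = a * (1 - t / iters)                    # shrinks: exploration -> exploitation
            w = 0.9 - 0.5 * t / iters                   # assumed inertia-weight schedule
            r2 = rng.uniform(0, 2 * np.pi, (pop, dim))
            r3 = rng.uniform(0, 2, (pop, dim))
            r4 = rng.uniform(size=(pop, dim))
            step = r1 * np.where(r4 < 0.5, np.sin(r2), np.cos(r2)) * np.abs(r3 * best - X)
            X = np.clip(w * X + step, lb, ub)
            fit = np.apply_along_axis(obj, 1, X)
            if fit.min() < best_f:
                best, best_f = X[fit.argmin()].copy(), fit.min()
            trial = np.clip(best + 0.1 * rng.standard_cauchy(dim), lb, ub)  # Cauchy jump
            trial_f = obj(trial)
            if trial_f < best_f:
                best, best_f = trial, trial_f
        return best, best_f

    # e.g. each dimension of `best` maps to one SDAE hyperparameter (assumed encoding).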
-
143
Node centrality and average weight.
Published 2025 “…Furthermore, to reduce the bias in attribute weight assessment caused by peer effects, a social network-based algorithm that enables precise quantification of subgroup and member weights is proposed. …”
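The snippet names the centrality-to-weight step but not its formula; one plausible sketch, assuming eigenvector centrality on a toy graph, normalizes centrality into member weights with networkx:

    import networkx as nx

    # Toy collaboration graph; member weight = normalized eigenvector centrality.
    G = nx.Graph([("a", "b"), ("a", "c"), ("b", "c"), ("c", "d")])
    cent = nx.eigenvector_centrality(G)
    total = sum(cent.values())
    member_w = {v: c / total for v, c in cent.items()}   # weights sum to 1
    # Subgroup weights could then aggregate member weights over each subgroup.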
-
144
MEA-BP neural network algorithm flowchart.
Published 2025 “…A multi-population genetic algorithm (MEA) was used to optimize the weights and thresholds of a backpropagation (BP) neural network for case adaptation and reuse. …”
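MEA's multi-population operators are not detailed in the snippet; a single-population sketch of the core idea, evolving a flat chromosome that holds every BP weight and threshold:

    import numpy as np

    def mlp_forward(x, vec, n_in, n_hid, n_out):
        # Decode a flat chromosome into the weights and biases ("thresholds")
        # of a one-hidden-layer BP network, then run a forward pass.
        i = 0
        W1 = vec[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
        b1 = vec[i:i + n_hid]; i += n_hid
        W2 = vec[i:i + n_hid * n_out].reshape(n_hid, n_out); i += n_hid * n_out
        b2 = vec[i:i + n_out]
        return np.tanh(x @ W1 + b1) @ W2 + b2

    def ga_train(X, y, n_in, n_hid, n_out, pop=50, gens=100, seed=0):
        rng = np.random.default_rng(seed)
        dim = n_in * n_hid + n_hid + n_hid * n_out + n_out
        P = rng.normal(0.0, 1.0, (pop, dim))
        mse = lambda v: np.mean((mlp_forward(X, v, n_in, n_hid, n_out) - y) ** 2)
        for _ in range(gens):
            fit = np.array([mse(v) for v in P])
            elite = P[fit.argsort()[: pop // 2]]          # truncation selection
            pairs = rng.integers(0, len(elite), (pop - len(elite), 2))
            alpha = rng.uniform(size=(pop - len(elite), 1))
            kids = alpha * elite[pairs[:, 0]] + (1 - alpha) * elite[pairs[:, 1]]  # blend crossover
            kids += rng.normal(0.0, 0.1, kids.shape)      # Gaussian mutation
            P = np.vstack([elite, kids])
        fit = np.array([mse(v) for v in P])
        return P[fit.argmin()]                            # best chromosome

In an MEA-BP pipeline, the returned chromosome would typically initialize ordinary gradient-based BP training rather than replace it.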
-
145
hiPRS algorithm process flow.
Published 2023 “…(A) Input data is a list of genotype-level SNPs. (B) Focusing on the positive class only, the algorithm exploits FIM (apriori algorithm) to build a list of candidate interactions of any desired order, retaining those that have an empirical frequency above a given threshold δ. …”
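A compact sketch of the retention step the snippet describes, assuming transactions are sets of SNP identifiers drawn from positive-class samples only (the candidate-generation shortcut below omits full apriori pruning):

    from itertools import combinations

    def apriori(transactions, delta, max_order=3):
        # Keep candidate interactions whose empirical frequency among the
        # positive samples is at least delta, the snippet's threshold.
        n = len(transactions)
        freq = lambda s: sum(s <= t for t in transactions) / n
        items = {i for t in transactions for i in t}
        current = [frozenset([i]) for i in items if freq(frozenset([i])) >= delta]
        frequent = list(current)
        for _ in range(max_order - 1):
            # Candidate (k+1)-sets from unions of frequent k-sets.
            cand = {a | b for a, b in combinations(current, 2) if len(a | b) == len(a) + 1}
            current = [c for c in cand if freq(c) >= delta]
            frequent += current
            if not current:
                break
        return frequent

    # e.g. apriori([{"snp1", "snp7"}, {"snp1"}, {"snp1", "snp7", "snp9"}], delta=0.5)
    # retains {snp1}, {snp7}, and the pairwise interaction {snp1, snp7}.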
-
146
Diagnosis accuracy of models after adding noise.
Published 2025 “…Next, it combines with composite multiscale permutation entropy to finish feature extraction and create feature vectors. Finally, a Sine Cosine Algorithm enhanced with inertia weights and Cauchy chaotic mutation is utilized to optimize the hyperparameters of the stacked denoising auto-encoder network and construct a fault diagnosis model. …”
-
147
Comparison of algorithm search curves.
Published 2023 “…The optimal parameters, such as the width and weights of the RBF, are determined, and the optimal RDC-RBF fault diagnosis model is established. …”
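As a sketch of what such a search tunes, here is a minimal RBF network in which the shared Gaussian width is the searched parameter and the output weights follow in closed form by least squares (the paper's RDC components are not reproduced):

    import numpy as np

    def rbf_design(X, centers, width):
        # Gaussian basis matrix with one shared width (the tuned parameter).
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * width ** 2))

    def rbf_fit(X, y, centers, width):
        Phi = rbf_design(X, centers, width)
        w, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # closed-form output weights
        return w

    def rbf_predict(Xq, centers, width, w):
        return rbf_design(Xq, centers, width) @ w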
-
148
Evaluation grade of comfort.
Published 2023 “…Aiming at the comfort evaluation of automobile intelligent cockpits, an evaluation model based on an improved combination weighting-cloud model is established. …”
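The index system and the exact weighting scheme are not in the snippet; a sketch of the forward normal cloud generator such models are built on, with an illustrative, hypothetical blend of subjective and objective weights:

    import numpy as np

    def forward_cloud(Ex, En, He, n=2000, seed=0):
        # Forward normal cloud generator: n drops for a concept described by
        # expectation Ex, entropy En and hyper-entropy He.
        rng = np.random.default_rng(seed)
        En_i = rng.normal(En, He, n)                     # per-drop entropy
        x = rng.normal(Ex, np.abs(En_i))                 # drop positions
        mu = np.exp(-(x - Ex) ** 2 / (2.0 * En_i ** 2))  # certainty degree of each drop
        return x, mu

    # Combination weighting (illustrative, hypothetical numbers): blend
    # subjective (e.g. AHP) and objective (e.g. entropy-method) index weights.
    w_subj = np.array([0.40, 0.35, 0.25])
    w_obj = np.array([0.30, 0.40, 0.30])
    w = 0.5 * w_subj + 0.5 * w_obj                       # combined index weights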
-
149
Two parameters of EMCM.
Published 2023 “…Aiming at the comfort evaluation of automobile intelligent cockpits, an evaluation model based on an improved combination weighting-cloud model is established. …”
-
150
Standard evaluation cloud parameters.
Published 2023 “…Aiming at the comfort evaluation of automobile intelligent cockpits, an evaluation model based on an improved combination weighting-cloud model is established. …”
-
151
Second-class index scoring.
Published 2023 “…Aiming at the comfort evaluation of automobile intelligent cockpits, an evaluation model based on an improved combination weighting-cloud model is established. …”
-
152
Parameters for model construction.
Published 2024 “…To ensure the safety of coal mine production and effectively prevent water inrush accidents, a mine water source identification model based on kernel principal component analysis (KPCA) and a kernel extreme learning machine (KELM) optimized by an improved sparrow search algorithm (ISSA) is proposed to improve the accuracy of mine water inrush source identification. …”
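A sketch of the KPCA-then-KELM pipeline under made-up data (`hydro_chem` and the class labels are hypothetical stand-ins for ion-concentration samples); in the paper, the kernel parameters gamma and C would be tuned by ISSA rather than fixed as here:

    import numpy as np
    from sklearn.decomposition import KernelPCA

    def kelm_fit(X, T, gamma=1.0, C=100.0):
        # Kernel extreme learning machine: closed-form output weights
        # beta = (K + I/C)^(-1) T with an RBF kernel.
        K = np.exp(-gamma * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
        beta = np.linalg.solve(K + np.eye(len(X)) / C, T)
        return X, beta

    def kelm_predict(model, Xq, gamma=1.0):
        Xtr, beta = model
        K = np.exp(-gamma * ((Xq[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1))
        return K @ beta

    rng = np.random.default_rng(0)
    hydro_chem = rng.normal(size=(100, 7))     # hypothetical ion-concentration samples
    labels = rng.integers(0, 3, 100)           # hypothetical water-source classes
    T = np.eye(3)[labels]                      # one-hot targets
    Z = KernelPCA(n_components=5, kernel="rbf").fit_transform(hydro_chem)
    model = kelm_fit(Z[:80], T[:80])
    pred = kelm_predict(model, Z[80:]).argmax(1)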
-
153
Comparison with existing SOTA techniques.
Published 2025 “…The proposed architecture is trained on the selected datasets, while the hyperparameters are chosen using the particle swarm optimization (PSO) algorithm. The trained model is employed in the testing phase for feature extraction from the self-attention layer, and the extracted features are passed to a shallow wide neural network classifier for the final classification. …”
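A generic PSO sketch of the hyperparameter-selection step; the `score` callback is an assumption standing in for a short training run that returns validation accuracy:

    import numpy as np

    def pso(score, lb, ub, n_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5, seed=0):
        # Particle swarm optimization over the box [lb, ub]; `score` is maximized.
        rng = np.random.default_rng(seed)
        lb, ub = np.asarray(lb, float), np.asarray(ub, float)
        X = rng.uniform(lb, ub, (n_particles, len(lb)))
        V = np.zeros_like(X)
        pbest, pbest_f = X.copy(), np.array([score(x) for x in X])
        gbest = pbest[pbest_f.argmax()].copy()
        for _ in range(iters):
            r1 = rng.uniform(size=X.shape)
            r2 = rng.uniform(size=X.shape)
            V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
            X = np.clip(X + V, lb, ub)
            f = np.array([score(x) for x in X])
            better = f > pbest_f
            pbest[better], pbest_f[better] = X[better], f[better]
            gbest = pbest[pbest_f.argmax()].copy()
        return gbest

    # e.g. two hyperparameters (learning rate, dropout); the quadratic stand-in
    # below peaks at (0.01, 0.3) and mimics a validation-accuracy surface.
    best = pso(lambda hp: -((hp[0] - 0.01) ** 2 + (hp[1] - 0.3) ** 2), [1e-4, 0.0], [0.1, 0.9])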
-
154
Proposed inverted residual parallel block.
Published 2025 “…The proposed architecture is trained on the selected datasets, while the hyperparameters are chosen using the particle swarm optimization (PSO) algorithm. The trained model is employed in the testing phase for feature extraction from the self-attention layer, and the extracted features are passed to a shallow wide neural network classifier for the final classification. …”
-
155
Inverted residual bottleneck block.
Published 2025 “…The proposed architecture is trained on the selected datasets, while the hyperparameters are chosen using the particle swarm optimization (PSO) algorithm. The trained model is employed in the testing phase for feature extraction from the self-attention layer, and the extracted features are passed to a shallow wide neural network classifier for the final classification. …”
-
156
Proposed architecture testing phase.
Published 2025 “…The proposed architecture is trained on the selected datasets, while the hyperparameters are chosen using the particle swarm optimization (PSO) algorithm. The trained model is employed in the testing phase for feature extraction from the self-attention layer, and the extracted features are passed to a shallow wide neural network classifier for the final classification. …”
-
157
Sample classes from the HMDB51 dataset.
Published 2025 “…The proposed architecture is trained on the selected datasets, while the hyperparameters are chosen using the particle swarm optimization (PSO) algorithm. The trained model is employed in the testing phase for feature extraction from the self-attention layer, and the extracted features are passed to a shallow wide neural network classifier for the final classification. …”
-
158
Sample classes from the UCF101 dataset [40].
Published 2025 “…The proposed architecture is trained on the selected datasets, while the hyperparameters are chosen using the particle swarm optimization (PSO) algorithm. The trained model is employed in the testing phase for feature extraction from the self-attention layer, and the extracted features are passed to a shallow wide neural network classifier for the final classification. …”
-
159
Self-attention module for feature learning.
Published 2025 “…The proposed architecture is trained on the selected datasets, while the hyperparameters are chosen using the particle swarm optimization (PSO) algorithm. The trained model is employed in the testing phase for feature extraction from the self-attention layer, and the extracted features are passed to a shallow wide neural network classifier for the final classification. …”
-
160
Residual behavior.
Published 2025 “…The proposed architecture is trained on the selected datasets, while the hyperparameters are chosen using the particle swarm optimization (PSO) algorithm. The trained model is employed in the testing phase for feature extraction from the self-attention layer, and the extracted features are passed to a shallow wide neural network classifier for the final classification. …”