-
101
hiPRS algorithm process flow.
Published 2023: “…(B) Focusing on the positive class only, the algorithm exploits FIM (the apriori algorithm) to build a list of candidate interactions of any desired order, retaining those that have an empirical frequency above a given threshold δ. …”
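The snippet above describes frequent itemset mining (FIM): keep candidate itemsets whose empirical frequency meets a threshold δ. A minimal apriori-style sketch in pure Python (not the hiPRS implementation; the data and threshold below are illustrative):

```python
from itertools import combinations

def frequent_itemsets(transactions, delta):
    """Apriori-style search: retain candidate itemsets whose empirical
    frequency (support) is at least the threshold delta."""
    n = len(transactions)
    items = sorted({i for t in transactions for i in t})
    current = [frozenset([i]) for i in items]   # frequent 1-itemset candidates
    result = {}
    k = 1
    while current:
        # Count the support of each candidate in one pass over the data.
        counts = {c: sum(c <= t for t in transactions) for c in current}
        frequent = {c: v / n for c, v in counts.items() if v / n >= delta}
        result.update(frequent)
        # Join frequent k-itemsets to generate (k+1)-itemset candidates.
        keys = list(frequent)
        current = list({a | b for a, b in combinations(keys, 2)
                        if len(a | b) == k + 1})
        k += 1
    return result

data = [frozenset(t) for t in [{"a", "b"}, {"a", "b", "c"}, {"a", "c"}, {"b", "c"}]]
fs = frequent_itemsets(data, delta=0.5)
```

On this toy data, every single item and every pair passes δ = 0.5, while the triple {a, b, c} (support 0.25) is pruned.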
-
103
Triplet Matching for Estimating Causal Effects With Three Treatment Arms: A Comparative Study of Mortality by Trauma Center Level
Published 2021: “…We implement the evidence factors method for binary outcomes, which includes a randomization-based testing strategy and a sensitivity analysis for hidden bias in three-group matched designs. …”
-
104
Multicategory Angle-Based Learning for Estimating Optimal Dynamic Treatment Regimes With Censored Data
Published 2021: “…In this article, we develop a novel angle-based approach to search the optimal DTR under a multicategory treatment framework for survival data. …”
-
105
Comparison with existing SOTA techniques.
Published 2025: “…The proposed architecture is trained on the selected datasets, whereas the hyperparameters are chosen using the particle swarm optimization (PSO) algorithm. The trained model is employed in the testing phase for the feature extraction from the self-attention layer and passed to the shallow wide neural network classifier for the final classification. …”
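This entry (and the related figures below) describes choosing hyperparameters with particle swarm optimization. A minimal PSO sketch over a box-constrained search space; the two-parameter objective is a toy stand-in for a validation-loss evaluation, not the paper's actual training loop:

```python
import random

def pso(objective, bounds, n_particles=20, iters=60, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimization: each particle tracks its
    personal best, and velocities are pulled toward the personal and
    global bests with inertia w and acceleration weights c1, c2."""
    dim = len(bounds)
    rnd = random.Random(0)
    pos = [[rnd.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rnd.random(), rnd.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # Clamp the updated position back into the search box.
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]),
                                bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy "hyperparameter" objective with its minimum at (0.1, 0.9),
# e.g. a learning rate and a momentum term (illustrative names only).
best, loss = pso(lambda p: (p[0] - 0.1) ** 2 + (p[1] - 0.9) ** 2,
                 bounds=[(0.0, 1.0), (0.0, 1.0)])
```

In practice each `objective` call would train and validate the model at the candidate hyperparameters, which is what makes PSO attractive here: it needs only objective values, no gradients.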
-
106
Proposed inverted residual parallel block.
Published 2025: “…The proposed architecture is trained on the selected datasets, whereas the hyperparameters are chosen using the particle swarm optimization (PSO) algorithm. The trained model is employed in the testing phase for the feature extraction from the self-attention layer and passed to the shallow wide neural network classifier for the final classification. …”
-
107
Inverted residual bottleneck block.
Published 2025: “…The proposed architecture is trained on the selected datasets, whereas the hyperparameters are chosen using the particle swarm optimization (PSO) algorithm. The trained model is employed in the testing phase for the feature extraction from the self-attention layer and passed to the shallow wide neural network classifier for the final classification. …”
-
108
Proposed architecture testing phase.
Published 2025: “…The proposed architecture is trained on the selected datasets, whereas the hyperparameters are chosen using the particle swarm optimization (PSO) algorithm. The trained model is employed in the testing phase for the feature extraction from the self-attention layer and passed to the shallow wide neural network classifier for the final classification. …”
-
109
Sample classes from the HMDB51 dataset.
Published 2025: “…The proposed architecture is trained on the selected datasets, whereas the hyperparameters are chosen using the particle swarm optimization (PSO) algorithm. The trained model is employed in the testing phase for the feature extraction from the self-attention layer and passed to the shallow wide neural network classifier for the final classification. …”
-
110
Sample classes from the UCF101 dataset [40].
Published 2025: “…The proposed architecture is trained on the selected datasets, whereas the hyperparameters are chosen using the particle swarm optimization (PSO) algorithm. The trained model is employed in the testing phase for the feature extraction from the self-attention layer and passed to the shallow wide neural network classifier for the final classification. …”
-
111
Self-attention module for feature learning.
Published 2025: “…The proposed architecture is trained on the selected datasets, whereas the hyperparameters are chosen using the particle swarm optimization (PSO) algorithm. The trained model is employed in the testing phase for the feature extraction from the self-attention layer and passed to the shallow wide neural network classifier for the final classification. …”
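The self-attention layer used here for feature extraction follows the standard scaled dot-product form. A single-head NumPy sketch (the shapes and random weights are illustrative, not the paper's configuration):

```python
import numpy as np

def self_attention(x, wq, wk, wv):
    """Single-head scaled dot-product self-attention:
    softmax(Q K^T / sqrt(d)) V with learned projections wq, wk, wv."""
    q, k, v = x @ wq, x @ wk, x @ wv
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # rows sum to 1
    return weights @ v, weights

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))          # 5 tokens, 8 features each
wq, wk, wv = (rng.normal(size=(8, 8)) for _ in range(3))
out, attn = self_attention(x, wq, wk, wv)
```

The attention matrix `attn` is row-stochastic, so each output feature vector is a convex combination of the value vectors; in the paper's pipeline, `out` would be the extracted features passed on to the classifier.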
-
112
Residual behavior.
Published 2025: “…The proposed architecture is trained on the selected datasets, whereas the hyperparameters are chosen using the particle swarm optimization (PSO) algorithm. The trained model is employed in the testing phase for the feature extraction from the self-attention layer and passed to the shallow wide neural network classifier for the final classification. …”
-
113
Flowchart of the developed MAOA.
Published 2023: “…The resultant processed data is given into two models named (i) Autoencoder with Deep Belief Network (DBN), in which the optimal features are selected from Autoencoder with the aid of Modified Archimedes Optimization Algorithm (MAOA). …”
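MAOA here plays the role of a metaheuristic searching over binary feature masks. A toy stand-in (random bit-flip hill climbing, not the actual Archimedes update rules) showing the optimize-a-mask pattern; the fitness function below is invented for illustration:

```python
import random

def select_features(fitness, n_features, iters=200, seed=0):
    """Toy metaheuristic for binary feature selection: flip one bit at a
    time and keep the change when fitness does not decrease. A real
    MAOA would use population-based Archimedes-style updates instead."""
    rnd = random.Random(seed)
    mask = [rnd.randint(0, 1) for _ in range(n_features)]
    best = fitness(mask)
    for _ in range(iters):
        cand = mask[:]
        cand[rnd.randrange(n_features)] ^= 1  # flip one feature on/off
        val = fitness(cand)
        if val >= best:
            mask, best = cand, val
    return mask, best

# Hypothetical fitness: reward selecting the three "informative" features,
# penalize extras (a stand-in for classifier accuracy minus a size penalty).
informative = {0, 1, 2}
def fitness(mask):
    chosen = {i for i, b in enumerate(mask) if b}
    return len(chosen & informative) - 0.5 * len(chosen - informative)

mask, score = select_features(fitness, n_features=8)
```

In the paper's setting, the mask would index autoencoder latent features and the fitness would be the downstream DBN's validation performance.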
-
114
Representation of autoencoder with DBN.
Published 2023: “…The resultant processed data is given into two models named (i) Autoencoder with Deep Belief Network (DBN), in which the optimal features are selected from Autoencoder with the aid of Modified Archimedes Optimization Algorithm (MAOA). …”
-
115
Threats in MQTT set.
Published 2023: “…The resultant processed data is given into two models named (i) Autoencoder with Deep Belief Network (DBN), in which the optimal features are selected from Autoencoder with the aid of Modified Archimedes Optimization Algorithm (MAOA). …”
-
116
Features of MQTT set.
Published 2023: “…The resultant processed data is given into two models named (i) Autoencoder with Deep Belief Network (DBN), in which the optimal features are selected from Autoencoder with the aid of Modified Archimedes Optimization Algorithm (MAOA). …”
-
117
Architectural view of smart home atmosphere.
Published 2023: “…The resultant processed data is given into two models named (i) Autoencoder with Deep Belief Network (DBN), in which the optimal features are selected from Autoencoder with the aid of Modified Archimedes Optimization Algorithm (MAOA). …”
-
119
SHAP bar plot.
Published 2025: “…Objective: This study aimed to develop a risk prediction model for CI in CKD patients using machine learning algorithms, with the objective of enhancing risk prediction accuracy and facilitating early intervention.…”
-
120
Sample screening flowchart.
Published 2025: “…Objective: This study aimed to develop a risk prediction model for CI in CKD patients using machine learning algorithms, with the objective of enhancing risk prediction accuracy and facilitating early intervention.…”