1. The Pseudo-Code of the IRBMO Algorithm. Published 2025.
   "…To adapt to the feature selection problem, we convert the continuous optimization algorithm to binary form via a transfer function, which further enhances the applicability of the algorithm. …"
2. IRBMO vs. meta-heuristic algorithms boxplot. Published 2025.
3. IRBMO vs. feature selection algorithm boxplot. Published 2025.
4. ROC curve for binary classification. Published 2024.
   "…The proposed model yielded notable results, such as an accuracy of 93.45% and an area under the curve value of 0.99 when trained on the three classes. The model further showed superior results on binary classification compared with existing methods. …"
5. Confusion matrix for binary classification. Published 2024.
6. (No title available.)
7. Pseudo Code of RBMO. Published 2025.
8. P-value on CEC-2017 (Dim = 30). Published 2025.
9. Memory storage behavior. Published 2025.
10. Elite search behavior. Published 2025.
11. Description of the datasets. Published 2025.
12. S- and V-shaped transfer functions. Published 2025.
13. S- and V-type transfer function diagrams. Published 2025.
14. Collaborative hunting behavior. Published 2025.
15. Friedman average rank sum test results. Published 2025.
16. IRBMO vs. variant comparison adaptation data. Published 2025.
17. Thesis-RAMIS-Figs_Slides. Published 2024.
   "…Finally, although the concepts, ideas and algorithms were developed for inverse problems in geostatistics, the results are applicable to a wide range of disciplines where similar sampling problems need to be faced, including but not limited to the design of communication networks, optimal integration and communication of swarms of robots and drones, and remote sensing. …"
18. Testing results for classifying AD, MCI and NC. Published 2024.
19. Summary of existing CNN models. Published 2024.
20. Supplementary file 1_Comparative evaluation of fast-learning classification algorithms for urban forest tree species identification using EO-1 hyperion hyperspectral imagery.docx. Published 2025.
   "…Methods: Thirteen supervised classification algorithms were comparatively evaluated, encompassing traditional spectral/statistical classifiers (Maximum Likelihood, Mahalanobis Distance, Minimum Distance, Parallelepiped, Spectral Angle Mapper (SAM), Spectral Information Divergence (SID), and Binary Encoding) and machine learning algorithms including Decision Tree (DT), K-Nearest Neighbor (KNN), Support Vector Machine (SVM), Random Forest (RF), and Artificial Neural Network (ANN). …"