Search alternatives:
robust classification » forest classification, risk classification, group classification
wolf optimization » whale optimization, swarm optimization, _ optimization
binary task » binary mask
task robust » based robust
where » here
1.
2.
3. The Pseudo-Code of the IRBMO Algorithm.
4. IRBMO vs. meta-heuristic algorithms boxplot.
5. IRBMO vs. feature selection algorithm boxplot.
6. Pseudo Code of RBMO.
7. P-value on CEC-2017 (Dim = 30).
8. Memory storage behavior.
9. Elite search behavior.
10. Description of the datasets.
11. S- and V-shaped transfer functions.
12. S- and V-type transfer function diagrams.
13. Collaborative hunting behavior.
14. Friedman average rank sum test results.
15. IRBMO vs. variant comparison adaptation data.

Results 3–15 share the same publication excerpt (published 2025): “…Experiments demonstrate that IRBMO exhibits high efficiency, generality and excellent generalization ability in feature selection tasks. In addition, used in conjunction with the KNN classifier, IRBMO significantly improves the classification accuracy, with an average accuracy improvement of 43.89% on 12 medical datasets compared to the original Red-billed Blue Magpie algorithm. …”
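The shared excerpt describes a wrapper-style pipeline: the metaheuristic searches for a binary feature mask, and a KNN classifier scores each candidate; the S- and V-shaped transfer functions in results 11–12 are the usual device for turning a continuous search position into such a mask. The article's own operators and parameters are not visible in these previews, so the Python sketch below only illustrates that generic wrapper under stated assumptions: a tanh-based V-shaped transfer function, scikit-learn's KNeighborsClassifier, and an illustrative fitness that trades error rate against the fraction of selected features (the 0.99/0.01 weights are placeholders, not values from the paper).

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def v_transfer(position):
    """V-shaped transfer function: maps a continuous position to [0, 1]."""
    return np.abs(np.tanh(position))

def to_binary_mask(position, rng):
    """Convert a continuous position vector into a binary feature mask."""
    prob = v_transfer(position)
    mask = (rng.random(position.shape) < prob).astype(int)
    if mask.sum() == 0:                      # guard: keep at least one feature
        mask[rng.integers(position.size)] = 1
    return mask

def fitness(mask, X, y, alpha=0.99):
    """Illustrative wrapper fitness: weighted error rate plus feature ratio."""
    X_sel = X[:, mask.astype(bool)]
    acc = cross_val_score(KNeighborsClassifier(n_neighbors=5), X_sel, y, cv=5).mean()
    return alpha * (1.0 - acc) + (1.0 - alpha) * mask.sum() / mask.size

# Example: score one candidate position on a toy dataset
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
y = (X[:, 0] + X[:, 3] > 0).astype(int)
mask = to_binary_mask(rng.normal(size=20), rng)
print(mask, fitness(mask, X, y))
```

Swapping v_transfer for a sigmoid gives the S-shaped variant mentioned in result 11; either way, the metaheuristic only ever sees the scalar fitness value.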
16. Related studies on IDS using deep learning.
17. The architecture of the BI-LSTM model.
18. Comparison of accuracy and DR on UNSW-NB15.
19. Comparison of DR and FPR of UNSW-NB15.

Results 16–19 share the same publication excerpt (published 2024): “…The suggested model’s accuracies on binary and multi-class classification tasks using the NSL-KDD dataset are 99.67% and 99.88%, respectively. …”
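Results 17–19 reference a BI-LSTM architecture evaluated on NSL-KDD and UNSW-NB15, but the architecture itself is not reproduced in these previews. Purely for orientation, a generic bidirectional-LSTM binary classifier in Keras might be wired as below; the layer sizes, dropout rate, the treatment of each record as a one-step sequence of 41 features, and the training settings are assumptions, not the published model.

```python
import numpy as np
import tensorflow as tf

# Each record is treated as a one-step "sequence" of 41 features;
# all sizes here are illustrative, not the published architecture.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(1, 41)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # binary: normal vs. attack
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Toy stand-in data with the expected shape (batch, timesteps, features)
X = np.random.rand(256, 1, 41).astype("float32")
y = np.random.randint(0, 2, size=(256, 1)).astype("float32")
model.fit(X, y, epochs=1, batch_size=64, verbose=0)
```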
20. iNCog-EEG (Ideal vs. Noisy Cognitive EEG for Workload Assessment) Dataset

Published 2025: “…Applications: this dataset can be applied to a wide range of research areas, including:
- EEG signal denoising and artifact rejection
- Binary and hierarchical cognitive workload classification
- Development of robust Brain–Computer Interfaces (BCIs)
- Benchmarking algorithms under ideal and noisy conditions
- Multitasking and mental workload assessment in real-world scenarios
By combining controlled multitasking protocols with deliberately introduced environmental noise, iNCog-EEG provides a comprehensive benchmark for advancing EEG-based workload recognition systems in both clean and challenging conditions. …”