41. P-value on CEC-2017 (Dim = 30).
    Published 2025: "In order to comprehensively verify the performance of IRBMO, this paper designs a series of experiments to compare it with nine mainstream binary optimization algorithms. The experiments are based on 12 medical datasets, and the results show that IRBMO achieves optimal overall performance in key metrics such as fitness value, classification accuracy and specificity. …"
42. Memory storage behavior. (Same publication as entry 41.)
43. Elite search behavior. (Same publication as entry 41.)
44. Description of the datasets. (Same publication as entry 41.)
45. S- and V-shaped transfer functions. (Same publication as entry 41.)
46. S- and V-type transfer function diagrams. (Same publication as entry 41.)
47. Collaborative hunting behavior. (Same publication as entry 41.)
48. Friedman average rank-sum test results. (Same publication as entry 41.)
49. IRBMO vs. variant comparison adaptation data. (Same publication as entry 41.)
50.
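Entries 45–46 above refer to S- and V-shaped transfer functions, the standard device for binarizing a continuous metaheuristic: they map a real-valued search step to a probability of setting or flipping a bit. The snippets do not give the exact functions IRBMO uses, so the sketch below shows only the common textbook forms (sigmoid for S-shaped, |tanh| for V-shaped) as an illustration:

```python
import math

def s_transfer(x: float) -> float:
    # S-shaped (sigmoid) transfer: interpreted as the probability
    # that the corresponding bit is set to 1.
    return 1.0 / (1.0 + math.exp(-x))

def v_transfer(x: float) -> float:
    # V-shaped (|tanh|) transfer: interpreted as the probability
    # that the corresponding bit is flipped from its current value.
    return abs(math.tanh(x))
```

Both map any real input into [0, 1]; the practical difference is that S-shaped functions bias bits toward a fixed value while V-shaped functions only decide whether to flip, which tends to preserve more diversity late in the search.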
51. Location of study area and sampling sizes.
    Published 2023: "Characteristic bands were selected from each type of spectra by the competitive adaptive reweighted sampling (CARS) algorithm, respectively. Thirdly, SOM prediction models were established based on random forest (RF), support vector regression (SVR), deep neural networks (DNN) and partial least squares regression (PLSR) methods using optimal spectral indexes, denoted here as SI-based models. …"
52. S1 Data set. (Same publication as entry 51.)
53. The flowchart of this research. (Same publication as entry 51.)
54. Key variables selected by CARS of raw spectra. (Same publication as entry 51.)
55.
56. Supplementary Material for: Prediction Model of Cardiac Risk for Dental Extraction in Elderly Patients with Cardiovascular Diseases.
    Published 2019: "Objectives: The aim of this retrospective, observational study was to establish and validate a prediction model based on the random forest (RF) algorithm for the risk of cardiac complications of dental extraction in elderly patients with CVDs. …"
57. Image 2_Development and application of machine learning models for hematological disease diagnosis using routine laboratory parameters: a user-friendly diagnostic platform.jpeg
    Published 2025: "Methods: In this study, we employed 54 clinical and conventional laboratory parameters. By optimally combining multiple feature selection methods and machine learning algorithms, we developed 7 machine learning models with varying feature set sizes. …"
58. Data Sheet 1_Development and application of machine learning models for hematological disease diagnosis using routine laboratory parameters: a user-friendly diagnostic platform.doc... (Same publication as entry 57.)
59. Image 1_Development and application of machine learning models for hematological disease diagnosis using routine laboratory parameters: a user-friendly diagnostic platform.jpeg (Same publication as entry 57.)
60.