Search alternatives:
robust classification » forest classification, risk classification, group classification
codon optimization » wolf optimization
binary data » primary data, dietary data
data codon » data code, data codes, data codings
-
21
The architecture of the BI-LSTM model.
Published 2024 “…The model’s binary and multi-class classification accuracies on the UNSW-NB15 dataset are 99.56% and 99.45%, respectively. …”
-
22
Comparison of accuracy and DR on UNSW-NB15.
Published 2024
-
23
Comparison of DR and FPR of UNSW-NB15.
Published 2024
-
24
Table_1_Near infrared spectroscopy for cooking time classification of cassava genotypes.docx
Published 2024 “…Two NIRS devices, the portable QualitySpec® Trek (QST) and the benchtop NIRFlex N-500, were used to collect spectral data. Classification of genotypes was carried out using the K-nearest neighbor (KNN) algorithm and partial least squares (PLS) models. …”
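The excerpt above classifies genotypes with the K-nearest neighbor algorithm; as a minimal pure-Python sketch of the KNN idea, the toy two-feature "spectral" points and genotype labels below are invented for illustration and are not from the study:

```python
from collections import Counter
import math

def knn_predict(train_X, train_y, query, k=3):
    """Classify `query` by majority vote among its k nearest
    training points (Euclidean distance), as in basic KNN."""
    dists = sorted(
        (math.dist(x, query), label) for x, label in zip(train_X, train_y)
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Toy, made-up "spectral" features: two well-separated genotype groups.
train_X = [(0.1, 0.2), (0.2, 0.1), (0.15, 0.15),
           (0.9, 0.8), (0.8, 0.9), (0.85, 0.85)]
train_y = ["short-cook", "short-cook", "short-cook",
           "long-cook", "long-cook", "long-cook"]

print(knn_predict(train_X, train_y, (0.12, 0.18)))  # → short-cook
```

Real NIRS spectra would have hundreds of wavelength features rather than two, but the vote-among-neighbors logic is the same.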
-
26
Models and Dataset
Published 2025 “…Its simplicity and lack of algorithm-specific parameters make it computationally efficient and easy to apply in high-dimensional problems such as gene selection for cancer classification.…”
-
27
Supplementary Material 8
Published 2025 “…Radial basis function kernel-support vector machine (RBF-SVM): a more flexible version of SVM that uses a non-linear kernel to capture complex relationships in genomic data, improving classification accuracy. Extra trees classifier: this tree-based ensemble method enhances classification by randomly selecting features and thresholds, improving robustness in E. coli strain differentiation.…”
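The RBF kernel behind the SVM variant mentioned above is just exp(-γ·‖x − y‖²); a minimal sketch follows, where the gamma value and input vectors are illustrative, not taken from the paper:

```python
import math

def rbf_kernel(x, y, gamma=1.0):
    """Radial basis function kernel: exp(-gamma * ||x - y||^2).
    Values near 1 mean the points are similar; near 0, dissimilar."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)

print(rbf_kernel((0.0, 0.0), (0.0, 0.0)))  # identical points → 1.0
print(rbf_kernel((0.0, 0.0), (3.0, 4.0)))  # distant points → close to 0
```

The kernel lets an SVM draw non-linear boundaries by measuring similarity in this implicit feature space rather than working on the raw coordinates directly.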
-
28
iNCog-EEG (ideal vs. Noisy Cognitive EEG for Workload Assessment) Dataset
Published 2025 “…Inside each folder, four .EDF files represent the workload conditions:

subxx_nw.EDF → No Workload (resting state)
subxx_lw.EDF → Low Workload (easy multitasking)
subxx_mw.EDF → Moderate Workload (medium multitasking)
subxx_hw.EDF → High Workload (hard multitasking)

Subjects 01–30: clean EEG recordings
Subjects 31–40: noisy EEG recordings with real-world artifacts

This structure ensures straightforward differentiation between clean vs. noisy data and across workload levels.

Applications: this dataset can be applied to a wide range of research areas, including EEG signal denoising and artifact rejection; binary and hierarchical cognitive workload classification; development of robust Brain–Computer Interfaces (BCIs); benchmarking algorithms under ideal and noisy conditions; and multitasking and mental workload assessment in real-world scenarios.

By combining controlled multitasking protocols with deliberately introduced environmental noise, iNCog-EEG provides a comprehensive benchmark for advancing EEG-based workload recognition systems in both clean and challenging conditions.…”
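The fixed naming scheme quoted in the dataset description makes label extraction mechanical; a minimal sketch of parsing those file names into (subject, condition, clean/noisy) tuples, assuming exactly the `subxx_<cond>.EDF` pattern and the subjects 31–40 noisy split stated above:

```python
import re

# Suffix → workload condition, per the dataset description.
CONDITIONS = {"nw": "No Workload", "lw": "Low Workload",
              "mw": "Moderate Workload", "hw": "High Workload"}

def parse_incog_filename(name):
    """Parse an iNCog-EEG file name like 'sub07_mw.EDF' into
    (subject_number, condition_label, is_noisy)."""
    m = re.fullmatch(r"sub(\d{2})_(nw|lw|mw|hw)\.EDF", name)
    if m is None:
        raise ValueError(f"unrecognised file name: {name}")
    subject = int(m.group(1))
    condition = CONDITIONS[m.group(2)]
    is_noisy = subject >= 31  # subjects 31-40 are the noisy recordings
    return subject, condition, is_noisy

print(parse_incog_filename("sub07_mw.EDF"))  # → (7, 'Moderate Workload', False)
print(parse_incog_filename("sub35_hw.EDF"))  # → (35, 'High Workload', True)
```

Reading the actual EEG signals would require an EDF reader (e.g. a library such as MNE-Python), which is outside this sketch.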
-
29
Image_1_Validation of miRNA signatures for ovarian cancer earlier detection in the pre-diagnosis setting using machine learning approaches.pdf
Published 2024 “…We employed the extreme gradient boosting (XGBoost) algorithm to train a binary classification model using 70% of the available data, while the model was tested on the remaining 30% of the dataset.…”
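The 70/30 hold-out protocol quoted above can be sketched with a seeded shuffle-and-cut; this is plain Python with an illustrative seed, and the XGBoost training step itself is not shown:

```python
import random

def split_70_30(samples, seed=42):
    """Shuffle and cut a dataset into a 70% training / 30% test split,
    mirroring the hold-out protocol described in the excerpt."""
    rng = random.Random(seed)          # seeded for reproducibility
    shuffled = list(samples)
    rng.shuffle(shuffled)
    cut = round(0.7 * len(shuffled))   # 70% boundary
    return shuffled[:cut], shuffled[cut:]

train, test = split_70_30(range(100))
print(len(train), len(test))  # → 70 30
```

In practice the split is usually stratified so the two classes keep the same proportions in both halves; that refinement is omitted here for brevity.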