Search alternatives:
features classification » feature classification, gesture classification, patches classification
all features » cell features, fault features, main features
4. CNN structure for feature extraction.
Published 2025: “…This research offered state-of-the-art results, which achieved remarkable performance metrics with an accuracy, AUC, precision, recall, F1 score, Cohen’s Kappa and Matthews Correlation Coefficient (MCC) of 99.49%, 99.73%, 100%, 99%, 99%, 99.15% and 99.16%, respectively, addressing the prior research gaps and setting a new benchmark in the field. Furthermore, in binary class classification, all the performance indicators attained a perfect score of 100%. …” (see the metric-computation sketch after this results list)
8. Result comparison with other existing models.
Published 2025; excerpt identical to result 4.
9. Dataset distribution.
Published 2025; excerpt identical to result 4.
10. Comparison with previous studies.
Published 2023: “…This retrospective study used electrocardiographs obtained at Yonsei University Wonju Severance Christian Hospital, Wonju, Korea, from October 2010 to February 2020. Binary classification was performed for primary screening for left ventricular hypertrophy. …”
11. Dataset characteristics.
Published 2023; excerpt identical to result 10.
12. Acronym table.
Published 2023; excerpt identical to result 10.
15. Important citation identification by exploiting content and section-wise in-text citation count
Published 2020: “…This research presents a novel approach for binary citation classification by exploiting section-wise in-text citation frequencies, similarity score, and overall citation count-based features. …” (see the citation-feature sketch after this results list)
16. Table_1_An efficient decision support system for leukemia identification utilizing nature-inspired deep feature optimization.pdf
Published 2024: “…To optimize feature selection, a customized binary Grey Wolf Algorithm is utilized, achieving an impressive 80% reduction in feature size while preserving key discriminative information. …” (see the binary Grey Wolf sketch after this results list)
17. Flow diagram of the proposed model.
Published 2025: “…Local Interpretable Model-agnostic Explanations (LIME) were applied to improve interpretability. Across all algorithm models, LR–ABC hybrids outperformed their baseline models (e.g., Random Forest: 85.2% → 91.36% accuracy). …” (see the LIME sketch after this results list)
18. Related studies on IDS using deep learning.
Published 2024: “…The attention layer and the BI-LSTM features are concatenated to create mapped features before feeding them to the random forest algorithm for classification. …” (see the BiLSTM–random forest sketch after this results list)
19. The architecture of the BI-LSTM model.
Published 2024; excerpt identical to result 18.
20. Comparison of accuracy and DR on UNSW-NB15.
Published 2024; excerpt identical to result 18.
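
Metric-computation sketch (relates to result 4). The excerpt lists accuracy, AUC, precision, recall, F1 score, Cohen's Kappa and MCC but does not show how they were obtained. Below is a minimal Python/scikit-learn sketch with made-up binary predictions; for the multi-class case, precision, recall and F1 additionally need an average= argument.

from sklearn.metrics import (accuracy_score, roc_auc_score, precision_score,
                             recall_score, f1_score, cohen_kappa_score,
                             matthews_corrcoef)

# Made-up binary labels, hard predictions, and predicted probabilities of class 1.
y_true  = [0, 0, 1, 1, 1, 0, 1, 0]
y_pred  = [0, 0, 1, 1, 0, 0, 1, 0]
y_score = [0.10, 0.20, 0.90, 0.80, 0.40, 0.30, 0.70, 0.20]

print("accuracy :", accuracy_score(y_true, y_pred))
print("AUC      :", roc_auc_score(y_true, y_score))   # AUC uses scores, not hard labels
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("F1       :", f1_score(y_true, y_pred))
print("kappa    :", cohen_kappa_score(y_true, y_pred))
print("MCC      :", matthews_corrcoef(y_true, y_pred))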
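
Citation-feature sketch (relates to result 15). The excerpt names section-wise in-text citation frequencies, a similarity score and an overall citation count as features for binary citation classification. The section names, record layout and logistic-regression classifier below are illustrative assumptions, not the paper's actual pipeline.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical layout: per citation, in-text counts per section, a text-similarity
# score between citing context and cited paper, and the overall citation count.
SECTIONS = ["introduction", "related_work", "method", "results", "discussion"]

def to_feature_vector(record):
    counts = [record["section_counts"].get(s, 0) for s in SECTIONS]
    return np.array(counts + [record["similarity"], record["total_count"]], dtype=float)

records = [  # toy examples; "important" is the binary target
    {"section_counts": {"method": 3, "results": 1}, "similarity": 0.72, "total_count": 4, "important": 1},
    {"section_counts": {"introduction": 1},         "similarity": 0.18, "total_count": 1, "important": 0},
]
X = np.stack([to_feature_vector(r) for r in records])
y = np.array([r["important"] for r in records])

clf = LogisticRegression().fit(X, y)   # binary: important vs. incidental citation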
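
Binary Grey Wolf sketch (relates to result 16). The excerpt mentions a customized binary Grey Wolf Algorithm for feature selection but gives no details, so the sketch below is a generic binary GWO with a sigmoid transfer function; the population size, iteration count and binarization rule are assumptions, and fitness is any user-supplied score to maximize.

import numpy as np

def binary_gwo(fitness, n_features, n_wolves=10, n_iter=30, seed=0):
    """Generic binary Grey Wolf Optimizer: maximizes fitness(mask) over feature masks."""
    rng = np.random.default_rng(seed)
    wolves = rng.integers(0, 2, size=(n_wolves, n_features)).astype(float)
    scores = np.array([fitness(w.astype(bool)) for w in wolves])

    for t in range(n_iter):
        # The three best wolves (alpha, beta, delta) guide the rest of the pack.
        alpha, beta, delta = wolves[np.argsort(scores)[::-1][:3]]
        a = 2.0 - 2.0 * t / n_iter            # coefficient decreasing from 2 to 0

        for i in range(n_wolves):
            X = np.zeros(n_features)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(n_features), rng.random(n_features)
                A, C = 2.0 * a * r1 - a, 2.0 * r2
                X += leader - A * np.abs(C * leader - wolves[i])
            # Sigmoid transfer function turns the averaged position into bit probabilities.
            prob = 1.0 / (1.0 + np.exp(-X / 3.0))
            mask = rng.random(n_features) < prob
            if not mask.any():                 # never evaluate an empty feature set
                mask[rng.integers(n_features)] = True
            wolves[i] = mask.astype(float)
            scores[i] = fitness(mask)

    best = np.argmax(scores)
    return wolves[best].astype(bool), scores[best]

A typical fitness could be cross-validated accuracy on the selected columns minus a small penalty proportional to the number of selected features.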
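
LIME sketch (relates to result 17). The excerpt says LIME was applied to improve interpretability of the LR–ABC hybrids; since that hybrid model and its dataset are not available here, the sketch below applies the lime package to an ordinary random forest on synthetic tabular data. Feature names, class names and data are placeholders.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Placeholder tabular data and model standing in for the study's LR-ABC hybrid.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=[f"f{i}" for i in range(X.shape[1])],
    class_names=["negative", "positive"],
    mode="classification",
)
# LIME fits a local, interpretable surrogate model around one instance.
explanation = explainer.explain_instance(X[0], clf.predict_proba, num_features=5)
print(explanation.as_list())   # (feature condition, local weight) pairs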
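
BiLSTM–random forest sketch (relates to results 18–20). The excerpt describes concatenating attention-layer output with BI-LSTM features to form mapped features that are then fed to a random forest. The Keras layers, shapes and pooling choices below are assumptions; in the described setup the BiLSTM–attention network would normally be trained on the intrusion-detection task first, whereas this sketch extracts features from an untrained network purely to show the data flow.

import numpy as np
import tensorflow as tf
from sklearn.ensemble import RandomForestClassifier

def build_feature_extractor(timesteps, n_features, units=64):
    inp = tf.keras.Input(shape=(timesteps, n_features))
    # BiLSTM over the input sequence, keeping per-timestep outputs.
    seq = tf.keras.layers.Bidirectional(
        tf.keras.layers.LSTM(units, return_sequences=True))(inp)
    # Self-attention over the BiLSTM outputs.
    att = tf.keras.layers.Attention()([seq, seq])
    # Pool both streams to fixed-length vectors and concatenate: the "mapped features".
    seq_vec = tf.keras.layers.GlobalAveragePooling1D()(seq)
    att_vec = tf.keras.layers.GlobalAveragePooling1D()(att)
    feats = tf.keras.layers.Concatenate()([seq_vec, att_vec])
    return tf.keras.Model(inp, feats)

# Placeholder data: (samples, timesteps, features) sequences with binary labels.
X = np.random.rand(256, 20, 10).astype("float32")
y = np.random.randint(0, 2, size=256)

extractor = build_feature_extractor(timesteps=20, n_features=10)
mapped = extractor.predict(X, verbose=0)                 # concatenated mapped features
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(mapped, y)
print(forest.score(mapped, y))

Pooling both streams to fixed-length vectors keeps the random-forest input two-dimensional regardless of sequence length.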