Showing 41 - 60 of 64 results for search '(( binary data forest classification algorithm ) OR ( binary b codon optimization algorithm ))' (query time: 0.66s)
  1. 41

    The architecture of the BI-LSTM model. by Arshad Hashmi (13835488)

    Published 2024
    “…The attention layer and the BI-LSTM features are concatenated to create mapped features before feeding them to the random forest algorithm for classification. Our methodology and model performance were validated using NSL-KDD and UNSW-NB15, two widely available IDS datasets. …”
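The feature-mapping step this abstract describes can be sketched in a few lines: per-sample vectors from the BI-LSTM branch and the attention layer are joined into one "mapped" vector per sample before a downstream random forest classifies them. This is a minimal illustration with hypothetical shapes and values, not the authors' code.

```python
# Sketch: concatenate per-sample features from two network branches
# into the "mapped features" a random forest would then classify.
def concat_features(bilstm_feats, attention_feats):
    """Concatenate per-sample feature vectors from two branches."""
    if len(bilstm_feats) != len(attention_feats):
        raise ValueError("branches must produce one vector per sample")
    return [b + a for b, a in zip(bilstm_feats, attention_feats)]

bilstm = [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]]  # 2 samples, 3 BI-LSTM features each
attn = [[0.9], [0.8]]                        # 2 samples, 1 attention feature each
mapped = concat_features(bilstm, attn)       # 2 samples, 4 mapped features each
```

In practice the mapped vectors would be stacked into a matrix and passed to the forest's fit/predict routines.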
  2. 42

    Comparison of accuracy and DR on UNSW-NB15. by Arshad Hashmi (13835488)

    Published 2024
    “…The attention layer and the BI-LSTM features are concatenated to create mapped features before feeding them to the random forest algorithm for classification. Our methodology and model performance were validated using NSL-KDD and UNSW-NB15, two widely available IDS datasets. …”
  3. 43

    Comparison of DR and FPR of UNSW-NB15. by Arshad Hashmi (13835488)

    Published 2024
    “…The attention layer and the BI-LSTM features are concatenated to create mapped features before feeding them to the random forest algorithm for classification. Our methodology and model performance were validated using NSL-KDD and UNSW-NB15, two widely available IDS datasets. …”
  4. 44

    Candidate predictors by Kexin Qu (10285073)

    Published 2025
    “…Performance was evaluated on models developed on the training data, on the same models applied to an external test set and through internal validation with three bootstrap algorithms to correct for overoptimism. …”
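Bootstrap correction for overoptimism, as mentioned in this abstract, is commonly done with a Harrell-style optimism bootstrap: the apparent performance on the training data is reduced by the average gap between a bootstrap model's performance on its own resample and on the original data. A stdlib sketch of that idea, assuming generic `fit(data) -> model` and `metric(model, data) -> score` callables (both hypothetical stand-ins, not the paper's three specific algorithms):

```python
import random

def optimism_corrected(fit, metric, data, n_boot=200, seed=0):
    """Harrell-style optimism bootstrap (one common variant; a sketch).

    Returns the apparent score minus the mean optimism estimated
    over bootstrap resamples (higher metric = better).
    """
    rng = random.Random(seed)
    apparent = metric(fit(data), data)
    optimism = 0.0
    for _ in range(n_boot):
        boot = [rng.choice(data) for _ in data]
        model = fit(boot)
        # score on the resample minus score on the original data
        # estimates how much the resample flatters the model
        optimism += metric(model, boot) - metric(model, data)
    return apparent - optimism / n_boot
```

With a metric that does not depend on the fitted model the optimism is zero and the corrected score equals the apparent one; with flexible models the correction is typically downward.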
  5. 45
  6. 46

    Image_1_A predictive model based on random forest for shoulder-hand syndrome.JPEG by Suli Yu (14947807)

    Published 2023
    “…Results: A binary classification model was trained based on 25 handpicked features. …”
  7. 47
  8. 48
  9. 49

    DataSheet1_Exploring the Common Mechanism of Fungal sRNA Transboundary Regulation of Plants Based on Ensemble Learning Methods.docx by Junxia Chi (12075389)

    Published 2022
    “…Five Ensemble learning algorithms of Gradient Boosting Decision Tree, Random Forest, Adaboost, XGBoost, and Light Gradient Boosting Machine are used to construct a binary classification prediction model on the data set. …”
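Benchmarking several ensemble learners on one binary task, as this abstract describes, usually reduces to a loop over objects sharing a fit/predict interface. A stdlib sketch with a toy stand-in classifier (in the paper's setting the dictionary would instead map names like "Random Forest" or "XGBoost" to the real ensemble learners):

```python
class MajorityClassifier:
    """Toy stand-in with the fit/predict interface the loop assumes."""
    def fit(self, X, y):
        self.label = max(set(y), key=list(y).count)
        return self
    def predict(self, X):
        return [self.label] * len(X)

def compare_models(models, X_train, y_train, X_test, y_test):
    """Fit each named model and return its test-set accuracy."""
    scores = {}
    for name, model in models.items():
        model.fit(X_train, y_train)
        preds = model.predict(X_test)
        scores[name] = sum(p == t for p, t in zip(preds, y_test)) / len(y_test)
    return scores

X_train, y_train = [[0], [1], [2]], [0, 0, 1]
X_test, y_test = [[3], [4]], [0, 1]
scores = compare_models({"majority": MajorityClassifier()},
                        X_train, y_train, X_test, y_test)
```

Keeping the evaluation loop agnostic to the model class is what makes a five-algorithm comparison like the one above cheap to run.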
  10. 50

    Data_Sheet_3_Using Serum Metabolomics to Predict Development of Anti-drug Antibodies in Multiple Sclerosis Patients Treated With IFNβ.xlsx by Kirsty E. Waddington (5754545)

    Published 2020
    “…We tested the efficacy of six binary classification models using 10-fold cross validation: k-nearest neighbors, decision tree, random forest, support vector machine, and lasso (Least Absolute Shrinkage and Selection Operator) logistic regression with and without interactions.…”
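The 10-fold cross validation used in this study partitions the sample indices into ten disjoint test folds, training on the remainder each time. A stdlib sketch of that index bookkeeping (not the authors' code; real pipelines would also shuffle and often stratify by class):

```python
def k_fold_indices(n_samples, k=10):
    """Yield (train_indices, test_indices) pairs for k-fold CV.

    Each sample lands in exactly one test fold; fold sizes differ
    by at most one when k does not divide n_samples.
    """
    indices = list(range(n_samples))
    base, extra = divmod(n_samples, k)
    start = 0
    for fold in range(k):
        size = base + (1 if fold < extra else 0)
        test = indices[start:start + size]
        train = indices[:start] + indices[start + size:]
        yield train, test
        start += size

folds = list(k_fold_indices(25, k=10))  # 10 folds covering 25 samples
```

Averaging a model's score over the ten test folds gives the cross-validated estimate reported in abstracts like the ones above.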
  11. 51

    Data_Sheet_2_Using Serum Metabolomics to Predict Development of Anti-drug Antibodies in Multiple Sclerosis Patients Treated With IFNβ.xlsx by Kirsty E. Waddington (5754545)

    Published 2020
    “…We tested the efficacy of six binary classification models using 10-fold cross validation: k-nearest neighbors, decision tree, random forest, support vector machine, and lasso (Least Absolute Shrinkage and Selection Operator) logistic regression with and without interactions.…”
  12. 52

    Data_Sheet_1_Using Serum Metabolomics to Predict Development of Anti-drug Antibodies in Multiple Sclerosis Patients Treated With IFNβ.xlsx by Kirsty E. Waddington (5754545)

    Published 2020
    “…We tested the efficacy of six binary classification models using 10-fold cross validation: k-nearest neighbors, decision tree, random forest, support vector machine, and lasso (Least Absolute Shrinkage and Selection Operator) logistic regression with and without interactions.…”
  13. 53

    Data_Sheet_4_Using Serum Metabolomics to Predict Development of Anti-drug Antibodies in Multiple Sclerosis Patients Treated With IFNβ.pdf by Kirsty E. Waddington (5754545)

    Published 2020
    “…We tested the efficacy of six binary classification models using 10-fold cross validation: k-nearest neighbors, decision tree, random forest, support vector machine, and lasso (Least Absolute Shrinkage and Selection Operator) logistic regression with and without interactions.…”
  14. 54
  15. 55

    Pan-cancer machine learning predictions of MEKi response. by John P. Lloyd (10196288)

    Published 2021
    “…Regul: regularized regression; RF (reg): regression-based random forest; Logit: logistic regression; RF (bin): classification-based (binary) random forest.…”
  16. 56

    Data_Sheet_1_Alzheimer’s Disease Diagnosis and Biomarker Analysis Using Resting-State Functional MRI Functional Brain Network With Multi-Measures Features and Hippocampal Subfield... by Uttam Khatri (12689072)

    Published 2022
    “…The accuracy obtained by the proposed method was reported for binary classification. More importantly, the classification results of the less commonly reported MCIs vs. …”
  17. 57
  18. 58

    Table 1_Non-obtrusive monitoring of obstructive sleep apnea syndrome based on ballistocardiography: a preliminary study.docx by Biyong Zhang (20906192)

    Published 2025
    “…Results: Cross-validated on 32 subjects, the proposed approach achieved an accuracy of 71.9% for four-class severity classification and 87.5% for binary classification (AHI less than 15 or not).…”
  19. 59

    Supplementary Material 8 by Nishitha R Kumar (19750617)

    Published 2025
    “…XGBoost: An optimized gradient boosting algorithm that efficiently handles large genomic datasets, commonly used for high-accuracy predictions in E. coli classification.…”
  20. 60

    Accessibility of translation initiation sites is the strongest predictor of heterologous protein expression in E. coli. by Bikash K. Bhandari (11524776)

    Published 2021
    “…This partition function approach can be customised and executed using the algorithm implemented in RNAplfold. B: mRNA features ranked by Gini importance for random forest classification of the expression outcomes of the PSI:Biology targets (N = 8,780 and 2,650, ‘success’ and ‘failure’ groups, respectively). …”
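The Gini importance used to rank mRNA features in this study is built from a simple quantity: the weighted decrease in Gini impurity that each split in the forest achieves, summed over all splits on a feature and averaged across trees. A stdlib sketch of that building block, using the study's 'success'/'failure' labels (this is the generic definition, not RNAplfold or the authors' pipeline):

```python
def gini(labels):
    """Gini impurity of a set of binary labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    p = sum(1 for label in labels if label == "success") / n
    return 2.0 * p * (1.0 - p)

def impurity_decrease(parent, left, right):
    """Weighted Gini decrease of one split of parent into left/right.

    A feature's Gini importance is, roughly, the sum of these
    decreases over every split made on that feature in a tree,
    averaged across the trees of the forest.
    """
    n = len(parent)
    return (gini(parent)
            - len(left) / n * gini(left)
            - len(right) / n * gini(right))
```

A perfectly separating split on a balanced node drops the impurity from 0.5 to 0, so the feature responsible earns the maximum decrease of 0.5 for that split.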