Showing 61 - 80 results of 230 for search '(( library based based optimization algorithm ) OR ( binary mask process optimization algorithm ))', query time: 0.33s
  1. 61

    FEP Augmentation as a Means to Solve Data Paucity Problems for Machine Learning in Chemical Biology by Pieter B. Burger (4172578)

    Published 2024
    “…Ultimately, the study advocates for the synergy of physics-based methods and ML to expedite the lead optimization process. …”
  4. 64

    SSA4LSMOP by Zongbin Qiao (19814199)

    Published 2024
    “…SSA4LSMOP is a multi-objective optimization algorithm library that aims to provide researchers and developers with efficient, easy-to-use multi-objective optimization solutions. …” (see the multi-objective optimization sketch after this result list)
  5. 65

    Data_Sheet_1_A real-time driver fatigue identification method based on GA-GRNN.ZIP by Xiaoyuan Wang (492534)

    Published 2022
    “…In this paper, a non-invasive, low-cost method for identifying the fatigued driving state, based on a generalized regression neural network (GRNN) optimized by a genetic algorithm (GA), is proposed. …” (see the GA-GRNN sketch after this result list)
  6. 66

    Addressing Imbalanced Classification Problems in Drug Discovery and Development Using Random Forest, Support Vector Machine, AutoGluon-Tabular, and H2O AutoML by Ayush Garg (21090944)

    Published 2025
    “…The important findings of our studies are as follows: (i) threshold optimization has no effect on ranking metrics such as AUC and AUPR, but both are affected by class weighting and SMOTETomek; (ii) for the ML methods RF and SVM, significant percentage improvements of up to 375, 33.33, and 450 across all data sets can be achieved for F1 score, MCC, and balanced accuracy, respectively, metrics that are suitable for evaluating imbalanced data sets; (iii) for the AutoML libraries AutoGluon-Tabular and H2O AutoML, significant percentage improvements of up to 383.33, 37.25, and 533.33 across all data sets can be achieved for F1 score, MCC, and balanced accuracy, respectively; (iv) for balanced accuracy, the percentage improvement increases as the class ratio is systematically decreased from 0.5 to 0.1, whereas for F1 score and MCC the maximum improvement is achieved at a class ratio of 0.3; (v) for both ML and AutoML with balancing, no individual class-balancing technique outperforms all other methods on a significantly higher number of data sets based on F1 score; (vi) the three external balancing techniques combined outperformed the internal balancing methods of the ML and AutoML tools; (vii) AutoML tools perform as well as the ML models, and in some cases even better, for imbalanced classification when applied with imbalance-handling techniques. …” (see the class-balancing sketch after this result list)
  7. 67

    Table_1_Data-based modeling for hypoglycemia prediction: Importance, trends, and implications for clinical practice.docx by Liyin Zhang (6371999)

    Published 2023
    “…As a result, a review is needed to summarize the existing prediction algorithms and models to guide better clinical practice in hypoglycemia prevention.…”
  8. 68

    Optimal 8-mer and 9-mer SARS-CoV-2 epitope identification. by Mariah Hassert (5746874)

    Published 2020
    “…As controls for strong Kb and Db binders, respectively, Ova peptide and ZIKV E294 were input into the same algorithm. Optimal peptide epitopes are highlighted based on functional T cell data in combination with RMA-S stabilization assay data. …” (see the peptide enumeration sketch after this result list)
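Result 64 describes SSA4LSMOP only at a high level, and the snippet does not show the library's own API. As a rough illustration of what library-based multi-objective optimization looks like in practice, the sketch below uses pymoo, a different and widely used Python library, with the NSGA-II algorithm on the ZDT1 benchmark; the choice of library, algorithm, and test problem is an assumption made for illustration and is not part of SSA4LSMOP.

```python
# Illustrative only: this uses pymoo, not SSA4LSMOP, whose interface is not
# documented in the search snippet above.
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.optimize import minimize
from pymoo.problems import get_problem

problem = get_problem("zdt1")        # standard two-objective benchmark problem
algorithm = NSGA2(pop_size=100)      # classic multi-objective evolutionary algorithm
result = minimize(problem, algorithm, ("n_gen", 200), seed=1, verbose=False)
print(result.F[:5])                  # objective values of a few non-dominated solutions
```

A library aimed at large-scale multi-objective problems would presumably expose a similar problem/algorithm/optimize workflow, but SSA4LSMOP's actual interface may differ.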
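For result 65, the snippet names the overall approach, a generalized regression neural network whose smoothing parameter is tuned by a genetic algorithm, but no implementation details. The sketch below is a minimal, self-contained version of that idea on toy data; the toy regression data, the GA settings, and the single-parameter search space are assumptions for illustration, not the authors' configuration.

```python
# Minimal GA-GRNN sketch: a GRNN is a kernel-weighted average of training
# targets, and a simple real-coded GA searches for its smoothing parameter.
import numpy as np

rng = np.random.default_rng(0)

def grnn_predict(X_train, y_train, X_query, sigma):
    """GRNN prediction: Gaussian-kernel-weighted average of training targets."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ y_train) / np.clip(w.sum(axis=1), 1e-12, None)

def fitness(sigma, X_tr, y_tr, X_val, y_val):
    """Negative validation MSE, so that higher is better."""
    pred = grnn_predict(X_tr, y_tr, X_val, sigma)
    return -np.mean((pred - y_val) ** 2)

# Toy regression data standing in for the driver-fatigue features.
X = rng.uniform(-3, 3, size=(200, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + rng.normal(0, 0.1, size=200)
X_tr, X_val, y_tr, y_val = X[:150], X[150:], y[:150], y[150:]

# Simple GA over sigma: truncation selection plus Gaussian mutation.
pop = rng.uniform(0.05, 2.0, size=20)
for _ in range(30):
    scores = np.array([fitness(s, X_tr, y_tr, X_val, y_val) for s in pop])
    parents = pop[np.argsort(scores)[-10:]]             # keep the best half
    children = parents + rng.normal(0, 0.05, size=10)   # mutate the survivors
    pop = np.clip(np.concatenate([parents, children]), 1e-3, None)

best = max(pop, key=lambda s: fitness(s, X_tr, y_tr, X_val, y_val))
print("best sigma:", round(float(best), 3))
```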
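For result 66, the snippet reports the metric improvements but not the pipeline that produced them. The sketch below shows one plausible shape of such an experiment using scikit-learn and imbalanced-learn: a random forest evaluated with F1, MCC, and balanced accuracy, with and without SMOTETomek resampling. The synthetic data, the roughly 0.1 class ratio, and the model settings are assumptions for illustration, not the paper's drug-discovery data sets.

```python
# Compare an imbalanced baseline against external balancing with SMOTETomek.
from imblearn.combine import SMOTETomek
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import balanced_accuracy_score, f1_score, matthews_corrcoef
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

def evaluate(X_fit, y_fit, label):
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_fit, y_fit)
    pred = clf.predict(X_te)
    print(f"{label:>12}  F1={f1_score(y_te, pred):.3f}  "
          f"MCC={matthews_corrcoef(y_te, pred):.3f}  "
          f"BalAcc={balanced_accuracy_score(y_te, pred):.3f}")

evaluate(X_tr, y_tr, "imbalanced")                    # no balancing
X_bal, y_bal = SMOTETomek(random_state=0).fit_resample(X_tr, y_tr)
evaluate(X_bal, y_bal, "SMOTETomek")                  # external balancing
```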
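For result 68, the binding-prediction algorithm itself is not described in the snippet. As a small illustration of the first step in that kind of workflow, the sketch below enumerates every candidate 8-mer and 9-mer peptide in a protein sequence, which a binding predictor would then score; the placeholder sequence and the omission of any scoring step are deliberate simplifications.

```python
# Enumerate candidate 8-mer and 9-mer peptides for downstream binding prediction.
def kmers(sequence: str, k: int):
    """Yield every contiguous peptide of length k in the sequence."""
    for i in range(len(sequence) - k + 1):
        yield sequence[i:i + k]

protein = "ACDEFGHIKLMNPQRSTVWY"   # placeholder sequence, not a real viral protein
candidates = [p for k in (8, 9) for p in kmers(protein, k)]
print(len(candidates), "candidate peptides, e.g.", candidates[:3])
```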