Showing results 141 - 160 of 327 for search '(( binary case based optimization algorithm ) OR ( primary data model optimization algorithm ))', query time: 0.68s
  1. 141

Set of variables vs. model performance. by Gaosha Li (20570760)

    Published 2025
    “…A combination of four machine learning algorithms (XGBoost、Logistic Regression、Random Forest、AdaBoost) was employed to predict NPM recurrence, and the model with the highest Area Under the Curve (AUC) in the test set was selected as the best model. …”
  2. 142

    Performance metrics for BrC. by Afnan M. Alhassan (18349378)

    Published 2024
    “…Consequently, the prediction of BrC depends critically on the quick and precise processing of imaging data. The primary reason deep learning models are used in breast cancer detection is that they can produce findings more quickly and accurately than current machine learning-based techniques. …”
  3. 143

    Proposed methodology. by Afnan M. Alhassan (18349378)

    Published 2024
    “…Consequently, the prediction of BrC depends critically on the quick and precise processing of imaging data. The primary reason deep learning models are used in breast cancer detection is that they can produce findings more quickly and accurately than current machine learning-based techniques. …”
  4. 144

    Loss vs. Epoch. by Afnan M. Alhassan (18349378)

    Published 2024
    “…Consequently, the prediction of BrC depends critically on the quick and precise processing of imaging data. The primary reason deep learning models are used in breast cancer detection is that they can produce findings more quickly and accurately than current machine learning-based techniques. …”
  5. 145

    Sample images from the BreakHis dataset. by Afnan M. Alhassan (18349378)

    Published 2024
    “…Consequently, the prediction of BrC depends critically on the quick and precise processing of imaging data. The primary reason deep learning models are used in breast cancer detection is that they can produce findings more quickly and accurately than current machine learning-based techniques. …”
  6. 146

    Accuracy vs. Epoch. by Afnan M. Alhassan (18349378)

    Published 2024
    “…Consequently, the prediction of BrC depends critically on the quick and precise processing of imaging data. The primary reason deep learning models are used in breast cancer detection is that they can produce findings more quickly and accurately than current machine learning-based techniques. …”
  7. 147

S1 Dataset. by Afnan M. Alhassan (18349378)

    Published 2024
    “…Consequently, the prediction of BrC depends critically on the quick and precise processing of imaging data. The primary reason deep learning models are used in breast cancer detection is that they can produce findings more quickly and accurately than current machine learning-based techniques. …”
  8. 148

    CSCO’s flowchart. by Afnan M. Alhassan (18349378)

    Published 2024
    “…Consequently, the prediction of BrC depends critically on the quick and precise processing of imaging data. The primary reason deep learning models are used in breast cancer detection is that they can produce findings more quickly and accurately than current machine learning-based techniques. …”
  9. 149
  10. 150

    Inconsistency concept for a triad (2, 5, 3). by Waldemar W. Koczkodaj (22008783)

    Published 2025
    “…The proposed regeneration method emulates three primary phases of a biological process: identifying the most damaged areas (by identifying inconsistencies in the pairwise comparison matrix), cell proliferation (filling in missing data), and stabilization (optimization of global consistency). …”
  11. 151
  12. 152

    Supplementary file 1_Development of a venous thromboembolism risk prediction model for patients with primary membranous nephropathy based on machine learning.docx by Lian Li (49049)

    Published 2025
    “…Objective<p>This study utilizes real-world data from primary membranous nephropathy (PMN) patients to preliminarily develop a venous thromboembolism (VTE) risk prediction model with machine learning. …”
  13. 153

    ResNeXt101 training and results. by Subathra Gunasekaran (19492680)

    Published 2024
    “…Next, we employ batch normalization to smooth and enhance the collected data, followed by feature extraction using the AlexNet model. …”
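The batch-normalization step mentioned in this snippet shifts each feature to zero mean and unit variance across the batch. A minimal stdlib sketch of that normalization (the learnable scale/shift parameters and the subsequent AlexNet feature-extraction stage are omitted):

```python
# Minimal batch normalization: per-feature zero mean, unit variance
# across the batch. Learnable gamma/beta and running statistics, as used
# in real deep-learning frameworks, are deliberately left out.
import math

def batch_normalize(batch, eps=1e-5):
    n = len(batch)
    dims = len(batch[0])
    out = [row[:] for row in batch]
    for j in range(dims):
        col = [row[j] for row in batch]
        mean = sum(col) / n
        var = sum((x - mean) ** 2 for x in col) / n
        std = math.sqrt(var + eps)  # eps guards against zero variance
        for i in range(n):
            out[i][j] = (batch[i][j] - mean) / std
    return out

normalized = batch_normalize([[1.0, 10.0], [3.0, 30.0], [5.0, 50.0]])
```

After this step each column of `normalized` has mean ~0 and variance ~1, which is the smoothing effect the abstract refers to before features are passed to the CNN.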
  14. 154

    Architecture of ConvNet. by Subathra Gunasekaran (19492680)

    Published 2024
    “…Next, we employ batch normalization to smooth and enhance the collected data, followed by feature extraction using the AlexNet model. …”
  15. 155

    Comparison of state-of-the-art method. by Subathra Gunasekaran (19492680)

    Published 2024
    “…Next, we employ batch normalization to smooth and enhance the collected data, followed by feature extraction using the AlexNet model. …”
  16. 156

    Proposed ResNeXt101 operational flow. by Subathra Gunasekaran (19492680)

    Published 2024
    “…Next, we employ batch normalization to smooth and enhance the collected data, followed by feature extraction using the AlexNet model. …”
  17. 157
  18. 158

    Proposed method approach. by Muhammad Usman Tariq (11022141)

    Published 2024
    “…Analytic approaches, both predictive and retrospective in nature, were used to interpret the data. Our primary objective was to determine the most effective model for predicting COVID-19 cases in the United Arab Emirates (UAE) and Malaysia. …”
  19. 159

    Descriptive statistics. by Muhammad Usman Tariq (11022141)

    Published 2024
    “…Analytic approaches, both predictive and retrospective in nature, were used to interpret the data. Our primary objective was to determine the most effective model for predicting COVID-19 cases in the United Arab Emirates (UAE) and Malaysia. …”
  20. 160

    Analysis and design of algorithms for the manufacturing process of integrated circuits by Sonia Fleytas (16856403)

    Published 2023
    “…From this, we propose: (i) a new ILP model, and (ii) a new solution representation, which, unlike the reference work, guarantees that feasible solutions are obtained throughout the generation of new individuals. Based on this new representation, we proposed and evaluated other approximate methods, including a greedy algorithm and a genetic algorithm that improve the state-of-the-art results for test cases usually used in the literature. …”
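The key idea in this snippet — a solution representation under which generating new individuals always yields feasible solutions — is commonly achieved in genetic algorithms by encoding solutions as permutations and using permutation-preserving operators. A hedged sketch (the operator and toy parents below are illustrative; the paper's actual ILP model, representation, and instances are not reproduced here):

```python
# Sketch of a feasibility-preserving representation: solutions are
# permutations of job indices, and order crossover (OX) always produces
# another permutation, so every offspring is feasible by construction.
import random

def order_crossover(p1, p2):
    """OX crossover: copy a random slice of p1, fill the rest in p2's order."""
    n = len(p1)
    i, j = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[i:j] = p1[i:j]
    fill = [g for g in p2 if g not in child]
    for k in range(n):
        if child[k] is None:
            child[k] = fill.pop(0)
    return child

random.seed(0)  # reproducible illustration
a, b = [0, 1, 2, 3, 4], [4, 3, 2, 1, 0]
child = order_crossover(a, b)
```

Because the offspring is guaranteed to be a permutation, no repair step or penalty for infeasible individuals is needed, which is the property the abstract contrasts against the reference work.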