Showing 81 - 100 results of 408 for search '(( binary pre processing optimization algorithm ) OR ( less based model optimization algorithm ))', query time: 0.68s
  1. 81
  2. 82

    The loss curve for model training. by Jie Fang (306330)

    Published 2023
    “…A pointer network with an encoder-decoder structure is taken as the basic network for the deep reinforcement learning algorithm. A model-free reinforcement learning algorithm is designed to train the network parameters to optimize the packing sequence. …” (A minimal policy-gradient sketch of this idea appears after the results list.)
  3. 83
  4. 84

    Cuff-less Blood Pressure Measurement based on Four-wavelength PPG Signals by Liang Yongbo (4822017)

    Published 2023
    “…Link: https://www.mdpi.com/2079-6374/8/4/101 [12] Xuhao Dong, Ziyi Wang, Liangli Cao, Zhencheng Chen*, Yongbo Liang*. Whale Optimization Algorithm with a Hybrid Relation Vector Machine: A Highly Robust Respiratory Rate Prediction Model Using Photoplethysmography Signals [J]. …” (A generic Whale Optimization Algorithm sketch appears after the results list.)
  5. 85

    Iteration diagram of genetic algorithm. by Ke Peng (2220973)

    Published 2023
    “…The results show that: (1) The applied SMOTEENN is more effective than SMOTE and ADASYN in dealing with the imbalance of the banking data. (2) The F1 and AUC values of the XGBoost model improved and optimized with a genetic algorithm can reach 90% and 99%, respectively, which are optimal compared with the other six machine learning models. …” (A SMOTEENN-plus-XGBoost tuning sketch appears after the results list.)
  6. 86

    Genetic algorithm flow chart. by Ke Peng (2220973)

    Published 2023
    “…The results show that: (1) The applied SMOTEENN is more effective than SMOTE and ADASYN in dealing with the imbalance of the banking data. (2) The F1 and AUC values of the XGBoost model improved and optimized with a genetic algorithm can reach 90% and 99%, respectively, which are optimal compared with the other six machine learning models. …”
  7. 87
  8. 88

    KNN algorithm flowchart. by Guilian Feng (18530806)

    Published 2024
    “…To improve the efficiency and accuracy of high-dimensional data processing, a feature selection method based on an optimized genetic algorithm is proposed in this study. …” (A genetic-algorithm feature-selection sketch appears after the results list.)
  9. 89

    MGA algorithm flowchart. by Guilian Feng (18530806)

    Published 2024
    “…To improve the efficiency and accuracy of high-dimensional data processing, a feature selection method based on an optimized genetic algorithm is proposed in this study. …”
  10. 90

    LSTM model validation results. by Yao Hu (3479972)

    Published 2025
    “…The results indicate that the standard error of the LSTM model training is less than 0.18 and the coefficients of determination are all greater than 0.9. …” (A generic LSTM regression sketch appears after the results list.)
  11. 91
  12. 92

    Results of genetic algorithm tuning parameters. by Ke Peng (2220973)

    Published 2023
    “…The results show that: (1) The applied SMOTEENN is more effective than SMOTE and ADASYN in dealing with the imbalance of the banking data. (2) The F1 and AUC values of the XGBoost model improved and optimized with a genetic algorithm can reach 90% and 99%, respectively, which are optimal compared with the other six machine learning models. …”
  13. 93
  14. 94
  15. 95
  16. 96
  17. 97

    Structure diagram of LSTM cell model. by Yao Hu (3479972)

    Published 2025
    “…The results indicate that the standard error of the LSTM model training is less than 0.18 and the coefficients of determination are all greater than 0.9. …”
  18. 98

    Intelligent risk assessment model diagram. by Yao Hu (3479972)

    Published 2025
    “…The results indicate that the standard error of the LSTM model training is less than 0.18 and the coefficients of determination are all greater than 0.9. …”
  19. 99

    LSTM model training accuracy verification. by Yao Hu (3479972)

    Published 2025
    “…The results indicate that the standard error of the LSTM model training is less than 0.18 and the coefficients of determination are all greater than 0.9. …”
  20. 100

    LSTM model training stability verification. by Yao Hu (3479972)

    Published 2025
    “…The results indicate that the standard error of the LSTM model training is less than 0.18 and the coefficients of determination are all greater than 0.9. …”
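
The entry for result 82 (Jie Fang, 2023) describes a pointer network trained with a model-free reinforcement learning algorithm to optimize a packing sequence. The paper's network, reward, and data are not shown in the snippet, so the following is only a minimal REINFORCE-style sketch in PyTorch: a small scoring network samples an item ordering and is updated against a made-up packing cost. `SequencePolicy` and `packing_cost` are hypothetical names, not the authors' code.

```python
import torch
import torch.nn as nn

class SequencePolicy(nn.Module):
    """Scores items; an ordering is sampled by repeatedly drawing from a
    softmax over the items not chosen yet (a much-simplified stand-in for
    a pointer-network decoder)."""
    def __init__(self, item_dim, hidden=64):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(item_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def sample(self, items):                       # items: (n, item_dim)
        logits = self.score(items).squeeze(-1)     # one score per item
        mask = torch.zeros(len(items), dtype=torch.bool)
        order, log_prob = [], torch.zeros(())
        for _ in range(len(items)):
            dist = torch.distributions.Categorical(
                logits=logits.masked_fill(mask, float("-inf")))
            idx = dist.sample()
            log_prob = log_prob + dist.log_prob(idx)
            order.append(int(idx))
            mask[idx] = True                       # item can only be placed once
        return order, log_prob


def packing_cost(items, order):
    """Hypothetical stand-in objective: penalise placing large items late."""
    sizes = items[:, 0]
    return float(sum(pos * sizes[j] for pos, j in enumerate(order)))


policy = SequencePolicy(item_dim=3)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
baseline = 0.0                                     # moving-average reward baseline

for step in range(200):
    items = torch.rand(8, 3)                       # one random packing instance
    order, log_prob = policy.sample(items)
    reward = -packing_cost(items, order)
    baseline = 0.9 * baseline + 0.1 * reward
    loss = -(reward - baseline) * log_prob         # REINFORCE policy-gradient update
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```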
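
Result 84 (Liang Yongbo, 2023) cites a Whale Optimization Algorithm combined with a hybrid relation vector machine for respiratory rate prediction. The sketch below shows only the generic Whale Optimization Algorithm position updates (encircling the best solution, spiral bubble-net movement, and random search) minimizing a toy sphere function; the coupling to a relation/relevance vector machine and to PPG signals is not reproduced, and all parameter values are illustrative.

```python
import numpy as np

def whale_optimization(objective, dim, bounds, n_whales=20, n_iter=200, b=1.0, seed=0):
    """Minimal Whale Optimization Algorithm: whales move relative to the best
    solution found so far, with the exploration weight shrinking over time."""
    rng = np.random.default_rng(seed)
    low, high = bounds
    X = rng.uniform(low, high, size=(n_whales, dim))
    best = min(X, key=objective).copy()

    for t in range(n_iter):
        a = 2.0 - 2.0 * t / n_iter                   # decreases linearly from 2 to 0
        for i in range(n_whales):
            A = 2 * a * rng.random() - a
            C = 2 * rng.random()
            if rng.random() < 0.5:
                if abs(A) < 1:                       # exploit: encircle the best whale
                    D = np.abs(C * best - X[i])
                    X[i] = best - A * D
                else:                                # explore: move relative to a random whale
                    rand = X[rng.integers(n_whales)]
                    D = np.abs(C * rand - X[i])
                    X[i] = rand - A * D
            else:                                    # spiral bubble-net update
                l = rng.uniform(-1, 1)
                D = np.abs(best - X[i])
                X[i] = D * np.exp(b * l) * np.cos(2 * np.pi * l) + best
            X[i] = np.clip(X[i], low, high)
        best = min(list(X) + [best], key=objective).copy()
    return best, objective(best)

# Usage: minimise a simple sphere function as a stand-in objective.
best_x, best_f = whale_optimization(lambda x: float(np.sum(x ** 2)), dim=5, bounds=(-10, 10))
print(best_x, best_f)
```

Only the objective, dimensionality, and bounds are problem-specific inputs here; swapping in a model-fitting loss (for example, a regression error) is what turns this generic search into a model-tuning procedure.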
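
Results 85, 86, and 92 (Ke Peng, 2023) report that SMOTEENN handles the imbalanced banking data better than SMOTE and ADASYN, and that an XGBoost model tuned with a genetic algorithm reaches an F1 of 90% and an AUC of 99%. The sketch below only wires together the named, real libraries (imbalanced-learn's SMOTEENN and xgboost's XGBClassifier) with a tiny genetic search over two hyperparameters on synthetic imbalanced data; the data, GA settings, and search space are stand-ins, not the study's.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.metrics import f1_score, roc_auc_score
from imblearn.combine import SMOTEENN
from xgboost import XGBClassifier

rng = np.random.default_rng(0)

# Imbalanced stand-in for the banking data (roughly 9:1 class ratio).
X, y = make_classification(n_samples=4000, n_features=20, weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# SMOTE oversampling followed by Edited Nearest Neighbours cleaning.
X_res, y_res = SMOTEENN(random_state=0).fit_resample(X_train, y_train)

def fitness(genes):
    """Cross-validated F1 of XGBoost for one chromosome genes = (max_depth, learning_rate)."""
    depth, lr = genes
    model = XGBClassifier(n_estimators=200, max_depth=int(depth),
                          learning_rate=float(lr), eval_metric="logloss")
    return cross_val_score(model, X_res, y_res, cv=3, scoring="f1").mean()

# Tiny genetic algorithm: truncation selection, averaging crossover, small mutations.
pop = [(rng.integers(2, 10), rng.uniform(0.01, 0.3)) for _ in range(8)]
for generation in range(5):
    parents = sorted(pop, key=fitness, reverse=True)[:4]
    children = []
    while len(children) < len(pop) - len(parents):
        a, b = rng.choice(len(parents), 2, replace=False)
        depth = int(round((parents[a][0] + parents[b][0]) / 2))        # crossover
        lr = (parents[a][1] + parents[b][1]) / 2
        depth = int(np.clip(depth + rng.integers(-1, 2), 2, 10))       # mutation
        lr = float(np.clip(lr + rng.normal(0, 0.02), 0.01, 0.3))
        children.append((depth, lr))
    pop = parents + children

best_depth, best_lr = max(pop, key=fitness)
model = XGBClassifier(n_estimators=200, max_depth=int(best_depth),
                      learning_rate=float(best_lr), eval_metric="logloss").fit(X_res, y_res)
proba = model.predict_proba(X_test)[:, 1]
print("F1 :", f1_score(y_test, (proba > 0.5).astype(int)))
print("AUC:", roc_auc_score(y_test, proba))
```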
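
Results 88 and 89 (Guilian Feng, 2024) propose feature selection based on an optimized genetic algorithm for high-dimensional data, with KNN appearing among the figures. The sketch below is a plain binary-mask genetic algorithm whose fitness is the cross-validated accuracy of a KNN classifier on the selected features; the authors' specific modifications (the "MGA" in their flowchart) are not reproduced, and all settings and data are illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
X, y = make_classification(n_samples=600, n_features=40, n_informative=8, random_state=1)

def fitness(mask):
    """Cross-validated KNN accuracy on the selected feature subset."""
    if not mask.any():
        return 0.0
    knn = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(knn, X[:, mask], y, cv=3).mean()

# Binary-chromosome genetic algorithm: each gene switches one feature on or off.
pop_size, n_gen, p_mut = 20, 15, 0.05
population = rng.random((pop_size, X.shape[1])) < 0.5

for gen in range(n_gen):
    scores = np.array([fitness(ind) for ind in population])
    order = np.argsort(scores)[::-1]
    parents = population[order[:pop_size // 2]]           # keep the best half
    children = []
    while len(children) < pop_size - len(parents):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, X.shape[1])                 # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        child ^= rng.random(X.shape[1]) < p_mut           # bit-flip mutation
        children.append(child)
    population = np.vstack([parents, np.array(children)])

best = population[np.argmax([fitness(ind) for ind in population])]
print("selected features:", np.flatnonzero(best))
print("CV accuracy      :", fitness(best))
```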
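
Results 90 and 97-100 (Yao Hu, 2025) report an LSTM model whose training standard error is below 0.18 with coefficients of determination above 0.9. The snippets do not describe the data or architecture, so the following is a generic Keras LSTM regression on a synthetic noisy sine series, reporting root-mean-square error as a rough stand-in for the quoted standard error and R² as the coefficient of determination.

```python
import numpy as np
import tensorflow as tf
from sklearn.metrics import r2_score

# Synthetic stand-in data: predict the next value of a noisy sine wave
# from the previous 20 time steps.
rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 60, 3000)) + rng.normal(0, 0.05, 3000)
window = 20
X = np.stack([series[i:i + window] for i in range(len(series) - window)])[..., None]
y = series[window:]
split = int(0.8 * len(X))
X_train, X_test, y_train, y_test = X[:split], X[split:], y[:split], y[split:]

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window, 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X_train, y_train, epochs=10, batch_size=64, verbose=0)

pred = model.predict(X_test, verbose=0).ravel()
rmse = float(np.sqrt(np.mean((pred - y_test) ** 2)))   # error of the fit
r2 = r2_score(y_test, pred)                            # coefficient of determination
print(f"RMSE = {rmse:.3f}, R^2 = {r2:.3f}")
```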