Showing 81 - 100 results of 156 for search '(( final sample processing optimization algorithm ) OR ( binary a codon optimization algorithm ))', query time: 0.69s
  1. 81

    Data set presentation. by Wenguang Li (6528113)

    Published 2024
    “…Firstly, the dataset was balanced using various sampling methods; secondly, a Stacking model based on GA-XGBoost (XGBoost model optimized by genetic algorithm) was constructed for the risk prediction of diabetes; finally, the interpretability of the model was deeply analyzed using Shapley values. …”
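    The workflow this abstract describes (balance the data by resampling, tune a boosting model with a genetic algorithm, then stack it) can be sketched roughly as below. This is a minimal, hypothetical illustration, not the paper's code: scikit-learn's GradientBoostingClassifier stands in for XGBoost, naive random oversampling stands in for the paper's sampling methods, the GA population/search space are toy-sized, and the SHAP interpretation step is omitted.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier, StackingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(42)

    # Imbalanced toy data; oversample the minority class to balance it
    # (a crude stand-in for SMOTE-style sampling methods).
    X, y = make_classification(n_samples=400, n_features=8,
                               weights=[0.85, 0.15], random_state=42)
    minority = np.flatnonzero(y == 1)
    extra = rng.choice(minority, size=(y == 0).sum() - minority.size, replace=True)
    Xb, yb = np.vstack([X, X[extra]]), np.concatenate([y, y[extra]])

    # Tiny genetic algorithm over two boosting hyperparameters.
    SPACE = {"n_estimators": [25, 50, 100], "max_depth": [2, 3, 4]}

    def fitness(genome):
        model = GradientBoostingClassifier(**genome, random_state=0)
        return cross_val_score(model, Xb, yb, cv=3).mean()

    def random_genome():
        return {k: int(rng.choice(v)) for k, v in SPACE.items()}

    def mutate(genome):
        g = dict(genome)
        key = str(rng.choice(list(SPACE)))
        g[key] = int(rng.choice(SPACE[key]))
        return g

    pop = [random_genome() for _ in range(4)]
    for _ in range(2):  # generations
        parents = sorted(pop, key=fitness, reverse=True)[:2]  # selection
        # Crossover (each gene from a random parent), then mutation.
        children = [mutate({k: parents[int(rng.integers(2))][k] for k in SPACE})
                    for _ in range(2)]
        pop = parents + children
    best = max(pop, key=fitness)

    # Stacking: the GA-tuned booster plus a linear base learner,
    # with a logistic-regression meta-learner.
    stack = StackingClassifier(
        estimators=[("gb", GradientBoostingClassifier(**best, random_state=0)),
                    ("lr", LogisticRegression(max_iter=1000))],
        final_estimator=LogisticRegression(max_iter=1000), cv=3)
    acc = cross_val_score(stack, Xb, yb, cv=3).mean()
    print(round(acc, 3))
    ```

    The GA here only evolves two hyperparameters for a few generations; a real GA-XGBoost setup would search a larger space (learning rate, subsampling, regularization) for many more generations.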
  2. 82

    Pearson correlation coefficient matrix plot. by Wenguang Li (6528113)

    Published 2024
  3. 83

    SHAP of stacking. by Wenguang Li (6528113)

    Published 2024
  4. 84

    Demonstration of data imbalance. by Wenguang Li (6528113)

    Published 2024
  5. 85

    Stacking ROC curve chart. by Wenguang Li (6528113)

    Published 2024
  6. 86

    Confusion matrix. by Wenguang Li (6528113)

    Published 2024
  7. 87

    GA-XGBoost feature importances. by Wenguang Li (6528113)

    Published 2024
  8. 88

    Partial results of the chi-square test. by Wenguang Li (6528113)

    Published 2024
  9. 89

    Stacking confusion matrix. by Wenguang Li (6528113)

    Published 2024
  10. 90

    Stacking schematic diagram. by Wenguang Li (6528113)

    Published 2024
  11. 91

    Flowchart of stacking model integration. by Wenguang Li (6528113)

    Published 2024
  12. 92

    Prediction results of the stacking model. by Wenguang Li (6528113)

    Published 2024
  13. 93
  14. 94

    Blessing from Human-AI Interaction: Super Policy Learning in Confounded Environments by Jiayi Wang (294312)

    Published 2025
    “…Building upon these novel identification results, we develop several super-policy learning algorithms and systematically study their theoretical properties, such as finite-sample regret guarantees. …”
  15. 95

    Methodological steps. by Luan Carlos de Sena Monteiro Ozelim (16914117)

    Published 2023
    “…Finally, it has been demonstrated that the weights obtained by building a Hierarchical Risk Parity (HRP) portfolio perform better for various input random variables, indicating better out-of-sample performance. …”
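    Hierarchical Risk Parity, referenced in the snippet above, follows a standard three-step recipe: cluster assets on a correlation-derived distance, reorder the covariance matrix by the dendrogram leaves (quasi-diagonalization), and allocate by recursive bisection with inverse-variance splits. The sketch below implements that generic recipe with NumPy/SciPy; it is illustrative only and is not the authors' implementation.

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import linkage, leaves_list
    from scipy.spatial.distance import squareform

    def hrp_weights(returns):
        """Hierarchical Risk Parity weights for a (T, N) matrix of asset returns."""
        cov = np.cov(returns, rowvar=False)
        corr = np.corrcoef(returns, rowvar=False)
        # 1. Tree clustering on the correlation-distance matrix.
        dist = np.sqrt(np.clip(0.5 * (1.0 - corr), 0.0, None))
        link = linkage(squareform(dist, checks=False), method="single")
        # 2. Quasi-diagonalization: order assets by dendrogram leaves.
        order = list(leaves_list(link))

        def cluster_var(idx):
            # Variance of a cluster under inverse-variance weighting.
            sub = cov[np.ix_(idx, idx)]
            ivp = 1.0 / np.diag(sub)
            ivp /= ivp.sum()
            return float(ivp @ sub @ ivp)

        # 3. Recursive bisection: split each cluster in two and allocate
        #    inversely to the sub-cluster variances.
        weights = np.ones(cov.shape[0])
        clusters = [order]
        while clusters:
            cl = clusters.pop(0)
            if len(cl) < 2:
                continue
            left, right = cl[: len(cl) // 2], cl[len(cl) // 2 :]
            var_l, var_r = cluster_var(left), cluster_var(right)
            alpha = 1.0 - var_l / (var_l + var_r)
            weights[left] *= alpha
            weights[right] *= 1.0 - alpha
            clusters += [left, right]
        return weights / weights.sum()

    # Usage on synthetic returns (illustrative data only).
    rng = np.random.default_rng(0)
    w = hrp_weights(rng.normal(size=(300, 5)))
    ```

    Because every split distributes the parent's weight as alpha and 1 - alpha, the resulting weights are all positive and sum to one without any explicit constraint.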
  16. 96

    Heatmap of weights. by Luan Carlos de Sena Monteiro Ozelim (16914117)

    Published 2023
  17. 97

    Five types of Beta functions considered. by Luan Carlos de Sena Monteiro Ozelim (16914117)

    Published 2023
  18. 98

    Dimensional variables of I-beam [87]. by Luan Carlos de Sena Monteiro Ozelim (16914117)

    Published 2023
  19. 99
  20. 100