Search alternatives:
processing optimization » process optimization, process optimisation, routing optimization
codon optimization » wolf optimization
sample processing » image processing, waste processing, pre processing
final sample » fecal samples, total sample
binary a » binary _, binary b, hilary a
a codon » _ codon, a common
-
81
Data set presentation.
Published 2024: “…Firstly, the dataset was balanced using various sampling methods; secondly, a Stacking model based on GA-XGBoost (an XGBoost model optimized by a genetic algorithm) was constructed for the risk prediction of diabetes; finally, the interpretability of the model was analyzed in depth using Shapley values. …”
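The GA-XGBoost component named in the abstract above (XGBoost hyperparameters tuned by a genetic algorithm) can be sketched with a dependency-free selection/crossover/mutation loop. Everything below is an illustrative assumption, not the paper's setup: in practice the fitness function would return a cross-validated XGBoost score rather than the toy surrogate used here.

```python
import random

random.seed(0)

def fitness(depth, lr):
    # Stand-in objective peaking at depth=6, lr=0.1; a real GA-XGBoost run
    # would return a cross-validated XGBoost score here instead.
    return -((depth - 6) ** 2) - 100 * (lr - 0.1) ** 2

def evolve(pop_size=20, generations=30):
    # Random initial population of (max_depth, learning_rate) candidates.
    pop = [(random.randint(2, 12), random.uniform(0.01, 0.5))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: fitness(*ind), reverse=True)
        parents = pop[: pop_size // 2]            # selection: keep fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = (a[0], b[1])                  # crossover: one gene each
            if random.random() < 0.2:             # mutation: jitter a gene
                child = (min(12, max(2, child[0] + random.choice([-1, 1]))),
                         min(0.5, max(0.01, child[1] + random.gauss(0, 0.05))))
            children.append(child)
        pop = parents + children                  # elitism keeps the best
    return max(pop, key=lambda ind: fitness(*ind))

best_depth, best_lr = evolve()
print("best max_depth:", best_depth, "best learning_rate:", round(best_lr, 3))
```

Because the fitter half of each generation is carried over unchanged, the best candidate found so far is never lost, so the search fitness is monotone non-decreasing across generations.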
-
82
Pearson correlation coefficient matrix plot.
Published 2024 (same article as #81).
-
83
SHAP of stacking.
Published 2024 (same article as #81).
-
84
Demonstration of data imbalance.
Published 2024 (same article as #81).
-
85
Stacking ROC curve chart.
Published 2024 (same article as #81).
-
86
Confusion matrix.
Published 2024 (same article as #81).
-
87
GA-XGBoost feature importances.
Published 2024 (same article as #81).
-
88
Partial results of the chi-square test.
Published 2024 (same article as #81).
-
89
Stacking confusion matrix.
Published 2024 (same article as #81).
-
90
Stacking schematic diagram.
Published 2024 (same article as #81).
-
91
Flowchart of stacking model integration.
Published 2024 (same article as #81).
-
92
Prediction results of the stacking model.
Published 2024 (same article as #81).
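The stacking construction these figure titles refer to (base models scored out of fold, their predictions feeding a second-level combiner) can be illustrated without any ML libraries. The single-feature threshold "learners" and the accuracy-weighted meta-rule below are simplifying assumptions for the sketch, not the article's GA-XGBoost base models or its meta-learner.

```python
def fit_threshold(xs, ys):
    # Pick the threshold t on one feature maximizing accuracy of "x >= t => 1".
    best_t, best_acc = None, -1.0
    for t in sorted(set(xs)):
        acc = sum((x >= t) == y for x, y in zip(xs, ys)) / len(ys)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def stack_predict(X, y, x_new, n_features=2):
    # Level 0: one threshold rule per feature, scored out of fold
    # (leave-one-out here for brevity).
    oof = [[0.0] * n_features for _ in X]
    for i in range(len(X)):
        train = [j for j in range(len(X)) if j != i]
        for f in range(n_features):
            t = fit_threshold([X[j][f] for j in train], [y[j] for j in train])
            oof[i][f] = 1.0 if X[i][f] >= t else 0.0
    # Level 1 meta-rule: weight each base model by its out-of-fold accuracy.
    weights = [sum(oof[i][f] == y[i] for i in range(len(X))) / len(X)
               for f in range(n_features)]
    # Refit each base rule on all data, then combine weighted votes.
    final_t = [fit_threshold([X[j][f] for j in range(len(X))], y)
               for f in range(n_features)]
    score = sum(w * (1.0 if x_new[f] >= t else 0.0)
                for f, (w, t) in enumerate(zip(weights, final_t)))
    return 1 if score >= sum(weights) / 2 else 0

# Toy data: class 1 has a large first feature, a small second feature.
X = [(1, 5), (2, 6), (8, 1), (9, 2), (2, 7), (7, 1)]
y = [0, 0, 1, 1, 0, 1]
print(stack_predict(X, y, (8, 2)))
```

The out-of-fold step is the essential part of stacking: the meta-rule is fit on predictions the base models made for points they never trained on, which prevents it from simply learning to trust an overfit base model.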
-
93
-
94
Blessing from Human-AI Interaction: Super Policy Learning in Confounded Environments
Published 2025: “…Building upon these novel identification results, we develop several super-policy learning algorithms and systematically study their theoretical properties, such as finite-sample regret guarantees. …”
-
95
Methodological steps.
Published 2023: “…Finally, it has been demonstrated that the weights obtained by building a Hierarchical Risk Parity (HRP) portfolio perform better for various input random variables, indicating better out-of-sample performance. …”
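Hierarchical Risk Parity, named in the snippet above, allocates capital by recursively bisecting a cluster-ordered asset list and splitting each budget inversely to the two halves' cluster variances. The sketch below covers only that recursive bisection step with toy variances (an assumption for illustration); the full method first derives the asset ordering from a correlation-distance dendrogram, which is omitted here.

```python
def cluster_variance(variances):
    # Variance of an inverse-variance-weighted cluster (equals 1 / sum(1/v)).
    w = [1.0 / v for v in variances]
    s = sum(w)
    w = [x / s for x in w]
    return sum(wi * wi * vi for wi, vi in zip(w, variances))

def recursive_bisection(variances, weights=None, lo=0, hi=None):
    # Split the ordered asset list in half and allocate the budget inversely
    # to each half's cluster variance, recursing until single assets remain.
    if weights is None:
        weights = [1.0] * len(variances)
        hi = len(variances)
    if hi - lo <= 1:
        return weights
    mid = (lo + hi) // 2
    v_left = cluster_variance(variances[lo:mid])
    v_right = cluster_variance(variances[mid:hi])
    alpha = 1.0 - v_left / (v_left + v_right)   # low-variance half gets more
    for i in range(lo, mid):
        weights[i] *= alpha
    for i in range(mid, hi):
        weights[i] *= 1.0 - alpha
    recursive_bisection(variances, weights, lo, mid)
    recursive_bisection(variances, weights, mid, hi)
    return weights

w = recursive_bisection([0.04, 0.09, 0.16, 0.25])
print([round(x, 3) for x in w])  # weights sum to 1; lower variance gets more
```

Each bisection only redistributes the budget between the two halves, so the weights always sum to one without an explicit normalization pass.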
-
96
Heatmap of weights.
Published 2023 (same article as #95).
-
97
Five types of Beta functions considered.
Published 2023 (same article as #95).
-
98
Dimensional variables of I-beam [87].
Published 2023 (same article as #95).
-
99
-
100