-
121
Data_Sheet_1_Posiform planting: generating QUBO instances for benchmarking.pdf
Published 2023 “…We are interested in benchmarking both quantum annealing and classical algorithms for minimizing quadratic unconstrained binary optimization (QUBO) problems. …”
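For orientation on the record above: a QUBO instance asks for a binary vector x minimizing x^T Q x. The following is a minimal brute-force sketch of that objective, illustrative only — it is not the paper's posiform-planting generator, and the matrix shown is an arbitrary toy instance.

```python
import itertools

def qubo_value(Q, x):
    """Objective x^T Q x for a binary vector x; Q is a square matrix (nested lists)."""
    n = len(Q)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def brute_force_qubo(Q):
    """Exhaustive minimum of a small QUBO instance (2^n evaluations)."""
    n = len(Q)
    best = min(itertools.product((0, 1), repeat=n), key=lambda x: qubo_value(Q, x))
    return best, qubo_value(Q, best)

# Toy 2-variable instance; both (0,1) and (1,0) attain the minimum value -1.
Q = [[-1, 2],
     [0, -1]]
print(brute_force_qubo(Q))  # → ((0, 1), -1)
```

Exhaustive search is only feasible for small n, which is exactly why such instances serve as ground truth when benchmarking annealers and heuristics.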
-
124
Modeling CO₂ solubility in polyethylene glycol polymer using data-driven methods
Published 2025 “…In this research, a Random Forest (RF) machine learning model is meticulously tuned through four sophisticated optimization algorithms: Batch Bayesian Optimization (BBO), Self-Adaptive Differential Evolution (SADE), Bayesian Probability Improvement (BPI), and Gaussian Processes Optimization (GPO). …”
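The record above tunes RF hyperparameters with four dedicated optimizers. As a much simpler stand-in for those methods (BBO, SADE, BPI, GPO), the shared loop structure — propose parameters, score them, keep the best — can be sketched with plain random search; the objective and search space below are hypothetical toys, not the paper's.

```python
import random

def random_search(objective, space, n_trials=50, seed=0):
    """Plain random search over {name: candidate_list}; minimizes objective.
    A deliberately simple stand-in for heavier optimizers such as BBO or SADE,
    which replace the random proposal step with a model- or population-based one."""
    rng = random.Random(seed)
    best_params, best_score = None, float("inf")
    for _ in range(n_trials):
        params = {k: rng.choice(v) for k, v in space.items()}
        score = objective(params)
        if score < best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Hypothetical validation-error surface over two RF-style knobs.
space = {"n_estimators": [50, 100, 200], "max_depth": [2, 4, 8]}
obj = lambda p: abs(p["n_estimators"] - 100) + abs(p["max_depth"] - 4)
print(random_search(obj, space))
```

In practice the objective would be a cross-validated model score rather than a closed-form function.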
-
125
Parameter ranges and optimal values.
Published 2025 “…Additionally, considering the imbalance in the population's spatial distribution, we used the K-means++ clustering algorithm to cluster the optimal feature subset, and we used the bootstrap sampling method to extract the same amount of data from each cluster and fuse it with the training subset to build an improved random forest model. …”
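The per-cluster bootstrap step described above — drawing the same amount of data from each cluster so that every cluster contributes equally — can be sketched in a few lines. This assumes clustering has already been done; the cluster contents below are placeholders.

```python
import random

def per_cluster_bootstrap(clusters, m, seed=0):
    """Draw m samples with replacement from each cluster and pool them,
    so unevenly sized clusters contribute equally to the fused training set."""
    rng = random.Random(seed)
    return [rng.choice(c) for c in clusters for _ in range(m)]

# Three placeholder clusters of unequal size.
clusters = [["a1", "a2", "a3"], ["b1"], ["c1", "c2"]]
pooled = per_cluster_bootstrap(clusters, m=4)
print(len(pooled))  # → 12 (3 clusters × 4 draws)
```

Sampling with replacement is what makes equal draws possible even from a singleton cluster.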
-
126
Thesis-RAMIS-Figs_Slides
Published 2024 “…In addition, the practical benefits of MPS in the context of simulating channelized facies models are demonstrated using synthetic data and real geological facies. Importantly, this strategy locates samples adaptively on the transition between facies, which improves the performance of conventional MPS algorithms. …”
-
129
MLP vs classification algorithms.
Published 2024 “…We propose SPAM-XAI, a hybrid model integrating novel sampling, feature selection, and eXplainable-AI (XAI) algorithms to address these challenges. …”
-
131
Multimodal Mass Spectrometry Imaging of Rat Brain Using IR-MALDESI and NanoPOTS-LC-MS/MS
Published 2021 “…The aim of this work was to create a multimodal MSI approach that measures metabolomic and proteomic data from a single biological organ by combining infrared matrix-assisted laser desorption electrospray ionization (IR-MALDESI) for metabolomic MSI and nanodroplet processing in one pot for trace samples (nanoPOTS) LC-MS/MS for spatially resolved proteome profiling. …”
-
133
Reduction of Sample Size in the Analysis of Spatial Variability of Nonstationary Soil Chemical Attributes
Published 2019 “…In the study of spatial variability of soil attributes, it is essential to define a sampling plan with adequate sample size. This study aimed to evaluate, through simulated data, the influence of parameters of the geostatistical model and sampling configuration on the optimization process, and to resize and reduce the sample size of a sampling configuration of a commercial area composed of 102 points. …”
-
135
Technical approach.
Published 2024 “…The results show: (1) Random oversampling, ADASYN, SMOTE, and SMOTEENN were used for data balance processing, among which SMOTEENN showed better efficiency and effect in dealing with data imbalance. (2) The GA-XGBoost model optimized the hyperparameters of the XGBoost model through a genetic algorithm to improve the model’s predictive accuracy. …”
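The oversamplers named above all revolve around synthesizing minority-class points. The core SMOTE idea — interpolate between pairs of minority samples — can be sketched as below; real SMOTE interpolates toward k-nearest neighbours rather than random pairs, and SMOTEENN additionally cleans the result with edited nearest neighbours. The points used here are illustrative.

```python
import random

def smote_like(minority, n_new, seed=0):
    """Synthesize n_new minority-class points by linear interpolation
    between random pairs of existing minority samples (SMOTE-style sketch)."""
    rng = random.Random(seed)
    out = []
    for _ in range(n_new):
        a, b = rng.sample(minority, 2)
        t = rng.random()  # interpolation factor in [0, 1)
        out.append(tuple(ai + t * (bi - ai) for ai, bi in zip(a, b)))
    return out

# Three illustrative 2-D minority points; synthetics stay in their convex hull.
minority = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
print(smote_like(minority, n_new=2))
```

Because each synthetic point lies on a segment between two real minority points, it never leaves the convex hull of the minority class — one reason SMOTE variants are usually paired with a cleaning step such as ENN near class boundaries.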
-
136
Pearson correlation coefficient matrix plot.
Published 2024
-
137
SHAP of stacking.
Published 2024
-
138
Stacking ROC curve chart.
Published 2024
-
139
Confusion matrix.
Published 2024
-
140
GA-XGBoost feature importances.
Published 2024