2. Supplementary Material for: Penalized Logistic Regression Analysis for Genetic Association Studies of Binary Phenotypes
   Published 2022: "…We consider two approximate approaches to maximizing the marginal likelihood: (i) a Monte Carlo EM algorithm (MCEM) and (ii) a Laplace approximation (LA) to each integral, followed by derivative-free optimization of the approximation. …"
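Penalized logistic regression, named in the title above, can be illustrated with a minimal sketch: an L2 (ridge) penalty added to the negative log-likelihood, fitted by plain gradient descent. This is a generic illustration of the technique only, not the MCEM or Laplace-approximation machinery the supplement actually develops.

```python
import math

def fit_penalized_logistic(X, y, lam=1.0, lr=0.1, steps=2000):
    """Minimize  -log-likelihood + (lam/2)*||beta||^2  for binary y in {0, 1}."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(steps):
        grad = [lam * b for b in beta]          # gradient of the ridge penalty
        for xi, yi in zip(X, y):
            z = sum(b * x for b, x in zip(beta, xi))
            mu = 1.0 / (1.0 + math.exp(-z))     # predicted probability
            for j in range(p):
                grad[j] += (mu - yi) * xi[j]    # negative log-likelihood gradient
        beta = [b - lr * g / n for b, g in zip(beta, grad)]
    return beta

# Toy usage: intercept column plus one feature; the penalty shrinks the slope.
X = [[1.0, -2.0], [1.0, -1.0], [1.0, 1.0], [1.0, 2.0]]
y = [0, 0, 1, 1]
beta = fit_penalized_logistic(X, y, lam=0.5)
```

The penalty term keeps coefficient estimates finite even under quasi-complete separation, which is the usual motivation for penalization in genetic association studies of binary phenotypes.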
3–8. Figure and table items from a single 2025 article: SHAP bar plot; Sample screening flowchart; Descriptive statistics for variables; SHAP summary plot; ROC curves for the test set of four models; Display of the web prediction interface.
   Published 2025: "…Models based on NNET, RF, LR, and SVM algorithms were developed, achieving AUC of 0.918, 0.889, 0.872, and 0.760, respectively, on the test set. …"
11. An intelligent decision-making system for embryo transfer in reproductive technology: a machine learning-based approach
    Published 2025: "…Four popular ML algorithms were used, including random forest (RF), logistic regression (LR), support vector machine (SVM), and artificial neural network (ANN), considering seven criteria: the woman's age, sperm origin, the developmental qualities of four potential embryos, infertility duration, assessment of the woman, morphological qualities of the four best embryos on the day of transfer, and number of oocytes extracted. …"
12. Table 1_Heavy metal biomarkers and their impact on hearing loss risk: a machine learning framework analysis.docx
    Published 2025: "…Demographic, clinical, and heavy metal biomarker data (e.g., blood lead and cadmium levels) were analyzed as features, with hearing loss status (defined as a pure-tone average threshold exceeding 25 dB HL across 500, 1,000, 2,000, and 4,000 Hz in the better ear) serving as the binary outcome. Multiple machine learning algorithms, including Random Forest, XGBoost, Gradient Boosting, Logistic Regression, CatBoost, and MLP, were optimized and evaluated. …"
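The binary outcome quoted in this entry is a concrete, computable rule: the pure-tone average (PTA) over 500, 1,000, 2,000, and 4,000 Hz in the better ear, labeled as hearing loss when it exceeds 25 dB HL. A hedged sketch of that rule (function names and the dict-of-thresholds representation are illustrative, not from the paper):

```python
PTA_FREQS_HZ = (500, 1000, 2000, 4000)

def pure_tone_average(thresholds_db):
    """Mean threshold (dB HL) over the four standard audiometric frequencies."""
    return sum(thresholds_db[f] for f in PTA_FREQS_HZ) / len(PTA_FREQS_HZ)

def hearing_loss_label(left_ear, right_ear, cutoff_db=25.0):
    """Binary outcome: 1 if the better (lower-PTA) ear exceeds the cutoff."""
    better_pta = min(pure_tone_average(left_ear), pure_tone_average(right_ear))
    return 1 if better_pta > cutoff_db else 0

# Toy usage: left-ear PTA = 37.5 dB HL, right-ear PTA = 20.0 dB HL,
# so the better ear is below the cutoff and the label is 0.
left = {500: 30, 1000: 35, 2000: 40, 4000: 45}
right = {500: 15, 1000: 20, 2000: 25, 4000: 20}
label = hearing_loss_label(left, right)
```

Using the better ear makes the label conservative: a subject counts as hearing-impaired only when neither ear averages at or below 25 dB HL.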
13. DataSheet_1_Multi-Parametric MRI-Based Radiomics Models for Predicting Molecular Subtype and Androgen Receptor Expression in Breast Cancer.docx
    Published 2021: "…We applied several feature selection strategies, including the least absolute shrinkage and selection operator (LASSO), recursive feature elimination (RFE), maximum relevance minimum redundancy (mRMR), Boruta, and Pearson correlation analysis, to select the optimal features. We then built 120 diagnostic models using distinct classification algorithms and feature sets, divided by MRI sequences and selection strategies, to predict molecular subtype and AR expression of breast cancer in the testing dataset of leave-one-out cross-validation (LOOCV). …"
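Leave-one-out cross-validation, mentioned in the last entry, is simple to state: each sample is held out once, the model is fit on the remaining samples, and accuracy is the fraction of held-out predictions that are correct. A minimal sketch, with a 1-nearest-neighbour classifier as a stand-in for the paper's actual models:

```python
def loocv_accuracy(X, y, fit_predict):
    """Fraction of samples correctly predicted when each is held out in turn."""
    hits = 0
    for i in range(len(X)):
        train_X = X[:i] + X[i + 1:]
        train_y = y[:i] + y[i + 1:]
        hits += fit_predict(train_X, train_y, X[i]) == y[i]
    return hits / len(X)

def one_nn(train_X, train_y, x):
    """Predict the label of the closest training point (squared distance)."""
    d = [sum((a - b) ** 2 for a, b in zip(xi, x)) for xi in train_X]
    return train_y[d.index(min(d))]

# Toy usage: two well-separated clusters, so every held-out point's nearest
# neighbour carries the same label.
X = [[0.0], [0.1], [0.9], [1.0]]
y = [0, 0, 1, 1]
acc = loocv_accuracy(X, y, one_nn)
```

LOOCV reuses every sample for both training and testing, which suits the small cohorts typical of radiomics studies, at the cost of fitting the model once per sample.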