Search alternatives:
based optimization » whale optimization
step optimization » after optimization, swarm optimization, model optimization
binary data » primary data, dietary data
tasks based » task based, cases based
data step » data set
-
21
Collaborative hunting behavior.
Published 2025 “…To address this problem, this paper proposes an improved red-billed blue magpie algorithm (IRBMO), which is specifically optimized for the feature selection task, and significantly improves the performance and efficiency of the algorithm on medical data by introducing multiple innovative behavioral strategies. …”
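The IRBMO snippet above describes a metaheuristic wrapper for feature selection. Algorithms in this family typically score a candidate feature subset by trading classification error against subset size; the weighting `alpha` and the fitness form below are a common convention, assumed here for illustration rather than taken from the paper:

```python
def subset_fitness(mask, error_rate, alpha=0.99):
    """Score a binary feature mask: lower is better.

    mask       -- list of 0/1 flags, one per feature
    error_rate -- classification error of a model trained on the
                  selected features (obtained externally)
    alpha      -- error-vs-size trade-off weight (assumed value)
    """
    n_selected = sum(mask)
    n_total = len(mask)
    # Penalize both misclassification and large feature subsets.
    return alpha * error_rate + (1 - alpha) * n_selected / n_total

# Example: 3 of 10 features selected, 5% classification error
score = subset_fitness([1, 0, 1, 0, 0, 0, 1, 0, 0, 0], 0.05)
```

The metaheuristic (here, the magpie-inspired search) would evolve the binary masks, calling this fitness on each candidate.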
-
22
Friedman average rank sum test results.
Published 2025 “…To address this problem, this paper proposes an improved red-billed blue magpie algorithm (IRBMO), which is specifically optimized for the feature selection task, and significantly improves the performance and efficiency of the algorithm on medical data by introducing multiple innovative behavioral strategies. …”
-
23
Proposed Algorithm.
Published 2025 “…Hence, an Energy-Harvesting Reinforcement Learning-based Offloading Decision Algorithm (EHRL) is proposed. …”
-
24
Comparisons between ADAM and NADAM optimizers.
Published 2025 “…Hence, an Energy-Harvesting Reinforcement Learning-based Offloading Decision Algorithm (EHRL) is proposed. …”
-
25
Flowchart scheme of the ML-based model.
Published 2024 “…J) Optimization of hyperparameter tuning. K) Algorithm selection from all models. …”
-
28
An Example of a WPT-MEC Network.
Published 2025 “…Hence, an Energy-Harvesting Reinforcement Learning-based Offloading Decision Algorithm (EHRL) is proposed. …”
-
29
Related Work Summary.
Published 2025 “…Hence, an Energy-Harvesting Reinforcement Learning-based Offloading Decision Algorithm (EHRL) is proposed. …”
-
30
Simulation parameters.
Published 2025 “…Hence, an Energy-Harvesting Reinforcement Learning-based Offloading Decision Algorithm (EHRL) is proposed. …”
-
31
Training losses for N = 10.
Published 2025 “…Hence, an Energy-Harvesting Reinforcement Learning-based Offloading Decision Algorithm (EHRL) is proposed. …”
-
32
Normalized computation rate for N = 10.
Published 2025 “…Hence, an Energy-Harvesting Reinforcement Learning-based Offloading Decision Algorithm (EHRL) is proposed. …”
-
33
Summary of Notations Used in this paper.
Published 2025 “…Hence, an Energy-Harvesting Reinforcement Learning-based Offloading Decision Algorithm (EHRL) is proposed. …”
-
37
Identification and quantitation of clinically relevant microbes in patient samples: Comparison of three k-mer based classifiers for speed, accuracy, and sensitivity
Published 2019 “…Adopting metagenomic analysis for clinical use requires that all aspects of the workflow are optimized and tested, including data analysis and computational time and resources. …”
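Result 37 compares k-mer based taxonomic classifiers. The core operation these tools share is decomposing a read into overlapping length-k substrings and matching them against a reference index; the k=4 value and toy reference below are assumptions for illustration, not details from the study:

```python
def kmers(seq, k):
    """Return all overlapping k-mers of a sequence."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

def classify_read(read, reference_index, k=4):
    """Assign a read to the taxon whose reference shares the most k-mers."""
    read_kmers = set(kmers(read, k))
    hits = {taxon: len(read_kmers & ref_kmers)
            for taxon, ref_kmers in reference_index.items()}
    return max(hits, key=hits.get)

# Hypothetical reference index: taxon name -> set of k-mers from its genome
index = {
    "E. coli":   set(kmers("ATGGCGTACGTT", 4)),
    "S. aureus": set(kmers("TTACCGGATACA", 4)),
}
```

Production classifiers differ mainly in how this index is built and queried (hash tables, minimizers, LCA resolution), which is what drives the speed/accuracy/sensitivity trade-offs the study measures.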
-
39
Supplementary Material for: Penalized Logistic Regression Analysis for Genetic Association Studies of Binary Phenotypes
Published 2022 “…Our estimate of m is the maximizer of a marginal likelihood obtained by integrating the latent log-ORs out of the joint distribution of the parameters and observed data. We consider two approximate approaches to maximizing the marginal likelihood: (i) a Monte Carlo EM algorithm (MCEM) and (ii) a Laplace approximation (LA) to each integral, followed by derivative-free optimization of the approximation. …”
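The abstract's Laplace route approximates each integral by a Gaussian fitted at the integrand's mode, with the mode itself located by a derivative-free search. A generic univariate sketch of that idea (the integrand and search interval below are illustrative assumptions, not the paper's model):

```python
import math

def laplace_integral(log_f, lo, hi, tol=1e-8):
    """Approximate ∫ exp(log_f(x)) dx by the Laplace method:
    locate the mode x* of log_f, then use the Gaussian fit
        integral ≈ exp(log_f(x*)) * sqrt(2π / -log_f''(x*)).
    The mode is found by golden-section search (derivative-free);
    the curvature by a central finite difference.
    """
    invphi = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    while b - a > tol:
        c, d = b - invphi * (b - a), a + invphi * (b - a)
        if log_f(c) > log_f(d):
            b = d          # maximiser lies in [a, d]
        else:
            a = c          # maximiser lies in [c, b]
    x_star = (a + b) / 2
    h = 1e-4
    curv = (log_f(x_star + h) - 2 * log_f(x_star) + log_f(x_star - h)) / h**2
    return math.exp(log_f(x_star)) * math.sqrt(2 * math.pi / -curv)

# Sanity check on a standard normal kernel; exact value is sqrt(2π)
approx = laplace_integral(lambda x: -0.5 * x * x, -5.0, 5.0)
```

For a Gaussian integrand the approximation is exact, which makes it a convenient correctness check before applying the method to a real likelihood.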
-
40
Algoritmo de clasificación de expresiones de odio por tipos en español (Algorithm for classifying hate expressions by type in Spanish)
Published 2024 “…

Model Architecture
The model is based on pysentimiento/robertuito-base-uncased with the following modifications:
- A dense classification layer was added over the base model
- Uses input IDs and attention masks as inputs
- Generates a multi-class classification with 5 hate categories

Dataset
HATEMEDIA Dataset: custom hate speech dataset with categorization by type:
- Labels: 5 hate type categories (0-4)
- Preprocessing:
  - Null values removed from text and labels
  - Reindexing and relabeling (original labels are adjusted by subtracting 1)
  - Exclusion of category 2 during training
  - Conversion of category 5 to category 2

Training Process
Configuration:
- Batch size: 128
- Epochs: 5
- Learning rate: 2e-5 with 10% warmup steps
- Early stopping with patience=2
- Class weights: balanced to handle class imbalance

Custom metrics:
- Recall for specific classes (focus on class 2)
- Precision for specific classes (focus on class 3)
- F1-score (weighted)
- AUC-PR
- Recall at precision=0.6 (class 3)
- Precision at recall=0.6 (class 2)

Evaluation Metrics
The model is evaluated using macro recall, precision, and F1-score; one-vs-rest AUC; accuracy; per-class metrics; a confusion matrix; and a full classification report.

Technical Features
Data preprocessing:
- Tokenization: maximum length of 128 tokens (truncation and padding)
- Label encoding: one-hot encoding for multi-class classification
- Data split: 80% training, 10% validation, 10% testing

Optimization:
- Optimizer: Adam with linear warmup scheduling
- Loss function: categorical cross-entropy (from_logits=True)
- Imbalance handling: class weights computed automatically

Requirements
The following Python packages are required: TensorFlow, Transformers, scikit-learn, pandas, datasets, matplotlib, seaborn, numpy.

Usage
1. Data format:
   - CSV file or pandas DataFrame
   - Required column: text (string type)
   - Label column (integer type, 0-4), optional, used for evaluation
2. Text preprocessing:
   - Automatic tokenization with a maximum length of 128 tokens
   - Long texts are automatically truncated
   - Handling of special characters, URLs, and emojis included
3. Label encoding:
   - The model classifies hate speech into 5 categories (0-4)
   - 0: Political hatred: expressions directed against individuals or groups based on political orientation.…”
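The model card's preprocessing describes a specific label pipeline: shift the original labels down by one, drop category 2 from training, and remap category 5 to category 2. A minimal sketch of that pipeline; the ordering of the drop and remap steps, and the 1-based original labels, are assumptions inferred from the card:

```python
def preprocess_labels(rows):
    """Apply the label transformations described in the model card:
      - drop rows with missing text or label
      - shift original labels down by 1 (relabel to 0-based)
      - drop (shifted) category 2 from the training set
      - remap (shifted) category 5 to category 2
    `rows` is a list of (text, label) pairs with 1-based original labels.
    """
    cleaned = [(t, y) for t, y in rows if t is not None and y is not None]
    shifted = [(t, y - 1) for t, y in cleaned]            # relabel: 1..6 -> 0..5
    shifted = [(t, y) for t, y in shifted if y != 2]      # exclude category 2
    return [(t, 2 if y == 5 else y) for t, y in shifted]  # 5 -> 2, giving 0..4
```

After this transformation the labels occupy 0-4, matching the five output classes of the dense classification head.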