Search alternatives:
based optimization » whale optimization
binary labels » trinary labels
binary change » binary image
labels based » levels based, models based, areas based
change based » image based, change rate, case based
-
1
A* Path-Finding Algorithm to Determine Cell Connections
Published 2025 “…Connections were labeled as disconnected, networked, or connected based on path existence and threshold criteria.…”
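The three-way labeling this entry describes (path existence plus a threshold) can be sketched as below. The grid encoding, 4-connectivity, Manhattan heuristic, and the use of path length as the threshold criterion are illustrative assumptions, not details taken from the paper.

```python
import heapq

def a_star(grid, start, goal):
    """A* shortest path on a 4-connected grid of 0 (free) / 1 (blocked).
    Returns the path length in steps, or None if no path exists."""
    rows, cols = len(grid), len(grid[0])

    def h(p):  # Manhattan distance: admissible heuristic on a unit grid
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_set = [(h(start), 0, start)]  # (f = g + h, g, node)
    best = {start: 0}
    while open_set:
        _, g, node = heapq.heappop(open_set)
        if node == goal:
            return g
        if g > best.get(node, float("inf")):
            continue  # stale queue entry; a shorter route was already found
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc)))
    return None  # goal unreachable

def label_connection(grid, a, b, threshold):
    """Three-way label from path existence and a length threshold."""
    length = a_star(grid, a, b)
    if length is None:
        return "disconnected"
    return "connected" if length <= threshold else "networked"
```

With a 3x3 grid whose middle row is mostly blocked, the shortest detour between the left corners is 6 steps, so the label flips between "connected" and "networked" depending on the threshold.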
-
2
-
3
Proposed Algorithm.
Published 2025 “…EHRL integrates Reinforcement Learning (RL) with Deep Neural Networks (DNNs) to dynamically optimize binary offloading decisions, which in turn obviates the requirement for manually labeled training data and thus avoids the need for solving complex optimization problems repeatedly. …”
-
4
Comparisons between ADAM and NADAM optimizers.
Published 2025
-
5
-
6
An Example of a WPT-MEC Network.
Published 2025
-
7
Related Work Summary.
Published 2025
-
8
Simulation parameters.
Published 2025
-
9
Training losses for N = 10.
Published 2025
-
10
Normalized computation rate for N = 10.
Published 2025
-
11
Summary of Notations Used in this paper.
Published 2025
-
12
Design and implementation of the Multiple Criteria Decision Making (MCDM) algorithm for predicting the severity of COVID-19.
Published 2021 “…(A) The MCDM algorithm, Stage 1: Preprocessing, the process of refining the collected raw data to eliminate noise, including correlation analysis and feature selection based on P values. …”
-
13
Triplet Matching for Estimating Causal Effects With Three Treatment Arms: A Comparative Study of Mortality by Trauma Center Level
Published 2021 “…Our algorithm outperforms the nearest neighbor algorithm and is shown to produce matched samples with total distance no larger than twice the optimal distance. …”
-
14
Flowchart scheme of the ML-based model.
Published 2024 “…I) Testing data consisting of 20% of the entire dataset. J) Optimization of hyperparameter tuning. K) Algorithm selection from all models. …”
-
15
Algoritmo de clasificación de expresiones de odio por tipos en español (Algorithm for classifying hate expressions by type in Spanish)
Published 2024 “…
Model Architecture: the model is based on pysentimiento/robertuito-base-uncased, with a dense classification layer added over the base model; it uses input IDs and attention masks as inputs and generates a multi-class classification with 5 hate categories.
Dataset: HATEMEDIA, a custom hate speech dataset with categorization by type. Labels: 5 hate type categories (0-4). Preprocessing: null values removed from text and labels; reindexing and relabeling (original labels are adjusted by subtracting 1); exclusion of category 2 during training; conversion of category 5 to category 2.
Training configuration: batch size 128; 5 epochs; learning rate 2e-5 with 10% warmup steps; early stopping with patience=2; class weights balanced to handle class imbalance.
Custom metrics: recall for specific classes (focus on class 2); precision for specific classes (focus on class 3); weighted F1-score; AUC-PR; recall at precision=0.6 (class 3); precision at recall=0.6 (class 2).
Evaluation metrics: macro recall, precision, and F1-score; one-vs-rest AUC; accuracy; per-class metrics; confusion matrix; full classification report.
Data preprocessing: tokenization to a maximum length of 128 tokens (truncation and padding); one-hot encoding of labels for multi-class classification; data split of 80% training, 10% validation, 10% testing.
Optimization: Adam optimizer with linear warmup scheduling; categorical crossentropy loss (from_logits=True); class weights computed automatically to handle imbalance.
Requirements: TensorFlow, Transformers, scikit-learn, pandas, datasets, matplotlib, seaborn, numpy.
Usage: data as a CSV file or pandas DataFrame with a required string column named text; an integer label column (0-4) is optional and used for evaluation. Texts are tokenized automatically to a maximum of 128 tokens, long texts are truncated, and handling of special characters, URLs, and emojis is included. The model classifies hate speech into 5 categories (0-4), e.g. 0: political hatred, expressions directed against individuals or groups based on political orientation.…”
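The label preprocessing and balanced class weights this model card describes can be sketched as below. The ordering of the remapping steps and the original label range are assumptions from the card's wording; the weight formula follows the common "balanced" heuristic (as in scikit-learn's compute_class_weight), which the card does not state explicitly.

```python
from collections import Counter

def remap_labels(labels):
    """Card's preprocessing, as read here: shift labels down by 1,
    drop category 2 during training, convert category 5 to category 2.
    Step order is an assumption for illustration."""
    out = []
    for y in labels:
        y = y - 1              # original labels adjusted by subtracting 1
        if y == 2:
            continue           # category 2 excluded during training
        if y == 5:
            y = 2              # category 5 converted to category 2
        out.append(y)
    return out

def balanced_class_weights(labels):
    """'Balanced' weights: n_samples / (n_classes * count(class)),
    so rarer classes get proportionally larger weights."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * counts[c]) for c in counts}
```

The resulting dict can be passed as class_weight to Keras model.fit, matching the card's "class weights computed automatically" step.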
-
16
Data Sheet 1_Detection of litchi fruit maturity states based on unmanned aerial vehicle remote sensing and improved YOLOv8 model.docx
Published 2025 “…In addition, YOLOv8-FPDW was more competitive than mainstream object detection algorithms. The study predicted the optimal harvest period for litchis, providing scientific support for orchard batch harvesting and fine management.…”
-
17
Supplementary Material 8
Published 2025 “…XGBoost: an optimized gradient boosting algorithm that efficiently handles large genomic datasets, commonly used for high-accuracy predictions in E. coli classification.…”
-
18
Machine Learning-Ready Dataset for Cytotoxicity Prediction of Metal Oxide Nanoparticles
Published 2025 “…These biological metrics were used to define a binary toxicity label: entries were classified as toxic (1) or non-toxic (0) based on thresholds from standardized guidelines (e.g., ISO 10993-5:2009) and literature consensus. …”
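The binary labeling this entry describes can be sketched as a simple threshold rule. The 70% viability cutoff is the commonly cited ISO 10993-5 criterion (a reduction of cell viability below 70% indicates cytotoxic potential); the function name and default are illustrative assumptions, and the dataset's actual thresholds should be checked.

```python
def toxicity_label(viability_percent, threshold=70.0):
    """Binary toxicity label for a metal oxide nanoparticle entry:
    1 (toxic) if measured cell viability falls below the threshold,
    else 0 (non-toxic). 70% follows the ISO 10993-5 convention."""
    return 1 if viability_percent < threshold else 0
```

Applied per entry, this yields the toxic (1) / non-toxic (0) labels the dataset uses for machine-learning-ready classification.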
-
19
Data_Sheet_1_Alzheimer’s Disease Diagnosis and Biomarker Analysis Using Resting-State Functional MRI Functional Brain Network With Multi-Measures Features and Hippocampal Subfield...
Published 2022 “…Finally, we implemented and compared the different feature selection algorithms to integrate the structural features, brain networks, and voxel features to optimize the diagnostic identifications of AD using support vector machine (SVM) classifiers. …”