Search alternatives:
process detection » process reflection, protein detection, stress detection
step optimization » after optimization, swarm optimization, based optimization
data process » data processing, damage process, data access
binary data » primary data, dietary data
data step » data set
10. Joint Detection of Change Points in Multichannel Single-Molecule Measurements
Published 2021. Subjects: “…complex biomolecular processes…”
11. The Pseudo-Code of the IRBMO Algorithm.
Published 2025. “…To adapt to the feature selection problem, we convert the continuous optimization algorithm to binary form via transfer function, which further enhances the applicability of the algorithm. …”
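The snippet describes converting a continuous optimizer to binary form with a transfer function so it can pick feature subsets. The lines below are a minimal sketch of that single step, assuming the widely used S-shaped (sigmoid) transfer function and a stochastic threshold rule; they are not the authors' IRBMO implementation.

import numpy as np

def sigmoid_transfer(x):
    # S-shaped transfer function: maps a continuous position to a probability in (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

def binarize(position, rng):
    # Convert a continuous candidate solution into a binary feature mask.
    probs = sigmoid_transfer(position)
    return (rng.random(position.shape) < probs).astype(int)

# Example: a continuous position vector such as one produced by a metaheuristic update step.
rng = np.random.default_rng(0)
continuous_position = rng.normal(size=10)     # stand-in for one search agent
feature_mask = binarize(continuous_position, rng)
print(feature_mask)                           # e.g. [1 0 1 ...]: the selected feature subset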
12. Optimized Bayesian regularization-back propagation neural network using data-driven intrusion detection system in Internet of Things
Published 2025. “…In general, BRBPNN does not show any optimization adaption methods to determine the optimal parameter for appropriate detection. Hence, Binary Black Widow Optimization Algorithm (BBWOA) is proposed in this manuscript to improve the BRBPNN classifier that detects intrusion precisely. …”
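The record pairs a binary metaheuristic (BBWOA) with a neural classifier, but the snippet gives no update rules, so the sketch below only shows the generic wrapper-evaluation loop such approaches rely on: each candidate binary mask is scored by the cross-validated accuracy of the classifier it induces. The toy dataset and the use of scikit-learn's MLPClassifier as a stand-in for BRBPNN are assumptions.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

# Toy data standing in for IoT traffic features.
X, y = make_classification(n_samples=300, n_features=20, random_state=0)

def fitness(mask):
    # Score a binary mask by the cross-validated accuracy of the classifier it induces.
    if mask.sum() == 0:
        return 0.0
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

# A full BBWOA run would evolve a population of masks; here a single random mask is scored.
rng = np.random.default_rng(0)
candidate = rng.integers(0, 2, size=X.shape[1])
print(fitness(candidate))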
13. IRBMO vs. meta-heuristic algorithms boxplot.
Published 2025. “…To adapt to the feature selection problem, we convert the continuous optimization algorithm to binary form via transfer function, which further enhances the applicability of the algorithm. …”
14. IRBMO vs. feature selection algorithm boxplot.
Published 2025. “…To adapt to the feature selection problem, we convert the continuous optimization algorithm to binary form via transfer function, which further enhances the applicability of the algorithm. …”
15. GSE96058 information.
Published 2024. “…Results: In this study, five main steps were followed for the analysis of mRNA expression data: reading, preprocessing, feature selection, classification, and SHAP algorithm. …”
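The snippet names five steps (reading, preprocessing, feature selection, classification, SHAP) without naming tools. A minimal sketch of such a pipeline using scikit-learn and the shap package follows; the input file name, the scaler, the ANOVA-based selector, and the random-forest classifier are assumptions rather than details from the paper.

import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# 1. Reading: expression matrix with samples as rows, genes as columns, plus a label column
#    (the file name is hypothetical).
df = pd.read_csv("expression_matrix.csv")
X, y = df.drop(columns="label"), df["label"]

# 2. Preprocessing: scale expression values.
X_scaled = StandardScaler().fit_transform(X)

# 3. Feature selection: keep the top genes by ANOVA F-score.
X_sel = SelectKBest(f_classif, k=100).fit_transform(X_scaled, y)

# 4. Classification.
X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, stratify=y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))

# 5. SHAP: explain the classifier's predictions on the held-out split.
shap_values = shap.TreeExplainer(clf).shap_values(X_te)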
16. The performance of classifiers.
Published 2024. “…Results: In this study, five main steps were followed for the analysis of mRNA expression data: reading, preprocessing, feature selection, classification, and SHAP algorithm. …”
18. Algoritmo de clasificación de expresiones de odio por tipos en español (Algorithm for classifying hate expressions by type in Spanish)
Published 2024. “…File Structure: the code generates and saves the weights of the trained model (.h5), the configured tokenizer, the training history in CSV, and a requirements file.
Important Notes: the model excludes category 2 during training; it implements transfer learning from a pre-trained model for binary hate detection; it includes early stopping callbacks to prevent overfitting; and it uses class weighting to handle category imbalances.
The process of creating this algorithm is explained in the technical report located at: Blanco-Valencia, X., De Gregorio-Vicente, O., Ruiz Iniesta, A., & Said-Hung, E. (2025). …”
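The notes above combine three ingredients: transfer learning from a binary hate-detection model, early stopping callbacks, and class weighting. Below is a minimal Keras sketch of how those pieces fit together; the tiny stand-in architecture, the five target categories, the patience value, and the random toy data are assumptions and do not come from the record.

import numpy as np
from tensorflow.keras import callbacks, layers, models

# Stand-in for the pre-trained binary hate-detection model; the real model is the saved
# .h5 mentioned in the record, and this tiny architecture is only an assumption.
inputs = layers.Input(shape=(128,), name="encoded_text")
shared_layer = layers.Dense(64, activation="relu", name="shared_representation")
shared = shared_layer(inputs)
binary_out = layers.Dense(1, activation="sigmoid", name="binary_head")(shared)
binary_model = models.Model(inputs, binary_out)   # pretend this was loaded from the .h5

# Transfer learning: freeze the shared representation and attach a new multi-class head
# for hate-expression types (five classes here is purely illustrative).
shared_layer.trainable = False
type_output = layers.Dense(5, activation="softmax", name="type_head")(shared)
type_model = models.Model(inputs, type_output)
type_model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                   metrics=["accuracy"])

# Random toy data; class weights counteract category imbalance and early stopping
# curbs overfitting, as the notes describe (the patience value is an assumption).
rng = np.random.default_rng(0)
X = rng.random((200, 128)).astype("float32")
y = rng.integers(0, 5, size=200)
class_weight = {c: 1.0 / max(int((y == c).sum()), 1) for c in range(5)}
early_stop = callbacks.EarlyStopping(monitor="val_loss", patience=2,
                                     restore_best_weights=True)
type_model.fit(X, y, validation_split=0.2, epochs=5,
               class_weight=class_weight, callbacks=[early_stop], verbose=0)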
20. Algoritmo de detección de odio en español (Algorithm for detection of hate speech in Spanish)
Published 2024. “…Training Process
Pre-training: batch size 16; 5 epochs; learning rate 2e-5 with 10% warmup steps; early stopping with patience=2.
Fine-tuning: batch size 128; 5 epochs; learning rate 2e-5 with 10% warmup steps; early stopping with patience=2. Custom metrics: recall for the non-hate class, precision for the hate class, weighted F1-score, AUC-PR, recall at precision=0.9 (non-hate), and precision at recall=0.9 (hate).
Evaluation Metrics: the model is evaluated using macro recall, precision, and F1-score; One-vs-Rest AUC; accuracy; per-class metrics; and a confusion matrix.
Requirements: the following Python packages are required (see requirements.txt for the full list): TensorFlow, Transformers, scikit-learn, pandas, datasets, matplotlib, seaborn.
Usage: the model expects input data with the following specifications.
1. Data Format: a CSV file or pandas DataFrame with a mandatory column named text (string) and an optional column named label (integer, 0 or 1) if available for evaluation.
2. Text Preprocessing: text is automatically converted to lowercase during processing; maximum length is 128 tokens (longer texts are truncated); special characters, URLs, and emojis must remain in the text (the tokenizer handles these).
3. Label Encoding: 0 = no hateful content (including neutral/positive content); 1 = hate speech.
The process of creating this algorithm is explained in the technical report located at: Blanco-Valencia, X., De Gregorio-Vicente, O., Ruiz Iniesta, A., & Said-Hung, E. (2025). …”
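The usage and training details above map fairly directly onto a Hugging Face Transformers setup. The sketch below shows that mapping (lowercasing, truncation to 128 tokens, learning rate 2e-5 with 10% warmup, early stopping with patience=2); the CSV file name and the Spanish BERT checkpoint are assumptions, since the record does not name the base model.

import pandas as pd
from tensorflow.keras.callbacks import EarlyStopping
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification, create_optimizer

# Input in the described format: a CSV with a mandatory "text" column and, for training,
# an integer "label" column (0 = no hate, 1 = hate). The file name is hypothetical.
df = pd.read_csv("tweets.csv")
texts = df["text"].str.lower().tolist()        # lowercasing, as the record specifies

# The Spanish BERT checkpoint is an assumption; the record does not name the base model.
checkpoint = "dccuchile/bert-base-spanish-wwm-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
encodings = tokenizer(texts, truncation=True, padding="max_length", max_length=128,
                      return_tensors="tf")

model = TFAutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

# Optimizer per the record: learning rate 2e-5 with 10% of the training steps as warmup.
batch_size, epochs = 16, 5
num_train_steps = max(len(texts) // batch_size, 1) * epochs
optimizer, _ = create_optimizer(init_lr=2e-5, num_train_steps=num_train_steps,
                                num_warmup_steps=int(0.1 * num_train_steps))
model.compile(optimizer=optimizer, metrics=["accuracy"])   # model's internal loss is used

model.fit(dict(encodings), df["label"].values, validation_split=0.1,
          batch_size=batch_size, epochs=epochs,
          callbacks=[EarlyStopping(monitor="val_loss", patience=2)])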