Search alternatives:
feature optimization » resource optimization, feature elimination, structure optimization
codings optimization » codon optimization, joint optimization, routing optimization
input feature » input features
binary input » binary depot
data codings » data recordings, data encoding, data codes
binary data » primary data, dietary data
4. Optimized Bayesian regularization-back propagation neural network using data-driven intrusion detection system in Internet of Things
Published 2025: “…Hence, Binary Black Widow Optimization Algorithm (BBWOA) is proposed in this manuscript to improve the BRBPNN classifier that detects intrusion precisely. …”
6. Algoritmo de clasificación de expresiones de odio por tipos en español (Algorithm for classifying hate expressions by type in Spanish)
Published 2024: “…
Model Architecture: the model is based on <code>pysentimiento/robertuito-base-uncased</code> with the following modifications:
- A dense classification layer added over the base model
- Uses input IDs and attention masks as inputs
- Produces a multi-class classification over 5 hate categories
Dataset: HATEMEDIA Dataset, a custom hate speech dataset with categorization by type:
- Labels: 5 hate type categories (0-4)
- Preprocessing: null values removed from text and labels; reindexing and relabeling (original labels adjusted by subtracting 1); exclusion of category 2 during training; conversion of category 5 to category 2
Training configuration:
- Batch size: 128
- Epochs: 5
- Learning rate: 2e-5 with 10% warmup steps
- Early stopping with patience=2
- Class weights: balanced to handle class imbalance
Custom metrics: recall for specific classes (focus on class 2); precision for specific classes (focus on class 3); weighted F1-score; AUC-PR; recall at precision=0.6 (class 3); precision at recall=0.6 (class 2)
Evaluation metrics: macro recall, precision, and F1-score; one-vs-rest AUC; accuracy; per-class metrics; confusion matrix; full classification report
Data preprocessing:
- Tokenization: maximum length of 128 tokens (truncation and padding)
- Label encoding: one-hot encoding for multi-class classification
- Data split: 80% training, 10% validation, 10% testing
Optimization:
- Optimizer: Adam with linear warmup scheduling
- Loss function: categorical cross-entropy (from_logits=True)
- Imbalance handling: class weights computed automatically
Requirements: TensorFlow, Transformers, scikit-learn, pandas, datasets, matplotlib, seaborn, numpy
Usage:
1. Data format: CSV file or pandas DataFrame; required column <code>text</code> (string type); label column (integer type, 0-4), optional and used for evaluation
2. Text preprocessing: automatic tokenization with a maximum length of 128 tokens; long texts are truncated; handling of special characters, URLs, and emojis included
3. Label encoding: the model classifies hate speech into 5 categories (0-4); <code>0</code>: Political hatred, expressions directed against individuals or groups based on political orientation.…”
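A minimal sketch of the fine-tuning setup this record describes, assuming TensorFlow/Keras with the Hugging Face transformers API; the file name, column names, and the plain Adam optimizer (the 10% linear warmup is omitted for brevity) are illustrative assumptions, not taken from the model card:

```python
# Illustrative sketch only: fine-tuning a 5-class classifier on top of
# pysentimiento/robertuito-base-uncased as described in the record above.
# File name "hate_dataset.csv", column names "text"/"label", and plain Adam
# (no warmup schedule) are assumptions made for this sketch.
import numpy as np
import pandas as pd
import tensorflow as tf
from sklearn.utils.class_weight import compute_class_weight
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

MODEL_NAME = "pysentimiento/robertuito-base-uncased"
NUM_CLASSES = 5   # hate categories 0-4
MAX_LEN = 128     # tokenization: truncation and padding to 128 tokens

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
# from_pt=True may be needed if the checkpoint only ships PyTorch weights.
model = TFAutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=NUM_CLASSES)

df = pd.read_csv("hate_dataset.csv").dropna(subset=["text", "label"])
enc = tokenizer(df["text"].tolist(), truncation=True, padding="max_length",
                max_length=MAX_LEN, return_tensors="np")
labels = tf.keras.utils.to_categorical(df["label"].to_numpy(), NUM_CLASSES)  # one-hot encoding

# Balanced class weights to handle class imbalance, as stated in the model card.
weights = compute_class_weight("balanced", classes=np.arange(NUM_CLASSES),
                               y=df["label"].to_numpy())

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=2e-5),
              loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),
              metrics=["accuracy"])

model.fit({"input_ids": enc["input_ids"], "attention_mask": enc["attention_mask"]},
          labels,
          batch_size=128, epochs=5, validation_split=0.1,
          class_weight=dict(enumerate(weights)),
          callbacks=[tf.keras.callbacks.EarlyStopping(patience=2, restore_best_weights=True)])
```

The custom operating-point metrics listed in the snippet (recall at precision=0.6, precision at recall=0.6) would be added as extra Keras metrics or computed from the predictions after training.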
7. Design and implementation of the Multiple Criteria Decision Making (MCDM) algorithm for predicting the severity of COVID-19.
Published 2021: “…EVAL1: The correlation between input features <i>x</i>∈<i>X</i> and output features <i>y</i>∈<i>Y</i>, <i>R</i>[<i>x,y</i>] or <i>R</i>[<i>y,x</i>]; EVAL2: The correlation between input features <i>x</i>∈<i>X</i> and labeled features <i>v</i>∈<i>L</i>, <i>R</i>[<i>x,v</i>] or <i>R</i>[<i>v,x</i>]; Subset: The optimal input feature subset. …”
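The EVAL1/EVAL2 criteria above are plain feature-target correlations; a small illustration of computing them, assuming Pearson correlation and a toy 0.3 threshold (both assumptions for demonstration, not values from the paper):

```python
# Illustration of the EVAL1/EVAL2 correlation criteria named in the snippet above.
# Pearson correlation and the 0.3 threshold are assumptions for demonstration only.
import numpy as np

def correlation_scores(X, y):
    """Absolute Pearson correlation R[x, y] between each column of X and the target y."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    num = Xc.T @ yc
    den = np.sqrt((Xc ** 2).sum(axis=0) * (yc ** 2).sum())
    return np.abs(num / den)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))               # input features x ∈ X
y = X[:, 0] * 2.0 + rng.normal(size=200)    # output feature y ∈ Y
v = (y > y.mean()).astype(float)            # labeled feature v ∈ L

eval1 = correlation_scores(X, y)   # EVAL1: R[x, y]
eval2 = correlation_scores(X, v)   # EVAL2: R[x, v]
subset = np.where((eval1 > 0.3) | (eval2 > 0.3))[0]  # candidate input feature subset
print(subset)
```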
9. <i>hi</i>PRS algorithm process flow.
Published 2023: “…<b>(A)</b> Input data is a list of genotype-level SNPs. <b>(B)</b> Focusing on the positive class only, the algorithm exploits FIM (<i>apriori</i> algorithm) to build a list of candidate interactions of any desired order, retaining those that have an empirical frequency above a given threshold <i>δ</i>. …”
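A toy sketch of the FIM step this caption describes, using the apriori implementation from mlxtend on binary SNP indicators for the positive class only; the SNP column names and the value of δ are made up for illustration:

```python
# Toy sketch of the frequent itemset mining (apriori) step described above,
# run on positive-class samples only. Column names, the delta value, and the
# choice of mlxtend are illustrative assumptions.
import pandas as pd
from mlxtend.frequent_patterns import apriori

# Binary genotype-level SNP indicators for positive-class samples (toy data).
positives = pd.DataFrame({
    "snp1_alt": [1, 1, 0, 1, 1],
    "snp2_alt": [1, 1, 1, 0, 1],
    "snp3_alt": [0, 1, 0, 0, 1],
}).astype(bool)

delta = 0.6  # empirical-frequency threshold δ (assumed value)

# Candidate interactions of any order whose empirical frequency exceeds δ.
candidates = apriori(positives, min_support=delta, use_colnames=True, max_len=3)
print(candidates)
```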
11. Table_1_Computational prediction of promotors in Agrobacterium tumefaciens strain C58 by using the machine learning technique.DOCX
Published 2023: “…The obtained features were optimized by using correlation and the mRMR-based algorithm. …”
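The snippet pairs a correlation filter with mRMR-style selection; a rough sketch of a greedy variant, assuming mutual information for relevance and absolute Pearson correlation for redundancy (the paper's exact scoring is not given in the snippet):

```python
# Rough sketch of greedy mRMR-style selection: maximize relevance to the label,
# penalize redundancy with already-selected features. The scoring choices
# (mutual information for relevance, Pearson correlation for redundancy) are assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif

def mrmr_select(X, y, k):
    relevance = mutual_info_classif(X, y, random_state=0)
    selected = [int(np.argmax(relevance))]          # start with the most relevant feature
    while len(selected) < k:
        best_j, best_score = None, -np.inf
        for j in range(X.shape[1]):
            if j in selected:
                continue
            # Redundancy: mean |Pearson correlation| with features already selected.
            redundancy = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1]) for s in selected])
            score = relevance[j] - redundancy
            if score > best_score:
                best_j, best_score = j, score
        selected.append(best_j)
    return selected

X, y = make_classification(n_samples=300, n_features=10, n_informative=4, random_state=0)
print(mrmr_select(X, y, k=5))   # indices of the selected feature subset
```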
15. Fortran & C++: design fractal-type optical diffractive element
Published 2022: “…(4) export geometry/optics raw data and figures for binary DOE devices. [Wolfram Mathematica code "square_triangle_DOE.nb"]: read the optimized binary DOE document (after Fortran & C++ code) to calculate its diffractive fields for comparison. …”
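For context, the far-field intensity of a binary DOE can be approximated with a single FFT; the sketch below is a generic Fraunhofer-approximation illustration and does not reproduce the method used by the Fortran/C++ or Mathematica code in this record:

```python
# Generic illustration only: far-field diffraction pattern of a binary phase DOE
# via the Fraunhofer (single-FFT) approximation. The toy grating pattern and grid
# size are assumptions; the record's actual propagation method is not specified.
import numpy as np

n = 256
yy, xx = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
binary_doe = (np.sin(8 * np.pi * xx) > 0).astype(float)   # toy binary (0/π) phase pattern
field_in = np.exp(1j * np.pi * binary_doe)                 # plane wave after the DOE
far_field = np.fft.fftshift(np.fft.fft2(field_in))         # far-field (Fraunhofer) approximation
intensity = np.abs(far_field) ** 2
print(intensity.max())
```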