Showing 1 - 5 results of 5 for search '(( binary also led optimization algorithm ) OR ( history data encoding optimization algorithm ))', query time: 0.40s
  1.
  2.

    Predictive factors affecting hepatitis patients survival results via application of the machine learning methods by Xiaohua Li (311008)

    Published 2025
    “…Machine learning models help predict hepatitis risks using patient data, symptoms, and history. This study used Decision Tree Classification (DTC) and Extreme Gradient Boosting Classification (XGBC) with three optimizers: Rhizotomy Optimization Algorithm (ROA), Gold Rush Optimizer (GRO), and Motion-Encoded Electric Charged Particles Optimization Algorithm (MEPO) to enhance accuracy. …”
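    The abstract pairs tree-based classifiers with metaheuristic hyperparameter optimizers. The cited ROA, GRO, and MEPO optimizers are not reproduced here; the sketch below only illustrates the general pattern of exposing a cross-validated XGBoost classifier as an objective function that a search procedure maximizes, with a plain random search standing in for the metaheuristic. The data and parameter ranges are placeholders, not the study's.

```python
# Sketch of hyperparameter search over an XGBoost classifier; random search
# stands in for the metaheuristics named in the abstract (ROA, GRO, MEPO).
import numpy as np
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

def objective(params, X, y):
    """Cross-validated accuracy for one candidate (n_estimators, max_depth, learning_rate)."""
    model = XGBClassifier(
        n_estimators=int(params[0]),
        max_depth=int(params[1]),
        learning_rate=float(params[2]),
        eval_metric="logloss",
    )
    return cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()

def random_search(X, y, n_iter=30, seed=0):
    """Stand-in optimizer: sample candidates at random and keep the best scorer."""
    rng = np.random.default_rng(seed)
    best_params, best_score = None, -np.inf
    for _ in range(n_iter):
        candidate = [rng.integers(50, 500), rng.integers(2, 10), rng.uniform(0.01, 0.3)]
        score = objective(candidate, X, y)
        if score > best_score:
            best_params, best_score = candidate, score
    return best_params, best_score
```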
  3.

    Flowchart of this study. by Aref Andishgar (18155608)

    Published 2024
    “…A total of 160 variables were included in the machine learning (ML) models, and feature scaling and one-hot encoding were employed for data processing. Ten supervised ML algorithms were utilized, namely logistic regression (LR), support vector machine (SVM), random forest (RF), Gaussian naive Bayes (GNB), linear discriminant analysis (LDA), k-nearest neighbors (KNN), gradient boosting machine (GBM), extreme gradient boosting (XGB), CatBoost (CAT), and light gradient boosting machine (LGBM). …”
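    As a rough illustration of the preprocessing pipeline described in this record (feature scaling plus one-hot encoding feeding several supervised classifiers), a scikit-learn sketch might look like the following; the column names and the reduced model list are placeholders, not the study's 160 variables or all ten algorithms.

```python
# Illustrative pipeline: scale numeric features, one-hot encode categoricals,
# then cross-validate a few of the listed model families on the same inputs.
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

numeric_cols = ["age", "bmi"]          # placeholder numeric features
categorical_cols = ["sex", "smoking"]  # placeholder categorical features

preprocess = ColumnTransformer([
    ("scale", StandardScaler(), numeric_cols),
    ("onehot", OneHotEncoder(handle_unknown="ignore"), categorical_cols),
])

models = {
    "LR": LogisticRegression(max_iter=1000),
    "RF": RandomForestClassifier(),
    "GBM": GradientBoostingClassifier(),
}

def compare(X, y):
    """Mean 5-fold CV accuracy of each model on identically preprocessed data."""
    return {
        name: cross_val_score(Pipeline([("prep", preprocess), ("clf", clf)]), X, y, cv=5).mean()
        for name, clf in models.items()
    }
```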
  4.

    Algoritmo de clasificación de expresiones de odio por tipos en español (Algorithm for classifying hate expressions by type in Spanish) by Daniel Pérez Palau (11097348)

    Published 2024
    “…Model Architecture: the model is based on pysentimiento/robertuito-base-uncased with the following modifications: a dense classification layer added over the base model; input IDs and attention masks as inputs; a multi-class output over 5 hate categories.
    Dataset: the HATEMEDIA Dataset, a custom hate speech dataset categorized by type. Labels: 5 hate type categories (0-4). Preprocessing: null values removed from text and labels; reindexing and relabeling (original labels adjusted by subtracting 1); exclusion of category 2 during training; conversion of category 5 to category 2.
    Training Process. Configuration: batch size 128; 5 epochs; learning rate 2e-5 with 10% warmup steps; early stopping with patience=2; class weights balanced to handle class imbalance. Custom metrics: recall for specific classes (focus on class 2); precision for specific classes (focus on class 3); weighted F1-score; AUC-PR; recall at precision=0.6 (class 3); precision at recall=0.6 (class 2).
    Evaluation Metrics: macro recall, precision, and F1-score; one-vs-rest AUC; accuracy; per-class metrics; confusion matrix; full classification report.
    Technical Features. Data preprocessing: tokenization to a maximum length of 128 tokens (truncation and padding); one-hot encoding of labels for multi-class classification; data split of 80% training, 10% validation, 10% testing. Optimization: Adam optimizer with linear warmup scheduling; categorical crossentropy loss (from_logits=True); class weights computed automatically to handle imbalance.
    Requirements: TensorFlow, Transformers, scikit-learn, pandas, datasets, matplotlib, seaborn, numpy.
    Usage: (1) Data format: CSV file or pandas DataFrame, with a required text column (string type) and an optional label column (integer type, 0-4) for evaluation. (2) Text preprocessing: automatic tokenization to a maximum length of 128 tokens; long texts are truncated automatically; handling of special characters, URLs, and emojis is included. (3) Label encoding: the model classifies hate speech into 5 categories (0-4); 0: political hatred, expressions directed against individuals or groups based on political orientation.…”
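    The configuration above maps naturally onto a TensorFlow/Transformers model definition. The sketch below is a minimal, non-authoritative reconstruction of the described setup (128-token inputs, a dense 5-way head on the base encoder, Adam at 2e-5, categorical crossentropy with from_logits=True); it assumes the checkpoint loads into TensorFlow (from_pt=True if only PyTorch weights are published) and omits the warmup schedule, class weights, and early stopping callback for brevity.

```python
# Minimal sketch of the described setup; hyperparameters follow the record above,
# everything else (weight conversion, data handling) is an assumption.
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModel

MODEL_NAME = "pysentimiento/robertuito-base-uncased"
MAX_LEN = 128       # maximum sequence length (truncation and padding)
NUM_CLASSES = 5     # hate type categories 0-4

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
base = TFAutoModel.from_pretrained(MODEL_NAME, from_pt=True)  # assumes PyTorch-only weights

# Two inputs, as described: token IDs and attention mask.
input_ids = tf.keras.layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name="input_ids")
attention_mask = tf.keras.layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name="attention_mask")

# Take the first-token hidden state and add a dense classification head (logits).
hidden = base(input_ids, attention_mask=attention_mask).last_hidden_state[:, 0, :]
logits = tf.keras.layers.Dense(NUM_CLASSES, name="classifier")(hidden)

model = tf.keras.Model(inputs=[input_ids, attention_mask], outputs=logits)
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=2e-5),  # warmup schedule omitted
    loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),
)

# Tokenize with truncation/padding to MAX_LEN, one-hot encode labels (tf.one_hot),
# then call model.fit(..., class_weight=..., callbacks=[EarlyStopping(patience=2)]).
```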
  5.

    Phylogenetic history of genes 5 and 11 in three related genomes. by Geleta Dugassa Barka (9099456)

    Published 2020
    “…The optimal tree with the sum of branch length = 2.98805978 is shown. …”
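    For context on the quoted caption, the reported value is the total of all branch lengths in the inferred tree. A small Biopython sketch of computing that total from a Newick file is shown below; the file name is hypothetical, not the study's data.

```python
# Hedged sketch: "genes_5_11.nwk" is a placeholder file name, not from this record.
from Bio import Phylo

tree = Phylo.read("genes_5_11.nwk", "newick")  # parse a Newick-format tree
total = tree.total_branch_length()             # sum of all branch lengths
print(f"Sum of branch lengths = {total:.8f}")
```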