Showing 61 - 80 results of 141 for search '(( binary based models optimization algorithm ) OR ( binary data code optimization algorithm ))*', query time: 0.63s
  1. 61

    Feature selection process. by Balraj Preet Kaur (20370832)

    Published 2024
    “…To optimize the parameters of the machine learning algorithms, hyperparameter optimization with a genetic algorithm is proposed, and to reduce the size of the feature set, feature selection is performed using a binary grey wolf optimization algorithm. …”
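    The abstract above pairs two techniques: a genetic algorithm for hyperparameter tuning and a binary grey wolf optimization (BGWO) algorithm for feature selection. As a rough sketch of the latter idea only (not the authors' implementation), a wrapper-style BGWO might look like the following; the dataset, classifier, transfer function, and fitness weights are all assumptions for illustration.

        # Minimal binary grey wolf optimizer for feature selection (illustrative sketch).
        # The KNN wrapper, sigmoid transfer function, and fitness weights are assumptions.
        import numpy as np
        from sklearn.datasets import load_breast_cancer
        from sklearn.model_selection import cross_val_score
        from sklearn.neighbors import KNeighborsClassifier

        rng = np.random.default_rng(0)
        X, y = load_breast_cancer(return_X_y=True)   # stand-in dataset
        n_features = X.shape[1]

        def fitness(mask):
            # Trade off classification error against the size of the feature subset.
            if mask.sum() == 0:
                return 1.0
            acc = cross_val_score(KNeighborsClassifier(), X[:, mask == 1], y, cv=3).mean()
            return 0.99 * (1 - acc) + 0.01 * mask.sum() / n_features

        n_wolves, n_iter = 10, 20
        pos = rng.integers(0, 2, size=(n_wolves, n_features))        # binary feature masks
        for t in range(n_iter):
            scores = np.array([fitness(w) for w in pos])
            alpha, beta, delta = pos[np.argsort(scores)[:3]]         # three best wolves lead
            a = 2 - 2 * t / n_iter                                   # exploration decays over time
            new_pos = pos.copy()
            for i in range(n_wolves):
                target = np.zeros(n_features)
                for leader in (alpha, beta, delta):
                    A = a * (2 * rng.random(n_features) - 1)
                    C = 2 * rng.random(n_features)
                    target += leader - A * np.abs(C * leader - pos[i])
                prob = 1 / (1 + np.exp(-target / 3))                 # sigmoid transfer to [0, 1]
                new_pos[i] = (rng.random(n_features) < prob).astype(int)
            pos = new_pos

        best = pos[np.argmin([fitness(w) for w in pos])]
        print("selected feature indices:", np.flatnonzero(best))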
  2. 62

    Results of KNN. by Balraj Preet Kaur (20370832)

    Published 2024
    “…To optimize the parameters of the machine learning algorithms, hyperparameter optimization with a genetic algorithm is proposed, and to reduce the size of the feature set, feature selection is performed using a binary grey wolf optimization algorithm. …”
  3. 63

    After upsampling. by Balraj Preet Kaur (20370832)

    Published 2024
    “…To optimize the parameters of the machine learning algorithms, hyperparameter optimization with a genetic algorithm is proposed, and to reduce the size of the feature set, feature selection is performed using a binary grey wolf optimization algorithm. …”
  4. 64

    Results of Extra tree. by Balraj Preet Kaur (20370832)

    Published 2024
    “…To optimize the parameters of the machine learning algorithms, hyperparameter optimization with a genetic algorithm is proposed, and to reduce the size of the feature set, feature selection is performed using a binary grey wolf optimization algorithm. …”
  5. 65

    Gradient boosting classifier results. by Balraj Preet Kaur (20370832)

    Published 2024
    “…To optimize the parameters of the machine learning algorithms, hyperparameter optimization with a genetic algorithm is proposed, and to reduce the size of the feature set, feature selection is performed using a binary grey wolf optimization algorithm. …”
  6. 66

    Algoritmo de clasificación de expresiones de odio por tipos en español (Algorithm for classifying hate expressions by type in Spanish) by Daniel Pérez Palau (11097348)

    Published 2024
    “…Model Architecture: the model is based on pysentimiento/robertuito-base-uncased with the following modifications:
    - A dense classification layer was added over the base model
    - Uses input IDs and attention masks as inputs
    - Generates a multi-class classification with 5 hate categories
    Dataset: HATEMEDIA Dataset, a custom hate speech dataset with categorization by type:
    - Labels: 5 hate type categories (0-4)
    - Preprocessing: null values removed from text and labels; reindexing and relabeling (original labels adjusted by subtracting 1); exclusion of category 2 during training; conversion of category 5 to category 2
    Training configuration:
    - Batch size: 128
    - Epochs: 5
    - Learning rate: 2e-5 with 10% warmup steps
    - Early stopping with patience=2
    - Class weights: balanced to handle class imbalance
    Custom metrics: recall for specific classes (focus on class 2); precision for specific classes (focus on class 3); weighted F1-score; AUC-PR; recall at precision=0.6 (class 3); precision at recall=0.6 (class 2)
    Evaluation metrics: macro recall, precision, and F1-score; one-vs-rest AUC; accuracy; per-class metrics; confusion matrix; full classification report
    Data preprocessing:
    - Tokenization: maximum length of 128 tokens (truncation and padding)
    - Label encoding: one-hot encoding for multi-class classification
    - Data split: 80% training, 10% validation, 10% testing
    Optimization:
    - Optimizer: Adam with linear warmup scheduling
    - Loss function: categorical crossentropy (from_logits=True)
    - Imbalance handling: class weights computed automatically
    Requirements: TensorFlow, Transformers, scikit-learn, pandas, datasets, matplotlib, seaborn, numpy
    Usage:
    1. Data format: CSV file or pandas DataFrame; required column text (string type); label column (integer type, 0-4), optional and used for evaluation
    2. Text preprocessing: automatic tokenization with a maximum length of 128 tokens; long texts are automatically truncated; handling of special characters, URLs, and emojis included
    3. Label encoding: the model classifies hate speech into 5 categories (0-4); 0: Political hatred, expressions directed against individuals or groups based on political orientation.…”
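    The card above specifies the pieces concretely (robertuito base model, a dense head over it, 128-token inputs, 5 classes, categorical crossentropy from logits). A minimal sketch of that setup is shown below; it simplifies the schedule to plain Adam (no warmup) and omits early stopping and class weights, so it illustrates the described architecture rather than reproducing the published training script.

        # Sketch of the described setup: robertuito encoder + dense 5-way head,
        # 128-token inputs, categorical crossentropy from logits. Simplified
        # (plain Adam, no warmup schedule, no early stopping or class weights).
        import tensorflow as tf
        from transformers import AutoTokenizer, TFAutoModel

        MODEL_NAME = "pysentimiento/robertuito-base-uncased"
        tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
        # If only PyTorch weights are published, add from_pt=True (requires torch).
        base = TFAutoModel.from_pretrained(MODEL_NAME)

        input_ids = tf.keras.Input(shape=(128,), dtype=tf.int32, name="input_ids")
        attention_mask = tf.keras.Input(shape=(128,), dtype=tf.int32, name="attention_mask")
        cls_vector = base(input_ids, attention_mask=attention_mask).last_hidden_state[:, 0, :]
        logits = tf.keras.layers.Dense(5, name="hate_type")(cls_vector)   # 5 hate categories (0-4)

        model = tf.keras.Model([input_ids, attention_mask], logits)
        model.compile(
            optimizer=tf.keras.optimizers.Adam(learning_rate=2e-5),
            loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),
        )

        # Tokenize exactly as the card describes: truncate/pad to 128 tokens.
        enc = tokenizer(["texto de ejemplo"], truncation=True, padding="max_length",
                        max_length=128, return_tensors="tf")
        print(model([enc["input_ids"], enc["attention_mask"]]).shape)   # (1, 5) logits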
  7. 67

    hiPRS algorithm process flow. by Michela C. Massi (14599915)

    Published 2023
    “…From this dataset we can compute the MI between each interaction and the outcome and (D) obtain a ranked list (I_δ) based on this metric. (E) Starting from the interaction at the top of I_δ, hiPRS constructs I_K, selecting K (where K is user-specified) terms through the greedy optimization of the ratio between MI (relevance) and a suitable measure of similarity for interactions (redundancy) (cf. …”
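    The greedy step described in (E), selecting K terms by trading MI-based relevance against similarity-based redundancy, can be sketched as follows. This illustrates the general relevance/redundancy ratio idea, not the authors' hiPRS code; the similarity function is a placeholder the caller supplies.

        # Greedy relevance/redundancy selection in the spirit of step (E).
        # mi maps each interaction term to its MI with the outcome (relevance);
        # similarity(a, b) is an assumed caller-supplied redundancy measure in [0, 1].
        def greedy_select(candidates, mi, similarity, K):
            selected = [max(candidates, key=lambda t: mi[t])]    # top of the ranked list I_delta
            remaining = [t for t in candidates if t != selected[0]]
            while len(selected) < K and remaining:
                def score(t):
                    redundancy = sum(similarity(t, s) for s in selected) / len(selected)
                    return mi[t] / redundancy if redundancy > 0 else float("inf")
                best = max(remaining, key=score)
                selected.append(best)
                remaining.remove(best)
            return selected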
  8. 68

    IRBMO vs. meta-heuristic algorithms boxplot. by Chenyi Zhu (9383370)

    Published 2025
    “…In order to comprehensively verify the performance of IRBMO, this paper designs a series of experiments to compare it with nine mainstream binary optimization algorithms. The experiments are based on 12 medical datasets, and the results show that IRBMO achieves optimal overall performance in key metrics such as fitness value, classification accuracy and specificity. …”
  9. 69

    IRBMO vs. feature selection algorithm boxplot. by Chenyi Zhu (9383370)

    Published 2025
    “…In order to comprehensively verify the performance of IRBMO, this paper designs a series of experiments to compare it with nine mainstream binary optimization algorithms. The experiments are based on 12 medical datasets, and the results show that IRBMO achieves optimal overall performance in key metrics such as fitness value, classification accuracy and specificity. …”
  10. 70

    Flowchart scheme of the ML-based model. by Noshaba Qasmi (20405009)

    Published 2024
    “…I) Testing data consisting of 20% of the entire dataset. J) Hyperparameter optimization (tuning). K) Algorithm selection from all models. …”
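    The caption's steps I-K (an 80/20 hold-out split, hyperparameter tuning, and picking the best algorithm among the tuned models) follow a standard pattern; a generic sketch is below, with the candidate models and parameter grids chosen purely for illustration, not taken from the authors' pipeline.

        # Generic 80/20 split, per-model hyperparameter tuning, then algorithm selection.
        # Candidate models and parameter grids are illustrative, not the authors' choices.
        from sklearn.datasets import load_breast_cancer
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import GridSearchCV, train_test_split
        from sklearn.svm import SVC

        X, y = load_breast_cancer(return_X_y=True)   # stand-in dataset
        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

        searches = {
            "random_forest": GridSearchCV(RandomForestClassifier(random_state=0),
                                          {"n_estimators": [100, 300], "max_depth": [None, 10]}, cv=5),
            "svm": GridSearchCV(SVC(), {"C": [0.1, 1, 10]}, cv=5),
        }
        for name, search in searches.items():
            search.fit(X_train, y_train)             # hyperparameter tuning per algorithm

        best = max(searches, key=lambda n: searches[n].best_score_)   # algorithm selection
        print(best, searches[best].score(X_test, y_test))             # final hold-out check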
  11. 71

    A* Path-Finding Algorithm to Determine Cell Connections by Max Weng (22327159)

    Published 2025
    “…Future work aims to generalize this algorithm for broader biological applications by training additional Cellpose models and adapting the A* framework.…”
  12. 72

    SHAP bar plot. by Meng Cao (105914)

    Published 2025
  13. 73
  14. 74
  15. 75
  16. 76

    SHAP summary plot. by Meng Cao (105914)

    Published 2025
  17. 77

    Plan frame of the house. by Ling Zhao (111365)

    Published 2025
    “…To solve the problems of insufficient global optimization ability and easy loss of population diversity in building interior layout design, this study proposes a novel layout optimization model integrating an interactive genetic algorithm and an improved differential evolution algorithm to improve global optimization ability and maintain population diversity in building layout design. …”
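    Of the two components named in the abstract, the differential evolution half is the easier one to show compactly. Below is a textbook DE/rand/1/bin loop on a toy objective; the paper's improved variant and the interactive genetic algorithm component are not reproduced here.

        # Textbook DE/rand/1/bin on a toy objective, illustrating the differential
        # evolution component only; the paper's improvements are not reproduced.
        import numpy as np

        def differential_evolution(objective, bounds, pop_size=20, F=0.8, CR=0.9, iters=100, seed=0):
            rng = np.random.default_rng(seed)
            lo, hi = np.array(bounds, dtype=float).T
            dim = len(bounds)
            pop = rng.uniform(lo, hi, size=(pop_size, dim))
            fit = np.array([objective(x) for x in pop])
            for _ in range(iters):
                for i in range(pop_size):
                    idx = [j for j in range(pop_size) if j != i]
                    a, b, c = pop[rng.choice(idx, 3, replace=False)]
                    mutant = np.clip(a + F * (b - c), lo, hi)      # mutation
                    cross = rng.random(dim) < CR                   # binomial crossover
                    cross[rng.integers(dim)] = True                # keep at least one mutant gene
                    trial = np.where(cross, mutant, pop[i])
                    f_trial = objective(trial)
                    if f_trial < fit[i]:                           # greedy one-to-one selection
                        pop[i], fit[i] = trial, f_trial
            return pop[np.argmin(fit)], fit.min()

        # Toy usage: minimise the 2-D sphere function.
        best_x, best_f = differential_evolution(lambda x: float(np.sum(x ** 2)), [(-5, 5), (-5, 5)])
        print(best_x, best_f)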
  18. 78

    Ablation test results. by Ling Zhao (111365)

    Published 2025
    “…To solve the problems of insufficient global optimization ability and easy loss of population diversity in building interior layout design, this study proposes a novel layout optimization model integrating an interactive genetic algorithm and an improved differential evolution algorithm to improve global optimization ability and maintain population diversity in building layout design. …”
  19. 79

    Hyperparameter selection test. by Ling Zhao (111365)

    Published 2025
    “…To solve the problems of insufficient global optimization ability and easy loss of population diversity in building interior layout design, this study proposes a novel layout optimization model integrating an interactive genetic algorithm and an improved differential evolution algorithm to improve global optimization ability and maintain population diversity in building layout design. …”
  20. 80

    Multiple index test results of different methods. by Ling Zhao (111365)

    Published 2025
    “…To solve the problems of insufficient global optimization ability and easy loss of population diversity in building interior layout design, this study proposes a novel layout optimization model integrating an interactive genetic algorithm and an improved differential evolution algorithm to improve global optimization ability and maintain population diversity in building layout design. …”