Showing results 21 - 39 of 39 for search '(( binary mask based optimization algorithm ) OR ( binary key features optimization algorithm ))', query time: 1.08s
  1. 21

    Friedman average rank sum test results. by Chenyi Zhu (9383370)

    Published 2025
    “…To adapt to the feature selection problem, we convert the continuous optimization algorithm to binary form via transfer function, which further enhances the applicability of the algorithm. …”
  2. 22

    IRBMO vs. variant comparison adaptation data. by Chenyi Zhu (9383370)

    Published 2025
    “…To adapt to the feature selection problem, we convert the continuous optimization algorithm to binary form via transfer function, which further enhances the applicability of the algorithm. …”
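Both snippets above describe the same binarization step: a continuous optimizer's position vector is mapped through a transfer function to a {0,1} feature mask. A minimal sketch, assuming the common S-shaped (sigmoid) transfer function; the function and parameter names are illustrative, not taken from the cited works:

```python
import numpy as np

def sigmoid_transfer(position, rng=None):
    # S-shaped (sigmoid) transfer function: each continuous component
    # becomes the probability that the corresponding feature is selected.
    rng = np.random.default_rng(rng)
    probs = 1.0 / (1.0 + np.exp(-np.asarray(position, dtype=float)))
    # Sample the binary mask: 1 = feature kept, 0 = feature dropped.
    return (rng.random(probs.shape) < probs).astype(int)

# Strongly negative components are almost never selected,
# strongly positive ones almost always; near zero it is a coin flip.
mask = sigmoid_transfer([-10.0, 0.0, 10.0], rng=0)
```

In a binary variant of a continuous metaheuristic, this mapping is typically applied to each candidate's position at every iteration, and the resulting mask is what the feature-selection fitness function evaluates.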
  3. 23

    Generalized Tensor Decomposition With Features on Multiple Modes by Jiaxin Hu (1327875)

    Published 2021
    “…An efficient alternating optimization algorithm with provable spectral initialization is further developed. …”
  4. 24

    Design and implementation of the Multiple Criteria Decision Making (MCDM) algorithm for predicting the severity of COVID-19. by Jiaqing Luo (10975030)

    Published 2021
    “…For key features: First, select the first feature that is most relevant to the severity; Second, select the remaining key features in turn by ranking. …”
  5. 25

    Table_1_An efficient decision support system for leukemia identification utilizing nature-inspired deep feature optimization.pdf by Muhammad Awais (263096)

    Published 2024
    “…To optimize feature selection, a customized binary Grey Wolf Algorithm is utilized, achieving an impressive 80% reduction in feature size while preserving key discriminative information. …”
  6. 26

    Table_1_bSRWPSO-FKNN: A boosted PSO with fuzzy K-nearest neighbor classifier for predicting atopic dermatitis disease.docx by Yupeng Li (507508)

    Published 2023
    “…The core of bSRWPSO-FKNN is to optimize the classification performance of FKNN through binary SRWPSO.…”
  7. 27

    GSE96058 information. by Sepideh Zununi Vahed (9861298)

    Published 2024
    “…Initially, the data was organized and underwent hold-out cross-validation, data cleaning, and normalization. Subsequently, feature selection was conducted using ANOVA and binary Particle Swarm Optimization (PSO). …”
  8. 28

    The performance of classifiers. by Sepideh Zununi Vahed (9861298)

    Published 2024
    “…Initially, the data was organized and underwent hold-out cross-validation, data cleaning, and normalization. Subsequently, feature selection was conducted using ANOVA and binary Particle Swarm Optimization (PSO). …”
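The ANOVA step mentioned in these two entries ranks features univariately before the binary PSO search. A minimal numpy sketch of the one-way ANOVA F-score (the statistic behind, e.g., scikit-learn's `f_classif`); the toy data is made up for illustration:

```python
import numpy as np

def anova_f_scores(X, y):
    # One-way ANOVA F-statistic per feature: ratio of between-class
    # to within-class variance. Higher F = more class-separating feature.
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    groups = [X[y == c] for c in np.unique(y)]
    grand_mean = X.mean(axis=0)
    k, n = len(groups), len(X)
    ss_between = sum(len(g) * (g.mean(axis=0) - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean(axis=0)) ** 2).sum(axis=0) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Toy data: feature 0 separates the two classes, feature 1 is noise.
X = np.array([[0.0, 5.0], [0.1, 1.0], [1.0, 4.0], [1.1, 2.0]])
y = np.array([0, 0, 1, 1])
scores = anova_f_scores(X, y)
```

Features are then kept by thresholding or top-k ranking on these scores before the wrapper search (here, binary PSO) refines the subset.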
  9. 29

    Flowchart scheme of the ML-based model. by Noshaba Qasmi (20405009)

    Published 2024
    “…I) Testing data consisting of 20% of the entire dataset. J) Hyperparameter tuning and optimization. K) Algorithm selection from all models. …”
  10. 30
  11. 31

    Solubility Prediction of Different Forms of Pharmaceuticals in Single and Mixed Solvents Using Symmetric Electrolyte Nonrandom Two-Liquid Segment Activity Coefficient Model by Getachew S. Molla (6416744)

    Published 2019
    “…The methodology incorporates key features of the symmetric eNRTL-SAC model structure to reduce the number of parameters and uses a hybrid of global search algorithms for parameter estimation. …”
  12. 32
  13. 33

    Algoritmo de clasificación de expresiones de odio por tipos en español (Algorithm for classifying hate expressions by type in Spanish) by Daniel Pérez Palau (11097348)

    Published 2024
    “…

    Model Architecture
    The model is based on pysentimiento/robertuito-base-uncased with the following modifications:
    - A dense classification layer was added over the base model
    - Uses input IDs and attention masks as inputs
    - Generates a multi-class classification with 5 hate categories

    Dataset
    HATEMEDIA Dataset: custom hate speech dataset with categorization by type:
    - Labels: 5 hate type categories (0-4)
    - Preprocessing: null values removed from text and labels; reindexing and relabeling (original labels are adjusted by subtracting 1); exclusion of category 2 during training; conversion of category 5 to category 2

    Training Process
    Configuration:
    - Batch size: 128
    - Epochs: 5
    - Learning rate: 2e-5 with 10% warmup steps
    - Early stopping with patience=2
    - Class weights: balanced to handle class imbalance

    Custom metrics:
    - Recall for specific classes (focus on class 2)
    - Precision for specific classes (focus on class 3)
    - F1-score (weighted)
    - AUC-PR
    - Recall at precision=0.6 (class 3)
    - Precision at recall=0.6 (class 2)

    Evaluation Metrics
    The model is evaluated using macro recall, precision, and F1-score; one-vs-rest AUC; accuracy; per-class metrics; a confusion matrix; and a full classification report.

    Technical Features
    Data preprocessing:
    - Tokenization: maximum length of 128 tokens (truncation and padding)
    - Label encoding: one-hot encoding for multi-class classification
    - Data split: 80% training, 10% validation, 10% testing

    Optimization:
    - Optimizer: Adam with linear warmup scheduling
    - Loss function: categorical crossentropy (from_logits=True)
    - Imbalance handling: class weights computed automatically

    Requirements
    The following Python packages are required: TensorFlow, Transformers, scikit-learn, pandas, datasets, matplotlib, seaborn, numpy.

    Usage
    1. Data format: CSV file or pandas DataFrame; a required column named `text` (string type); a label column (integer type, 0-4), optional and used only for evaluation
    2. Text preprocessing: automatic tokenization with a maximum length of 128 tokens; long texts are automatically truncated; handling of special characters, URLs, and emojis is included
    3. Label encoding: the model classifies hate speech into 5 categories (0-4); 0: political hatred, expressions directed against individuals or groups based on political orientation.…”
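The "balanced" class-weight setting described in this entry is presumably the standard heuristic weight = n_samples / (n_classes × class_count), as implemented by scikit-learn's `compute_class_weight`; a minimal numpy sketch, with illustrative names and made-up counts:

```python
import numpy as np

def balanced_class_weights(labels):
    # "Balanced" heuristic: weight_c = n_samples / (n_classes * count_c),
    # so rarer classes contribute proportionally more to the weighted loss.
    labels = np.asarray(labels)
    classes, counts = np.unique(labels, return_counts=True)
    weights = len(labels) / (len(classes) * counts)
    return dict(zip(classes.tolist(), weights.tolist()))

# 8 samples of class 0 vs. 2 of class 1: class 1 is weighted 4x heavier.
weights = balanced_class_weights([0] * 8 + [1] * 2)
```

The resulting per-class weights are then passed to the loss (e.g. as `class_weight` in Keras `fit`) so that minority hate categories are not drowned out during training.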
  14. 34
  15. 35

    Steps in the extraction of 14 coordinates from the CT slices for the curved MPR. by Linus Woitke (22783534)

    Published 2025
    “…Protruding paths are then eliminated using graph-based optimization algorithms, as demonstrated in f). …”
  16. 36
  17. 37

    Table 1_Heavy metal biomarkers and their impact on hearing loss risk: a machine learning framework analysis.docx by Ali Nabavi (21097424)

    Published 2025
    “…Multiple machine learning algorithms, including Random Forest, XGBoost, Gradient Boosting, Logistic Regression, CatBoost, and MLP, were optimized and evaluated. …”
  18. 38

    Data_Sheet_1_Alzheimer’s Disease Diagnosis and Biomarker Analysis Using Resting-State Functional MRI Functional Brain Network With Multi-Measures Features and Hippocampal Subfield... by Uttam Khatri (12689072)

    Published 2022
    “…Finally, we implemented and compared the different feature selection algorithms to integrate the structural features, brain networks, and voxel features to optimize the diagnostic identifications of AD using support vector machine (SVM) classifiers. …”
  19. 39

    Machine Learning-Ready Dataset for Cytotoxicity Prediction of Metal Oxide Nanoparticles by Soham Savarkar (21811825)

    Published 2025
    “…Applications and Model Compatibility: The dataset is optimized for use in supervised learning workflows and has been tested with algorithms such as Gradient Boosting Machines (GBM), Support Vector Machines (SVM-RBF), Random Forests, and Principal Component Analysis (PCA) for feature reduction.…”