Search alternatives:
process classification » protein classification, proposed classification, forest classification
based optimization » whale optimization
image process » damage process, image processing, simple process
binary 1 » binary _
1 based » _ based
122. Identification and quantitation of clinically relevant microbes in patient samples: Comparison of three k-mer based classifiers for speed, accuracy, and sensitivity
Published 2019. “…We tested the accuracy, sensitivity, and resource requirements of three top metagenomic taxonomic classifiers that use fast k-mer based algorithms: Centrifuge, CLARK, and KrakenUniq. …”
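Centrifuge, CLARK, and KrakenUniq are production tools; purely as an illustration of the underlying idea, here is a minimal, hypothetical k-mer lookup classifier in Python. The reference sequences, taxa, and the first-taxon-wins tie-breaking are invented for the example; real classifiers use large exact-k-mer databases with lowest-common-ancestor resolution and much longer k.

```python
from collections import Counter

def kmers(seq, k=31):
    """Yield all overlapping k-mers of a DNA sequence."""
    for i in range(len(seq) - k + 1):
        yield seq[i:i + k]

def build_index(references, k=31):
    """Map each k-mer to the taxon of the reference it came from.
    (Real tools resolve k-mers shared by several taxa to their lowest
    common ancestor; this toy index just keeps the first taxon seen.)"""
    index = {}
    for taxon, seq in references.items():
        for km in kmers(seq, k):
            index.setdefault(km, taxon)
    return index

def classify(read, index, k=31):
    """Assign a read to the taxon that matches most of its k-mers."""
    hits = Counter(index[km] for km in kmers(read, k) if km in index)
    return hits.most_common(1)[0][0] if hits else "unclassified"

# Toy example with two made-up 'reference genomes' (k shortened to 5).
refs = {"E. coli": "ATGCGTACGTTAGCATCGGA", "S. aureus": "TTGACCATGGCTAACGGTTA"}
idx = build_index(refs, k=5)
print(classify("GTACGTTAGCATC", idx, k=5))  # -> E. coli
```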
124. Analysis and design of algorithms for the manufacturing process of integrated circuits
Published 2023. “…The (approximate) solution proposals of state-of-the-art methods include rule-based approaches, genetic algorithms, and reinforcement learning. …”
127. Data_Sheet_1_Physics-Inspired Optimization for Quadratic Unconstrained Problems Using a Digital Annealer.pdf
Published 2019. “…The Fujitsu Digital Annealer is designed to solve fully connected quadratic unconstrained binary optimization (QUBO) problems. It is implemented on application-specific CMOS hardware and currently solves problems of up to 1,024 variables. …”
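As a loose illustration of what a QUBO instance looks like and how a Monte Carlo style solver attacks it, the sketch below runs single-bit-flip simulated annealing over a tiny binary vector. It is not the Digital Annealer's algorithm; the matrix, cooling schedule, and step counts are made up for the example.

```python
import math
import random

def qubo_energy(x, Q):
    """Energy of binary vector x under QUBO matrix Q: E(x) = x^T Q x."""
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def anneal(Q, steps=20000, t_start=2.0, t_end=0.01, seed=0):
    """Single-bit-flip simulated annealing over binary vectors (a toy
    stand-in for the hardware's parallel-trial Monte Carlo search)."""
    rng = random.Random(seed)
    n = len(Q)
    x = [rng.randint(0, 1) for _ in range(n)]
    energy = qubo_energy(x, Q)
    best, best_e = x[:], energy
    for step in range(steps):
        t = t_start * (t_end / t_start) ** (step / steps)  # geometric cooling
        i = rng.randrange(n)
        x[i] ^= 1                       # propose flipping one bit
        new_e = qubo_energy(x, Q)
        if new_e <= energy or rng.random() < math.exp((energy - new_e) / t):
            energy = new_e              # accept the flip
            if energy < best_e:
                best, best_e = x[:], energy
        else:
            x[i] ^= 1                   # reject: undo the flip
    return best, best_e

# Tiny instance: minimize x0 + x1 - 2*x0*x1 (optimum wherever x0 == x1).
Q = [[1, -2],
     [0,  1]]
print(anneal(Q, steps=2000))
```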
129. Table_1_An efficient decision support system for leukemia identification utilizing nature-inspired deep feature optimization.pdf
Published 2024. “…Next, a hybrid feature extraction approach is presented leveraging transfer learning from selected deep neural network models, InceptionV3 and DenseNet201, to extract comprehensive feature sets. To optimize feature selection, a customized binary Grey Wolf Algorithm is utilized, achieving an impressive 80% reduction in feature size while preserving key discriminative information. …”
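The paper's customized binary Grey Wolf Algorithm is not reproduced in this snippet. The following is a generic, simplified binary Grey Wolf feature-selection wrapper (sigmoid transfer function, kNN cross-validation fitness, synthetic data) intended only to show the wrapper-selection pattern such methods use; all hyperparameters here are illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def fitness(mask, X, y, alpha=0.99):
    """Reward accuracy on the selected features, lightly penalize mask size."""
    if mask.sum() == 0:
        return 0.0
    acc = cross_val_score(KNeighborsClassifier(5), X[:, mask == 1], y, cv=3).mean()
    return alpha * acc + (1 - alpha) * (1 - mask.sum() / len(mask))

def binary_gwo(X, y, n_wolves=8, n_iter=30, seed=0):
    """Simplified binary Grey Wolf Optimizer over feature masks."""
    rng = np.random.default_rng(seed)
    dim = X.shape[1]
    wolves = rng.integers(0, 2, size=(n_wolves, dim))
    scores = np.array([fitness(w, X, y) for w in wolves])
    for t in range(n_iter):
        a = 2 - 2 * t / n_iter                          # linearly decreasing coefficient
        alpha_w, beta_w, delta_w = wolves[np.argsort(scores)[::-1][:3]]
        for i in range(n_wolves):
            step = np.zeros(dim)
            for leader in (alpha_w, beta_w, delta_w):   # pull toward the three best wolves
                A = 2 * a * rng.random(dim) - a
                C = 2 * rng.random(dim)
                D = np.abs(C * leader - wolves[i])
                step += leader - A * D
            step /= 3.0
            prob = 1 / (1 + np.exp(-10 * (step - 0.5))) # sigmoid transfer to bit probabilities
            wolves[i] = (rng.random(dim) < prob).astype(int)
            scores[i] = fitness(wolves[i], X, y)
    best = wolves[np.argmax(scores)]
    return best, scores.max()

# Synthetic demo data, not the leukemia features from the paper.
X, y = make_classification(n_samples=200, n_features=30, n_informative=6, random_state=0)
mask, score = binary_gwo(X, y)
print(f"kept {mask.sum()}/{len(mask)} features, fitness {score:.3f}")
```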
132. Supplementary file 1_Comparative evaluation of fast-learning classification algorithms for urban forest tree species identification using EO-1 hyperion hyperspectral imagery.docx
Published 2025. “…This study focuses on developing an efficient classification framework for species-level tree mapping in the Hauz Khas Urban Forest, New Delhi, India, using EO-1 Hyperion hyperspectral imagery. Methods: Thirteen supervised classification algorithms were comparatively evaluated, encompassing traditional spectral/statistical classifiers (Maximum Likelihood, Mahalanobis Distance, Minimum Distance, Parallelepiped, Spectral Angle Mapper (SAM), Spectral Information Divergence (SID), and Binary Encoding) and machine learning algorithms including Decision Tree (DT), K-Nearest Neighbor (KNN), Support Vector Machine (SVM), Random Forest (RF), and Artificial Neural Network (ANN). …”
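Of the classical classifiers listed, the Spectral Angle Mapper is easy to show compactly. The sketch below is a generic SAM implementation run on a made-up 2x2, 4-band cube with two invented species spectra; it is not the study's processing chain.

```python
import numpy as np

def spectral_angle(pixel, reference):
    """Angle (radians) between a pixel spectrum and a reference spectrum.
    Smaller angles mean more similar spectral shapes, independent of brightness."""
    cos = np.dot(pixel, reference) / (np.linalg.norm(pixel) * np.linalg.norm(reference))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def sam_classify(cube, endmembers, max_angle=0.1):
    """Classify every pixel of a (rows, cols, bands) cube against per-class
    reference spectra; pixels whose best angle exceeds max_angle stay unclassified (-1)."""
    rows, cols, bands = cube.shape
    names = list(endmembers)
    refs = np.array([endmembers[n] for n in names])
    pixels = cube.reshape(-1, bands)
    angles = np.array([[spectral_angle(p, r) for r in refs] for p in pixels])
    labels = angles.argmin(axis=1)
    labels[angles.min(axis=1) > max_angle] = -1
    return labels.reshape(rows, cols), names

# Toy 2x2 'image' with 4 bands and two made-up tree-species spectra.
cube = np.array([[[0.1, 0.2, 0.4, 0.6], [0.2, 0.4, 0.8, 1.2]],
                 [[0.5, 0.4, 0.3, 0.1], [0.9, 0.8, 0.6, 0.2]]])
endmembers = {"species_A": np.array([0.1, 0.2, 0.4, 0.6]),
              "species_B": np.array([0.5, 0.4, 0.3, 0.1])}
labels, names = sam_classify(cube, endmembers)
print(labels)   # 0 = species_A, 1 = species_B, -1 = unclassified
```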
137. Data_Sheet_1_Prediction of Mental Health in Medical Workers During COVID-19 Based on Machine Learning.ZIP
Published 2021. “…In this study, we propose a novel prediction model based on an optimization algorithm and a neural network, which can select and rank the most important factors that affect the mental health of medical workers. …”
138. Algoritmo de clasificación de expresiones de odio por tipos en español (Algorithm for classifying hate expressions by type in Spanish)
Published 2024. “…

Model Architecture
The model is based on pysentimiento/robertuito-base-uncased with the following modifications:
- A dense classification layer added over the base model
- Takes input IDs and attention masks as inputs
- Produces a multi-class classification with 5 hate categories

Dataset
HATEMEDIA Dataset: a custom hate speech dataset categorized by hate type.
- Labels: 5 hate type categories (0-4)
- Preprocessing: null values removed from text and labels; reindexing and relabeling (original labels adjusted by subtracting 1); category 2 excluded during training; category 5 converted to category 2

Training Process
Configuration:
- Batch size: 128
- Epochs: 5
- Learning rate: 2e-5 with 10% warmup steps
- Early stopping with patience=2
- Class weights: balanced to handle class imbalance
Custom metrics:
- Recall for specific classes (focus on class 2)
- Precision for specific classes (focus on class 3)
- F1-score (weighted)
- AUC-PR
- Recall at precision=0.6 (class 3)
- Precision at recall=0.6 (class 2)

Evaluation Metrics
The model is evaluated using macro recall, precision, and F1-score; one-vs-rest AUC; accuracy; per-class metrics; a confusion matrix; and a full classification report.

Technical Features
Data preprocessing:
- Tokenization: maximum length of 128 tokens (truncation and padding)
- Label encoding: one-hot encoding for multi-class classification
- Data split: 80% training, 10% validation, 10% testing
Optimization:
- Optimizer: Adam with linear warmup scheduling
- Loss function: categorical crossentropy (from_logits=True)
- Imbalance handling: class weights computed automatically

Requirements
The following Python packages are required: TensorFlow, Transformers, scikit-learn, pandas, datasets, matplotlib, seaborn, and numpy.

Usage
1. Data format: a CSV file or pandas DataFrame with a required text column (string type) and an optional label column (integer type, 0-4) used for evaluation.
2. Text preprocessing: automatic tokenization with a maximum length of 128 tokens; long texts are truncated; special characters, URLs, and emojis are handled.
3. Label encoding: the model classifies hate speech into 5 categories (0-4); 0 is Political hatred, expressions directed against individuals or groups based on political orientation. …”
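The snippet describes the architecture and preprocessing but does not include the fine-tuned weights or a repository id for the final model, so the sketch below only reconstructs the described setup (robertuito encoder plus a dense 5-class head, 128-token truncation/padding, Adam at 2e-5, categorical crossentropy with from_logits=True). The helper names are hypothetical and the classification head here is untrained; depending on which weight format the base model ships with, TFAutoModel.from_pretrained may additionally need from_pt=True.

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModel

BASE = "pysentimiento/robertuito-base-uncased"   # base model named in the card
NUM_CLASSES = 5                                  # hate type categories 0-4
MAX_LEN = 128                                    # tokenizer max length from the card

tokenizer = AutoTokenizer.from_pretrained(BASE)

def build_model():
    """Base encoder plus the dense classification head described in the card."""
    input_ids = tf.keras.Input(shape=(MAX_LEN,), dtype=tf.int32, name="input_ids")
    attention_mask = tf.keras.Input(shape=(MAX_LEN,), dtype=tf.int32, name="attention_mask")
    # Add from_pt=True if only PyTorch weights are published for the base model.
    encoder = TFAutoModel.from_pretrained(BASE)
    hidden = encoder(input_ids=input_ids, attention_mask=attention_mask)[0][:, 0, :]  # [CLS] vector
    logits = tf.keras.layers.Dense(NUM_CLASSES)(hidden)
    model = tf.keras.Model([input_ids, attention_mask], logits)
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=2e-5),
        loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),
    )
    return model

model = build_model()

def predict_hate_type(texts):
    """Tokenize (truncate/pad to 128 tokens) and return the predicted category 0-4."""
    enc = tokenizer(texts, truncation=True, padding="max_length",
                    max_length=MAX_LEN, return_tensors="tf")
    logits = model([enc["input_ids"], enc["attention_mask"]], training=False)
    return tf.argmax(logits, axis=-1).numpy()

print(predict_hate_type(["ejemplo de texto a clasificar"]))
```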
139. Summary of LITNET-2020 dataset.
Published 2023. “…The ILSTM was then used to build an efficient intrusion detection system for binary and multi-class classification cases. The proposed algorithm has two phases: phase one involves training a conventional LSTM network to get initial weights, and phase two involves using the hybrid swarm algorithms, CBOA and PSO, to optimize the weights of LSTM to improve the accuracy. …”
140. SHAP analysis for LITNET-2020 dataset.
Published 2023. “…The ILSTM was then used to build an efficient intrusion detection system for binary and multi-class classification cases. The proposed algorithm has two phases: phase one involves training a conventional LSTM network to get initial weights, and phase two involves using the hybrid swarm algorithms, CBOA and PSO, to optimize the weights of LSTM to improve the accuracy. …”
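Entries 139 and 140 describe the same two-phase scheme. As a rough sketch of phase two, the code below refines the flattened weights of a small pre-trained Keras LSTM with plain PSO; the CBOA component, the paper's network sizes, and the hyperparameters are not given in the snippet, so everything here is illustrative.

```python
import numpy as np
import tensorflow as tf

def build_lstm(n_features):
    """Small LSTM binary classifier; phase one trains this conventionally."""
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(10, n_features)),
        tf.keras.layers.LSTM(8),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

def flatten_weights(model):
    return np.concatenate([w.ravel() for w in model.get_weights()])

def set_weights(model, flat):
    """Unflatten a weight vector back into the model's layer shapes."""
    out, i = [], 0
    for w in model.get_weights():
        n = w.size
        out.append(flat[i:i + n].reshape(w.shape))
        i += n
    model.set_weights(out)

def pso_refine(model, X_val, y_val, n_particles=10, n_iter=20, seed=0):
    """Phase two: particles are flattened weight vectors seeded near the
    phase-one solution; fitness is validation accuracy."""
    rng = np.random.default_rng(seed)
    base = flatten_weights(model)
    pos = base + 0.05 * rng.standard_normal((n_particles, base.size))
    vel = np.zeros_like(pos)
    def fit(p):
        set_weights(model, p)
        pred = (model.predict(X_val, verbose=0).ravel() > 0.5).astype(int)
        return (pred == y_val).mean()
    p_best, p_best_f = pos.copy(), np.array([fit(p) for p in pos])
    g_best = p_best[p_best_f.argmax()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (p_best - pos) + 1.5 * r2 * (g_best - pos)
        pos += vel
        f = np.array([fit(p) for p in pos])
        improved = f > p_best_f
        p_best[improved], p_best_f[improved] = pos[improved], f[improved]
        g_best = p_best[p_best_f.argmax()].copy()
    set_weights(model, g_best)
    return model, p_best_f.max()

# Toy data (not LITNET-2020): 200 sequences of 10 timesteps x 4 features.
X = np.random.default_rng(1).standard_normal((200, 10, 4)).astype("float32")
y = (X.mean(axis=(1, 2)) > 0).astype(int)
model = build_lstm(4)
model.fit(X, y, epochs=2, verbose=0)      # phase one: conventional training
model, acc = pso_refine(model, X, y)      # phase two: swarm refinement
print(f"accuracy after PSO refinement: {acc:.2f}")
```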