Search alternatives:
group classification » risk classification, improve classification, perform classification
codon optimization » wolf optimization
binary data » primary data, dietary data
data group » ta group
data codon » data code, data codes, data codings
9
Data Sheet 1_Bundled assessment to replace on-road test on driving function in stroke patients: a binary classification model via random forest.docx
Published 2025: “…A random forest algorithm was then applied to construct a binary classification model based on the data obtained from the two groups.…”
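This entry describes a binary classifier built from two groups' assessment data with a random forest. As a rough illustration only, a minimal scikit-learn sketch might look like the following; the feature matrix, labels, and split are synthetic placeholders, not the study's driving-assessment variables.

```python
# Minimal sketch of a two-group binary classifier with a random forest
# (scikit-learn); all data below are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))      # 200 subjects x 12 assessment scores (hypothetical)
y = rng.integers(0, 2, size=200)    # 1 = passed the on-road test, 0 = failed (hypothetical)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)

clf = RandomForestClassifier(n_estimators=500, random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```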
12
Natural language processing and machine learning algorithm to identify brain MRI reports with acute ischemic stroke
Published 2019: “…Labeling for AIS was performed manually by identifying clinical notes. We applied binary logistic regression, naïve Bayesian classification, single decision tree, and support vector machine for the binary classifiers, and we assessed performance of the algorithms by the F1-measure. …”
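The entry lists four binary classifiers scored by F1 on manually labeled report text. A hedged sketch of such a comparison with scikit-learn follows; the toy report strings, labels, and bag-of-words features are illustrative assumptions, not the study's pipeline.

```python
# Sketch: compare several binary classifiers by cross-validated F1 on
# bag-of-words features from report text (illustrative data only).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

# Toy MRI-report snippets and manual AIS labels (hypothetical).
reports = ["acute infarct in the left MCA territory",
           "no acute intracranial abnormality"] * 50
labels = [1, 0] * 50

X = CountVectorizer().fit_transform(reports)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "naive Bayes": MultinomialNB(),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "SVM": LinearSVC(),
}
for name, model in models.items():
    f1 = cross_val_score(model, X, labels, cv=5, scoring="f1").mean()
    print(f"{name}: F1 = {f1:.3f}")
```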
14
DataSheet_1_Patient-Level Effectiveness Prediction Modeling for Glioblastoma Using Classification Trees.docx
Published 2020: “…Secondly, a classification tree algorithm was trained and validated for dividing individual patients into treatment response and non-response groups. …”
15
Receiver operating curves for NLP classification.
Published 2020: “…These curves represent different combinations of text featurization (BOW, tf-idf, GloVe) and binary classification algorithms (Logistic Regression, k-NN, CART, OCT, OCT-H, RF, RNN). …”
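The ROC curves described here come from combinations of text featurization and binary classifiers. A minimal sketch of plotting such curves for two featurizers and two classifiers is shown below; GloVe embeddings, OCT/OCT-H, and RNN models are omitted, and the texts and labels are placeholders.

```python
# Sketch: ROC curves for featurization x classifier combinations
# (scikit-learn + matplotlib); BOW and tf-idf only, placeholder data.
import matplotlib.pyplot as plt
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_curve, auc

texts = ["positive finding noted", "unremarkable study"] * 100
labels = [1, 0] * 100  # hypothetical binary outcome

featurizers = {"BOW": CountVectorizer(), "tf-idf": TfidfVectorizer()}
classifiers = {"LogReg": LogisticRegression(max_iter=1000),
               "RF": RandomForestClassifier(random_state=0)}

for feat_name, feat in featurizers.items():
    X = feat.fit_transform(texts)
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, labels, test_size=0.3, stratify=labels, random_state=0)
    for clf_name, clf in classifiers.items():
        scores = clf.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
        fpr, tpr, _ = roc_curve(y_te, scores)
        plt.plot(fpr, tpr, label=f"{feat_name} + {clf_name} (AUC={auc(fpr, tpr):.2f})")

plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.legend()
plt.show()
```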
16
Algoritmo de clasificación de expresiones de odio por tipos en español (Algorithm for classifying hate expressions by type in Spanish)
Published 2024: “…
Model architecture: based on pysentimiento/robertuito-base-uncased with a dense classification layer added over the base model; takes input IDs and attention masks as inputs and produces a multi-class output over 5 hate categories.
Dataset (HATEMEDIA): custom hate speech dataset categorized by type. Labels: 5 hate-type categories (0-4). Preprocessing: null values removed from text and labels; reindexing and relabeling (original labels adjusted by subtracting 1); category 2 excluded during training; category 5 converted to category 2.
Training configuration: batch size 128; 5 epochs; learning rate 2e-5 with 10% warmup steps; early stopping with patience=2; class weights balanced to handle class imbalance.
Custom metrics: recall for specific classes (focus on class 2); precision for specific classes (focus on class 3); weighted F1-score; AUC-PR; recall at precision=0.6 (class 3); precision at recall=0.6 (class 2).
Evaluation: macro recall, precision, and F1-score; one-vs-rest AUC; accuracy; per-class metrics; confusion matrix; full classification report.
Data preprocessing: tokenization to a maximum length of 128 tokens (truncation and padding); one-hot encoding of labels for multi-class classification; 80% training / 10% validation / 10% test split.
Optimization: Adam with linear warmup scheduling; categorical cross-entropy loss (from_logits=True); class weights computed automatically to handle imbalance.
Requirements: TensorFlow, Transformers, scikit-learn, pandas, datasets, matplotlib, seaborn, numpy.
Usage: (1) Data format: CSV file or pandas DataFrame with a required text column (string type) and a label column (integer type, 0-4) that is optional and used for evaluation. (2) Text preprocessing: automatic tokenization to a maximum of 128 tokens; long texts are truncated; handling of special characters, URLs, and emojis is included. (3) Label encoding: the model classifies hate speech into 5 categories (0-4), where 0 = political hatred, i.e., expressions directed against individuals or groups based on political orientation.…”
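For orientation, a rough TensorFlow/Transformers sketch of the architecture described above (robertuito encoder plus a dense 5-way classification head, 128-token inputs, Adam at 2e-5, categorical cross-entropy with from_logits=True) might look like this. The linear-warmup schedule, class weights, and HATEMEDIA-specific preprocessing are omitted, and from_pt=True is an assumption in case only PyTorch weights are published.

```python
# Hedged sketch of a robertuito encoder with a dense 5-way classification head;
# hyperparameters follow the listing above, everything else is an assumption.
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModel

MODEL_NAME = "pysentimiento/robertuito-base-uncased"
NUM_CLASSES = 5
MAX_LEN = 128

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
encoder = TFAutoModel.from_pretrained(MODEL_NAME, from_pt=True)  # from_pt is an assumption

# Functional model: input IDs + attention mask -> [CLS] embedding -> dense logits.
input_ids = tf.keras.Input(shape=(MAX_LEN,), dtype=tf.int32, name="input_ids")
attention_mask = tf.keras.Input(shape=(MAX_LEN,), dtype=tf.int32, name="attention_mask")
hidden = encoder(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
logits = tf.keras.layers.Dense(NUM_CLASSES)(hidden[:, 0, :])  # dense classification layer
model = tf.keras.Model([input_ids, attention_mask], logits)

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=2e-5),  # warmup schedule omitted here
    loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),
)

# Tokenize to max length 128 with truncation and padding, as described above.
enc = tokenizer(["texto de ejemplo"], truncation=True, padding="max_length",
                max_length=MAX_LEN, return_tensors="tf")
print(model([enc["input_ids"], enc["attention_mask"]]).shape)  # (1, 5) logits
```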
17
Random forest model performs better than support vector machine algorithms when it primarily uses spontaneous photopic ERG of 60-s duration in humans.
Published 2023: “…D, Corresponding performance parameters. All data correspond to binary classification between control and disease cases. …”
18
Data_Sheet_1_Calcium Spark Detection and Event-Based Classification of Single Cardiomyocyte Using Deep Learning.pdf
Published 2021: “…Furthermore, we proposed an event-based logistic regression and binary classification model to classify single cardiomyocytes using Ca²⁺ spark characteristics, which to date have generally been used only for simple statistical analyses and comparison between normal and diseased groups. …”
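The snippet describes an event-based model: per-event Ca²⁺ spark characteristics feeding a logistic regression that separates normal from diseased cells. A minimal sketch of that idea follows; the event features (amplitude, duration), aggregation choices, and labels are hypothetical, not the paper's measured variables.

```python
# Sketch: aggregate event-level spark characteristics per cell, then fit a
# logistic regression for binary cell classification (hypothetical data).
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
# One row per detected spark event (hypothetical characteristics).
events = pd.DataFrame({
    "cell_id": rng.integers(0, 80, size=2000),
    "amplitude": rng.gamma(2.0, 1.0, size=2000),
    "duration_ms": rng.gamma(3.0, 10.0, size=2000),
})
cell_labels = pd.Series(rng.integers(0, 2, size=80))  # 1 = diseased cell (hypothetical)

# Aggregate event-level characteristics into one feature vector per cell.
features = events.groupby("cell_id").agg(["mean", "std", "count"]).fillna(0.0)
features.columns = ["_".join(col) for col in features.columns]
X = features.values
y = cell_labels.loc[features.index].values

print("CV accuracy:", cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean())
```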
19
Data_Sheet_3_sigFeature: Novel Significant Feature Selection Method for Classification of Gene Expression Data Using Support Vector Machine and t Statistic.docx
Published 2020: “…The “sigFeature” R package is centered around a function called “sigFeature,” which provides automatic selection of features for the binary classification. Using six publicly available microarray data sets (downloaded from Gene Expression Omnibus) with different biological attributes, we further compared the performance of “sigFeature” to three other feature selection algorithms. …”
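sigFeature itself is an R package, so the sketch below is only a loosely analogous Python illustration: a two-sample t-statistic filter followed by SVM-based recursive feature elimination on a synthetic two-class expression matrix. It is not the package's actual algorithm or API.

```python
# Analogous sketch (not the sigFeature R API): t-test filter + SVM-RFE
# feature selection for a binary gene-expression classification problem.
import numpy as np
from scipy.stats import ttest_ind
from sklearn.svm import SVC
from sklearn.feature_selection import RFE

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 500))            # 60 samples x 500 genes (synthetic)
y = np.array([0] * 30 + [1] * 30)         # binary class labels
X[y == 1, :20] += 1.0                     # make the first 20 genes informative

# Keep genes that differ between classes by a two-sample t test ...
_, pvals = ttest_ind(X[y == 0], X[y == 1], axis=0)
keep = np.where(pvals < 0.01)[0]

# ... then rank the survivors with SVM-based recursive feature elimination.
rfe = RFE(SVC(kernel="linear"), n_features_to_select=10).fit(X[:, keep], y)
selected_genes = keep[rfe.support_]
print("selected gene indices:", selected_genes)
```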
20
Data_Sheet_2_sigFeature: Novel Significant Feature Selection Method for Classification of Gene Expression Data Using Support Vector Machine and t Statistic.docx
Published 2020: “…The “sigFeature” R package is centered around a function called “sigFeature,” which provides automatic selection of features for the binary classification. Using six publicly available microarray data sets (downloaded from Gene Expression Omnibus) with different biological attributes, we further compared the performance of “sigFeature” to three other feature selection algorithms. …”