Search alternatives:
feature optimization » resource optimization, feature elimination, structure optimization
based optimization » whale optimization
image feature » image features, scale feature, imaging features
mask based » task based, tasks based, risk based
1. Flowchart scheme of the ML-based model.
Published 2024: “…C) Image quality analysis of preprocessed images. D) Background removal by obtaining the largest contour followed by masking. …”
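The background-removal step described in this entry (largest contour followed by masking) can be sketched with OpenCV as below. This is a minimal illustration, not the authors' pipeline; the Otsu thresholding step and the function name `remove_background` are assumptions.

```python
import cv2
import numpy as np

def remove_background(image_bgr):
    """Keep only the region inside the largest contour; black out the rest."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # Otsu thresholding is an assumption; the original preprocessing may differ.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return image_bgr
    largest = max(contours, key=cv2.contourArea)
    mask = np.zeros_like(gray)
    cv2.drawContours(mask, [largest], -1, color=255, thickness=cv2.FILLED)
    return cv2.bitwise_and(image_bgr, image_bgr, mask=mask)
```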
4. Melanoma Skin Cancer Detection Using Deep Learning Methods and Binary GWO Algorithm
Published 2025: “…In this work, we propose a novel framework that integrates Convolutional Neural Networks (CNNs) for image classification and a binary Grey Wolf Optimization (GWO) algorithm for feature selection. …”
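Binary GWO for feature selection, as named in this abstract, can be sketched as follows. This is a generic wrapper-style sketch rather than the paper's implementation; the sigmoid transfer function, greedy replacement rule, and the `binary_gwo`/`fitness` names are illustrative assumptions.

```python
import numpy as np

def binary_gwo(fitness, n_features, n_wolves=10, n_iter=30, rng=None):
    """Minimal binary Grey Wolf Optimizer. `fitness(mask)` returns a score
    to MAXIMIZE for a boolean feature mask."""
    rng = np.random.default_rng(rng)
    wolves = rng.random((n_wolves, n_features)) > 0.5
    scores = np.array([fitness(w) for w in wolves])

    for t in range(n_iter):
        order = np.argsort(scores)[::-1]          # best wolves first
        alpha, beta, delta = wolves[order[:3]]
        a = 2 - 2 * t / n_iter                    # linearly decreasing coefficient
        for i in range(n_wolves):
            new = np.empty(n_features, dtype=bool)
            for j in range(n_features):
                x = 0.0
                for leader in (alpha, beta, delta):
                    r1, r2 = rng.random(2)
                    A, C = 2 * a * r1 - a, 2 * r2
                    d = abs(C * leader[j] - wolves[i, j])
                    x += leader[j] - A * d
                x /= 3.0
                prob = 1.0 / (1.0 + np.exp(-10 * (x - 0.5)))  # sigmoid transfer
                new[j] = rng.random() < prob
            s = fitness(new)
            if s > scores[i]:                     # keep the better subset
                wolves[i], scores[i] = new, s
    best = int(np.argmax(scores))
    return wolves[best], scores[best]
```

In a wrapper setting, `fitness` would typically be cross-validated accuracy of a classifier trained on the selected columns.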
5. Improved support vector machine classification algorithm based on adaptive feature weight updating in the Hadoop cluster environment
Published 2019: “…The MapReduce parallel programming model on the Hadoop platform is used to perform an adaptive fusion of hue, local binary pattern (LBP) and scale-invariant feature transform (SIFT) features extracted from images to derive optimal combinations of weights. …”
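The weighted fusion of hue, LBP, and SIFT features can be pictured as weighting each normalized feature block before concatenation and scoring the result with an SVM. The sketch below is a single-machine stand-in, not the paper's MapReduce implementation; the `fuse_features`/`best_fusion_weights` names, the StandardScaler normalization, and the coarse weight grid are assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler

def fuse_features(blocks, weights):
    """Scale each feature block, weight it, and concatenate into one matrix."""
    scaled = [StandardScaler().fit_transform(b) * w for b, w in zip(blocks, weights)]
    return np.hstack(scaled)

def best_fusion_weights(blocks, y, candidate_weights):
    """Pick the weight combination that maximizes cross-validated SVM accuracy."""
    best = (None, -np.inf)
    for w in candidate_weights:
        X = fuse_features(blocks, w)
        score = cross_val_score(SVC(kernel="rbf"), X, y, cv=3).mean()
        if score > best[1]:
            best = (w, score)
    return best

# blocks = [hue_hist, lbp_hist, sift_bow] stacked per image;
# candidate_weights could be a coarse grid such as [(1, 1, 1), (2, 1, 1), (1, 2, 1), (1, 1, 2)].
```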
6. A* Path-Finding Algorithm to Determine Cell Connections
Published 2025: “…To address this, the research integrates a modified A* pathfinding algorithm with a U-Net convolutional neural network, a custom statistical binary classification method, and a personalized Min-Max connectivity threshold to automate the detection of astrocyte connectivity. …”
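A* path-finding over a binary (segmentation) mask can be sketched as below. The 4-connected neighborhood and Manhattan heuristic are assumptions, and the U-Net segmentation and Min-Max connectivity-threshold steps from the abstract are not reproduced here.

```python
import heapq
from itertools import count

def astar(grid, start, goal):
    """A* over a 2D grid of 0/1 cells (1 = traversable), 4-connected,
    with a Manhattan-distance heuristic. Returns a list of (row, col) or None."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    tie = count()                      # tie-breaker so the heap never compares nodes
    open_heap = [(h(start), next(tie), start, None)]
    came_from, g_cost = {}, {start: 0}
    while open_heap:
        _, _, node, parent = heapq.heappop(open_heap)
        if node in came_from:
            continue                   # already expanded with an equal or better cost
        came_from[node] = parent
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc]:
                ng = g_cost[node] + 1
                if ng < g_cost.get((nr, nc), float("inf")):
                    g_cost[(nr, nc)] = ng
                    heapq.heappush(open_heap, (ng + h((nr, nc)), next(tie), (nr, nc), node))
    return None

# Example: astar([[1, 1, 0], [0, 1, 0], [0, 1, 1]], (0, 0), (2, 2))
```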
7. Generalized Tensor Decomposition With Features on Multiple Modes
Published 2021: “…An efficient alternating optimization algorithm with provable spectral initialization is further developed. …”
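As a much-simplified stand-in for the alternating-optimization idea (the paper works with tensors and mode-wise feature matrices), the sketch below alternates closed-form least-squares updates for a rank-r matrix factorization; it is illustrative only, and the random initialization replaces the paper's spectral initialization.

```python
import numpy as np

def als_low_rank(Y, rank, n_iter=50, ridge=1e-6, rng=0):
    """Alternating least squares for Y ~ U @ V.T. Each step fixes one factor
    and solves a ridge-regularized least-squares problem for the other --
    the same alternating pattern used, in far more general form, for tensor
    decompositions with features on multiple modes."""
    rng = np.random.default_rng(rng)
    m, n = Y.shape
    U = rng.standard_normal((m, rank))
    V = rng.standard_normal((n, rank))
    eye = ridge * np.eye(rank)
    for _ in range(n_iter):
        U = Y @ V @ np.linalg.inv(V.T @ V + eye)
        V = Y.T @ U @ np.linalg.inv(U.T @ U + eye)
    return U, V

# A spectral initialization would replace the random init above with the
# leading singular vectors of Y, e.g. via np.linalg.svd(Y).
```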
11. Table_1_An efficient decision support system for leukemia identification utilizing nature-inspired deep feature optimization.pdf
Published 2024: “…To optimize feature selection, a customized binary Grey Wolf Algorithm is utilized, achieving an impressive 80% reduction in feature size while preserving key discriminative information. …”
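An 80% reduction in feature size alongside preserved accuracy usually comes from a wrapper fitness that rewards both classification performance and a small subset. The formulation below is a common convention, not this paper's exact objective; the `subset_fitness` name, the k-NN evaluator, and alpha=0.99 are assumptions.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def subset_fitness(mask, X, y, alpha=0.99):
    """Score a boolean feature mask as a weighted mix of accuracy and compactness.
    alpha trades classification accuracy against the fraction of features kept."""
    if not mask.any():
        return 0.0
    acc = cross_val_score(KNeighborsClassifier(), X[:, mask], y, cv=3).mean()
    kept = mask.sum() / mask.size
    return alpha * acc + (1 - alpha) * (1 - kept)
```

Such a fitness can be plugged into a binary GWO loop like the one sketched earlier, e.g. `binary_gwo(lambda m: subset_fitness(m, X, y), X.shape[1])`.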
12. Sample image for illustration.
Published 2024: “…The results demonstrate that CBFD achieves an average precision of 0.97 for the test image, outperforming Superpoint, Directional Intensified Tertiary Filtering (DITF), Binary Robust Independent Elementary Features (BRIEF), Binary Robust Invariant Scalable Keypoints (BRISK), Speeded Up Robust Features (SURF), and Scale Invariant Feature Transform (SIFT), which achieve scores of 0.95, 0.92, 0.72, 0.66, 0.63, and 0.50, respectively. …”
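Several of the baselines quoted here (SIFT, BRISK, and BRIEF-style binary descriptors) can be exercised with a stock OpenCV build. The sketch below is a generic detect-describe-match loop for comparison purposes, not the CBFD method; ORB stands in for BRIEF, and SURF is omitted because it requires opencv-contrib.

```python
import cv2

def match_keypoints(img1_gray, img2_gray, detector_name="SIFT"):
    """Detect, describe, and brute-force match keypoints between two
    grayscale images with one of OpenCV's built-in detectors."""
    detectors = {
        "SIFT": (cv2.SIFT_create(), cv2.NORM_L2),
        "BRISK": (cv2.BRISK_create(), cv2.NORM_HAMMING),
        "ORB": (cv2.ORB_create(), cv2.NORM_HAMMING),  # BRIEF-style binary descriptor
    }
    det, norm = detectors[detector_name]
    kp1, des1 = det.detectAndCompute(img1_gray, None)
    kp2, des2 = det.detectAndCompute(img2_gray, None)
    matches = cv2.BFMatcher(norm, crossCheck=True).match(des1, des2)
    return sorted(matches, key=lambda m: m.distance)
```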
13. Quadratic polynomial in 2D image plane.
Published 2024: “…The results demonstrate that CBFD achieves an average precision of 0.97 for the test image, outperforming Superpoint, Directional Intensified Tertiary Filtering (DITF), Binary Robust Independent Elementary Features (BRIEF), Binary Robust Invariant Scalable Keypoints (BRISK), Speeded Up Robust Features (SURF), and Scale Invariant Feature Transform (SIFT), which achieve scores of 0.95, 0.92, 0.72, 0.66, 0.63, and 0.50, respectively. …”
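One plausible reading of the "quadratic polynomial in 2D image plane" figure is a least-squares fit of a quadratic surface to local pixel intensities; the sketch below shows that fit only, and its role inside CBFD is not reproduced here.

```python
import numpy as np

def fit_quadratic_surface(patch):
    """Least-squares fit of I(x, y) ~ a + b*x + c*y + d*x^2 + e*x*y + f*y^2
    to the intensities of a 2D image patch. Returns the six coefficients."""
    h, w = patch.shape
    y, x = np.mgrid[0:h, 0:w]
    x, y, z = x.ravel(), y.ravel(), patch.ravel().astype(float)
    A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs
```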
15. Comparison analysis of computation time.
Published 2024: “…The results demonstrate that CBFD achieves an average precision of 0.97 for the test image, outperforming Superpoint, Directional Intensified Tertiary Filtering (DITF), Binary Robust Independent Elementary Features (BRIEF), Binary Robust Invariant Scalable Keypoints (BRISK), Speeded Up Robust Features (SURF), and Scale Invariant Feature Transform (SIFT), which achieve scores of 0.95, 0.92, 0.72, 0.66, 0.63, and 0.50, respectively. …”
16. Process flow diagram of CBFD.
Published 2024: “…The results demonstrate that CBFD achieves an average precision of 0.97 for the test image, outperforming Superpoint, Directional Intensified Tertiary Filtering (DITF), Binary Robust Independent Elementary Features (BRIEF), Binary Robust Invariant Scalable Keypoints (BRISK), Speeded Up Robust Features (SURF), and Scale Invariant Feature Transform (SIFT), which achieve scores of 0.95, 0.92, 0.72, 0.66, 0.63, and 0.50, respectively. …”
17. Precision recall curve.
Published 2024: “…The results demonstrate that CBFD achieves an average precision of 0.97 for the test image, outperforming Superpoint, Directional Intensified Tertiary Filtering (DITF), Binary Robust Independent Elementary Features (BRIEF), Binary Robust Invariant Scalable Keypoints (BRISK), Speeded Up Robust Features (SURF), and Scale Invariant Feature Transform (SIFT), which achieve scores of 0.95, 0.92, 0.72, 0.66, 0.63, and 0.50, respectively. …”
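Average precision scores like the ones quoted above are conventionally computed from a precision-recall curve. A minimal scikit-learn example is shown below; the labels and scores are made up for illustration, not taken from the paper.

```python
import numpy as np
from sklearn.metrics import precision_recall_curve, average_precision_score

# Toy ground-truth match labels and matching scores -- illustrative only.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
scores = np.array([0.9, 0.8, 0.75, 0.7, 0.6, 0.55, 0.4, 0.3, 0.25, 0.1])

precision, recall, thresholds = precision_recall_curve(y_true, scores)
ap = average_precision_score(y_true, scores)
print(f"Average precision: {ap:.2f}")
```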
19. Algoritmo de clasificación de expresiones de odio por tipos en español (Algorithm for classifying hate expressions by type in Spanish)
Published 2024: “…
Model Architecture
The model is based on pysentimiento/robertuito-base-uncased with the following modifications:
- A dense classification layer added over the base model
- Uses input IDs and attention masks as inputs
- Produces a multi-class classification over 5 hate categories

Dataset
HATEMEDIA Dataset: a custom hate speech dataset categorized by type.
- Labels: 5 hate type categories (0-4)
- Preprocessing:
  - Null values removed from text and labels
  - Reindexing and relabeling (original labels adjusted by subtracting 1)
  - Exclusion of category 2 during training
  - Conversion of category 5 to category 2

Training Process
Configuration:
- Batch size: 128
- Epochs: 5
- Learning rate: 2e-5 with 10% warmup steps
- Early stopping with patience=2
- Class weights: balanced to handle class imbalance
Custom metrics:
- Recall for specific classes (focus on class 2)
- Precision for specific classes (focus on class 3)
- F1-score (weighted)
- AUC-PR
- Recall at precision=0.6 (class 3)
- Precision at recall=0.6 (class 2)

Evaluation Metrics
The model is evaluated using macro recall, precision, and F1-score; one-vs-rest AUC; accuracy; per-class metrics; a confusion matrix; and a full classification report.

Technical Features
Data preprocessing:
- Tokenization: maximum length of 128 tokens (truncation and padding)
- Label encoding: one-hot encoding for multi-class classification
- Data split: 80% training, 10% validation, 10% testing
Optimization:
- Optimizer: Adam with linear warmup scheduling
- Loss function: categorical cross-entropy (from_logits=True)
- Imbalance handling: class weights computed automatically

Requirements
The following Python packages are required: TensorFlow, Transformers, scikit-learn, pandas, datasets, matplotlib, seaborn, numpy.

Usage
1. Data format: CSV file or Pandas DataFrame with a required text column (string); a label column (integer, 0-4) is optional and used for evaluation.
2. Text preprocessing: automatic tokenization with a maximum length of 128 tokens; longer texts are truncated; special characters, URLs, and emojis are handled.
3. Label encoding: the model classifies hate speech into 5 categories (0-4); 0: Political hatred: expressions directed against individuals or groups based on political orientation. …”
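A minimal sketch of the setup this card describes (RoBERTuito encoder, dense 5-way head on input IDs and attention masks, 128-token inputs, Adam at 2e-5, from-logits categorical cross-entropy). It follows the card's wording rather than the authors' training script; the linear warmup schedule and class weights are omitted for brevity, and `from_pt=True` is an assumption in case only PyTorch weights are published.

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModel

MODEL_NAME = "pysentimiento/robertuito-base-uncased"
NUM_CLASSES, MAX_LEN = 5, 128

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
encoder = TFAutoModel.from_pretrained(MODEL_NAME, from_pt=True)  # from_pt is an assumption

# Inputs: token IDs and attention mask, truncated/padded to 128 tokens.
input_ids = tf.keras.Input(shape=(MAX_LEN,), dtype=tf.int32, name="input_ids")
attention_mask = tf.keras.Input(shape=(MAX_LEN,), dtype=tf.int32, name="attention_mask")

# Dense classification layer over the encoder's first-token hidden state.
hidden = encoder(input_ids, attention_mask=attention_mask).last_hidden_state[:, 0, :]
logits = tf.keras.layers.Dense(NUM_CLASSES, name="classifier")(hidden)

model = tf.keras.Model([input_ids, attention_mask], logits)
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=2e-5),
    loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)

enc = tokenizer(["texto de ejemplo"], padding="max_length", truncation=True,
                max_length=MAX_LEN, return_tensors="tf")
print(model([enc["input_ids"], enc["attention_mask"]]).shape)  # (1, 5)
```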
20. PathOlOgics_RBCs Python Scripts.zip
Published 2023: “…In terms of classification, a second algorithm was developed and employed to preliminarily sort or group the individual cells (after manually excluding the overlapping cells) into different categories using five geometric measurements applied to the contour extracted from each binary image mask (see PathOlOgics_script_2; preliminary shape measurements). …”
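The five geometric measurements are not enumerated in this snippet. As a plausible illustration only, the sketch below computes five common contour shape descriptors (area, perimeter, circularity, aspect ratio, solidity) from a binary mask with OpenCV; these are not necessarily the script's actual feature set.

```python
import cv2
import numpy as np

def contour_shape_measurements(binary_mask):
    """Five common geometric descriptors of the largest contour in a binary mask."""
    contours, _ = cv2.findContours(binary_mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    cnt = max(contours, key=cv2.contourArea)
    area = cv2.contourArea(cnt)
    perimeter = cv2.arcLength(cnt, closed=True)
    circularity = 4 * np.pi * area / perimeter ** 2 if perimeter else 0.0
    _, (w, h), _ = cv2.minAreaRect(cnt)
    aspect_ratio = max(w, h) / min(w, h) if min(w, h) else 0.0
    hull_area = cv2.contourArea(cv2.convexHull(cnt))
    solidity = area / hull_area if hull_area else 0.0
    return {"area": area, "perimeter": perimeter, "circularity": circularity,
            "aspect_ratio": aspect_ratio, "solidity": solidity}
```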