12. A* Path-Finding Algorithm to Determine Cell Connections
Published 2025: "…To address this, the research integrates a modified A* pathfinding algorithm with a U-Net convolutional neural network, a custom statistical binary classification method, and a personalized Min-Max connectivity threshold to automate the detection of astrocyte connectivity.…"
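The entry pairs a modified A* search with a U-Net segmentation network. The paper's modifications are not shown in the excerpt, but plain textbook A* on a 4-connected binary occupancy grid (the kind of mask a segmentation network would produce) can be sketched as follows; the grid layout, unit move costs, and Manhattan heuristic are illustrative assumptions, not the authors' method:

```python
import heapq
import itertools

def astar(grid, start, goal):
    """Plain A* on a 4-connected binary grid (0 = free, 1 = blocked)."""
    rows, cols = len(grid), len(grid[0])

    def h(p):  # Manhattan distance: admissible for unit-cost grid moves
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    tie = itertools.count()          # breaks heap ties without comparing nodes
    frontier = [(h(start), 0, next(tie), start, None)]
    parent = {}                      # node -> predecessor, set when expanded
    best_g = {start: 0}

    while frontier:
        _, g, _, cur, prev = heapq.heappop(frontier)
        if cur in parent:            # already expanded via a cheaper path
            continue
        parent[cur] = prev
        if cur == goal:              # walk predecessors back to the start
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        r, c = cur
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(frontier, (ng + h(nxt), ng, next(tie), nxt, cur))
    return None                      # goal unreachable
```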
13. Flowchart scheme of the ML-based model
Published 2024: "…<b>I)</b> Testing data consisting of 20% of the entire dataset. <b>J)</b> Hyperparameter tuning and optimization. <b>K)</b> Algorithm selection from all models. …"
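Step I of the flowchart holds out 20% of the dataset before tuning and model selection. A minimal holdout split in plain Python might look like this; the seeded shuffle and list-of-examples interface are illustrative assumptions:

```python
import random

def holdout_split(examples, test_frac=0.2, seed=42):
    """Shuffle once with a fixed seed, then hold out the test fraction."""
    rng = random.Random(seed)
    indices = list(range(len(examples)))
    rng.shuffle(indices)
    n_test = int(round(len(examples) * test_frac))
    test_idx = indices[:n_test]
    train_idx = indices[n_test:]
    return ([examples[i] for i in train_idx],
            [examples[i] for i in test_idx])
```

Fixing the seed makes the split reproducible, so later hyperparameter tuning and algorithm selection are always scored against the same untouched test set.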
16. Algoritmo de clasificación de expresiones de odio por tipos en español (Algorithm for classifying hate expressions by type in Spanish)
Published 2024: "…</p><h2>Model Architecture</h2><p dir="ltr">The model is based on <code>pysentimiento/robertuito-base-uncased</code> with the following modifications:</p><ul><li>A dense classification layer was added on top of the base model</li><li>Uses input IDs and attention masks as inputs</li><li>Produces a multi-class classification over 5 hate categories</li></ul><h2>Dataset</h2><p dir="ltr"><b>HATEMEDIA Dataset</b>: Custom hate speech dataset categorized by type:</p><ul><li><b>Labels</b>: 5 hate type categories (0-4)</li><li><b>Preprocessing</b>:<ul><li>Null values removed from text and labels</li><li>Reindexing and relabeling (original labels are adjusted by subtracting 1)</li><li>Exclusion of category 2 during training</li><li>Conversion of category 5 to category 2</li></ul></li></ul><h2>Training Process</h2><h3>Configuration</h3><ul><li><b>Batch size</b>: 128</li><li><b>Epochs</b>: 5</li><li><b>Learning rate</b>: 2e-5 with 10% warmup steps</li><li><b>Early stopping</b> with patience=2</li><li><b>Class weights</b>: Balanced to handle class imbalance</li></ul><h3>Custom Metrics</h3><ul><li>Recall for specific classes (focus on class 2)</li><li>Precision for specific classes (focus on class 3)</li><li>F1-score (weighted)</li><li>AUC-PR</li><li>Recall at precision=0.6 (class 3)</li><li>Precision at recall=0.6 (class 2)</li></ul><h2>Evaluation Metrics</h2><p dir="ltr">The model is evaluated using:</p><ul><li>Macro recall, precision, and F1-score</li><li>One-vs-Rest AUC</li><li>Accuracy</li><li>Per-class metrics</li><li>Confusion matrix</li><li>Full classification report</li></ul><h2>Technical Features</h2><h3>Data Preprocessing</h3><ul><li><b>Tokenization</b>: Maximum length of 128 tokens (truncation and padding)</li><li><b>Label encoding</b>: One-hot encoding for multi-class classification</li><li><b>Data split</b>: 80% training, 10% validation, 10% testing</li></ul><h3>Optimization</h3><ul><li><b>Optimizer</b>: Adam with linear warmup scheduling</li><li><b>Loss function</b>: Categorical cross-entropy (from_logits=True)</li><li><b>Imbalance handling</b>: Class weights computed automatically</li></ul><h2>Requirements</h2><p dir="ltr">The following Python packages are required:</p><ul><li>TensorFlow</li><li>Transformers</li><li>scikit-learn</li><li>pandas</li><li>datasets</li><li>matplotlib</li><li>seaborn</li><li>numpy</li></ul><h2>Usage</h2><ol><li><b>Data format</b>:</li></ol><ul><li>CSV file or pandas DataFrame</li><li>Required column: <code>text</code> (string type)</li><li>Label column (integer type, 0-4); optional, needed only for evaluation</li></ul><ol start="2"><li><b>Text preprocessing</b>:</li></ol><ul><li>Automatic tokenization with a maximum length of 128 tokens</li><li>Longer texts are automatically truncated</li><li>Handling of special characters, URLs, and emojis included</li></ul><ol start="3"><li><b>Label encoding</b>:</li></ol><ul><li>The model classifies hate speech into 5 categories (0-4)</li><li><code>0</code>: Political hatred: expressions directed against individuals or groups based on political orientation.…"
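The card's label preprocessing (drop nulls, subtract 1, exclude category 2, remap 5 to 2) can be sketched in plain Python. The card does not state whether the exclusion and remapping apply before or after the subtract-1 shift; this sketch assumes after, and the (text, label) pair layout is an illustrative assumption:

```python
def preprocess_labels(rows):
    """Apply the model card's label preprocessing to (text, label) pairs.

    Assumes the original labels are 1-based and that the category-2
    exclusion and 5 -> 2 remapping apply to the shifted labels
    (the card does not state the order explicitly).
    """
    # 1) Drop rows with null text or label.
    cleaned = [(t, y) for t, y in rows if t is not None and y is not None]
    out = []
    for text, label in cleaned:
        label -= 1          # 2) shift original labels down by one
        if label == 2:      # 3) exclude category 2 from training
            continue
        if label == 5:      # 4) remap category 5 to category 2
            label = 2
        out.append((text, label))
    return out
```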
18. Steps in the extraction of 14 coordinates from the CT slices for the curved MPR
Published 2025: "…Protruding paths are then eliminated using graph-based optimization algorithms, as demonstrated in f). …"
19. Data_Sheet_1_Alzheimer’s Disease Diagnosis and Biomarker Analysis Using Resting-State Functional MRI Functional Brain Network With Multi-Measures Features and Hippocampal Subfield...
Published 2022: "…Finally, we implemented and compared the different feature selection algorithms to integrate the structural features, brain networks, and voxel features to optimize the diagnostic identifications of AD using support vector machine (SVM) classifiers. …"
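The abstract compares feature-selection algorithms applied before SVM classification. The excerpt does not name the algorithms, but one of the simplest stand-ins, a variance filter that drops near-constant feature columns, can be sketched in plain Python; the threshold and row-major data layout are illustrative assumptions:

```python
def variance_filter(rows, names, min_var=1e-6):
    """Keep only feature columns whose population variance exceeds min_var.

    rows: list of samples, each a list of floats; names: one name per column.
    Returns the reduced feature matrix and the surviving column names.
    """
    n = len(rows)
    kept = []
    for j, _name in enumerate(names):
        col = [row[j] for row in rows]
        mean = sum(col) / n
        var = sum((x - mean) ** 2 for x in col) / n
        if var > min_var:
            kept.append(j)
    selected = [[row[j] for j in kept] for row in rows]
    return selected, [names[j] for j in kept]
```

A filter like this would run on the concatenated structural, network, and voxel features before handing the reduced matrix to an SVM.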
20. An Ecological Benchmark of Photo Editing Software: A Comparative Analysis of Local vs. Cloud Workflows
Published 2025: "…Technical Architecture Overview

Computational Environment Specifications
Our experimental infrastructure leverages a heterogeneous multi-node computational topology encompassing three distinct hardware abstraction layers:

Node Configuration Alpha (Intel-NVIDIA Heterogeneous Architecture)
Processor: Intel Core i7-12700K (Alder Lake microarchitecture)
- 12-core hybrid architecture (8 P-cores + 4 E-cores)
- Base frequency: 3.6 GHz, max turbo: 5.0 GHz
- Cache hierarchy: 32KB L1I + 48KB L1D per P-core, 12MB L3 shared
- Instruction set extensions: AVX2, AVX-512, SSE4.2
- Thermal design power: 125W (PL1), 190W (PL2)
Memory Subsystem: 32GB DDR4-3200 JEDEC-compliant DIMM
- Dual-channel configuration, ECC disabled
- Memory controller integrated within the CPU die
- Peak theoretical bandwidth: 51.2 GB/s
GPU Accelerator: NVIDIA GeForce RTX 3070 (GA104 silicon)
- CUDA compute capability: 8.6
- RT cores: 46 (2nd gen), Tensor cores: 184 (3rd gen)
- Memory: 8GB GDDR6 @ 448 GB/s bandwidth
- PCIe 4.0 x16 interface with GPU Direct RDMA support

Node Configuration Beta (AMD Zen 3 Architecture)
Processor: AMD Ryzen 7 5800X (Zen 3 microarchitecture)
- 8-core monolithic design, simultaneous multithreading enabled
- Base frequency: 3.8 GHz, max boost: 4.7 GHz
- Cache hierarchy: 32KB L1I + 32KB L1D per core, 32MB L3 shared
- Infinity Fabric interconnect @ 1800 MHz
- Thermal design power: 105W
Memory Subsystem: 16GB DDR4-3600 overclocked configuration
- Dual-channel with optimized subtimings (CL16-19-19-39)
- Memory controller frequency: 1800 MHz (1:1 FCLK ratio)
GPU Accelerator: NVIDIA GeForce GTX 1660 (TU116 silicon)
- CUDA compute capability: 7.5
- Memory: 6GB GDDR5 @ 192 GB/s bandwidth
- Turing shader architecture without RT/Tensor cores

Node Configuration Gamma (Intel Raptor Lake High-Performance)
Processor: Intel Core i9-13900K (Raptor Lake microarchitecture)
- 24-core hybrid topology (8 P-cores + 16 E-cores)
- P-core frequency: 3.0 GHz base, 5.8 GHz max turbo
- E-core frequency: 2.2 GHz base, 4.3 GHz max turbo
- Cache hierarchy: 36MB L3 shared, Intel Smart Cache technology
- Thermal Velocity Boost with thermal monitoring
Memory Subsystem: 64GB DDR5-5600 high-bandwidth configuration
- Quad-channel topology with advanced error correction
- Peak theoretical bandwidth: 89.6 GB/s
GPU Accelerator: NVIDIA GeForce RTX 4080 (AD103 silicon)
- Ada Lovelace architecture, CUDA compute capability: 8.9
- RT cores: 76 (3rd gen), Tensor cores: 304 (4th gen)
- Memory: 16GB GDDR6X @ 716.8 GB/s bandwidth
- PCIe 4.0 x16 with NVLink-ready topology

Instrumentation and Telemetry Framework

Power Consumption Monitoring Infrastructure
Our energy profiling subsystem employs a multi-layered approach to capture granular power consumption metrics across the entire computational stack:
- Hardware performance counters: Intel RAPL (Running Average Power Limit) interface for CPU package power measurement with sub-millisecond resolution
- GPU telemetry: NVIDIA Management Library (NVML) API for real-time GPU power draw monitoring via PCIe sideband signaling
- System-level PMU: Performance Monitoring Unit instrumentation leveraging MSR (Model Specific Register) access for architectural event sampling
- Network interface telemetry: SNMP-based monitoring of NIC power consumption during cloud upload/download phases

Temporal Synchronization Protocol
All measurement vectors use high-resolution performance counters (HPET) with nanosecond-precision timestamps, synchronized via the Network Time Protocol (NTP) to ensure temporal coherence across distributed measurement points. …"
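RAPL exposes cumulative package energy as a microjoule counter (on Linux, readable from the powercap sysfs interface), so average power over an interval is the energy delta divided by the elapsed time, with care taken for counter wrap-around. A minimal sketch of that calculation; the wrap-around constant is platform-dependent and assumed here:

```python
def average_power_watts(e_start_uj, e_end_uj, t_start_ns, t_end_ns,
                        counter_max_uj=2**32):
    """Average power from two cumulative RAPL-style energy readings.

    Energies are in microjoules, timestamps in nanoseconds; counter_max_uj
    is an assumed wrap-around value for the energy counter (the real range
    is platform-specific and reported by the hardware).
    """
    delta_uj = e_end_uj - e_start_uj
    if delta_uj < 0:                      # counter wrapped between samples
        delta_uj += counter_max_uj
    delta_s = (t_end_ns - t_start_ns) / 1e9
    return (delta_uj / 1e6) / delta_s     # uJ -> J, then J/s = W
```

The same delta-over-interval arithmetic applies to the NVML GPU readings, which report instantaneous power directly and so only need time-weighted averaging.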