5. Numerical Simulation and Machine Learning-Driven Engineering of K₂GeI₆ Perovskite Solar Cells for High-Efficiency and Sustainable Photovoltaics
Published 2025. “…This study presents a comprehensive numerical investigation of lead-free potassium germanium hexaiodide (K₂GeI₆)-based double perovskite absorbers using the SCAPS-1D simulation tool. A total of 84 device configurations were explored by varying combinations of electron transport layers (ETLs) and hole transport layers (HTLs). …”
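The 84 configurations are simply the cross product of the candidate transport layers. A minimal sketch of the enumeration, assuming (hypothetically) 12 ETL and 7 HTL candidates, since the snippet does not list the actual materials:

```python
from itertools import product

# Hypothetical layer candidates; the abstract does not name them here.
# 12 ETLs x 7 HTLs = 84 device configurations, matching the stated total.
etls = [f"ETL_{i}" for i in range(1, 13)]
htls = [f"HTL_{j}" for j in range(1, 8)]

configs = [
    {"absorber": "K2GeI6", "etl": etl, "htl": htl}
    for etl, htl in product(etls, htls)
]
print(len(configs))  # 84
```

Each dictionary would then be handed to a SCAPS-1D run as one device stack.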
8. Algoritmo de clasificación de expresiones de odio por tipos en español (Algorithm for classifying hate expressions by type in Spanish)
Published 2024. “…

Model Architecture
The model is based on pysentimiento/robertuito-base-uncased with the following modifications:
- A dense classification layer added over the base model
- Takes input IDs and attention masks as inputs
- Produces a multi-class classification over 5 hate categories

Dataset
HATEMEDIA Dataset: custom hate-speech dataset categorized by type.
- Labels: 5 hate-type categories (0-4)
- Preprocessing: null values removed from text and labels; reindexing and relabeling (original labels adjusted by subtracting 1); category 2 excluded during training; category 5 converted to category 2

Training Process
Configuration:
- Batch size: 128
- Epochs: 5
- Learning rate: 2e-5 with 10% warmup steps
- Early stopping with patience=2
- Class weights: balanced, to handle class imbalance

Custom metrics:
- Recall for specific classes (focus on class 2)
- Precision for specific classes (focus on class 3)
- F1-score (weighted)
- AUC-PR
- Recall at precision=0.6 (class 3)
- Precision at recall=0.6 (class 2)

Evaluation Metrics
The model is evaluated using macro recall, precision, and F1-score; one-vs-rest AUC; accuracy; per-class metrics; a confusion matrix; and a full classification report.

Technical Features
Data preprocessing:
- Tokenization: maximum length of 128 tokens (truncation and padding)
- Label encoding: one-hot encoding for multi-class classification
- Data split: 80% training, 10% validation, 10% testing

Optimization:
- Optimizer: Adam with linear warmup scheduling
- Loss function: categorical cross-entropy (from_logits=True)
- Imbalance handling: class weights computed automatically

Requirements
Python packages: TensorFlow, Transformers, scikit-learn, pandas, datasets, matplotlib, seaborn, numpy.

Usage
1. Data format: CSV file or pandas DataFrame; required column text (string type); integer label column (0-4), optional and used for evaluation.
2. Text preprocessing: automatic tokenization with a maximum length of 128 tokens; long texts are truncated; handling of special characters, URLs, and emojis is included.
3. Label encoding: the model classifies hate speech into 5 categories (0-4); 0: Political hatred: expressions directed against individuals or groups based on political orientation. …”
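The "balanced" class weights mentioned under Configuration can be reproduced with the standard formula that scikit-learn's compute_class_weight uses; the label counts below are a toy stand-in for the real HATEMEDIA distribution, which is not given in the snippet:

```python
from collections import Counter

# Toy label distribution (labels 0-4); the real HATEMEDIA counts differ.
labels = [0] * 50 + [1] * 20 + [2] * 5 + [3] * 15 + [4] * 10

# "Balanced" weighting, as in scikit-learn's compute_class_weight:
#   weight_c = n_samples / (n_classes * count_c)
# Rare classes get proportionally larger weights in the loss.
counts = Counter(labels)
n, k = len(labels), len(counts)
class_weight = {c: n / (k * counts[c]) for c in sorted(counts)}

print(class_weight[0], class_weight[2])  # 0.4 4.0
```

The resulting dictionary is what a Keras-style fit() call would accept as its class_weight argument.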
9. Compack3D: Accelerating High-Order Compact Scheme Simulations
Published 2025. “…Kernel performance and communication between distributed-memory partitions are optimized through improved code implementation and parallel-algorithm design enabled by the mathematical properties of the linear-system solution approach. …”
10. Table_1_Helix Matrix Transformation Combined With Convolutional Neural Network Algorithm for Matrix-Assisted Laser Desorption Ionization-Time of Flight Mass Spectrometry-Based Bact...
Published 2020. “…The filter sizes for the three convolutional layers were 4, 8, and 16. The kernel size was three and the activation function was the rectified linear unit (ReLU). …”
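The stated hyperparameters (three convolutional layers with 4, 8, and 16 filters, kernel size 3, ReLU) can be sketched in plain NumPy; 1-D convolution over a length-64 single-channel input is an assumption, since the snippet does not state the input dimensionality:

```python
import numpy as np

def conv1d(x, filters, kernel_size=3):
    # 'valid' 1-D convolution with random weights, followed by ReLU.
    in_len, in_ch = x.shape
    w = np.random.randn(kernel_size, in_ch, filters) * 0.01
    out = np.stack([
        np.tensordot(x[i:i + kernel_size], w, axes=([0, 1], [0, 1]))
        for i in range(in_len - kernel_size + 1)
    ])
    return np.maximum(out, 0.0)  # ReLU activation

x = np.random.randn(64, 1)        # hypothetical spectrum: 64 bins, 1 channel
for filters in (4, 8, 16):        # three conv layers: 4, 8, 16 filters
    x = conv1d(x, filters, kernel_size=3)
print(x.shape)  # (58, 16)
```

Each 'valid' convolution shortens the sequence by kernel_size - 1 = 2, so 64 becomes 58 after three layers.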
11. Table_2_Helix Matrix Transformation Combined With Convolutional Neural Network Algorithm for Matrix-Assisted Laser Desorption Ionization-Time of Flight Mass Spectrometry-Based Bact...
Published 2020. “…The filter sizes for the three convolutional layers were 4, 8, and 16. The kernel size was three and the activation function was the rectified linear unit (ReLU). …”
12. Image_1_Helix Matrix Transformation Combined With Convolutional Neural Network Algorithm for Matrix-Assisted Laser Desorption Ionization-Time of Flight Mass Spectrometry-Based Bact...
Published 2020. “…The filter sizes for the three convolutional layers were 4, 8, and 16. The kernel size was three and the activation function was the rectified linear unit (ReLU). …”
13. Results for the Pattern Generation task.
Published 2021. “…The recurrent activity is decoded via a linear readout layer with three output neurons, one for each trajectory. …”
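A linear readout of this kind is just an affine map from the recurrent state to the three outputs; a sketch with a hypothetical network size (100 recurrent units is an assumption not stated in the snippet):

```python
import numpy as np

rng = np.random.default_rng(0)

n_units, n_out = 100, 3            # recurrent units; one output per trajectory
r = rng.standard_normal(n_units)   # recurrent activity at one time step

# Linear readout: z = W r + b, with no nonlinearity on the output.
W = rng.standard_normal((n_out, n_units)) * 0.1
b = np.zeros(n_out)
z = W @ r + b
print(z.shape)  # (3,)
```

In practice W and b would be fit (e.g. by least squares) against the three target trajectories, while the recurrent weights stay fixed or are trained separately.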
14. Visualization_1.avi
Published 2019. “…The structure is formed in a layer-by-layer fashion from an STL model, with movement optimized using a traveling-salesman algorithm. …”
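The snippet does not say which traveling-salesman solver orders the deposition moves; a nearest-neighbor heuristic is a common cheap stand-in, sketched here over a handful of hypothetical deposition points:

```python
import math

def nearest_neighbor_path(points, start=0):
    """Greedy traveling-salesman heuristic: always move to the closest
    unvisited point. A stand-in for whatever solver the authors used."""
    unvisited = set(range(len(points))) - {start}
    path = [start]
    while unvisited:
        last = points[path[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        path.append(nxt)
        unvisited.remove(nxt)
    return path

# Hypothetical 2-D deposition points within one layer.
pts = [(0, 0), (5, 5), (0, 1), (5, 4), (0, 2)]
print(nearest_neighbor_path(pts))  # [0, 2, 4, 3, 1]
```

The greedy tour visits the three clustered points on the left before crossing to the right, rather than zigzagging in input order.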
15. Code
Published 2025. “…This transformation is performed by a fully connected linear layer. The transformed sequences are then processed by two sequential convolutional layers. …”
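The described flow (a fully connected linear layer, then two sequential convolutional layers) can be sketched as a shape walk-through in NumPy; the per-step projection, the 'same' padding, and all the sizes below are assumptions, since the snippet gives none of them:

```python
import numpy as np

rng = np.random.default_rng(1)

seq = rng.standard_normal((32, 8))   # hypothetical sequence: 32 steps, 8 features

# Fully connected linear layer applied per step: 8 -> 16 features.
W = rng.standard_normal((8, 16)) * 0.1
x = seq @ W

# Two sequential 'same'-padded 1-D convolutions (kernel size assumed 3),
# so the sequence length is preserved while the channel count grows.
def conv1d_same(x, filters, k=3):
    pad = np.pad(x, ((k // 2, k // 2), (0, 0)))
    w = rng.standard_normal((k, x.shape[1], filters)) * 0.1
    return np.stack([
        np.tensordot(pad[i:i + k], w, axes=([0, 1], [0, 1]))
        for i in range(x.shape[0])
    ])

x = conv1d_same(x, 16)
x = conv1d_same(x, 32)
print(x.shape)  # (32, 32)
```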
16. Core data
Published 2025. “…This transformation is performed by a fully connected linear layer. The transformed sequences are then processed by two sequential convolutional layers. …”
17. An Ecological Benchmark of Photo Editing Software: A Comparative Analysis of Local vs. Cloud Workflows
Published 2025. “…Experimental Methodology Framework

Local Processing Pipeline Architecture
Data Flow: Storage I/O → Memory Buffer → CPU/GPU Processing → Cache Coherency → Storage I/O
├── Input Vector: mmap() system call for zero-copy file access
├── Processing Engine: OpenMP parallelization with NUMA-aware thread affinity
├── Memory Management: custom allocator with hugepage backing
└── Output Vector: direct I/O bypassing the kernel page cache

Cloud Processing Pipeline Architecture
Data Flow: Local Storage → Network Stack → TLS Tunnel → CDN Edge → Origin Server → Processing Grid → Response Pipeline
├── Upload Phase: TCP window scaling with congestion-control algorithms
├── Network Layer: application-layer protocol with adaptive bitrate streaming
├── Server-side Processing: containerized microservices on Kubernetes orchestration
├── Load Balancing: consistent hashing with geographic affinity routing
└── Download Phase: HTTP/2 multiplexing with server push optimization

Dataset Schema and Semantic Structure
Primary data vectors:

Field                  | Data Type   | Semantic Meaning                    | Measurement Unit
test_type              | Categorical | Processing paradigm identifier      | {local_processing, cloud_processing}
photo_count            | Integer     | Cardinality of input asset vector   | Count
avg_file_size_mb       | Float64     | Mean per-asset storage footprint    | Mebibytes (2^20 bytes)
total_volume_gb        | Float64     | Aggregate data corpus size          | Gigabytes (10^9 bytes)
processing_time_sec    | Integer     | Wall-clock execution duration       | Seconds (SI base unit)
cpu_usage_watts        | Float64     | Thermal design power consumption    | Watts (joules/second)
ram_usage_mb           | Integer     | Peak resident set size              | Mebibytes
network_upload_mb      | Float64     | Egress bandwidth utilization        | Mebibytes
energy_consumption_kwh | Float64     | Cumulative energy expenditure       | Kilowatt-hours
co2_equivalent_g       | Float64     | Carbon footprint estimation         | Grams CO₂e
test_date              | ISO8601     | Temporal execution marker           | RFC 3339 format
hardware_config        | String      | Node topology identifier            | Alphanumeric encoding

Statistical Distribution Characteristics
The dataset exhibits non-parametric distribution patterns with significant heteroscedasticity across computational load vectors. …”
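The field table above maps naturally onto a row-level schema check for the benchmark CSV; the example row below is entirely invented for illustration, and the validation logic is a sketch, not part of the published dataset:

```python
# Minimal schema check mirroring the dataset's field table.
SCHEMA = {
    "test_type": str,             # {local_processing, cloud_processing}
    "photo_count": int,           # count
    "avg_file_size_mb": float,    # MiB
    "total_volume_gb": float,     # GB (10^9 bytes)
    "processing_time_sec": int,   # seconds
    "cpu_usage_watts": float,     # watts
    "ram_usage_mb": int,          # MiB, peak RSS
    "network_upload_mb": float,   # MiB egress
    "energy_consumption_kwh": float,
    "co2_equivalent_g": float,    # grams CO2e
    "test_date": str,             # RFC 3339 timestamp
    "hardware_config": str,
}

# Invented example row; real measurement values are in the dataset itself.
row = {
    "test_type": "local_processing", "photo_count": 500,
    "avg_file_size_mb": 24.3, "total_volume_gb": 12.15,
    "processing_time_sec": 840, "cpu_usage_watts": 65.0,
    "ram_usage_mb": 8192, "network_upload_mb": 0.0,
    "energy_consumption_kwh": 0.0152, "co2_equivalent_g": 6.3,
    "test_date": "2025-03-14T09:30:00Z", "hardware_config": "ws01",
}

errors = [k for k, t in SCHEMA.items() if not isinstance(row.get(k), t)]
print(errors)  # []
```

A non-empty errors list would name each field whose value is missing or has the wrong type.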