Search alternatives:
based optimization » whale optimization
library based » laboratory based
binary mask » binary image
mask based » task based, tasks based, risk based
-
1
A Practical Algorithm to Solve the Near-Congruence Problem for Rigid Molecules and Clusters
Published 2023“…The Fortran implementation of the algorithm is available as an open-source library (https://github.com/qcuaeh/molalignlib) and is suitable for use in global optimization methods for the identification of local minima or basins.…”
-
2
A* Path-Finding Algorithm to Determine Cell Connections
Published 2025“…To address this, the research integrates a modified A* pathfinding algorithm with a U-Net convolutional neural network, a custom statistical binary classification method, and a personalized Min-Max connectivity threshold to automate the detection of astrocyte connectivity.…”
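For readers unfamiliar with the base algorithm, here is a minimal Python sketch of plain A* on a binary occupancy grid. It is a generic illustration only, not the paper's modified variant (which couples A* with a U-Net and a Min-Max connectivity threshold); the toy grid, 4-neighbour moves, and Manhattan heuristic are assumptions.

import heapq

def astar(grid, start, goal):
    """Return a list of (row, col) cells from start to goal, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])  # Manhattan heuristic
    open_heap = [(h(start, goal), 0, start)]
    came_from = {start: None}
    g_score = {start: 0}
    while open_heap:
        _, g, current = heapq.heappop(open_heap)
        if current == goal:
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        if g > g_score[current]:
            continue  # stale heap entry
        r, c = current
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nb
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                tentative = g + 1
                if tentative < g_score.get(nb, float("inf")):
                    g_score[nb] = tentative
                    came_from[nb] = current
                    heapq.heappush(open_heap, (tentative + h(nb, goal), tentative, nb))
    return None

# Example: 0 = free cell, 1 = obstacle
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))  # routes around the obstacle row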
-
3
-
4
Flowchart scheme of the ML-based model.
Published 2024“…<b>I)</b> Testing data consisting of 20% of the entire dataset. <b>J)</b> Optimization of hyperparameter tuning. <b>K)</b> Algorithm selection from all models. …”
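As a rough illustration of the flowchart's steps I–K (an 80/20 train/test split, hyperparameter tuning, and selection of the best-scoring algorithm), here is a generic scikit-learn sketch in Python; the synthetic data and the two candidate models are assumptions, not the study's actual setup.

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in data; 20% held out for testing (step I)
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Candidate algorithms with their hyperparameter grids (step J)
candidates = {
    "random_forest": (RandomForestClassifier(random_state=0),
                      {"n_estimators": [100, 300], "max_depth": [None, 10]}),
    "logistic_regression": (LogisticRegression(max_iter=1000),
                            {"C": [0.1, 1.0, 10.0]}),
}

# Keep the best cross-validated model across all candidates (step K)
best_name, best_model, best_cv = None, None, -1.0
for name, (estimator, grid) in candidates.items():
    search = GridSearchCV(estimator, grid, cv=5, scoring="accuracy")
    search.fit(X_train, y_train)
    if search.best_score_ > best_cv:
        best_name, best_model, best_cv = name, search.best_estimator_, search.best_score_

print(best_name, best_cv, best_model.score(X_test, y_test))  # final check on the held-out 20%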
-
5
Distribution of Bound Conformations in Conformational Ensembles for X‑ray Ligands Predicted by the ANI-2X Machine Learning Potential
Published 2023“…This information is useful to guide the construction of libraries for shape-based virtual screening and to improve the docking algorithm to efficiently sample bound conformations.…”
-
6
-
7
-
8
-
9
Data_Sheet_1_CLGBO: An Algorithm for Constructing Highly Robust Coding Sets for DNA Storage.docx
Published 2021“…In this study, we describe an enhanced gradient-based optimizer that includes the Cauchy and Levy mutation strategy (CLGBO) to construct DNA coding sets, which are used as primer and address libraries. …”
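The Cauchy and Lévy mutation strategies named here are standard heavy-tailed perturbation operators from the metaheuristics literature. Below is a generic Python sketch of the two operators applied to a continuous candidate vector; it is illustrative only and is not the CLGBO optimizer or its DNA coding-set encoding.

import numpy as np
from math import gamma, sin, pi

rng = np.random.default_rng(0)

def cauchy_mutation(x, scale=0.1):
    """Perturb a candidate solution with heavy-tailed Cauchy noise."""
    return x + scale * rng.standard_cauchy(size=x.shape)

def levy_mutation(x, beta=1.5, scale=0.01):
    """Perturb a candidate with a Levy-distributed step (Mantegna's algorithm)."""
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2) /
               (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma_u, size=x.shape)
    v = rng.normal(0, 1, size=x.shape)
    return x + scale * u / np.abs(v) ** (1 / beta)

x = rng.random(8)            # a toy candidate solution vector
print(cauchy_mutation(x))    # heavy-tailed local jumps
print(levy_mutation(x))      # occasional long-range jumps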
-
10
-
11
Algoritmo de clasificación de expresiones de odio por tipos en español (Algorithm for classifying hate expressions by type in Spanish)
Published 2024“…Model Architecture: the model is based on pysentimiento/robertuito-base-uncased with a dense classification layer added over the base model; it takes input IDs and attention masks as inputs and produces a multi-class classification over 5 hate categories.
Dataset: HATEMEDIA, a custom hate speech dataset categorized by type. Labels: 5 hate-type categories (0-4). Preprocessing: null values removed from text and labels; reindexing and relabeling (original labels adjusted by subtracting 1); category 2 excluded during training; category 5 converted to category 2.
Training configuration: batch size 128; 5 epochs; learning rate 2e-5 with 10% warmup steps; early stopping with patience=2; class weights balanced to handle class imbalance.
Custom metrics: recall for specific classes (focus on class 2); precision for specific classes (focus on class 3); weighted F1-score; AUC-PR; recall at precision=0.6 (class 3); precision at recall=0.6 (class 2).
Evaluation metrics: macro recall, precision, and F1-score; one-vs-rest AUC; accuracy; per-class metrics; confusion matrix; full classification report.
Data preprocessing: tokenization to a maximum length of 128 tokens (truncation and padding); one-hot label encoding for multi-class classification; 80% training / 10% validation / 10% test split.
Optimization: Adam optimizer with linear warmup scheduling; categorical cross-entropy loss (from_logits=True); class weights computed automatically to handle imbalance.
Requirements: TensorFlow, Transformers, scikit-learn, pandas, datasets, matplotlib, seaborn, numpy.
Usage: input is a CSV file or Pandas DataFrame with a required text column (string type); an integer label column (0-4) is optional and used for evaluation. Text is tokenized automatically to a maximum length of 128 tokens, long texts are truncated, and special characters, URLs, and emojis are handled. The model classifies hate speech into 5 categories (0-4); 0: political hatred, expressions directed against individuals or groups based on political orientation.…”
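A minimal Python inference sketch matching the tokenization and label scheme described above, assuming the Transformers/TensorFlow stack listed in the requirements. The checkpoint path "./hatemedia-robertuito-tipos" is a placeholder for the fine-tuned model (the record names only the base model), and the example text is invented.

import numpy as np
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

# Base tokenizer named in the record; the classifier path is a placeholder assumption
tokenizer = AutoTokenizer.from_pretrained("pysentimiento/robertuito-base-uncased")
model = TFAutoModelForSequenceClassification.from_pretrained(
    "./hatemedia-robertuito-tipos", num_labels=5)  # hypothetical local checkpoint

texts = ["ejemplo de texto a clasificar"]
enc = tokenizer(texts, max_length=128, truncation=True, padding="max_length",
                return_tensors="tf")

# Model takes input IDs and attention masks, as described above
logits = model(input_ids=enc["input_ids"], attention_mask=enc["attention_mask"]).logits
pred = int(np.argmax(logits.numpy(), axis=-1)[0])  # category 0-4, e.g. 0 = political hatred
print(pred)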
-
12
-
13
Steps in the extraction of 14 coordinates from the CT slices for the curved MPR.
Published 2025“…Protruding paths are then eliminated using graph-based optimization algorithms, as demonstrated in f). …”
-
14
Search for acetylcholinesterase inhibitors by computerized screening of approved drug compounds
Published 2025“…The screening process employed the SOL docking program with MMFF94 force field and genetic algorithms for global optimization, targeting the human AChE structure (PDB ID: 6O4W). …”
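As a generic illustration of the "genetic algorithms for global optimization" step, the Python sketch below minimizes a toy scoring function with a bare-bones GA; the objective, selection scheme, and parameters are assumptions and do not reflect the SOL docking program's implementation.

import numpy as np

rng = np.random.default_rng(1)

def score(x):
    """Toy stand-in for a docking score to be minimized."""
    return np.sum((x - 0.3) ** 2)

def ga_minimize(dim=6, pop_size=40, generations=100, mut_sigma=0.1):
    pop = rng.uniform(-1, 1, size=(pop_size, dim))
    for _ in range(generations):
        fitness = np.array([score(ind) for ind in pop])
        parents = pop[np.argsort(fitness)[: pop_size // 2]]        # truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            mask = rng.random(dim) < 0.5                            # uniform crossover
            child = np.where(mask, a, b) + rng.normal(0, mut_sigma, dim)  # Gaussian mutation
            children.append(child)
        pop = np.vstack([parents, children])
    best = pop[np.argmin([score(ind) for ind in pop])]
    return best, score(best)

print(ga_minimize())  # best candidate parameters and their score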
-
15
Data_Sheet_1_Alzheimer’s Disease Diagnosis and Biomarker Analysis Using Resting-State Functional MRI Functional Brain Network With Multi-Measures Features and Hippocampal Subfield...
Published 2022“…Finally, we implemented and compared the different feature selection algorithms to integrate the structural features, brain networks, and voxel features to optimize the diagnostic identifications of AD using support vector machine (SVM) classifiers. …”
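A generic Python sketch of the described pattern (a feature-selection step feeding an SVM classifier) using scikit-learn; the synthetic feature matrix and the ANOVA-F SelectKBest selector are assumptions standing in for the study's structural, network, and voxel features and its compared selection algorithms.

from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in for concatenated structural, brain-network, and voxel features
X, y = make_classification(n_samples=200, n_features=300, n_informative=20,
                           random_state=0)

clf = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectKBest(f_classif, k=50)),   # keep the 50 most discriminative features
    ("svm", SVC(kernel="linear", C=1.0)),
])

scores = cross_val_score(clf, X, y, cv=5)
print(scores.mean())  # cross-validated diagnostic accuracy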
-
16
Table 1_Advances in the application of human-machine collaboration in healthcare: insights from China.docx
Published 2025“…“Human–machine collaboration” is based on an intelligent algorithmic system that utilizes the complementary strengths of humans and machines for data exchange, task allocation, decision making and collaborative work to provide more decision support. …”
-
17
An Ecological Benchmark of Photo Editing Software: A Comparative Analysis of Local vs. Cloud Workflows
Published 2025“…Reproducibility Framework

Container Orchestration

# Kubernetes deployment manifest for reproducible environment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: energy-benchmark-pod
spec:
  replicas: 1
  selector:
    matchLabels:
      app: benchmark-runner
  template:
    metadata:
      labels:
        app: benchmark-runner
    spec:
      nodeSelector:
        hardware.profile: "high-performance"
      containers:
      - name: benchmark-container
        image: albumforge/energy-benchmark:v2.1.3
        resources:
          requests:
            cpu: "8000m"
            memory: "16Gi"
            nvidia.com/gpu: 1
          limits:
            cpu: "16000m"
            memory: "32Gi"
        env:
        - name: MEASUREMENT_PRECISION
          value: "high"
        - name: POWER_SAMPLING_RATE
          value: "1000"  # 1kHz sampling

Dependency Management

FROM ubuntu:22.04-cuda11.8-devel
RUN apt-get update && apt-get install -y \
    perf-tools \
    powertop \
    intel-gpu-tools \
    nvidia-smi \
    cpupower \
    msr-tools \
    && rm -rf /var/lib/apt/lists/*
COPY requirements.txt /opt/
RUN pip install -r /opt/requirements.txt

Usage Examples and API Documentation

Python Data Analysis Interface

import pandas as pd
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt
import seaborn as sns

# Load dataset with optimized dtypes for memory efficiency
df = pd.read_csv('ecological_benchmark_dataset.csv',
                 dtype={'hardware_config': 'category', 'test_type': 'category'})

# Compute energy efficiency metrics
df['energy_per_photo'] = df['energy_consumption_kwh'] / df['photo_count']
df['co2_per_gigabyte'] = df['co2_equivalent_g'] / df['total_volume_gb']

# Statistical analysis with confidence intervals
local_energy = df[df['test_type'] == 'local_processing']['energy_consumption_kwh']
cloud_energy = df[df['test_type'] == 'cloud_processing']['energy_consumption_kwh']

t_stat, p_value = stats.ttest_ind(local_energy, cloud_energy)
effect_size = (cloud_energy.mean() - local_energy.mean()) / np.sqrt((cloud_energy.var() + local_energy.var()) / 2)

print(f"Statistical significance: p = {p_value:.2e}")
print(f"Cohen's d effect size: {effect_size:.3f}")

R Statistical Computing Environment

library(tidyverse)
library(lme4)      # Linear mixed-effects models
library(ggplot2)
library(corrplot)

# Load and preprocess data
df <- read_csv("ecological_benchmark_dataset.csv") %>%
  mutate(
    test_type = factor(test_type),
    hardware_config = factor(hardware_config),
    log_energy = log(energy_consumption_kwh),
    efficiency_ratio = energy_consumption_kwh / processing_time_sec
  )

# Mixed-effects regression model accounting for hardware heterogeneity
model <- lmer(log_energy ~ test_type + log(photo_count) + (1|hardware_config), data = df)

# Extract model coefficients with confidence intervals
summary(model)
confint(model, method = "Wald")

Advanced Analytics and Machine Learning Integration

Predictive Modeling Framework

from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.model_selection import cross_val_score, GridSearchCV
from sklearn.preprocessing import StandardScaler, LabelEncoder
from sklearn.metrics import mean_absolute_error, r2_score

# Feature engineering pipeline
def create_feature_matrix(df):
    features = df[['photo_count', 'avg_file_size_mb', 'total_volume_gb']].copy()
    # Polynomial features for capturing non-linear relationships
    features['photo_count_squared'] = features['photo_count'] ** 2
    features['size_volume_interaction'] = features['avg_file_size_mb'] * features['total_volume_gb']
    # Hardware configuration encoding
    le = LabelEncoder()
    features['hardware_encoded'] = le.fit_transform(df['hardware_config'])
    return features

# Energy consumption prediction model
X = create_feature_matrix(df)
y = df['energy_consumption_kwh']

# Hyperparameter optimization
param_grid = {
    'n_estimators': [100, 200, 500],
    'max_depth': [10, 20, None],
    'min_samples_split': [2, 5, 10],
    'min_samples_leaf': [1, 2, 4]
}

rf_model = RandomForestRegressor(random_state=42)
grid_search = GridSearchCV(rf_model, param_grid, cv=5, scoring='neg_mean_absolute_error')
grid_search.fit(X, y)

print(f"Best cross-validation score: {-grid_search.best_score_:.6f}")
print(f"Optimal hyperparameters: {grid_search.best_params_}")

Carbon Footprint Calculation Methodology

Emission Factor Coefficients

Carbon intensity calculations employ region-specific emission factors from the International Energy Agency (IEA) database:

EMISSION_FACTORS = {
    'EU_AVERAGE': 0.276,      # kg CO₂/kWh (European Union average 2024)
    'FRANCE': 0.057,          # kg CO₂/kWh (Nuclear-dominant grid)
    'GERMANY': 0.485,         # kg CO₂/kWh (Coal transition period)
    'NORWAY': 0.013,          # kg CO₂/kWh (Hydroelectric dominant)
    'GLOBAL_AVERAGE': 0.475   # kg CO₂/kWh (Global weighted average)
}

def calculate_carbon_footprint(energy_kwh: float, region: str = 'EU_AVERAGE') -> float:
    """
    Calculate CO₂ equivalent emissions using lifecycle assessment methodology

    Args:
        energy_kwh: Energy consumption in kilowatt-hours
        region: Geographic region for emission factor selection

    Returns:
        CO₂ equivalent emissions in grams
    """
    emission_factor = EMISSION_FACTORS.get(region, EMISSION_FACTORS['GLOBAL_AVERAGE'])
    co2_kg = energy_kwh * emission_factor
    return co2_kg * 1000  # Convert to grams

Citation and Attribution

This dataset is released under Creative Commons Attribution 4.0 International (CC BY 4.0) license. …”
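As a quick sanity check of the emission-factor arithmetic in the snippet above, assuming the EMISSION_FACTORS table and calculate_carbon_footprint function are defined as shown there:

print(calculate_carbon_footprint(2.5, region='FRANCE'))  # 2.5 kWh * 0.057 kg/kWh = 0.1425 kg -> 142.5 g CO2e
print(calculate_carbon_footprint(2.5))                   # EU average: 2.5 * 0.276 = 0.69 kg -> 690.0 g CO2e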