-
1241
TF3P: Three-Dimensional Force Fields Fingerprint Learned by Deep Capsular Network
Published 2020“…Furthermore, TF3P is compatible with both statistical models (e.g., similarity ensemble approach) and machine learning models. …”
-
1242
Adaptive protein evolution through length variation of short tandem repeats in Arabidopsis. Supplementary Materials.
Published 2023 “…Additional File 6: Disorder predictions from d2p2 (BED). Additional File 7: Disorder predictions, protein binding from DisoRDPbind (BED).…”
-
1243
An Ecological Benchmark of Photo Editing Software: A Comparative Analysis of Local vs. Cloud Workflows
Published 2025 “…Reproducibility Framework

Container Orchestration

```yaml
# Kubernetes deployment manifest for reproducible environment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: energy-benchmark-pod
spec:
  replicas: 1
  selector:
    matchLabels:
      app: benchmark-runner
  template:
    metadata:
      labels:
        app: benchmark-runner
    spec:
      nodeSelector:
        hardware.profile: "high-performance"
      containers:
      - name: benchmark-container
        image: albumforge/energy-benchmark:v2.1.3
        resources:
          requests:
            cpu: "8000m"
            memory: "16Gi"
            nvidia.com/gpu: 1
          limits:
            cpu: "16000m"
            memory: "32Gi"
        env:
        - name: MEASUREMENT_PRECISION
          value: "high"
        - name: POWER_SAMPLING_RATE
          value: "1000"  # 1 kHz sampling
```

Dependency Management

```dockerfile
FROM ubuntu:22.04-cuda11.8-devel
RUN apt-get update && apt-get install -y \
    perf-tools \
    powertop \
    intel-gpu-tools \
    nvidia-smi \
    cpupower \
    msr-tools \
    && rm -rf /var/lib/apt/lists/*
COPY requirements.txt /opt/
RUN pip install -r /opt/requirements.txt
```

Usage Examples and API Documentation

Python Data Analysis Interface

```python
import pandas as pd
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt
import seaborn as sns

# Load dataset with optimized dtypes for memory efficiency
df = pd.read_csv('ecological_benchmark_dataset.csv',
                 dtype={'hardware_config': 'category', 'test_type': 'category'})

# Compute energy efficiency metrics
df['energy_per_photo'] = df['energy_consumption_kwh'] / df['photo_count']
df['co2_per_gigabyte'] = df['co2_equivalent_g'] / df['total_volume_gb']

# Statistical analysis with confidence intervals
local_energy = df[df['test_type'] == 'local_processing']['energy_consumption_kwh']
cloud_energy = df[df['test_type'] == 'cloud_processing']['energy_consumption_kwh']
t_stat, p_value = stats.ttest_ind(local_energy, cloud_energy)
effect_size = (cloud_energy.mean() - local_energy.mean()) / np.sqrt(
    (cloud_energy.var() + local_energy.var()) / 2)
print(f"Statistical significance: p = {p_value:.2e}")
print(f"Cohen's d effect size: {effect_size:.3f}")
```

R Statistical Computing Environment

```r
library(tidyverse)
library(lme4)      # Linear mixed-effects models
library(ggplot2)
library(corrplot)

# Load and preprocess data
df <- read_csv("ecological_benchmark_dataset.csv") %>%
  mutate(
    test_type = factor(test_type),
    hardware_config = factor(hardware_config),
    log_energy = log(energy_consumption_kwh),
    efficiency_ratio = energy_consumption_kwh / processing_time_sec
  )

# Mixed-effects regression model accounting for hardware heterogeneity
model <- lmer(log_energy ~ test_type + log(photo_count) + (1|hardware_config), data = df)

# Extract model coefficients with confidence intervals
summary(model)
confint(model, method = "Wald")
```

Advanced Analytics and Machine Learning Integration

Predictive Modeling Framework

```python
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.model_selection import cross_val_score, GridSearchCV
from sklearn.preprocessing import StandardScaler, LabelEncoder
from sklearn.metrics import mean_absolute_error, r2_score

# Feature engineering pipeline
def create_feature_matrix(df):
    features = df[['photo_count', 'avg_file_size_mb', 'total_volume_gb']].copy()
    # Polynomial features for capturing non-linear relationships
    features['photo_count_squared'] = features['photo_count'] ** 2
    features['size_volume_interaction'] = features['avg_file_size_mb'] * features['total_volume_gb']
    # Hardware configuration encoding
    le = LabelEncoder()
    features['hardware_encoded'] = le.fit_transform(df['hardware_config'])
    return features

# Energy consumption prediction model
X = create_feature_matrix(df)
y = df['energy_consumption_kwh']

# Hyperparameter optimization
param_grid = {
    'n_estimators': [100, 200, 500],
    'max_depth': [10, 20, None],
    'min_samples_split': [2, 5, 10],
    'min_samples_leaf': [1, 2, 4]
}
rf_model = RandomForestRegressor(random_state=42)
grid_search = GridSearchCV(rf_model, param_grid, cv=5, scoring='neg_mean_absolute_error')
grid_search.fit(X, y)
print(f"Best cross-validation score: {-grid_search.best_score_:.6f}")
print(f"Optimal hyperparameters: {grid_search.best_params_}")
```

Carbon Footprint Calculation Methodology

Emission Factor Coefficients

Carbon intensity calculations employ region-specific emission factors from the International Energy Agency (IEA) database:

```python
EMISSION_FACTORS = {
    'EU_AVERAGE': 0.276,      # kg CO₂/kWh (European Union average 2024)
    'FRANCE': 0.057,          # kg CO₂/kWh (nuclear-dominant grid)
    'GERMANY': 0.485,         # kg CO₂/kWh (coal transition period)
    'NORWAY': 0.013,          # kg CO₂/kWh (hydroelectric dominant)
    'GLOBAL_AVERAGE': 0.475   # kg CO₂/kWh (global weighted average)
}

def calculate_carbon_footprint(energy_kwh: float, region: str = 'EU_AVERAGE') -> float:
    """
    Calculate CO₂ equivalent emissions using lifecycle assessment methodology.

    Args:
        energy_kwh: Energy consumption in kilowatt-hours
        region: Geographic region for emission factor selection

    Returns:
        CO₂ equivalent emissions in grams
    """
    emission_factor = EMISSION_FACTORS.get(region, EMISSION_FACTORS['GLOBAL_AVERAGE'])
    co2_kg = energy_kwh * emission_factor
    return co2_kg * 1000  # Convert to grams
```

Citation and Attribution

This dataset is released under the Creative Commons Attribution 4.0 International (CC BY 4.0) license. …”
-
1244
DataSheet1_Endogenous CRISPR/Cas systems for genome engineering in the acetogens Acetobacterium woodii and Clostridium autoethanogenum.pdf
Published 2023“…As an alternative, this study aims to facilitate the exploitation of CRISPR/Cas endogenous systems as genome engineering tools. Accordingly, a Python script was developed to automate the prediction of protospacer adjacent motif (PAM) sequences and used to identify PAM candidates of the A. woodii Type I-B CRISPR/Cas system. …”
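The PAM-prediction idea this record describes (finding the consensus motif adjacent to protospacer matches) can be sketched minimally: collect the nucleotides immediately upstream of each protospacer hit and take a position-wise consensus. The `predict_pam` helper and the toy sequences below are illustrative assumptions, not the record's actual script.

```python
from collections import Counter

def predict_pam(protospacer_hits, flank_len=3):
    """Tally the nucleotides immediately upstream of each protospacer hit
    (given as (genome_sequence, hit_start) pairs) and return the
    position-wise consensus as a PAM candidate."""
    columns = [Counter() for _ in range(flank_len)]
    for genome, start in protospacer_hits:
        flank = genome[max(0, start - flank_len):start]
        # Pad short flanks with 'N' so every column stays aligned
        for i, base in enumerate(flank.rjust(flank_len, 'N')):
            columns[i][base] += 1
    return ''.join(col.most_common(1)[0][0] for col in columns)

# Toy example: three matches of the same spacer, flanked by TTC, TTC, TTA
hits = [("AAATTCGATTACA", 6), ("GGGTTCGATTACA", 6), ("CCCTTAGATTACA", 6)]
print(predict_pam(hits))  # → TTC
```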
-
1245
Supplementary Material for review——Revealing the co-occurrence patterns of the group emotions from social media data
Published 2025 “…The main.py file can be run directly to automate the computation and output of the model.
Tables:
1. Table 3: Model accuracy assessment
   Script path: 'code/bert.py'
   Input data: 'data/wh_data_cleaned.csv'
   Output location: 'data/emotion_prediction_wh.csv'
   Description: Outputs Precision, Recall, and F1 for each emotion, and calculates the weighted averages of Precision, Recall, and F1.
2. Table 4: Examples of different types of emotional structures
   Script path: 'code/countnum.py'
   Input data: 'data/emotion_prediction_wh.csv'
   Output location: 'data/emotion_prediction_wh.csv'
   Description: Uses emotion probabilities and entropy values to determine whether an emotion is a single type, a dominant-subsidiary type, or one of the composite types.
3. Tables 5-6: Examples of different types of emotional structures
   ① Script path: 'code/lat_lon.py'
   Input data: 'data/emotion_prediction_wh.csv'
   Output location: 'result/bert/wh/128/grid_lat_lon.csv'
   Description: Running this file grids the study area.…”
-
1246
Data_Sheet_1_CNN stability training improves robustness to scanner and IHC-based image variability for epithelium segmentation in cervical histology.docx
Published 2023“…CST models also outperformed models trained with random on-the-fly data augmentation (DA) in all test sets ([0.002, 0.021], p < 1e-6).…”
-
1247
AGU24 - EP11D-1300 - Revisiting Megacusp Embayment Occurrence in Monterey Bay and Beyond: High Spatiotemporal Resolution Satellite Imagery Provides New Insight into the Wave Condit...
Published 2025 “…Previous studies using both site observation and numerical models have yielded a rough characterization of the wave conditions necessary for MCE formation, including wave energy and direction. …”
-
1248
Data_Sheet_1_Decoding emotional resilience in aging: unveiling the interplay between daily functioning and emotional health.PDF
Published 2024 “…Machine learning algorithms validated our findings from statistical analysis, confirming the predictive accuracy of ADL for EPs. The area under the curve (AUC) values for the three models were SVM-AUC = 0.700, DT-AUC = 0.742, and LR-AUC = 0.711. …”
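Comparing SVM, decision-tree, and logistic-regression classifiers by AUC, as this record reports, follows a standard scikit-learn pattern. A minimal sketch on synthetic data; the generated features stand in for the study's ADL/EP variables and are purely illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the study's features and binary outcome
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "SVM": SVC(probability=True, random_state=0),
    "DT": DecisionTreeClassifier(max_depth=4, random_state=0),
    "LR": LogisticRegression(max_iter=1000),
}
aucs = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    scores = model.predict_proba(X_te)[:, 1]  # probability of the positive class
    aucs[name] = roc_auc_score(y_te, scores)
    print(f"{name}-AUC = {aucs[name]:.3f}")
```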
-
1249
Image_1_Association between household income levels and nutritional intake of allergic children under 6 years of age in Korea: 2019 national health and nutrition examination survey...
Published 2024“…Logistic regression analysis was performed to identify factors associated with allergic diseases, including gender, BMI, eating habits, dietary supplement intake, and nutrient consumption. To predict childhood asthma, 14 machine learning models were compared using the ‘pycaret’ package in Python.…”
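pycaret's `compare_models` workflow, as used in this record, amounts to fitting a zoo of classifiers and ranking them by cross-validated score. A dependency-light sketch of that same workflow using scikit-learn directly (the synthetic data and the five candidate models are illustrative, not the study's 14):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the survey features and asthma labels
X, y = make_classification(n_samples=400, n_features=8, random_state=1)

candidates = {
    "lr": LogisticRegression(max_iter=1000),
    "dt": DecisionTreeClassifier(random_state=1),
    "rf": RandomForestClassifier(n_estimators=100, random_state=1),
    "nb": GaussianNB(),
    "knn": KNeighborsClassifier(),
}

# Rank candidates by mean cross-validated AUC, mirroring compare_models()
leaderboard = sorted(
    ((name, cross_val_score(est, X, y, cv=5, scoring="roc_auc").mean())
     for name, est in candidates.items()),
    key=lambda t: t[1], reverse=True)
for name, auc in leaderboard:
    print(f"{name}: mean CV AUC = {auc:.3f}")
```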
-
1250
Image_1_Association between household income levels and nutritional intake of allergic children under 6 years of age in Korea: 2019 Korea National Health and Nutrition Examination...
Published 2024“…Logistic regression analysis was performed to identify factors associated with allergic diseases, including gender, BMI, eating habits, dietary supplement intake, and nutrient consumption. To predict childhood asthma, 14 machine learning models were compared using the ‘pycaret’ package in Python.…”
-
1251
Supplementary Data: Biodiversity and Energy System Planning - Queensland 2025
Published 2025 “…Software and Spatial Resolution: The VRE siting model is implemented using Python and relies heavily on ArcGIS for comprehensive spatial data handling and analysis.…”
-
1252
Dataset for:Exploring the Pharmacological Properties and Mechanism of Action of Lithocarpus litseifolius (Hance) Chun. in Treating Diabetic Neuropathy Based on SwissADME, Network P...
Published 2025 “…Targets/ – 1,346 unique SwissTargetPrediction hits (Homo sapiens, probability > 0) for all compounds.…”
-
1253
Mean Annual Habitat Quality and Its Driving Variables in China (1990–2018)
Published 2025 “…(HQ: Habitat Quality; CZ: Climate Zone; FFI: Forest Fragmentation Index; GPP: Gross Primary Productivity; Light: Nighttime Lights; PRE: Mean Annual Precipitation Sum; ASP: Aspect; RAD: Solar Radiation; SLOPE: Slope; TEMP: Mean Annual Temperature; SM: Soil Moisture) A Python script used for modeling habitat quality, including mean encoding of the categorical variable climate zone (CZ), multicollinearity testing using the Variance Inflation Factor (VIF), and implementation of four machine learning models to predict habitat quality.…”
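The two preprocessing steps this record names, mean encoding of the categorical climate zone and VIF-based multicollinearity screening, can be sketched with pandas and numpy alone. The synthetic data, column names, and the hand-rolled `vif` helper below are illustrative assumptions, not the record's actual script.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Synthetic stand-in: habitat quality (HQ) with a categorical climate zone (CZ)
df = pd.DataFrame({
    "CZ": rng.choice(["temperate", "arid", "tropical"], size=200),
    "PRE": rng.normal(800, 100, 200),   # mean annual precipitation
    "TEMP": rng.normal(12, 3, 200),     # mean annual temperature
})
df["HQ"] = rng.random(200)

# Mean encoding: replace each climate zone with the mean HQ of that zone
df["CZ_enc"] = df.groupby("CZ")["HQ"].transform("mean")

def vif(X: np.ndarray) -> np.ndarray:
    """VIF_j = 1 / (1 - R^2_j), regressing column j on the other columns."""
    X = np.column_stack([np.ones(len(X)), X])  # add intercept
    out = []
    for j in range(1, X.shape[1]):
        others = np.delete(X, j, axis=1)
        beta, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
        resid = X[:, j] - others @ beta
        r2 = 1 - resid.var() / X[:, j].var()
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

vifs = vif(df[["CZ_enc", "PRE", "TEMP"]].to_numpy())
print(vifs)  # values near 1 indicate little collinearity
```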
-
1254
Gladier - A programmable data capture, storage, and analysis architecture for experimental facilities
Published 2021“…Each service can be accessed via REST APIs, and/or from Python via a simple client library (which calls the REST APIs). …”
-
1255
Historical Nifty 50 Constituent Weights (Rolling 20-Year Window)
Published 2025 “…Building features for quantitative models that aim to predict market movements. Backtesting investment strategies benchmarked against the Nifty 50.…”
-
1256
Decoding rapidly presented visual stimuli from prefrontal ensembles without report nor post-perceptual processing
Published 2022 “…For users seeking a more navigable dataset, a Python notebook is available at https://github.com/jobellet/fast_and_rich_decoding_in_VLPFC/blob/main/Download_dataset.ipynb. This notebook provides tools for renaming variables and cleaning unused columns, allowing for a more tailored data analysis experience without altering the primary dataset files. …”
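The notebook's "renaming variables and cleaning unused columns" step is ordinary pandas. A minimal sketch; the column names are invented for illustration and are not the dataset's actual variables.

```python
import pandas as pd

# Hypothetical raw export with terse column names and an unused field
df = pd.DataFrame({
    "u1_sp": [12, 7, 9],
    "stim_id": [3, 1, 2],
    "tmp_flag": [0, 0, 1],
})

# Rename variables to readable names, then drop columns the analysis never uses
clean = (df.rename(columns={"u1_sp": "unit1_spike_count", "stim_id": "stimulus_id"})
           .drop(columns=["tmp_flag"]))
print(list(clean.columns))  # → ['unit1_spike_count', 'stimulus_id']
```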