Showing 121 - 140 results of 145 for search '(( library from based optimization algorithm ) OR ( binary task based optimization algorithm ))', query time: 0.73s
  1. 121

    Image_5_The Effect of Training Sample Size on the Prediction of White Matter Hyperintensity Volume in a Healthy Population Using BIANCA.JPEG by Niklas Wulms (11928755)

    Published 2022
    “…In this study, we tested whether WMH volumetry with FMRIB software library v6.0 (FSL; https://fsl.fmrib.ox.ac.uk/fsl/fslwiki) Brain Intensity AbNormality Classification Algorithm (BIANCA), a customizable and trainable algorithm that quantifies WMH volume based on individual data training sets, can be optimized for a normal aging population.…”
  2. 122

    Image_7_The Effect of Training Sample Size on the Prediction of White Matter Hyperintensity Volume in a Healthy Population Using BIANCA.JPEG by Niklas Wulms (11928755)

    Published 2022
    “…In this study, we tested whether WMH volumetry with FMRIB software library v6.0 (FSL; https://fsl.fmrib.ox.ac.uk/fsl/fslwiki) Brain Intensity AbNormality Classification Algorithm (BIANCA), a customizable and trainable algorithm that quantifies WMH volume based on individual data training sets, can be optimized for a normal aging population.…”
  3. 123

    Image_8_The Effect of Training Sample Size on the Prediction of White Matter Hyperintensity Volume in a Healthy Population Using BIANCA.JPEG by Niklas Wulms (11928755)

    Published 2022
    “…In this study, we tested whether WMH volumetry with FMRIB software library v6.0 (FSL; https://fsl.fmrib.ox.ac.uk/fsl/fslwiki) Brain Intensity AbNormality Classification Algorithm (BIANCA), a customizable and trainable algorithm that quantifies WMH volume based on individual data training sets, can be optimized for a normal aging population.…”
  4. 124

    COSMO-Bench by Daniel McGann (18759496)

    Published 2025
    “…Such datasets have been used to great effect in the field of single-robot SLAM, and researchers focused on multi-robot problems would benefit greatly from dedicated benchmark datasets. To address this gap, we design and release the Collaborative Open-Source Multi-robot Optimization Benchmark (COSMO-Bench) -- a suite of 24 datasets derived from a state-of-the-art C-SLAM front-end and real-world LiDAR data. This entry, hosted through Carnegie Mellon University Libraries, preserves the official dataset release in perpetuity. …”
  5. 125

    Collaborative Research: SI2-SSI: ELSI-Infrastructure for Scalable Electronic Structure Theory by Volker Blum (3683170)

    Published 2020
    “…The ELectronic Structure Infrastructure (ELSI) project provides an open-source software interface to facilitate the implementation and optimal use of high-performance solver libraries covering cubic scaling eigensolvers, linear scaling density-matrix-based algorithms, and other reduced scaling methods in between. …”
  6. 126

    Collaborative Research: SI2-SSI: ELSI - Infrastructure for Scalable Electronic Structure Theory by Volker Blum (3683170)

    Published 2020
    “…The ELectronic Structure Infrastructure (ELSI) project provides an open-source software interface to facilitate the implementation and optimal use of high-performance solver libraries covering cubic scaling eigensolvers, linear scaling density-matrix-based algorithms, and other reduced scaling methods in between. …”
  7. 127

    Otago's Network for Engagement and Research: Mapping Academic Expertise and Connections by Sander Zwanenburg (8552102)

    Published 2020
    “…In the next stage of the project, we will further develop the data integration schemes, enhance our algorithm to infer expertise based on this data, and update the interactive visualisation to reflect these inferences. …”
  8. 128

    Aluminum alloy industrial materials defect by Ying Han (20349093)

    Published 2024
    “…The dataset used in this study was the preliminary competition dataset of the 2018 Guangdong Industrial Intelligent Manufacturing Big Data Intelligent Algorithm Competition organized by Tianchi Feiyue Cloud (https://tianchi.aliyun.com/competition/entrance/231682/introduction). …”
  9. 129

    An Ecological Benchmark of Photo Editing Software: A Comparative Analysis of Local vs. Cloud Workflows by Pierre-Alexis DELAROCHE (22092572)

    Published 2025
    “…Reproducibility Framework

    Container Orchestration

        # Kubernetes deployment manifest for reproducible environment
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: energy-benchmark-pod
        spec:
          replicas: 1
          selector:
            matchLabels:
              app: benchmark-runner
          template:
            metadata:
              labels:
                app: benchmark-runner
            spec:
              nodeSelector:
                hardware.profile: "high-performance"
              containers:
              - name: benchmark-container
                image: albumforge/energy-benchmark:v2.1.3
                resources:
                  requests:
                    cpu: "8000m"
                    memory: "16Gi"
                    nvidia.com/gpu: 1
                  limits:
                    cpu: "16000m"
                    memory: "32Gi"
                env:
                - name: MEASUREMENT_PRECISION
                  value: "high"
                - name: POWER_SAMPLING_RATE
                  value: "1000"  # 1 kHz sampling

    Dependency Management

        FROM ubuntu:22.04-cuda11.8-devel
        RUN apt-get update && apt-get install -y \
            perf-tools \
            powertop \
            intel-gpu-tools \
            nvidia-smi \
            cpupower \
            msr-tools \
            && rm -rf /var/lib/apt/lists/*
        COPY requirements.txt /opt/
        RUN pip install -r /opt/requirements.txt

    Usage Examples and API Documentation

    Python Data Analysis Interface

        import pandas as pd
        import numpy as np
        from scipy import stats
        import matplotlib.pyplot as plt
        import seaborn as sns

        # Load dataset with optimized dtypes for memory efficiency
        df = pd.read_csv('ecological_benchmark_dataset.csv',
                         dtype={'hardware_config': 'category', 'test_type': 'category'})

        # Compute energy efficiency metrics
        df['energy_per_photo'] = df['energy_consumption_kwh'] / df['photo_count']
        df['co2_per_gigabyte'] = df['co2_equivalent_g'] / df['total_volume_gb']

        # Statistical analysis with confidence intervals
        local_energy = df[df['test_type'] == 'local_processing']['energy_consumption_kwh']
        cloud_energy = df[df['test_type'] == 'cloud_processing']['energy_consumption_kwh']
        t_stat, p_value = stats.ttest_ind(local_energy, cloud_energy)
        effect_size = (cloud_energy.mean() - local_energy.mean()) / np.sqrt(
            (cloud_energy.var() + local_energy.var()) / 2)
        print(f"Statistical significance: p = {p_value:.2e}")
        print(f"Cohen's d effect size: {effect_size:.3f}")

    R Statistical Computing Environment

        library(tidyverse)
        library(lme4)      # Linear mixed-effects models
        library(ggplot2)
        library(corrplot)

        # Load and preprocess data
        df <- read_csv("ecological_benchmark_dataset.csv") %>%
          mutate(
            test_type = factor(test_type),
            hardware_config = factor(hardware_config),
            log_energy = log(energy_consumption_kwh),
            efficiency_ratio = energy_consumption_kwh / processing_time_sec
          )

        # Mixed-effects regression model accounting for hardware heterogeneity
        model <- lmer(log_energy ~ test_type + log(photo_count) + (1|hardware_config),
                      data = df)

        # Extract model coefficients with confidence intervals
        summary(model)
        confint(model, method = "Wald")

    Advanced Analytics and Machine Learning Integration

    Predictive Modeling Framework

        from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
        from sklearn.model_selection import cross_val_score, GridSearchCV
        from sklearn.preprocessing import StandardScaler, LabelEncoder
        from sklearn.metrics import mean_absolute_error, r2_score

        # Feature engineering pipeline
        def create_feature_matrix(df):
            features = df[['photo_count', 'avg_file_size_mb', 'total_volume_gb']].copy()
            # Polynomial features for capturing non-linear relationships
            features['photo_count_squared'] = features['photo_count'] ** 2
            features['size_volume_interaction'] = (features['avg_file_size_mb']
                                                   * features['total_volume_gb'])
            # Hardware configuration encoding
            le = LabelEncoder()
            features['hardware_encoded'] = le.fit_transform(df['hardware_config'])
            return features

        # Energy consumption prediction model
        X = create_feature_matrix(df)
        y = df['energy_consumption_kwh']

        # Hyperparameter optimization
        param_grid = {
            'n_estimators': [100, 200, 500],
            'max_depth': [10, 20, None],
            'min_samples_split': [2, 5, 10],
            'min_samples_leaf': [1, 2, 4]
        }
        rf_model = RandomForestRegressor(random_state=42)
        grid_search = GridSearchCV(rf_model, param_grid, cv=5,
                                   scoring='neg_mean_absolute_error')
        grid_search.fit(X, y)
        print(f"Best cross-validation score: {-grid_search.best_score_:.6f}")
        print(f"Optimal hyperparameters: {grid_search.best_params_}")

    Carbon Footprint Calculation Methodology

    Emission Factor Coefficients

    Carbon intensity calculations employ region-specific emission factors from the International Energy Agency (IEA) database:

        EMISSION_FACTORS = {
            'EU_AVERAGE': 0.276,      # kg CO₂/kWh (European Union average 2024)
            'FRANCE': 0.057,          # kg CO₂/kWh (nuclear-dominant grid)
            'GERMANY': 0.485,         # kg CO₂/kWh (coal transition period)
            'NORWAY': 0.013,          # kg CO₂/kWh (hydroelectric-dominant grid)
            'GLOBAL_AVERAGE': 0.475   # kg CO₂/kWh (global weighted average)
        }

        def calculate_carbon_footprint(energy_kwh: float, region: str = 'EU_AVERAGE') -> float:
            """
            Calculate CO₂-equivalent emissions using lifecycle assessment methodology.

            Args:
                energy_kwh: Energy consumption in kilowatt-hours
                region: Geographic region for emission factor selection

            Returns:
                CO₂-equivalent emissions in grams
            """
            emission_factor = EMISSION_FACTORS.get(region, EMISSION_FACTORS['GLOBAL_AVERAGE'])
            co2_kg = energy_kwh * emission_factor
            return co2_kg * 1000  # convert to grams

    Citation and Attribution

    This dataset is released under the Creative Commons Attribution 4.0 International (CC BY 4.0) license. …”
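    As a quick check of the `calculate_carbon_footprint` helper quoted in the snippet above, the following standalone sketch reproduces it and exercises it with the published coefficients. The factor values come from the snippet itself and are not independently verified here.

    ```python
    # Emission factors (kg CO2/kWh) as quoted in the dataset description
    EMISSION_FACTORS = {
        'EU_AVERAGE': 0.276,
        'FRANCE': 0.057,
        'GERMANY': 0.485,
        'NORWAY': 0.013,
        'GLOBAL_AVERAGE': 0.475,
    }

    def calculate_carbon_footprint(energy_kwh: float, region: str = 'EU_AVERAGE') -> float:
        """Return CO2-equivalent emissions in grams; unknown regions fall back to the global average."""
        factor = EMISSION_FACTORS.get(region, EMISSION_FACTORS['GLOBAL_AVERAGE'])
        return energy_kwh * factor * 1000  # kg -> g

    # 2.5 kWh of processing: nuclear-heavy French grid vs. EU average
    print(calculate_carbon_footprint(2.5, 'FRANCE'))   # ≈ 142.5 g CO2e
    print(calculate_carbon_footprint(2.5))             # ≈ 690.0 g CO2e
    ```

    Note the roughly 5x gap between grids, which is why the snippet's region parameter matters more than most algorithmic optimizations.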
  10. 130

    Table_3_G2P Provides an Integrative Environment for Multi-model genomic selection analysis to improve genotype-to-phenotype prediction.xlsx by Qian Wang (32718)

    Published 2023
    “…G2P works as an integrative environment offering comprehensive, unbiased evaluation analyses of the 16 GS models, which may be run in parallel on high-performance computing clusters. Based on the evaluation outcome, G2P performs auto-ensemble algorithms that not only can automatically select the most precise models but also can integrate prediction results from multiple models. …”
  11. 131

    Image_1_G2P Provides an Integrative Environment for Multi-model genomic selection analysis to improve genotype-to-phenotype prediction.jpeg by Qian Wang (32718)

    Published 2023
    “…G2P works as an integrative environment offering comprehensive, unbiased evaluation analyses of the 16 GS models, which may be run in parallel on high-performance computing clusters. Based on the evaluation outcome, G2P performs auto-ensemble algorithms that not only can automatically select the most precise models but also can integrate prediction results from multiple models. …”
  12. 132

    Image_2_G2P Provides an Integrative Environment for Multi-model genomic selection analysis to improve genotype-to-phenotype prediction.jpeg by Qian Wang (32718)

    Published 2023
    “…G2P works as an integrative environment offering comprehensive, unbiased evaluation analyses of the 16 GS models, which may be run in parallel on high-performance computing clusters. Based on the evaluation outcome, G2P performs auto-ensemble algorithms that not only can automatically select the most precise models but also can integrate prediction results from multiple models. …”
  13. 133

    DataSheet_1_G2P Provides an Integrative Environment for Multi-model genomic selection analysis to improve genotype-to-phenotype prediction.docx by Qian Wang (32718)

    Published 2023
    “…G2P works as an integrative environment offering comprehensive, unbiased evaluation analyses of the 16 GS models, which may be run in parallel on high-performance computing clusters. Based on the evaluation outcome, G2P performs auto-ensemble algorithms that not only can automatically select the most precise models but also can integrate prediction results from multiple models. …”
  14. 134

    Image_3_G2P Provides an Integrative Environment for Multi-model genomic selection analysis to improve genotype-to-phenotype prediction.jpeg by Qian Wang (32718)

    Published 2023
    “…G2P works as an integrative environment offering comprehensive, unbiased evaluation analyses of the 16 GS models, which may be run in parallel on high-performance computing clusters. Based on the evaluation outcome, G2P performs auto-ensemble algorithms that not only can automatically select the most precise models but also can integrate prediction results from multiple models. …”
  15. 135

    Table_4_G2P Provides an Integrative Environment for Multi-model genomic selection analysis to improve genotype-to-phenotype prediction.xlsx by Qian Wang (32718)

    Published 2023
    “…G2P works as an integrative environment offering comprehensive, unbiased evaluation analyses of the 16 GS models, which may be run in parallel on high-performance computing clusters. Based on the evaluation outcome, G2P performs auto-ensemble algorithms that not only can automatically select the most precise models but also can integrate prediction results from multiple models. …”
  16. 136

    Table_2_G2P Provides an Integrative Environment for Multi-model genomic selection analysis to improve genotype-to-phenotype prediction.xlsx by Qian Wang (32718)

    Published 2023
    “…G2P works as an integrative environment offering comprehensive, unbiased evaluation analyses of the 16 GS models, which may be run in parallel on high-performance computing clusters. Based on the evaluation outcome, G2P performs auto-ensemble algorithms that not only can automatically select the most precise models but also can integrate prediction results from multiple models. …”
  17. 137

    Table_1_G2P Provides an Integrative Environment for Multi-model genomic selection analysis to improve genotype-to-phenotype prediction.xlsx by Qian Wang (32718)

    Published 2023
    “…G2P works as an integrative environment offering comprehensive, unbiased evaluation analyses of the 16 GS models, which may be run in parallel on high-performance computing clusters. Based on the evaluation outcome, G2P performs auto-ensemble algorithms that not only can automatically select the most precise models but also can integrate prediction results from multiple models. …”
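    The auto-ensemble step described in the G2P snippets above — selecting the most precise models and integrating predictions from multiple models — can be sketched as follows. This is a generic illustration, not G2P's actual API: the function name, the MAE-based ranking, and the plain averaging are assumptions for exposition.

    ```python
    import numpy as np

    def auto_ensemble(predictions: dict, y_val: np.ndarray, top_k: int = 3) -> np.ndarray:
        """Rank candidate GS models on a validation set, then average the best ones.

        predictions: mapping of model name -> predicted phenotype vector
        y_val: observed phenotypes for the validation individuals
        """
        # Score each model by mean absolute error on the validation set
        errors = {name: float(np.mean(np.abs(pred - y_val)))
                  for name, pred in predictions.items()}
        # Keep the top_k most precise models
        best = sorted(errors, key=errors.get)[:top_k]
        # Integrate: unweighted average of the selected models' predictions
        return np.mean([predictions[name] for name in best], axis=0)
    ```

    A production implementation would typically use cross-validated errors and error-weighted averaging rather than a plain mean of the top models.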
  18. 138

    Code by Baoqiang Chen (21099509)

    Published 2025
    “…For the 5′ UTR library, we developed a Python script to extract sequences and Unique Molecular Identifiers (UMIs) from the FASTQ files. …”
  19. 139

    Core data by Baoqiang Chen (21099509)

    Published 2025
    “…For the 5′ UTR library, we developed a Python script to extract sequences and Unique Molecular Identifiers (UMIs) from the FASTQ files. …”
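    The FASTQ-handling step mentioned in the two entries above — extracting read sequences and UMIs — can be sketched as follows. This is not the authors' script: the 8-nt UMI length and its placement at the start of the read are illustrative assumptions; real library designs often put the UMI elsewhere (e.g. in the read header).

    ```python
    import gzip

    def extract_umis(fastq_path: str, umi_len: int = 8):
        """Yield (umi, sequence) pairs from a FASTQ file.

        Assumes the UMI is the first `umi_len` bases of each read.
        Handles both plain and gzip-compressed FASTQ.
        """
        opener = gzip.open if fastq_path.endswith('.gz') else open
        with opener(fastq_path, 'rt') as fh:
            while True:
                header = fh.readline()
                if not header:          # end of file
                    break
                seq = fh.readline().strip()
                fh.readline()           # '+' separator line
                fh.readline()           # quality line
                yield seq[:umi_len], seq[umi_len:]
    ```

    FASTQ records are four lines each (header, sequence, separator, qualities), so the loop consumes them in lockstep; a robust pipeline would also validate the header prefix and record lengths.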
  20. 140

    R‑BIND: An Interactive Database for Exploring and Developing RNA-Targeted Chemical Probes by Brittany S. Morgan (7554242)

    Published 2019
    “…These tools and resources can be used to design small molecule libraries, optimize lead ligands, or select targets, probes, assays, and control experiments. …”