Search alternatives:
code » core (expand search)
Showing results 161 - 171 of 171 for '((python tool) OR (python code)) predicted' (query time: 0.22s)
161. Exploring Multiverse Dynamics Through the HEX Formula: Applications in Quantum Entanglement, Dark Matter, and Wormhole Stability (Simulations Resources) by Hector Ortiz (18802150)
     Published in 2024
     "…This repository provides Python simulations designed to illustrate the HEX formula's predictions for multiverse interactions. …"
162. mdata_pbmc.h5mu by Vadim Chechekhin (20569496)
     Published in 2025
     "…scParadise is a fast, tunable, high-throughput automatic cell type annotation and modality prediction Python framework. scParadise includes three sets of tools: 1) scAdam - fast multi-task multi-class cell type annotation.…"
163. Scripts_scParadise_article by Vadim Chechekhin (20569496)
     Published in 2025
     "…To address these challenges, we introduce scParadise, an open-source and flexible Python package comprising three integrated tools: scAdam for tissue-specific, multi-task cell type annotation; scEve for cross-tissue, cross-species modality imputation; and scNoah for benchmarking cell type annotation and modality imputation methods. …"
164. Mushroom Classification Using Support Vector Machines (SVM) Focusing on Cap Features by Gabriel Minato (22462099)
     Published in 2025
     "…The predictor variables were 20 pre-processed features of the mushroom cap, and the target variable was the mushroom's class (edible 'e' or poisonous 'p'). The tools used were the Python programming language and its libraries, primarily Scikit-learn to build and optimize the SVM classifier. …"
165. Globus Compute: Federated FaaS for Integrated Research Solutions by eRNZ Admin (6438486)
     Published in 2025
     "…HPC enables researchers to perform simulations, modeling, and analysis, which are critical to predicting outcomes, guiding experiments, and developing new technologies [1]. …"
166. AI for imaging plant stress in invasive species (dataset from the article https://doi.org/10.1093/aob/mcaf043) by Erola Fenollosa (20977421)
     Published in 2025
     "…The dataframe of extracted colour features from all leaf images and lab variables (ecophysiological predictors and variables to be predicted); a set of scripts used for image pre-processing, feature extraction, data analysis, visualization and machine-learning algorithm training, using ImageJ, R and Python.…"
167. Fast, FAIR, and Scalable: Managing Big Data in HPC with Zarr by Alfonso Ladino (21447002)
     Published in 2025
     "…(NEXRAD), using open-source tools from the Python ecosystem such as Xarray, Xradar, and Dask to enable efficient parallel processing and scalable analysis. …"
168. Landscape Change Monitoring System (LCMS) Conterminous United States Cause of Change (Image Service) by U.S. Forest Service (17476914)
     Published in 2025
     "…Scikit-learn: Machine Learning in Python. In Journal of Machine Learning Research (Vol. 12, pp. 2825-2830). Pengra, B. …"
169. Core data by Baoqiang Chen (21099509)
     Published in 2025
     "…Prediction and Design of 5′ UTRs: We developed a convolutional neural network (CNN) model to predict 5′ UTRs. …"
170. Nucleotide analogue tolerant synthetic RdRp mutant construct for Surveillance and Therapeutic Resistance Monitoring in SARS-CoV-2 by Tahir Bhatti (20961974)
     Published in 2025
     "…Biopython: Freely available Python tools for computational molecular biology and bioinformatics. …"
171. An Ecological Benchmark of Photo Editing Software: A Comparative Analysis of Local vs. Cloud Workflows by Pierre-Alexis DELAROCHE (22092572)
     Published in 2025
    "…Reproducibility Framework Container Orchestration # Kubernetes deployment manifest for reproducible environment apiVersion: apps/v1 kind: Deployment metadata: name: energy-benchmark-pod spec: replicas: 1 selector: matchLabels: app: benchmark-runner template: metadata: labels: app: benchmark-runner spec: nodeSelector: hardware.profile: "high-performance" containers: - name: benchmark-container image: albumforge/energy-benchmark:v2.1.3 resources: requests: cpu: "8000m" memory: "16Gi" nvidia.com/gpu: 1 limits: cpu: "16000m" memory: "32Gi" env: - name: MEASUREMENT_PRECISION value: "high" - name: POWER_SAMPLING_RATE value: "1000" # 1kHz sampling Dependency Management FROM ubuntu:22.04-cuda11.8-devel RUN apt-get update && apt-get install -y \ perf-tools \ powertop \ intel-gpu-tools \ nvidia-smi \ cpupower \ msr-tools \ && rm -rf /var/lib/apt/lists/* COPY requirements.txt /opt/ RUN pip install -r /opt/requirements.txt Usage Examples and API Documentation Python Data Analysis Interface import pandas as pd import numpy as np from scipy import stats import matplotlib.pyplot as plt import seaborn as sns # Load dataset with optimized dtypes for memory efficiency df = pd.read_csv('ecological_benchmark_dataset.csv', dtype={'hardware_config': 'category', 'test_type': 'category'}) # Compute energy efficiency metrics df['energy_per_photo'] = df['energy_consumption_kwh'] / df['photo_count'] df['co2_per_gigabyte'] = df['co2_equivalent_g'] / df['total_volume_gb'] # Statistical analysis with confidence intervals local_energy = df[df['test_type'] == 'local_processing']['energy_consumption_kwh'] cloud_energy = df[df['test_type'] == 'cloud_processing']['energy_consumption_kwh'] t_stat, p_value = stats.ttest_ind(local_energy, cloud_energy) effect_size = (cloud_energy.mean() - local_energy.mean()) / np.sqrt((cloud_energy.var() + local_energy.var()) / 2) print(f"Statistical significance: p = {p_value:.2e}") print(f"Cohen's d effect size: {effect_size:.3f}") R Statistical Computing Environment library(tidyverse) library(lme4) # Linear mixed-effects models library(ggplot2) library(corrplot) # Load and preprocess data df <- read_csv("ecological_benchmark_dataset.csv") %>% mutate( test_type = factor(test_type), hardware_config = factor(hardware_config), log_energy = log(energy_consumption_kwh), efficiency_ratio = energy_consumption_kwh / processing_time_sec ) # Mixed-effects regression model accounting for hardware heterogeneity model <- lmer(log_energy ~ test_type + log(photo_count) + (1|hardware_config), data = df) # Extract model coefficients with confidence intervals summary(model) confint(model, method = "Wald") Advanced Analytics and Machine Learning Integration Predictive Modeling Framework from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor from sklearn.model_selection import cross_val_score, GridSearchCV from sklearn.preprocessing import StandardScaler, LabelEncoder from sklearn.metrics import mean_absolute_error, r2_score # Feature engineering pipeline def create_feature_matrix(df): features = df[['photo_count', 'avg_file_size_mb', 'total_volume_gb']].copy() # Polynomial features for capturing non-linear relationships features['photo_count_squared'] = features['photo_count'] ** 2 features['size_volume_interaction'] = features['avg_file_size_mb'] * features['total_volume_gb'] # Hardware configuration encoding le = LabelEncoder() features['hardware_encoded'] = le.fit_transform(df['hardware_config']) return features # Energy consumption prediction model X = 
create_feature_matrix(df) y = df['energy_consumption_kwh'] # Hyperparameter optimization param_grid = { 'n_estimators': [100, 200, 500], 'max_depth': [10, 20, None], 'min_samples_split': [2, 5, 10], 'min_samples_leaf': [1, 2, 4] } rf_model = RandomForestRegressor(random_state=42) grid_search = GridSearchCV(rf_model, param_grid, cv=5, scoring='neg_mean_absolute_error') grid_search.fit(X, y) print(f"Best cross-validation score: {-grid_search.best_score_:.6f}") print(f"Optimal hyperparameters: {grid_search.best_params_}") Carbon Footprint Calculation Methodology Emission Factor Coefficients Carbon intensity calculations employ region-specific emission factors from the International Energy Agency (IEA) database: EMISSION_FACTORS = { 'EU_AVERAGE': 0.276, # kg CO₂/kWh (European Union average 2024) 'FRANCE': 0.057, # kg CO₂/kWh (Nuclear-dominant grid) 'GERMANY': 0.485, # kg CO₂/kWh (Coal transition period) 'NORWAY': 0.013, # kg CO₂/kWh (Hydroelectric dominant) 'GLOBAL_AVERAGE': 0.475 # kg CO₂/kWh (Global weighted average) } def calculate_carbon_footprint(energy_kwh: float, region: str = 'EU_AVERAGE') -> float: """ Calculate CO₂ equivalent emissions using lifecycle assessment methodology Args: energy_kwh: Energy consumption in kilowatt-hours region: Geographic region for emission factor selection Returns: CO₂ equivalent emissions in grams """ emission_factor = EMISSION_FACTORS.get(region, EMISSION_FACTORS['GLOBAL_AVERAGE']) co2_kg = energy_kwh * emission_factor return co2_kg * 1000 # Convert to grams Citation and Attribution This dataset is released under Creative Commons Attribution 4.0 International (CC BY 4.0) license. …"
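A quick usage check of the emission-factor helper above; the expected values follow directly from the table (1.5 kWh × 0.057 kg/kWh = 0.0855 kg = 85.5 g):

```python
# 1.5 kWh on the French grid -> 85.5 g CO₂-eq
print(calculate_carbon_footprint(1.5, region='FRANCE'))   # 85.5
# Unknown regions fall back to the global average factor (0.475 kg CO₂/kWh)
print(calculate_carbon_footprint(1.0, region='MARS'))     # 475.0
```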