Search alternatives:
code implementation » model implementation, time implementation, world implementation
code presented » model presented, side presented, order presented
-
121
-
122
Catalogue of compact radio sources in Messier-82 from e-MERLIN observations
Published 2025 “…Source finding was initially performed using the Python Blob Detection and Source Finder (PyBDSF). The dataset includes two tables detailing the properties of these 36 sources. Table 3.1: CASA `imfit` Source Catalogue. This table contains source parameters derived using the CASA task `imfit`. …”
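As a rough illustration of the source-finding step mentioned above, a minimal PyBDSF run might look like the sketch below; the file name and detection thresholds are assumptions, not the parameters used for the e-MERLIN data.

```python
# Minimal PyBDSF source-finding sketch. The input file name and the detection
# thresholds are illustrative placeholders, not the dataset's actual settings.
import bdsf

# Detect islands of emission and fit Gaussians to them.
img = bdsf.process_image(
    "M82_eMERLIN_image.fits",  # hypothetical input image
    thresh_isl=3.0,            # island detection threshold (sigma)
    thresh_pix=5.0,            # peak detection threshold (sigma)
)

# Write a source list that could then be refined with CASA's imfit task.
img.write_catalog(outfile="m82_sources.fits", catalog_type="srl",
                  format="fits", clobber=True)
```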
-
123
Code used to run simulations and generate figures.
Published 2025 “…The archive contains the Python code to reproduce the simulations presented in this paper. …”
-
124
py-rocket: A Docker image to promote cross-language (Python, R) collaboration across diverse user platforms for cloud computing in the earth sciences
Published 2025 “…A sturdy Docker stack relies on a solid base image. Here we present work on the py-rocket base image and illustrate how this enhances collaboration while providing familiar IDEs and environments to both R and Python users. …”
-
125
Testing Code for JcvPCA and JsvCRP.
Published 2025 “…This file contains the code that implements both metrics in Python and applies them to a simulated dataset. …”
-
126
Data and code for: Automatic fish scale analysis
Published 2025 “…Included in this repository: Raw data files: `comparison_all_scales.csv` – manually verified vs. automated measurements of 1095 coregonid scales; `Validation_data.csv` – scale data measured manually under a binocular microscope; `Parameter_correction_numeric.csv` – calibration data (scale radius vs. fish length/weight). Statistical results: `comparison_stats_core_variables.csv` – verification statistics (bias, relative error, limits of agreement); `Validation_statistics.csv` – summary statistics and model fits (manual vs. automated). Executable script (not GUI): `Algorithm.py` – core processing module for scale feature extraction. Note: The complete Coregon Analyzer application (incl. …”
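As a rough illustration of the verification statistics listed above (bias, relative error, limits of agreement), the sketch below computes Bland-Altman-style summaries from a comparison table; the column names are assumptions, not the dataset's actual headers.

```python
# Sketch of verification statistics for manual vs. automated measurements.
# Column names below are hypothetical stand-ins for the real CSV headers.
import pandas as pd

df = pd.read_csv("comparison_all_scales.csv")
manual = df["manual_radius"]        # assumed column name
automated = df["automated_radius"]  # assumed column name

diff = automated - manual
bias = diff.mean()                                  # mean difference
sd = diff.std(ddof=1)
loa_low, loa_high = bias - 1.96 * sd, bias + 1.96 * sd  # 95% limits of agreement
rel_error = (diff / manual).abs().mean() * 100          # mean relative error (%)

print(f"bias={bias:.3f}, LoA=[{loa_low:.3f}, {loa_high:.3f}], "
      f"relative error={rel_error:.1f}%")
```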
-
127
-
128
PTPC-UHT bounce
Published 2025 “…It contains the full Python implementation of the PTPC bounce model (`PTPC_UHT_bounce.py`) and representative outputs used to generate the figures in the paper. …”
-
129
Data for "A hollow fiber membrane permeance evaluation device demonstrating outside-in and inside-out performance differences"
Published 2025 “…Plot data derived from the above data sources; Python code to generate figures from the plot data. …”
-
130
Data sets and coding scripts for research on sensory processing in ADHD and ASD
Published 2025 “…The repository includes raw and matched datasets, analysis outputs, and the full Python code used for the matching pipeline. Ethics and approval: all procedures were approved by the University of Sheffield Department of Psychology Ethics Committee (Ref: 046476). …”
-
131
Code for High-quality Human Activity Intensity Maps in China from 2000-2020
Published 2025 “…Code, remote sensing images, and interpretation results of the samples for uncertainty analysis for "High-quality Human Activity Intensity Maps in China from 2000-2020". `Mapping_HAI.py`: We generated the HAI maps using ArcGIS 10.8, and the geoprocessing tasks were implemented in Python 2.7 with the ArcPy library (ArcGIS 10.8 + Python 2.7 environment). …”
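For orientation, the sketch below shows the kind of ArcPy raster geoprocessing an ArcGIS 10.8 + Python 2.7 workflow uses; the input layers and the simple weighted overlay are placeholders, not the actual HAI formula from the paper.

```python
# -*- coding: utf-8 -*-
# Illustrative ArcPy (Python 2.7) raster workflow. The workspace, input
# rasters, and the simple weighted overlay are hypothetical placeholders.
import arcpy
from arcpy.sa import Raster

arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\HAI\workspace"   # hypothetical workspace
arcpy.env.overwriteOutput = True

landuse = Raster("landuse_2020.tif")        # hypothetical input layers
nightlight = Raster("ntl_2020.tif")
population = Raster("pop_2020.tif")

# Simple weighted overlay as a stand-in for the HAI calculation.
hai = 0.4 * landuse + 0.3 * nightlight + 0.3 * population
hai.save("HAI_2020.tif")
```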
-
132
Predicting coding regions on unassembled reads, how hard can it be? - Genome Informatics 2024
Published 2024 “…The locations and directions of the predictions on the reads are then combined, using Python code, with the locations and directions of the reads on the genome to produce detailed results on correct, incorrect, and alternative starts and stops with respect to the genome-level annotation. …”
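A minimal sketch of the coordinate bookkeeping described here, assuming 0-based positions and fully aligned reads: a predicted start or stop on a read is projected back to genome coordinates from the read's mapped position and strand, after which it can be compared against the genome-level annotation.

```python
# Sketch of mapping a prediction's position on a read back to genome
# coordinates, given where the read maps and on which strand. The 0-based,
# fully-aligned-read assumptions and argument names are illustrative.
def read_to_genome(pred_pos, pred_strand, read_start, read_len, read_strand):
    """Return (genome_pos, genome_strand) for a 0-based position on the read."""
    if read_strand == "+":
        genome_pos = read_start + pred_pos
        genome_strand = pred_strand
    else:  # read mapped to the reverse strand: flip position and direction
        genome_pos = read_start + (read_len - 1 - pred_pos)
        genome_strand = "-" if pred_strand == "+" else "+"
    return genome_pos, genome_strand

# A predicted coding start 10 bp into a reverse-mapped 150 bp read:
print(read_to_genome(10, "+", 5000, 150, "-"))  # -> (5139, '-')
```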
-
133
Code and data for evaluating oil spill amount from text-form incident information
Published 2025 “…The code is written in Python and run using JupyterLab and Anaconda. …”
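As a hedged illustration of pulling spill amounts out of free-text incident reports, the sketch below uses a simple regular expression and a small unit table; the pattern, the recognised units, and the conversion factors are assumptions, not the published method.

```python
# Illustrative extraction of a spill volume from free text. The regex, the
# unit list, and the litre conversions are assumptions for this sketch only.
import re

UNIT_TO_LITRES = {"litres": 1.0, "liters": 1.0, "l": 1.0,
                  "m3": 1000.0, "barrels": 159.0}

def extract_volume_litres(text):
    """Return the first reported amount converted to litres, or None."""
    m = re.search(r"(\d+(?:\.\d+)?)\s*(litres|liters|barrels|m3|l)\b",
                  text.lower())
    if not m:
        return None
    return float(m.group(1)) * UNIT_TO_LITRES[m.group(2)]

print(extract_volume_litres("About 300 barrels of crude oil were released."))
# -> 47700.0
```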
-
134
The codes and data for "Lane Extraction from Trajectories at Road Intersections Based on Graph Transformer Network"
Published 2024 “…Each lane includes 'geometry' and 'inter_id' attributes. Codes: this repository contains the following Python scripts: `data_processing.py` – implementation of data processing and feature extraction. …”
-
135
MATH_code: False Data Injection Attack Detection in Smart Grids based on Reservoir Computing
Published 2025 “…`3_literature_analysis_and_mapping.ipynb` – contains the Python code used to execute the systematic mapping study (SMS), including automated processing of literature data and thematic clustering. …”
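For context, thematic clustering of literature records could look like the sketch below, using TF-IDF and k-means as stand-ins; the input file, column name, and cluster count are assumptions rather than what the notebook actually does.

```python
# Illustrative thematic clustering of abstracts with TF-IDF + k-means.
# The file name, column name, and number of clusters are hypothetical.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

papers = pd.read_csv("literature_records.csv")   # hypothetical input
texts = papers["abstract"].fillna("")            # assumed column name

X = TfidfVectorizer(stop_words="english", max_features=5000).fit_transform(texts)
papers["theme"] = KMeans(n_clusters=6, random_state=0, n_init=10).fit_predict(X)

print(papers.groupby("theme").size())            # papers per thematic cluster
```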
-
136
Monte Carlo Simulation Code for Evaluating Cognitive Biases in Penalty Shootouts Using ABAB and ABBA Formats
Published 2024 “…This Python code implements a Monte Carlo simulation to evaluate the impact of cognitive biases on penalty shootouts under two formats: ABAB (alternating shots) and ABBA (similar to the tennis tiebreak format). …”
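A minimal sketch of such a simulation is shown below; the bias model (a fixed drop in conversion probability when the kicker's team is trailing) and the probabilities are assumptions for illustration, not the published model. Running it prints team A's win rate under each kicking order, which is the comparison the ABAB vs. ABBA question turns on.

```python
# Illustrative Monte Carlo comparison of ABAB vs ABBA penalty-shootout orders.
# The pressure bias and conversion probability are assumed values.
import random

def shootout(order, p=0.75, pressure_drop=0.10):
    """Simulate one shootout; order is 'ABAB' or 'ABBA'. Returns the winner."""
    score = {"A": 0, "B": 0}
    rnd = 0
    while True:
        rnd += 1
        if order == "ABAB" or rnd % 2 == 1:
            kickers = ("A", "B")   # A kicks first this round
        else:
            kickers = ("B", "A")   # ABBA swaps the order in even rounds
        for team in kickers:
            other = "B" if team == "A" else "A"
            # Assumed bias: trailing kickers convert less often.
            p_kick = p - (pressure_drop if score[team] < score[other] else 0)
            if random.random() < p_kick:
                score[team] += 1
        # Five regular rounds, then effectively sudden death until scores differ.
        if rnd >= 5 and score["A"] != score["B"]:
            return max(score, key=score.get)

random.seed(0)
n = 100_000
for order in ("ABAB", "ABBA"):
    wins_a = sum(shootout(order) == "A" for _ in range(n))
    print(order, "team A win rate:", wins_a / n)
```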
-
137
Code for the HIVE Appendicitis prediction model: repository with LLM_data_extractor_optuna for automated feature extraction
Published 2025 “…LLM Data Extractor optuna repo is a Python framework for generating and evaluating clinical text predictions using large language models (LLMs) like `qwen2.5`. …”
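As a rough sketch of how Optuna typically drives such a framework, the example below tunes generation settings against an evaluation score; `evaluate_extraction()` is a hypothetical stand-in for the repository's own prediction/evaluation pipeline, and the parameter names are assumptions.

```python
# Sketch of Optuna tuning LLM generation settings against an evaluation score.
# evaluate_extraction() is a hypothetical placeholder, not the repo's code.
import optuna

def evaluate_extraction(temperature, max_tokens):
    # Placeholder: run the LLM (e.g. qwen2.5) on a validation set and return
    # an extraction-quality score such as F1. Not implemented here.
    return 1.0 - abs(temperature - 0.3) - 0.0001 * max_tokens

def objective(trial):
    temperature = trial.suggest_float("temperature", 0.0, 1.0)
    max_tokens = trial.suggest_int("max_tokens", 64, 1024)
    return evaluate_extraction(temperature, max_tokens)

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=25)
print(study.best_params)
```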
-
138
Code and derived data for "Training Sample Location Matters: Accuracy Impacts in LULC Classification"
Published 2025 “…Python/Kaggle notebooks (`.ipynb`): reproducibility pipeline for accuracy metrics and statistical analysis. …”
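For orientation, the accuracy metrics such a pipeline typically reports (confusion matrix, overall accuracy, Cohen's kappa) can be computed as in the sketch below; the file and column names are hypothetical.

```python
# Sketch of standard LULC accuracy assessment from a validation table.
# File and column names are assumed for illustration.
import pandas as pd
from sklearn.metrics import confusion_matrix, accuracy_score, cohen_kappa_score

pts = pd.read_csv("validation_points.csv")                    # hypothetical file
y_true, y_pred = pts["reference_class"], pts["mapped_class"]  # assumed columns

print(confusion_matrix(y_true, y_pred))
print("overall accuracy:", accuracy_score(y_true, y_pred))
print("kappa:", cohen_kappa_score(y_true, y_pred))
```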
-
139
Use case codes of the DDS3 and DDS4 datasets for bacillus segmentation and tuberculosis diagnosis, respectively
Published 2025 “…The code was developed in the Google Colaboratory environment, using Python version 3.7.13, with TensorFlow 2.8.2. …”
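As a minimal illustration of a TensorFlow 2.x binary segmentation setup of this kind, see the sketch below; the architecture and input size are assumptions, not the published code.

```python
# Minimal TensorFlow/Keras sketch of a binary segmentation model. The layer
# stack and the 256x256 input size are illustrative placeholders.
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Input(shape=(256, 256, 3)),
    layers.Conv2D(16, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(),                       # downsample
    layers.Conv2D(32, 3, padding="same", activation="relu"),
    layers.UpSampling2D(),                       # back to input resolution
    layers.Conv2D(1, 1, activation="sigmoid"),   # per-pixel bacillus mask
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```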
-
140
Quetzal: Comprehensive Peptide Fragmentation Annotation and Visualization
Published 2025 “…We describe how Quetzal annotates spectra using the new Human Proteome Organization (HUPO) Proteomics Standards Initiative (PSI) mzPAF standard for fragment ion peak annotation, including the Python-based code, a web-service endpoint that provides annotation services, and a web-based application for annotating spectra and producing publication-quality figures. …”