-
121
Code and derived data for “Training Sample Location Matters: Accuracy Impacts in LULC Classification”
Published 2025 “…Python/Kaggle notebooks (.ipynb): reproducibility pipeline for accuracy metrics and statistical analysis. …”
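A minimal sketch of the kind of accuracy-metric step such a reproducibility notebook might contain, assuming reference and predicted LULC labels stored in a CSV; the file name and column names below are illustrative assumptions, not taken from the published notebooks:

    # Hypothetical sketch: accuracy metrics for a LULC classification run.
    # File name and column names ("reference", "predicted") are assumed.
    import pandas as pd
    from sklearn.metrics import confusion_matrix, accuracy_score, cohen_kappa_score

    samples = pd.read_csv("validation_samples.csv")   # assumed validation-sample table
    y_true = samples["reference"]                      # reference LULC class per sample
    y_pred = samples["predicted"]                      # classifier output per sample

    print("Confusion matrix:\n", confusion_matrix(y_true, y_pred))
    print("Overall accuracy:", accuracy_score(y_true, y_pred))
    print("Cohen's kappa:", cohen_kappa_score(y_true, y_pred))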
-
122
<b>Use case codes of the DDS3 and DDS4 datasets for bacillus segmentation and tuberculosis diagnosis, respectively</b>
Published 2025 “…The code was developed in the Google Colaboratory environment, using Python version 3.7.13, with TensorFlow 2.8.2. …”
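A minimal sketch of running a trained segmentation model under TensorFlow 2.x, assuming a saved Keras checkpoint and a smear-microscopy image; the file names, input size, and 0.5 threshold are illustrative assumptions, not details from the DDS3/DDS4 code:

    # Hypothetical sketch: inference with a trained bacillus segmentation model.
    # Checkpoint name, image size, and threshold are assumptions, not the published code.
    import numpy as np
    import tensorflow as tf

    model = tf.keras.models.load_model("bacillus_segmentation.h5")   # assumed checkpoint

    img = tf.io.read_file("smear_field.png")                         # assumed input image
    img = tf.image.decode_png(img, channels=3)
    img = tf.image.resize(img, (256, 256)) / 255.0                   # assumed input size

    mask = model.predict(img[tf.newaxis, ...])[0]                    # per-pixel probabilities
    binary_mask = (mask > 0.5).astype(np.uint8)                      # illustrative threshold
    print("Predicted bacillus pixels:", int(binary_mask.sum()))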
-
123
JASPEX model
Published 2025 “…We wrote new Python code to rework the map and generate the coloured map of Southwest Nigeria from the map of Nigeria (which represented the region of our study). …”
-
124
Datasets To EVAL.
Published 2025 “…We evaluated our proposed system on five educational datasets—AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA—which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10–15 percentage points. …”
-
125
Statistical significance test results.
Published 2025 “…We evaluated our proposed system on five educational datasets—AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA—which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10–15 percentage points. …”
-
126
How RAG works.
Published 2025 “…We evaluated our proposed system on five educational datasets—AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA—which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10–15 percentage points. …”
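A minimal sketch of the retrieval step in a RAG pipeline, using a generic TF-IDF retriever and a toy prompt template as stand-ins; this is not the authors' implementation:

    # Hypothetical sketch of RAG retrieval (toy corpus and prompt, not the authors' code).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    corpus = [
        "Photosynthesis converts light energy into chemical energy.",
        "Newton's second law relates force, mass, and acceleration.",
    ]                                                   # stand-in knowledge base

    question = "What does photosynthesis convert light energy into?"

    vec = TfidfVectorizer().fit(corpus + [question])
    scores = cosine_similarity(vec.transform([question]), vec.transform(corpus))[0]
    context = corpus[scores.argmax()]                   # best-matching passage

    prompt = f"Context: {context}\nQuestion: {question}\nAnswer:"
    print(prompt)   # this prompt would then be sent to the LLM

The retrieved passage is prepended to the question so the LLM answers from supplied context rather than from parametric memory alone.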
-
127
OpenBookQA experimental results.
Published 2025 “…We evaluated our proposed system on five educational datasets—AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA—which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10–15 percentage points. …”
-
128
AI2_ARC experimental results.
Published 2025 “…We evaluated our proposed system on five educational datasets—AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA—which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10–15 percentage points. …”
-
129
TQA experimental results.
Published 2025 “…We evaluated our proposed system on five educational datasets—AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA—which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10–15 percentage points. …”
-
130
E-EVAL experimental results.
Published 2025 “…We evaluated our proposed system on five educational datasets—AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA—which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10–15 percentage points. …”
-
131
TQA Accuracy Comparison Chart on different LLMs.
Published 2025 “…We evaluated our proposed system on five educational datasets—AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA—which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10–15 percentage points. …”
-
132
ScienceQA experimental results.
Published 2025 “…We evaluated our proposed system on five educational datasets—AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA—which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10–15 percentage points. …”
-
133
Code interpreter with LLM.
Published 2025 “…We evaluated our proposed system on five educational datasets—AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA—which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10–15 percentage points. …”
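A minimal sketch of the code-interpreter loop described above: a model-generated snippet is executed and its result fed back into the next prompt. The restricted exec below is illustrative only and is not the authors' sandbox; a real system needs proper isolation:

    # Hypothetical sketch: executing model-generated Python and returning the result
    # to the LLM. Toy, unsafe-by-design example; not the authors' implementation.
    llm_generated_code = "result = (3 * 7 + 4) / 5"    # pretend the LLM produced this

    namespace = {}
    exec(llm_generated_code, {"__builtins__": {}}, namespace)   # toy restricted execution

    feedback = f"The code interpreter returned: {namespace['result']}"
    print(feedback)   # this string would be appended to the next LLM prompt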
-
134
Data and some code used in the paper: “Expansion quantization network: A micro-emotion detection and annotation framework”
Published 2025 “…Attached is the micro-emotion annotation code based on PyTorch, which can be used to annotate the GoEmotions dataset yourself, or to predict emotion classifications based on the annotation results. …”
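A minimal, self-contained PyTorch sketch of multi-label emotion scoring in the spirit of GoEmotions-style annotation; the toy vocabulary, label set, and untrained model are assumptions, and the attached code will differ:

    # Hypothetical sketch: multi-label emotion scoring with PyTorch (untrained toy model;
    # vocabulary, labels, and architecture are assumptions, not the attached code).
    import torch
    import torch.nn as nn

    labels = ["joy", "sadness", "anger", "surprise"]   # stand-in for GoEmotions labels
    vocab = {"i": 0, "am": 1, "so": 2, "happy": 3, "today": 4}

    class TinyEmotionModel(nn.Module):
        def __init__(self, vocab_size, num_labels, dim=16):
            super().__init__()
            self.emb = nn.EmbeddingBag(vocab_size, dim)   # averages token embeddings
            self.out = nn.Linear(dim, num_labels)

        def forward(self, token_ids):
            return self.out(self.emb(token_ids))          # one logit per emotion label

    model = TinyEmotionModel(len(vocab), len(labels))
    tokens = torch.tensor([[vocab[w] for w in "i am so happy today".split()]])
    probs = torch.sigmoid(model(tokens))[0]               # independent per-label scores
    for name, p in zip(labels, probs.tolist()):
        print(f"{name}: {p:.2f}")

Sigmoid (rather than softmax) outputs reflect that a text can carry several emotions at once, which is the usual multi-label setup for GoEmotions-style data.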
-
135
BaNDyT: Bayesian Network Modeling of Molecular Dynamics Trajectories
Published 2025 “…We describe here the software’s uses, the methods associated with it, and a comprehensive Python interface to the underlying generalist BNM code. …”
-
136
BaNDyT: Bayesian Network Modeling of Molecular Dynamics Trajectories
Published 2025 “…We describe here the software’s uses, the methods associated with it, and a comprehensive Python interface to the underlying generalist BNM code. …”
-
137
BaNDyT: Bayesian Network Modeling of Molecular Dynamics Trajectories
Published 2025 “…We describe here the software’s uses, the methods associated with it, and a comprehensive Python interface to the underlying generalist BNM code. …”
-
138
Advancing Solar Magnetic Field Modeling
Published 2025 “…We developed significantly faster Python code built upon a functional optimization framework previously proposed and implemented by our team. …”
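A minimal sketch of a functional-optimization step with SciPy, minimizing a placeholder discretized energy; this is not the authors' solver, and the functional below stands in for, rather than models, the magnetic field problem:

    # Hypothetical sketch: minimizing a discretized quadratic "energy" functional.
    # The functional is a placeholder, not the solar magnetic field model itself.
    import numpy as np
    from scipy.optimize import minimize

    target = np.linspace(0.0, 1.0, 50)                 # stand-in boundary data

    def energy(x):
        smoothness = np.sum(np.diff(x) ** 2)           # penalize rough solutions
        data_fit = np.sum((x - target) ** 2)           # stay close to the data
        return smoothness + data_fit

    result = minimize(energy, x0=np.zeros_like(target), method="L-BFGS-B")
    print("Converged:", result.success, "final energy:", round(result.fun, 4))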
-
139
High-Throughput Mass Spectral Library Searching of Small Molecules in R with NIST MSPepSearch
Published 2025 “…Despite the availability of numerous library search algorithms, those developed by NIST and implemented in MS Search remain predominant, partly because commercial databases (e.g., NIST, Wiley) are distributed in proprietary formats inaccessible to custom code. …”
-
140
Comparison data 7 for Lamprologus ocellatus.
Published 2024 “…TIBA accepts data outputs from popular logging software and is implemented in Python and JavaScript, with all current browsers supported. …”