41. PTPC-UHT bounce
Published 2025: “…It contains the full Python implementation of the PTPC bounce model (PTPC_UHT_bounce.py) and representative outputs used to generate the figures in the paper. …”

44. Datasets To EVAL.
Published 2025: “…We evaluated our proposed system on five educational datasets—AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA—which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10–15 percentage points. …”

45. Statistical significance test results.
Published 2025: “…We evaluated our proposed system on five educational datasets—AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA—which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10–15 percentage points. …”
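
The record above holds significance-test results, but the snippet does not say which test was used. For paired per-question correctness from two systems on the same benchmark, a common choice is McNemar's exact test; here is a minimal sketch under that assumption (the correctness arrays are made-up illustrations, not data from the paper):

```python
import numpy as np
from scipy.stats import binomtest

# Hypothetical per-question correctness (1 = correct) for a baseline LLM
# and the RAG + Code Interpreter system on the same questions.
baseline = np.array([1, 0, 1, 0, 0, 1, 0, 0, 1, 0])
rag_ci   = np.array([1, 1, 1, 0, 1, 1, 0, 1, 1, 1])

# McNemar's exact test uses only the discordant pairs:
b = int(np.sum((baseline == 1) & (rag_ci == 0)))  # baseline right, system wrong
c = int(np.sum((baseline == 0) & (rag_ci == 1)))  # system right, baseline wrong

# Under H0 (no accuracy difference) the discordant pairs split 50/50.
result = binomtest(b, b + c, 0.5)
print(f"discordant pairs: b={b}, c={c}, two-sided p = {result.pvalue:.4f}")
```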

46. How RAG works.
Published 2025: “…We evaluated our proposed system on five educational datasets—AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA—which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10–15 percentage points. …”
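
As a rough illustration of the retrieve-then-generate loop behind such a system (not the authors' pipeline: the toy bag-of-words retriever and the stubbed generate_answer below are placeholders for a real embedding model and LLM call):

```python
import numpy as np

def bow_vector(text, vocab):
    """Toy bag-of-words embedding; a real system would use a learned encoder."""
    v = np.zeros(len(vocab))
    for w in text.lower().split():
        if w in vocab:
            v[vocab[w]] += 1.0
    return v

def retrieve(question, passages, k=2):
    """Rank passages by cosine similarity to the question; return the top k."""
    vocab = {w: i for i, w in enumerate(
        sorted({w for p in passages for w in p.lower().split()}))}
    q = bow_vector(question, vocab)
    scores = []
    for p in passages:
        d = bow_vector(p, vocab)
        denom = np.linalg.norm(q) * np.linalg.norm(d)
        scores.append(q @ d / denom if denom else 0.0)
    top = np.argsort(scores)[::-1][:k]
    return [passages[i] for i in top]

def generate_answer(question, context):
    # Placeholder: a real system would send this prompt to an LLM, optionally
    # with a code-interpreter tool for any required calculations.
    return f"PROMPT:\nContext:\n{context}\n\nQuestion: {question}\nAnswer:"

passages = ["Photosynthesis converts light energy into chemical energy.",
            "The mitochondrion is the powerhouse of the cell.",
            "Chlorophyll absorbs red and blue light."]
ctx = "\n".join(retrieve("What does photosynthesis convert?", passages))
print(generate_answer("What does photosynthesis convert?", ctx))
```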

47. OpenBookQA experimental results.
Published 2025: “…We evaluated our proposed system on five educational datasets—AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA—which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10–15 percentage points. …”

48. AI2_ARC experimental results.
Published 2025: “…We evaluated our proposed system on five educational datasets—AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA—which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10–15 percentage points. …”

49. TQA experimental results.
Published 2025: “…We evaluated our proposed system on five educational datasets—AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA—which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10–15 percentage points. …”

50. E-EVAL experimental results.
Published 2025: “…We evaluated our proposed system on five educational datasets—AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA—which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10–15 percentage points. …”

51. TQA accuracy comparison chart on different LLMs.
Published 2025: “…We evaluated our proposed system on five educational datasets—AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA—which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10–15 percentage points. …”

52. ScienceQA experimental results.
Published 2025: “…We evaluated our proposed system on five educational datasets—AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA—which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10–15 percentage points. …”

53. Finite-difference Python code to solve the CH equation with a source term, and a COMSOL routine to solve the Brusselator equation in radial domains.
Published 2025: “…* Cahn-Hilliard simulations * Finite-difference code implementing the modified Cahn-Hilliard equation with a forward Euler scheme, with the option to parallelize the solver using the numba Python library. …”

54. EFGs: A Complete and Accurate Implementation of Ertl’s Functional Group Detection Algorithm in RDKit
Published 2025: “…In this paper, a new RDKit/Python implementation of the algorithm is described that is both accurate and complete. …”
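
To make the idea concrete, here is a deliberately simplified sketch of the marking-and-merging scheme at the heart of Ertl's algorithm, using only standard RDKit calls. It applies just two marking rules (heteroatoms, and carbons in non-aromatic multiple bonds) and skips the acetal and three-membered-ring special cases, so it illustrates the approach rather than the complete EFGs implementation the paper describes:

```python
from rdkit import Chem

def crude_functional_groups(smiles):
    """Toy Ertl-style sketch: mark atoms, then merge bonded marked atoms."""
    mol = Chem.MolFromSmiles(smiles)
    # Rule 1: mark all heteroatoms (anything that is not C or H).
    marked = {a.GetIdx() for a in mol.GetAtoms()
              if a.GetAtomicNum() not in (1, 6)}
    # Rule 2: mark both ends of non-aromatic double and triple bonds.
    for b in mol.GetBonds():
        if not b.GetIsAromatic() and b.GetBondType() in (
                Chem.BondType.DOUBLE, Chem.BondType.TRIPLE):
            marked |= {b.GetBeginAtomIdx(), b.GetEndAtomIdx()}
    # Merge marked atoms that are bonded to each other into groups.
    groups, seen = [], set()
    for idx in marked:
        if idx in seen:
            continue
        stack, comp = [idx], set()
        while stack:
            a = stack.pop()
            if a in comp:
                continue
            comp.add(a)
            for nb in mol.GetAtomWithIdx(a).GetNeighbors():
                if nb.GetIdx() in marked and nb.GetIdx() not in comp:
                    stack.append(nb.GetIdx())
        seen |= comp
        groups.append(sorted(comp))
    return groups

print(crude_functional_groups("CC(=O)OC"))  # ester: expect one merged group
```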

55. 2D Orthogonal Planes Split: Python and MATLAB code | Source Images for Figures
Published 2025: “…The output files generated by the code include results from both Python and MATLAB implementations; these output images are provided as validation, demonstrating that both implementations produce matching results. …”
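
A cross-implementation check like the one described can be as simple as a pixel-wise comparison of the saved outputs; a sketch assuming two such images (the file names below are hypothetical placeholders, not files from the record):

```python
import numpy as np
from imageio.v3 import imread

# Hypothetical output files from the two implementations.
py_img = imread("result_python.png").astype(np.float64)
ml_img = imread("result_matlab.png").astype(np.float64)

assert py_img.shape == ml_img.shape, "outputs must have identical dimensions"
diff = np.abs(py_img - ml_img)
print(f"max abs pixel difference: {diff.max():.3f}")
# Tolerate one gray level of rounding difference between the two toolchains.
print("implementations match:", bool(np.allclose(py_img, ml_img, atol=1.0)))
```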