-
1
Resolving Harvesting Errors in Institutional Repository Migration: Using Python Scripts with VS Code and LLM Integration.
Published 2025 “…Therefore, we decided to create a dedicated Python program using Large Language Model (LLM)-assisted coding.…”
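The record does not include the script itself; as a point of reference only, a minimal requests-based OAI-PMH harvester with explicit error handling, of the kind such a migration would need, might look like the sketch below (the endpoint and names are assumptions, not the authors' code):

```python
# Minimal sketch (not the published script): fetch one ListRecords page
# from an OAI-PMH endpoint and surface HTTP, XML, and protocol errors.
import xml.etree.ElementTree as ET
import requests

OAI_NS = "{http://www.openarchives.org/OAI/2.0/}"

def harvest_page(base_url, resumption_token=None):
    params = {"verb": "ListRecords"}
    if resumption_token:
        params["resumptionToken"] = resumption_token
    else:
        params["metadataPrefix"] = "oai_dc"
    resp = requests.get(base_url, params=params, timeout=30)
    resp.raise_for_status()                 # HTTP-level failures
    root = ET.fromstring(resp.content)      # raises ParseError on bad XML
    error = root.find(f"{OAI_NS}error")
    if error is not None:                   # OAI-PMH protocol errors
        raise RuntimeError(f"{error.get('code')}: {error.text}")
    token = root.find(f".//{OAI_NS}resumptionToken")
    records = root.iter(f"{OAI_NS}record")
    return records, (token.text if token is not None else None)
```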
-
2
Multi-Version PYZ Builder Script: A Universal Python Module Creation Tool
Published 2024 “…This tool represents a significant advancement in the realm of secure code sharing (https://xn--mxac.net/secure-python-code-manager.html), providing a robust solution for modern Python programming challenges.…”
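The snippet does not show the builder's internals. For context, the standard-library zipapp module already covers the basic single-file .pyz case; the directory and file names here are hypothetical:

```python
# Build a runnable .pyz archive with the stdlib zipapp module.
# The multi-version logic of the linked tool is not shown here.
import zipapp

zipapp.create_archive(
    "my_module/",                       # package dir containing __main__.py
    target="my_module.pyz",             # resulting single-file archive
    interpreter="/usr/bin/env python3", # shebang for direct execution
    compressed=True,                    # compress entries (Python 3.7+)
)
```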
-
3
System Hardware ID Generator Script: A Cross-Platform Hardware Identification Tool
Published 2024 “…For advanced Python code protection tools, consider using the Local Python Code Protector Script (https://xn--mxac.net/local-python-code-protector.html). …”
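As an illustration of the general technique (an assumption, not the linked script), a stdlib-only cross-platform fingerprint can be derived by hashing a few machine attributes:

```python
# Best-effort hardware identifier from stdlib machine attributes.
# Note: uuid.getnode() may return a random value when no MAC address
# is readable, so treat the result as best-effort, not guaranteed stable.
import hashlib
import platform
import uuid

def hardware_id() -> str:
    raw = "|".join([
        platform.node(),           # hostname
        platform.machine(),        # architecture, e.g. x86_64
        platform.system(),         # OS name
        f"{uuid.getnode():012x}",  # MAC address as hex
    ])
    return hashlib.sha256(raw.encode()).hexdigest()

print(hardware_id())
```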
-
4
City-level GDP estimates for China under alternative pathways from 2020 to 2100: Python code
Published 2025 “…The dataset is complemented by processing code and raw input data in the "Python_Code" directory to ensure full reproducibility. …”
-
6
Python code for hierarchical cluster analysis of detected R-strategies from rule-based NLP on 500 circular economy definitions
Published 2025 “…This Python code was optimized and debugged using ChatGPT-4o to ensure implementation efficiency, accuracy, and clarity.…”
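The code itself is not quoted in the record. A minimal sketch of hierarchical clustering over binary R-strategy detection vectors, using SciPy with placeholder data in place of the study's 500 definitions, might look like this:

```python
# Hierarchical (Ward) clustering of binary detection vectors.
# The input matrix and cluster count are placeholders, not the study's data.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# rows = definitions, columns = detected R-strategies (1 = present)
X = np.random.default_rng(0).integers(0, 2, size=(500, 10))

Z = linkage(X, method="ward")                    # agglomerative clustering
labels = fcluster(Z, t=5, criterion="maxclust")  # cut into 5 clusters
print(np.bincount(labels)[1:])                   # cluster sizes
```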
-
7
Comparison of tools with features similar to bmdrc, and descriptions of the modules within the bmdrc package.
Published 2025 “…(A) Highlighted tool features from a selection of benchmark dose modeling tools, to contextualize the needs that bmdrc and other existing tools fill. …”
-
8
Code program.
Published 2025 “…Vehicle lateral stability control under hazardous operating conditions represents a pivotal challenge in intelligent driving active safety. …”
-
10
Output datasets from ML-assisted bibliometric workflow in African phytochemical metabolomics research
Published 2025 “…This collection contains supplementary datasets generated during the machine learning-assisted bibliometric workflow for metabolomics and phytochemical research. The datasets represent sequential outputs derived from the integration and harmonisation of bibliographic metadata from Scopus, Web of Science (WoS), and Dimensions, processed via R and Python environments.…”
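As an illustration of the harmonisation step described, a pandas sketch that stacks the three exports and deduplicates on DOI; file and column names are hypothetical, not the collection's actual schema:

```python
# Stack Scopus, WoS, and Dimensions exports and collapse duplicates on DOI.
import pandas as pd

frames = []
for source, path in [("scopus", "scopus.csv"),
                     ("wos", "wos.csv"),
                     ("dimensions", "dimensions.csv")]:
    df = pd.read_csv(path)
    df["source"] = source
    df["doi"] = df["doi"].str.lower().str.strip()   # harmonise the join key
    frames.append(df[["doi", "title", "year", "source"]])

merged = pd.concat(frames, ignore_index=True)
deduped = merged.drop_duplicates(subset="doi", keep="first")
deduped.to_csv("harmonised_records.csv", index=False)
```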
-
12
Python implementation of a wildfire propagation example using m:n-CAk over Z and R.
Published 2025 “…Files in the Project: under Python Scripts, Wildfire_on_m_n-CAk.py contains the main code for the fire cellular automaton. …”
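Wildfire_on_m_n-CAk.py is not reproduced in the record. The following toy cellular automaton sketches the general fire-spread mechanism on a 2-D grid; it is not the m:n-CAk formalism itself:

```python
# Toy fire-spread cellular automaton with a von Neumann neighbourhood.
# States: 0 = unburnt fuel, 1 = burning, 2 = burnt out.
# np.roll gives toroidal (wrap-around) boundaries, fine for a demo.
import numpy as np

def step(grid: np.ndarray) -> np.ndarray:
    burning = grid == 1
    # a cell ignites if any 4-neighbour is burning
    neighbour_fire = (
        np.roll(burning, 1, 0) | np.roll(burning, -1, 0) |
        np.roll(burning, 1, 1) | np.roll(burning, -1, 1)
    )
    nxt = grid.copy()
    nxt[(grid == 0) & neighbour_fire] = 1   # ignite unburnt fuel
    nxt[burning] = 2                        # burning cells burn out
    return nxt

grid = np.zeros((50, 50), dtype=int)
grid[25, 25] = 1                            # single ignition point
for _ in range(30):
    grid = step(grid)
print((grid == 2).sum(), "cells burnt")
```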
-
13
Code interpreter with LLM.
Published 2025 “…We evaluated our proposed system on five educational datasets—AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA—which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10–15 percentage points. …”
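The retrieval half of the described RAG pipeline can be sketched as cosine-similarity ranking over passage embeddings; the vectors below are random placeholders standing in for a real embedding model, not the system's actual components:

```python
# Rank passages by cosine similarity to the query embedding and
# return the top-k, the retrieval step a RAG prompt is built from.
import numpy as np

def retrieve(query_vec, passage_vecs, passages, k=3):
    q = query_vec / np.linalg.norm(query_vec)
    P = passage_vecs / np.linalg.norm(passage_vecs, axis=1, keepdims=True)
    scores = P @ q                        # cosine similarity per passage
    top = np.argsort(scores)[::-1][:k]
    return [passages[i] for i in top]

rng = np.random.default_rng(0)
docs = ["passage A", "passage B", "passage C"]
vecs = rng.normal(size=(3, 8))            # placeholder embeddings
print(retrieve(rng.normal(size=8), vecs, docs, k=2))
```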
-
16
Datasets To EVAL.
Published 2025 “…We evaluated our proposed system on five educational datasets—AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA—which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10–15 percentage points. …”
-
17
Statistical significance test results.
Published 2025 “…We evaluated our proposed system on five educational datasets—AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA—which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10–15 percentage points. …”
-
18
How RAG works.
Published 2025 “…We evaluated our proposed system on five educational datasets—AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA—which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10–15 percentage points. …”
-
19
OpenBookQA experimental results.
Published 2025 “…We evaluated our proposed system on five educational datasets—AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA—which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10–15 percentage points. …”
-
20
AI2_ARC experimental results.
Published 2025 “…We evaluated our proposed system on five educational datasets—AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA—which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10–15 percentage points. …”