Showing 1 - 20 results of 179 for search '(( python model representing ) OR ( ((python model) OR (python code)) representing ))', query time: 0.34s
  1.

    Resolving Harvesting Errors in Institutional Repository Migration : Using Python Scripts with VS Code and LLM Integration. by Satoshi Hashimoto (橋本 郷史) (18851272)

    Published 2025
    “…Therefore, we decided to create a dedicated Python program using Large Language Model (LLM)-assisted coding.…”
  2.
  3.

    City-level GDP estimates for China under alternative pathways from 2020 to 2100-Python code by Jinjie Sun (11791715)

    Published 2025
    “…The dataset is complemented by processing code and raw input data in the "Python_Code" directory to ensure full reproducibility. …”
  4.

    Python code for hierarchical cluster analysis of detected R-strategies from rule-based NLP on 500 circular economy definitions by Zahir Barahmand (18008947)

    Published 2025
    “…This Python code was optimized and debugged using ChatGPT-4o to ensure implementation efficiency, accuracy, and clarity.…”
  5.

    Code program. by Honglei Pang (22693724)

    Published 2025
    “…Vehicle lateral stability control under hazardous operating conditions represents a pivotal challenge in intelligent driving active safety. …”
  6.

    Python implementation of a wildfire propagation example using m:n-CAk over Z and R. by Pau Fonseca i Casas (9507338)

    Published 2025
    “…## Files in the Project ### Python Scripts - **Wildfire_on_m_n-CAk.py**: This file contains the main code for the fire cellular automaton. …”
  7.
  8.
  9.
  10.

    Code interpreter with LLM. by Jin Lu (428513)

    Published 2025
    “…We evaluated our proposed system on five educational datasets—AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA—which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10–15 percentage points. …”
  11.
  12.
  13.
  14.
  15.
  16.
  17.

    Datasets To EVAL. by Jin Lu (428513)

    Published 2025
    “…We evaluated our proposed system on five educational datasets—AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA—which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10–15 percentage points. …”
  18.

    Statistical significance test results. by Jin Lu (428513)

    Published 2025
    “…We evaluated our proposed system on five educational datasets—AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA—which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10–15 percentage points. …”
  19.

    How RAG work. by Jin Lu (428513)

    Published 2025
    “…We evaluated our proposed system on five educational datasets—AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA—which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10–15 percentage points. …”
  20.

    OpenBookQA experimental results. by Jin Lu (428513)

    Published 2025
    “…We evaluated our proposed system on five educational datasets—AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA—which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10−15 percentage points. …”