Search alternatives:
model implementation » modular implementation, world implementation, time implementation
model represent » models represent, model representing, models represented
python model » python code, python tool, action model
Results 141–147 all carry the same excerpt from a 2025 publication: “…We evaluated our proposed system on five educational datasets—AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA—which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10–15 percentage points. …”

141. OpenBookQA experimental results.
142. AI2_ARC experimental results.
143. TQA experimental results.
144. E-EVAL experimental results.
145. TQA Accuracy Comparison Chart on different LLMs.
146. ScienceQA experimental results.
147. Code interpreter with LLM.
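The shared excerpt for results 141–147 describes pairing Retrieval-Augmented Generation with a code interpreter. As a rough, hypothetical illustration of that idea only (the `retrieve` and `llm_generate` functions and the prompt format below are invented placeholders, not the evaluated system):

```python
# Illustrative sketch of a RAG + code-interpreter answering loop.
# `retrieve` and `llm_generate` are hypothetical placeholders, not the paper's system.
import subprocess
import sys
import tempfile

def retrieve(question: str, k: int = 3) -> list[str]:
    """Return the top-k supporting passages (e.g. from a BM25 or vector index)."""
    raise NotImplementedError

def llm_generate(prompt: str) -> str:
    """Call an LLM and return its text response."""
    raise NotImplementedError

def answer(question: str) -> str:
    # Ground the model on retrieved passages first.
    context = "\n".join(retrieve(question))
    draft = llm_generate(
        f"Context:\n{context}\n\nQuestion: {question}\n"
        "If the question needs calculation, reply with Python code between "
        "<code> and </code> tags; otherwise answer directly."
    )
    if "<code>" in draft:  # route numeric work to the interpreter
        code = draft.split("<code>", 1)[1].split("</code>", 1)[0]
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
        run = subprocess.run([sys.executable, f.name],
                             capture_output=True, text=True, timeout=30)
        draft = llm_generate(
            f"Question: {question}\nInterpreter output:\n{run.stdout}\n"
            "Give the final answer."
        )
    return draft
```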
Results 148–159 all carry the same excerpt from a 2025 publication: “…Python algorithms were developed to model each primary collection type. …”

148. Number of tweets collected over time.
149. Descriptive measures of the dataset.
150. Corpora from the articles in order of size.
151. Media information.
152. Information from interactions.
153. Table of the database statistical measures.
154. Tweets information.
155. Examples of tweet texts (Portuguese).
156. Methodological flowchart.
157. Number of tweets collected per query and type.
158. Examples of tweet texts (English).
159. Users information.
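The shared excerpt for results 148–159 mentions Python algorithms that model each primary collection type; the result titles suggest tweets, users, media, and interactions. A purely hypothetical sketch of such models, with invented field names rather than the authors' schema:

```python
# Hypothetical data models for the collection types implied by the result titles
# (tweets, users, media, interactions); all field names are assumptions.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class User:
    user_id: str
    username: str
    followers: int = 0

@dataclass
class Media:
    media_key: str
    media_type: str  # e.g. "photo" or "video"
    url: str = ""

@dataclass
class Tweet:
    tweet_id: str
    text: str
    created_at: datetime
    author: User
    media: list[Media] = field(default_factory=list)
    interactions: dict[str, int] = field(default_factory=dict)  # e.g. likes, retweets, replies
```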
160. JASPEX model.
Published 2025: “…We wrote new sets of python codes and developed python programming codes to rework on the map to generate the coloured map of Southwest Nigeria from the map of Nigeria (which represented the region of our study). …”
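The excerpt for result 160 refers to Python code that derives a coloured map of Southwest Nigeria from a map of Nigeria. A minimal sketch of that kind of step with geopandas, where the shapefile path and the `state_name` column are assumptions and this is not the authors' code:

```python
# Hypothetical sketch: highlight the six Southwest states on a Nigeria state map.
import geopandas as gpd
import matplotlib.pyplot as plt

SOUTHWEST = {"Ekiti", "Lagos", "Ogun", "Ondo", "Osun", "Oyo"}

nigeria = gpd.read_file("nigeria_states.shp")            # assumed shapefile of Nigerian states
nigeria["is_southwest"] = nigeria["state_name"].isin(SOUTHWEST)

ax = nigeria.plot(color="lightgrey", edgecolor="black")  # whole country as backdrop
nigeria[nigeria["is_southwest"]].plot(ax=ax, color="tab:orange", edgecolor="black")
ax.set_axis_off()
plt.savefig("southwest_nigeria.png", dpi=300, bbox_inches="tight")
```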