Search alternatives:
code implementation » model implementation, time implementation, world implementation
models represented » models represent, models representing, model presented
python models » motion models, pelton models
102. The codes and data for "Lane Extraction from Trajectories at Road Intersections Based on Graph Transformer Network"
Published in 2024: "…Each lane includes 'geometry' and 'inter_id' attributes. Codes: this repository contains the following Python codes: `data_processing.py` contains the implementation of data processing and feature extraction. …"
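The snippet above describes lane features that carry 'geometry' and 'inter_id' attributes. As a minimal sketch (not the repository's own code), assuming the lanes are stored as line geometries in a GeoDataFrame, reading and grouping them by intersection might look like this:

```python
# Hypothetical illustration only: the file name and column names are assumptions
# based on the snippet ('geometry' and 'inter_id' attributes per lane).
import geopandas as gpd

# Load lane centerlines; "lanes.shp" is a placeholder path, not from the repository.
lanes = gpd.read_file("lanes.shp")

# Group lane geometries by the intersection they belong to.
for inter_id, group in lanes.groupby("inter_id"):
    total_length = group.geometry.length.sum()
    print(f"intersection {inter_id}: {len(group)} lanes, total length {total_length:.1f}")
```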
103. MATH_code: False Data Injection Attack Detection in Smart Grids based on Reservoir Computing
Published in 2025: "…3_literature_analysis_and_mapping.ipynb contains the Python code used for executing the systematic mapping study (SMS), including automated processing of literature data and thematic clustering. …"
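The dataset's title refers to reservoir computing; the repository's own implementation is not shown in the snippet. As a rough, generic sketch of the idea (an echo state network with a fixed random reservoir and a trained linear readout), under assumptions that are mine and not the authors':

```python
# Generic echo state network sketch; reservoir size, leak-free update, and the
# synthetic data are all assumptions, not taken from the dataset.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 3, 200

# Fixed random input and reservoir weights (only the readout is trained).
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # scale spectral radius below 1

def run_reservoir(U):
    """Collect reservoir states for an input sequence U of shape (T, n_in)."""
    x = np.zeros(n_res)
    states = []
    for u in U:
        x = np.tanh(W_in @ u + W @ x)
        states.append(x.copy())
    return np.array(states)

# Train a ridge-regression readout to score anomalous (attacked) measurements.
U_train = rng.normal(size=(500, n_in))    # placeholder measurement sequence
y_train = rng.integers(0, 2, size=500)    # placeholder attack labels
X = run_reservoir(U_train)
ridge = 1e-2
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y_train)
scores = X @ W_out                        # anomaly scores; threshold to detect
```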
105. Evaluation and Statistical Analysis Code for "Multi-Task Learning for Joint Fisheye Compression and Perception for Autonomous Driving"
Published in 2025: "…These scripts are implemented in Python using the PyTorch framework and are provided to ensure the reproducibility of the experimental results presented in the manuscript. …"
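The snippet only states that the scripts use PyTorch; the joint objective itself is not shown. As an illustrative sketch of what a multi-task loss combining a compression (distortion) term and a perception term could look like, with all module names and weights being assumptions rather than the paper's formulation:

```python
# Illustrative multi-task objective only; the networks, weights, and loss terms
# are placeholders and do not reproduce the paper's actual method.
import torch
import torch.nn as nn

class JointModel(nn.Module):
    def __init__(self):
        super().__init__()
        # Toy stand-ins for a learned codec and a perception head.
        self.encoder = nn.Conv2d(3, 8, 3, stride=2, padding=1)
        self.decoder = nn.ConvTranspose2d(8, 3, 4, stride=2, padding=1)
        self.percep_head = nn.Conv2d(8, 10, 1)  # e.g. 10-class segmentation logits

    def forward(self, x):
        latent = self.encoder(x)
        recon = self.decoder(latent)
        logits = self.percep_head(latent)
        return recon, logits

model = JointModel()
img = torch.rand(2, 3, 64, 64)
target = torch.randint(0, 10, (2, 32, 32))

recon, logits = model(img)
loss_compression = nn.functional.mse_loss(recon, img)          # distortion proxy
loss_perception = nn.functional.cross_entropy(logits, target)  # task proxy
alpha = 0.5                                                     # assumed task weight
loss = loss_compression + alpha * loss_perception
loss.backward()
```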
106. Monte Carlo Simulation Code for Evaluating Cognitive Biases in Penalty Shootouts Using ABAB and ABBA Formats
Published in 2024: "…This Python code implements a Monte Carlo simulation to evaluate the impact of cognitive biases on penalty shootouts under two formats: ABAB (alternating shots) and ABBA (similar to the tennis tiebreak format). …"
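As a minimal sketch of the kind of simulation the snippet describes (not the repository's code), assuming a simple "pressure" bias that lowers the scoring probability on the second kick of each pair:

```python
# Minimal Monte Carlo sketch of ABAB vs ABBA shooting orders; the scoring
# probabilities and the form of the bias are assumptions for illustration.
import random

P_SCORE = 0.75           # baseline conversion probability (assumed)
PRESSURE_PENALTY = 0.05  # assumed drop on the second kick of each pair

def shot_order(fmt, n_rounds=5):
    """Return the shooter sequence ('A' or 'B') for the given format."""
    pattern = ["A", "B"] if fmt == "ABAB" else ["A", "B", "B", "A"]
    order = []
    while len(order) < 2 * n_rounds:
        order.extend(pattern)
    return order[: 2 * n_rounds]

def simulate(fmt, n_sims=100_000):
    """Estimate how often team A wins a 5-round shootout (ties ignored)."""
    wins_a = decided = 0
    for _ in range(n_sims):
        score = {"A": 0, "B": 0}
        for i, team in enumerate(shot_order(fmt)):
            p = P_SCORE - (PRESSURE_PENALTY if i % 2 == 1 else 0)
            if random.random() < p:
                score[team] += 1
        if score["A"] != score["B"]:
            decided += 1
            wins_a += score["A"] > score["B"]
    return wins_a / max(decided, 1)

# Under ABAB the penalized second kick always falls on team B, while ABBA
# alternates it between the teams, which is the effect being evaluated.
print("ABAB win rate for team A:", round(simulate("ABAB"), 3))
print("ABBA win rate for team A:", round(simulate("ABBA"), 3))
```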
108. The codes and data for "A Graph Convolutional Neural Network-based Method for Predicting Computational Intensity of Geocomputation"
Published in 2025: "…The innovations and steps in Case 3, including data download, sample generation, and parallel computation optimization, were independently developed and are not dependent on the GeoCube's code. Requirements: the codes use the following dependencies with Python 3.8: torch==2.0.0, torch_geometric==2.5.3, networkx==2.6.3, pyshp==2.3.1, tensorrt==8.6.1, matplotlib==3.7.2, scipy==1.10.1, scikit-learn==1.3.0, geopandas==0.13.2. …"
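Since the snippet lists torch and torch_geometric among the dependencies, a generic graph convolutional network for node-level regression (for example, predicting a computational-intensity score per spatial unit) might be sketched as below; the architecture, synthetic graph, and target are assumptions, not the authors' model:

```python
# Generic two-layer GCN for node-level regression; layer sizes, the toy graph,
# and the target values are placeholders, not the dataset's actual setup.
import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv

class IntensityGCN(torch.nn.Module):
    def __init__(self, in_dim, hidden_dim=32):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, 1)  # one intensity value per node

    def forward(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))
        return self.conv2(h, edge_index).squeeze(-1)

# Tiny synthetic graph: 4 nodes, 3 undirected edges, 5 features per node.
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3],
                           [1, 0, 2, 1, 3, 2]], dtype=torch.long)
data = Data(x=torch.rand(4, 5), edge_index=edge_index, y=torch.rand(4))

model = IntensityGCN(in_dim=5)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
for _ in range(50):
    optimizer.zero_grad()
    pred = model(data.x, data.edge_index)
    loss = F.mse_loss(pred, data.y)
    loss.backward()
    optimizer.step()
```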
110. Data features examined for potential biases.
Published in 2025: "…Representativeness of the population, differences in calibration and model performance among groups, and differences in performance across hospital settings were identified as possible sources of bias. …"
111. Analysis topics.
Published in 2025: "…Representativeness of the population, differences in calibration and model performance among groups, and differences in performance across hospital settings were identified as possible sources of bias. …"
112. Datasets To EVAL.
Published in 2025: "…We evaluated our proposed system on five educational datasets (AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA), which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10 to 15 percentage points. …"
113. Statistical significance test results.
Published in 2025: "…We evaluated our proposed system on five educational datasets (AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA), which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10 to 15 percentage points. …"
114. How RAG work.
Published in 2025: "…We evaluated our proposed system on five educational datasets (AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA), which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10 to 15 percentage points. …"
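The snippet describes an approach that pairs retrieval-augmented generation with a code interpreter. A minimal sketch of that pipeline shape follows; the retriever, prompt format, stubbed model call, and the execution step are all assumptions for illustration, not the authors' implementation:

```python
# Toy end-to-end shape of a RAG + code-interpreter loop; the retriever, the
# stubbed LLM call, and the execution step are placeholders for illustration.

def retrieve(question, corpus, k=2):
    """Rank corpus passages by naive word overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(corpus, key=lambda p: -len(q_words & set(p.lower().split())))
    return scored[:k]

def llm_generate(prompt):
    """Stub standing in for a real LLM call; returns a canned 'program'."""
    return "answer = 9.8 * 2  # velocity after 2 s of free fall (m/s)"

def run_code(snippet):
    """Execute model-produced code in an isolated namespace and return `answer`."""
    namespace = {}
    exec(snippet, {"__builtins__": {}}, namespace)  # no builtins in this toy sandbox
    return namespace.get("answer")

corpus = [
    "Objects in free fall near Earth accelerate at about 9.8 m/s^2.",
    "Photosynthesis converts light energy into chemical energy.",
]
question = "How fast is a dropped object moving after 2 seconds?"

context = "\n".join(retrieve(question, corpus))
prompt = f"Context:\n{context}\n\nQuestion: {question}\nWrite Python that sets `answer`."
code = llm_generate(prompt)
print("final answer:", run_code(code))
```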
115. OpenBookQA experimental results.
Published in 2025: "…We evaluated our proposed system on five educational datasets (AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA), which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10 to 15 percentage points. …"
116. AI2_ARC experimental results.
Published in 2025: "…We evaluated our proposed system on five educational datasets (AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA), which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10 to 15 percentage points. …"
117. TQA experimental results.
Published in 2025: "…We evaluated our proposed system on five educational datasets (AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA), which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10 to 15 percentage points. …"
118. E-EVAL experimental results.
Published in 2025: "…We evaluated our proposed system on five educational datasets (AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA), which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10 to 15 percentage points. …"
119. TQA Accuracy Comparison Chart on different LLM.
Published in 2025: "…We evaluated our proposed system on five educational datasets (AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA), which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10 to 15 percentage points. …"
120. ScienceQA experimental results.
Published in 2025: "…We evaluated our proposed system on five educational datasets (AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA), which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10 to 15 percentage points. …"