Showing 101 - 120 results of 254 for search '(( python code implementation ) OR ( python models representing ))', query time: 0.49s
  1. 101
  2. 102

    The codes and data for "Lane Extraction from Trajectories at Road Intersections Based on Graph Transformer Network" by Chongshan Wan (19247614)

    Published 2024
    “…Each lane includes 'geometry' and 'inter_id' attributes. Codes: this repository contains the following Python code files, including `data_processing.py`, which contains the implementation of data processing and feature extraction. …”
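The record above describes lanes as objects carrying 'geometry' and 'inter_id' attributes. Below is a minimal sketch of such a lane representation, assuming a shapely LineString for the geometry and an integer intersection id; the dataclass layout and the `lane_length` helper are illustrative assumptions, not the repository's actual classes.

```python
# Illustrative lane record with 'geometry' and 'inter_id' attributes.
# The use of shapely and the helper below are assumptions, for illustration only.
from dataclasses import dataclass
from shapely.geometry import LineString

@dataclass
class Lane:
    geometry: LineString   # lane centerline extracted from trajectories
    inter_id: int          # identifier of the intersection the lane belongs to

def lane_length(lane: Lane) -> float:
    """Length of the lane centerline, in the units of its coordinates."""
    return lane.geometry.length

# Example with made-up coordinates
lane = Lane(geometry=LineString([(0.0, 0.0), (10.0, 0.5), (20.0, 1.0)]), inter_id=7)
print(lane.inter_id, round(lane_length(lane), 2))
```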
  3. 103
  4. 104

    MATH_code : False Data Injection Attack Detection in Smart Grids based on Reservoir Computing by Carl-Hendrik Peters (21530624)

    Published 2025
    “…3_literature_analysis_and_mapping.ipynb: Contains the Python code used for executing the systematic mapping study (SMS), including automated processing of literature data and thematic clustering. …”
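The notebook described above performs automated processing of literature data and thematic clustering. Purely for orientation, here is a minimal sketch of one common way to cluster short literature texts, using TF-IDF and k-means from scikit-learn; the sample texts, the number of clusters, and the choice of scikit-learn are assumptions and may differ from what the notebook actually does.

```python
# Illustrative thematic clustering of short literature texts (not the
# record's actual notebook): TF-IDF vectors grouped with k-means.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

texts = [
    "Reservoir computing for anomaly detection in power grids",
    "False data injection attacks on state estimation",
    "Echo state networks for time-series classification",
    "Detecting cyber attacks in smart grid measurements",
]

# Represent each text as a TF-IDF vector, then group the vectors into themes.
vectors = TfidfVectorizer(stop_words="english").fit_transform(texts)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for text, label in zip(texts, labels):
    print(label, text)
```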
  5. 105

    Evaluation and Statistical Analysis Code for "Multi-Task Learning for Joint Fisheye Compression and Perception for Autonomous Driving" by Basem Ahmed (18127861)

    Published 2025
    “…These scripts are implemented in Python using the PyTorch framework and are provided to ensure the reproducibility of the experimental results presented in the manuscript. …”
  6. 106

    Monte Carlo Simulation Code for Evaluating Cognitive Biases in Penalty Shootouts Using ABAB and ABBA Formats by Raul MATSUSHITA (10276562)

    Published 2024
    “…This Python code implements a Monte Carlo simulation to evaluate the impact of cognitive biases on penalty shootouts under two formats: ABAB (alternating shots) and ABBA (similar to the tennis tiebreak format). …”
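The record above states that the code runs a Monte Carlo simulation of penalty shootouts under ABAB and ABBA shooting orders. A minimal sketch of that kind of simulation follows; the fixed scoring probability, the simple 'pressure' penalty for a trailing shooter, the fact that all ten regulation kicks are always taken, and the function names are illustrative assumptions, not the published model.

```python
# Illustrative Monte Carlo comparison of ABAB and ABBA shooting orders.
# The scoring probability and the trailing-team "pressure" penalty are
# assumptions for this sketch, not the published model.
import random

def shootout(order, p_score=0.75, pressure=0.05):
    """Simulate one five-round shootout (plus sudden death); return 0 if A wins, 1 if B wins."""
    kicks = {"ABAB": "AB" * 5, "ABBA": "ABBA" * 2 + "AB"}[order]  # 10 regulation kicks
    score = {"A": 0, "B": 0}
    for shooter in kicks:
        other = "B" if shooter == "A" else "A"
        # Assumed bias: a shooter whose team is trailing scores slightly less often.
        p = p_score - pressure if score[shooter] < score[other] else p_score
        score[shooter] += random.random() < p
    while score["A"] == score["B"]:  # sudden death: one kick per team per round
        for team in "AB":
            score[team] += random.random() < p_score
    return int(score["B"] > score["A"])

def first_team_win_rate(order, trials=100_000):
    return 1 - sum(shootout(order) for _ in range(trials)) / trials

random.seed(0)
print("ABAB, team A win rate:", round(first_team_win_rate("ABAB"), 3))
print("ABBA, team A win rate:", round(first_team_win_rate("ABBA"), 3))
```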
  7. 107
  8. 108

    The codes and data for "A Graph Convolutional Neural Network-based Method for Predicting Computational Intensity of Geocomputation" by FirstName LastName (20554465)

    Published 2025
    “…The innovations and steps in Case 3, including data download, sample generation, and parallel computation optimization, were independently developed and are not dependent on GeoCube's code. Requirements: The code uses the following dependencies with Python 3.8: torch==2.0.0, torch_geometric==2.5.3, networkx==2.6.3, pyshp==2.3.1, tensorrt==8.6.1, matplotlib==3.7.2, scipy==1.10.1, scikit-learn==1.3.0, geopandas==0.13.2. …”
  9. 109

    The codes and data for "A Graph Convolutional Neural Network-based Method for Predicting Computational Intensity of Geocomputation" by FirstName LastName (20554465)

    Published 2025
    “…The innovations and steps in Case 3, including data download, sample generation, and parallel computation optimization, were independently developed and are not dependent on GeoCube's code. Requirements: The code uses the following dependencies with Python 3.8: torch==2.0.0, torch_geometric==2.5.3, networkx==2.6.3, pyshp==2.3.1, tensorrt==8.6.1, matplotlib==3.7.2, scipy==1.10.1, scikit-learn==1.3.0, geopandas==0.13.2. …”
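The two records above pin their dependencies to exact versions under Python 3.8. A small, illustrative check that a local environment matches those pins is sketched below; the distribution-name spellings and the use of importlib.metadata are assumptions, so names such as torch_geometric may need adjusting to however pip registers them.

```python
# Illustrative check that installed package versions match the pins listed
# in the records above (Python 3.8+; importlib.metadata is in the stdlib).
from importlib.metadata import PackageNotFoundError, version

PINNED = {
    "torch": "2.0.0", "torch_geometric": "2.5.3", "networkx": "2.6.3",
    "pyshp": "2.3.1", "tensorrt": "8.6.1", "matplotlib": "3.7.2",
    "scipy": "1.10.1", "scikit-learn": "1.3.0", "geopandas": "0.13.2",
}

for name, wanted in PINNED.items():
    try:
        found = version(name)  # distribution name as registered by pip
        status = "OK" if found == wanted else f"mismatch (found {found})"
    except PackageNotFoundError:
        status = "not installed"
    print(f"{name}=={wanted}: {status}")
```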
  10. 110

    Data features examined for potential biases. by Harry Hochheiser (3413396)

    Published 2025
    “…Representativeness of the population, differences in calibration and model performance among groups, and differences in performance across hospital settings were identified as possible sources of bias.…”
  11. 111

    Analysis topics. by Harry Hochheiser (3413396)

    Published 2025
    “…Representativeness of the population, differences in calibration and model performance among groups, and differences in performance across hospital settings were identified as possible sources of bias.…”
  12. 112

    Datasets To EVAL. by Jin Lu (428513)

    Published 2025
    “…We evaluated our proposed system on five educational datasets (AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA), which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10–15 percentage points. …”
  13. 113

    Statistical significance test results. by Jin Lu (428513)

    Published 2025
    “…We evaluated our proposed system on five educational datasets (AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA), which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10–15 percentage points. …”
  14. 114

    How RAG work. by Jin Lu (428513)

    Published 2025
    “…We evaluated our proposed system on five educational datasets (AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA), which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10–15 percentage points. …”
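The records by this author share one abstract describing a system that combines Retrieval-Augmented Generation with a code interpreter. Purely as an illustration of that general pattern, a minimal sketch follows; the toy retriever, the stub standing in for the LLM, and the exec-based interpreter are assumptions and are not the authors' implementation.

```python
# Illustrative RAG + code-interpreter loop (toy components only).
from difflib import SequenceMatcher

DOCS = [
    "Photosynthesis converts light energy into chemical energy in plants.",
    "The acceleration due to gravity near Earth's surface is about 9.8 m/s^2.",
]

def retrieve(question, k=1):
    """Toy retriever: rank documents by string similarity to the question."""
    ranked = sorted(DOCS, key=lambda d: SequenceMatcher(None, question, d).ratio(),
                    reverse=True)
    return ranked[:k]

def llm(prompt):
    """Stub standing in for a real LLM call; here it always emits Python code."""
    return "v = 9.8 * 3\nanswer = f'{v:.1f} m/s'"

def run_code(code):
    """Minimal 'code interpreter': execute generated code, return its `answer`."""
    scope = {}
    exec(code, scope)  # a real system would sandbox this step
    return str(scope.get("answer", ""))

question = "How fast is a dropped object moving after 3 seconds, ignoring drag?"
context = "\n".join(retrieve(question))
prompt = f"Context:\n{context}\n\nQuestion: {question}\nWrite Python that sets `answer`."
print(run_code(llm(prompt)))  # -> 29.4 m/s
```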
  15. 115

    OpenBookQA experimental results. by Jin Lu (428513)

    Published 2025
    “…We evaluated our proposed system on five educational datasets (AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA), which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10–15 percentage points. …”
  16. 116

    AI2_ARC experimental results. by Jin Lu (428513)

    Published 2025
    “…We evaluated our proposed system on five educational datasets (AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA), which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10–15 percentage points. …”
  17. 117

    TQA experimental results. by Jin Lu (428513)

    Published 2025
    “…We evaluated our proposed system on five educational datasets (AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA), which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10–15 percentage points. …”
  18. 118

    E-EVAL experimental results. by Jin Lu (428513)

    Published 2025
    “…We evaluated our proposed system on five educational datasets (AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA), which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10–15 percentage points. …”
  19. 119

    TQA Accuracy Comparison Chart on different LLM. by Jin Lu (428513)

    Published 2025
    “…We evaluated our proposed system on five educational datasets (AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA), which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10–15 percentage points. …”
  20. 120

    ScienceQA experimental results. by Jin Lu (428513)

    Published 2025
    “…We evaluated our proposed system on five educational datasets (AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA), which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10–15 percentage points. …”