Showing 101 - 120 of 254 results for '(( python code implementation ) OR ( python models represented ))', query time: 0.41s
  1. 101
  2. 102

    The codes and data for "Lane Extraction from Trajectories at Road Intersections Based on Graph Transformer Network" by Chongshan Wan (19247614)

    Published in 2024
    "…Each lane includes 'geometry' and 'inter_id' attributes. Codes: This repository contains the following Python codes: `data_processing.py`: Contains the implementation of data processing and feature extraction. …"
  3. 103

    MATH_code : False Data Injection Attack Detection in Smart Grids based on Reservoir Computing by Carl-Hendrik Peters (21530624)

    Published in 2025
    "…3_literature_analysis_and_mapping.ipynb: Contains the Python code used for executing the systematic mapping study (SMS), including automated processing of literature data and thematic clustering.…"
  4. 104
  5. 105

    Evaluation and Statistical Analysis Code for "Multi-Task Learning for Joint Fisheye Compression and Perception for Autonomous Driving" by Basem Ahmed (18127861)

    Published in 2025
    "…These scripts are implemented in Python using the PyTorch framework and are provided to ensure the reproducibility of the experimental results presented in the manuscript.…"
  6. 106

    Monte Carlo Simulation Code for Evaluating Cognitive Biases in Penalty Shootouts Using ABAB and ABBA Formats by Raul MATSUSHITA (10276562)

    Published in 2024
    "…This Python code implements a Monte Carlo simulation to evaluate the impact of cognitive biases on penalty shootouts under two formats: ABAB (alternating shots) and ABBA (similar to the tennis tiebreak format). …"
  7. 107
  8. 108

    The codes and data for "A Graph Convolutional Neural Network-based Method for Predicting Computational Intensity of Geocomputation" by FirstName LastName (20554465)

    Published in 2025
    "…The innovations and steps in Case 3, including data download, sample generation, and parallel computation optimization, were independently developed and are not dependent on the GeoCube’s code. Requirements: The codes use the following dependencies with Python 3.8: torch==2.0.0, torch_geometric==2.5.3, networkx==2.6.3, pyshp==2.3.1, tensorrt==8.6.1, matplotlib==3.7.2, scipy==1.10.1, scikit-learn==1.3.0, geopandas==0.13.2. …"
  9. 109

    The codes and data for "A Graph Convolutional Neural Network-based Method for Predicting Computational Intensity of Geocomputation" by FirstName LastName (20554465)

    Published in 2025
    "…The innovations and steps in Case 3, including data download, sample generation, and parallel computation optimization, were independently developed and are not dependent on the GeoCube’s code. Requirements: The codes use the following dependencies with Python 3.8: torch==2.0.0, torch_geometric==2.5.3, networkx==2.6.3, pyshp==2.3.1, tensorrt==8.6.1, matplotlib==3.7.2, scipy==1.10.1, scikit-learn==1.3.0, geopandas==0.13.2. …"
  10. 110

    Data features examined for potential biases. by Harry Hochheiser (3413396)

    Published in 2025
    "…Representativeness of the population, differences in calibration and model performance among groups, and differences in performance across hospital settings were identified as possible sources of bias.…"
  11. 111

    Analysis topics. by Harry Hochheiser (3413396)

    Published in 2025
    "…Representativeness of the population, differences in calibration and model performance among groups, and differences in performance across hospital settings were identified as possible sources of bias.…"
  12. 112

    Datasets To EVAL. by Jin Lu (428513)

    Published in 2025
    "…We evaluated our proposed system on five educational datasets—AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA—which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10−15 percentage points. …"
  13. 113

    Statistical significance test results. by Jin Lu (428513)

    Published in 2025
    "…We evaluated our proposed system on five educational datasets—AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA—which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10−15 percentage points. …"
  14. 114

    How RAG work. by Jin Lu (428513)

    Published in 2025
    "…We evaluated our proposed system on five educational datasets—AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA—which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10−15 percentage points. …"
  15. 115

    OpenBookQA experimental results. by Jin Lu (428513)

    Published in 2025
    "…We evaluated our proposed system on five educational datasets—AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA—which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10−15 percentage points. …"
  16. 116

    AI2_ARC experimental results. by Jin Lu (428513)

    Published in 2025
    "…We evaluated our proposed system on five educational datasets—AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA—which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10−15 percentage points. …"
  17. 117

    TQA experimental results. by Jin Lu (428513)

    Published in 2025
    "…We evaluated our proposed system on five educational datasets—AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA—which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10−15 percentage points. …"
  18. 118

    E-EVAL experimental results. by Jin Lu (428513)

    Published in 2025
    "…We evaluated our proposed system on five educational datasets—AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA—which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10−15 percentage points. …"
  19. 119

    TQA Accuracy Comparison Chart on different LLM. by Jin Lu (428513)

    Published in 2025
    "…We evaluated our proposed system on five educational datasets—AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA—which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10−15 percentage points. …"
  20. 120

    ScienceQA experimental results. by Jin Lu (428513)

    Published in 2025
    "…We evaluated our proposed system on five educational datasets—AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA—which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10−15 percentage points. …"