Showing results 121–140 of 270 for search '(( ((python model) OR (python tool)) representing ) OR ( python code implementing ))', query time: 0.30s
  121. Code and derived data for “Training Sample Location Matters: Accuracy Impacts in LULC Classification” by Pajtim Zariqi (22155799)

    Published 2025
    “…Python/Kaggle notebooks (.ipynb): reproducibility pipeline for accuracy metrics and statistical analysis.…”
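
The snippet above mentions notebooks that compute accuracy metrics for LULC classification. As a hedged sketch of what such metrics conventionally look like (assuming scikit-learn; the label arrays are hypothetical placeholders, not taken from this dataset):

```python
# Common LULC accuracy metrics: confusion matrix, overall accuracy,
# Cohen's kappa, and per-class producer's/user's accuracy.
# The labels below are hypothetical, for illustration only.
import numpy as np
from sklearn.metrics import confusion_matrix, accuracy_score, cohen_kappa_score

reference = np.array([0, 0, 1, 1, 2, 2, 2, 1])  # ground-truth LULC classes
predicted = np.array([0, 1, 1, 1, 2, 2, 0, 1])  # classifier output

cm = confusion_matrix(reference, predicted)
print(cm)
print("Overall accuracy:", accuracy_score(reference, predicted))
print("Cohen's kappa:", cohen_kappa_score(reference, predicted))
# Producer's accuracy = per-class recall; user's accuracy = per-class precision.
print("Producer's accuracy:", cm.diagonal() / cm.sum(axis=1))
print("User's accuracy:", cm.diagonal() / cm.sum(axis=0))
```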
  122. Use case codes of the DDS3 and DDS4 datasets for bacillus segmentation and tuberculosis diagnosis, respectively, by Marly G F Costa (19812192)

    Published 2025
    “…The code was developed in the Google Colaboratory environment, using Python version 3.7.13, with TensorFlow 2.8.2. …”
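
The published use-case code itself is not reproduced here, but a generic Dice coefficient, a standard score for binary segmentation masks such as bacillus masks, gives the flavor of how such TensorFlow code is typically evaluated (an assumed illustration, not the DDS3/DDS4 code):

```python
# Illustrative Dice coefficient for binary segmentation masks in TensorFlow.
# Generic sketch only; not the published DDS3/DDS4 use-case code.
import tensorflow as tf

def dice_coefficient(y_true, y_pred, smooth=1e-6):
    """Dice = 2*|A intersect B| / (|A| + |B|); smooth avoids division by zero."""
    y_true = tf.cast(tf.reshape(y_true, [-1]), tf.float32)
    y_pred = tf.cast(tf.reshape(y_pred, [-1]), tf.float32)
    intersection = tf.reduce_sum(y_true * y_pred)
    return (2.0 * intersection + smooth) / (
        tf.reduce_sum(y_true) + tf.reduce_sum(y_pred) + smooth)

# Hypothetical masks: 1 = bacillus pixel, 0 = background.
truth = tf.constant([[0, 1, 1], [0, 1, 0]])
pred = tf.constant([[0, 1, 0], [0, 1, 0]])
print(float(dice_coefficient(truth, pred)))  # 0.8
```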
  123. JASPEX model by Olugbenga OLUWAGBEMI (21403187)

    Published 2025
    “…We wrote new Python code to rework the map and generate the coloured map of Southwest Nigeria from the map of Nigeria (which represented the region of our study). …”
  124. Datasets To EVAL. by Jin Lu (428513)

    Published 2025
    “…We evaluated our proposed system on five educational datasets—AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA—which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10–15 percentage points. …”
  125. Statistical significance test results. by Jin Lu (428513)

    Published 2025
    “…We evaluated our proposed system on five educational datasets—AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA—which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10–15 percentage points. …”
  126. How RAG work. by Jin Lu (428513)

    Published 2025
    “…We evaluated our proposed system on five educational datasets—AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA—which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10–15 percentage points. …”
  127. OpenBookQA experimental results. by Jin Lu (428513)

    Published 2025
    “…We evaluated our proposed system on five educational datasets—AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA—which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10–15 percentage points. …”
  128. AI2_ARC experimental results. by Jin Lu (428513)

    Published 2025
    “…We evaluated our proposed system on five educational datasets—AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA—which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10–15 percentage points. …”
  129. TQA experimental results. by Jin Lu (428513)

    Published 2025
    “…We evaluated our proposed system on five educational datasets—AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA—which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10–15 percentage points. …”
  130. E-EVAL experimental results. by Jin Lu (428513)

    Published 2025
    “…We evaluated our proposed system on five educational datasets—AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA—which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10–15 percentage points. …”
  131. TQA Accuracy Comparison Chart on different LLM. by Jin Lu (428513)

    Published 2025
    “…We evaluated our proposed system on five educational datasets—AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA—which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10–15 percentage points. …”
  132. ScienceQA experimental results. by Jin Lu (428513)

    Published 2025
    “…We evaluated our proposed system on five educational datasets—AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA—which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10–15 percentage points. …”
  133. Code interpreter with LLM. by Jin Lu (428513)

    Published 2025
    “…We evaluated our proposed system on five educational datasets—AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA—which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10–15 percentage points. …”
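
Results 124–133 above all describe the same RAG-plus-code-interpreter system. As a hedged sketch of the retrieval step only (the corpus and scoring below are hypothetical and use scikit-learn TF-IDF; the authors' LLM and code-interpreter components are not reproduced):

```python
# Minimal sketch of the retrieval step in a RAG pipeline: TF-IDF cosine
# similarity from scikit-learn. Corpus and query are hypothetical; the
# retrieved passages would then be inserted into the LLM prompt.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "Photosynthesis converts light energy into chemical energy.",
    "Newton's second law relates force, mass, and acceleration.",
    "The water cycle includes evaporation, condensation, and precipitation.",
]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(corpus)

def retrieve(question, k=1):
    """Return the k corpus passages most similar to the question."""
    q_vec = vectorizer.transform([question])
    scores = cosine_similarity(q_vec, doc_matrix).ravel()
    top = scores.argsort()[::-1][:k]
    return [corpus[i] for i in top]

print(retrieve("What does force equal in Newton's law?"))
```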
  134. Data and some code used in the paper: “Expansion quantization network: A micro-emotion detection and annotation framework” by Zhou (20184816)

    Published 2025
    “…Attached is the micro-emotion annotation code, based on PyTorch; it can be used to annotate the GoEmotions dataset yourself, or to predict emotion classes from the annotation results. …”
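
GoEmotions assigns zero or more emotion labels per text, so multi-label classification is the natural framing. A generic PyTorch sketch of that setup (dimensions and data are hypothetical; this is not the paper's annotation code):

```python
# Generic multi-label emotion classification sketch in PyTorch.
# Illustration only; feature size and batch are hypothetical placeholders.
import torch
import torch.nn as nn

NUM_FEATURES, NUM_EMOTIONS = 300, 28  # e.g., pooled text features -> 28 labels

model = nn.Linear(NUM_FEATURES, NUM_EMOTIONS)
criterion = nn.BCEWithLogitsLoss()  # independent sigmoid per emotion label
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

features = torch.randn(16, NUM_FEATURES)                   # placeholder inputs
labels = torch.randint(0, 2, (16, NUM_EMOTIONS)).float()   # multi-hot targets

logits = model(features)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()

# At inference, every emotion whose sigmoid probability exceeds 0.5 fires.
predicted = (torch.sigmoid(logits) > 0.5).int()
```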
  135. BaNDyT: Bayesian Network Modeling of Molecular Dynamics Trajectories by Elizaveta Mukhaleva (20602550)

    Published 2025
    “…We describe here the software’s uses, the methods associated with it, and a comprehensive Python interface to the underlying generalist BNM code. …”
  136. BaNDyT: Bayesian Network Modeling of Molecular Dynamics Trajectories by Elizaveta Mukhaleva (20602550)

    Published 2025
    “…We describe here the software’s uses, the methods associated with it, and a comprehensive Python interface to the underlying generalist BNM code. …”
  137. BaNDyT: Bayesian Network Modeling of Molecular Dynamics Trajectories by Elizaveta Mukhaleva (20602550)

    Published 2025
    “…We describe here the software’s uses, the methods associated with it, and a comprehensive Python interface to the underlying generalist BNM code. …”
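
BaNDyT's own Python interface is not shown in these records. For orientation only, a generic sketch of the underlying idea, learning a Bayesian network over per-frame trajectory descriptors, using the pgmpy library rather than the BaNDyT API (column names and data are hypothetical):

```python
# Generic Bayesian-network structure learning over per-frame MD descriptors,
# using pgmpy. NOT the BaNDyT interface; data and names are hypothetical.
import numpy as np
import pandas as pd
from pgmpy.estimators import HillClimbSearch, BicScore

rng = np.random.default_rng(0)
# Hypothetical binarized features extracted from an MD trajectory.
frames = pd.DataFrame({
    "residue_contact": rng.integers(0, 2, 500),
    "helix_bend": rng.integers(0, 2, 500),
    "ligand_distance": rng.integers(0, 2, 500),
})

search = HillClimbSearch(frames)
model = search.estimate(scoring_method=BicScore(frames))
print(model.edges())  # directed dependencies learned from the trajectory data
```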
  138. Advancing Solar Magnetic Field Modeling by Carlos António (21257432)

    Published 2025
    “…We developed significantly faster Python code built upon a functional optimization framework previously proposed and implemented by our team. …”
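
The actual solar formulation is not given in this record; purely as an illustration of the functional-optimization pattern the snippet names, here is a toy example minimizing a discretized Dirichlet-type energy with SciPy (unrelated to the real magnetic field model):

```python
# Toy functional optimization: minimize a discrete Dirichlet energy
# sum((u[i+1]-u[i])^2) with fixed boundary values. Illustration only;
# unrelated to the actual solar magnetic field formulation.
import numpy as np
from scipy.optimize import minimize

N = 21
boundary_left, boundary_right = 0.0, 1.0  # fixed endpoint values

def energy(interior):
    """Energy of the field with the interior points as free variables."""
    u = np.concatenate(([boundary_left], interior, [boundary_right]))
    return np.sum(np.diff(u) ** 2)

result = minimize(energy, x0=np.zeros(N - 2), method="L-BFGS-B")
# The minimizer is the straight line between the two boundary values.
print(np.round(result.x[:5], 3))
```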
  139. High-Throughput Mass Spectral Library Searching of Small Molecules in R with NIST MSPepSearch by Andrey Samokhin (20282728)

    Published 2025
    “…Despite the availability of numerous library search algorithms, those developed by NIST and implemented in MS Search remain predominant, partly because commercial databases (e.g., NIST, Wiley) are distributed in proprietary formats inaccessible to custom code. …”
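
The NIST scoring itself is proprietary-format territory, as the snippet notes; a heavily simplified stand-in for library searching is cosine similarity between spectra binned to unit m/z (a hedged illustration, not MSPepSearch's algorithm, with hypothetical peak lists):

```python
# Simplified spectral match: cosine similarity of intensity vectors binned
# to integer m/z, scaled to 0-999. Real NIST scores are weighted composites;
# this is only an illustration with made-up peak lists.
import numpy as np

def bin_spectrum(mz, intensity, max_mz=500):
    """Collapse a peak list onto integer m/z bins."""
    binned = np.zeros(max_mz + 1)
    for m, i in zip(mz, intensity):
        binned[int(round(m))] += i
    return binned

def match_score(query, library):
    """Cosine similarity between two binned spectra, scaled to 0-999."""
    den = np.linalg.norm(query) * np.linalg.norm(library)
    return 999.0 * np.dot(query, library) / den if den else 0.0

q = bin_spectrum([77.0, 105.0, 182.1], [40, 100, 65])
lib = bin_spectrum([77.1, 105.0, 182.0], [35, 100, 70])
print(round(match_score(q, lib)))  # near-identical spectra score close to 999
```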
  140. Comparison data 7 for Lamprologus ocellatus. by Nicolai Kraus (19949667)

    Published 2024
    “…TIBA accepts data outputs from popular logging software and is implemented in Python and JavaScript, with all current browsers supported. …”