Showing 21 - 40 results of 650 for search '(( python files implementation ) OR ( ((python models) OR (python code)) represent ))', query time: 0.28s

  23. PTPC-UHT bounce by David Parry (22169299)

      Published 2025
      “…It contains the full Python implementation of the PTPC bounce model (`PTPC_UHT_bounce.py`) and representative outputs used to generate the figures in the paper. …”

  26. DataSheet_1_AirSeaFluxCode: Open-source software for calculating turbulent air-sea fluxes from meteorological parameters.pdf by Stavroula Biri (14571707)

      Published 2023
      “…In this paper, we present a Python 3.6 (or higher) open-source software package “AirSeaFluxCode” for the computation of the heat (latent and sensible) and momentum fluxes. …”
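
The AirSeaFluxCode entry above concerns the bulk computation of air-sea heat and momentum fluxes. As a point of reference only, the sketch below implements the textbook bulk aerodynamic formulas with constant transfer coefficients; it is not the AirSeaFluxCode API (whose function signatures are not shown in the snippet), and the density, specific heat, latent heat, and coefficient values are illustrative assumptions.

```python
import numpy as np

# Illustrative bulk aerodynamic formulas for air-sea fluxes. Constant transfer
# coefficients are a simplification: packages such as AirSeaFluxCode typically
# iterate stability-dependent coefficients, which is not reproduced here.
RHO_AIR = 1.225    # air density [kg m^-3] (assumed constant)
CP_AIR = 1004.0    # specific heat of air [J kg^-1 K^-1]
LV = 2.5e6         # latent heat of vaporisation [J kg^-1]
CD, CH, CE = 1.2e-3, 1.1e-3, 1.2e-3   # drag / heat / moisture coefficients (illustrative)

def bulk_fluxes(wind_speed, t_sea, t_air, q_sea, q_air):
    """Return (momentum, sensible, latent) fluxes from bulk formulas.

    wind_speed [m s^-1], temperatures in consistent units, specific
    humidities [kg kg^-1]; all inputs may be NumPy arrays.
    """
    tau = RHO_AIR * CD * wind_speed**2                                  # momentum flux [N m^-2]
    q_sensible = RHO_AIR * CP_AIR * CH * wind_speed * (t_sea - t_air)   # sensible heat [W m^-2]
    q_latent = RHO_AIR * LV * CE * wind_speed * (q_sea - q_air)         # latent heat [W m^-2]
    return tau, q_sensible, q_latent

tau, qh, qe = bulk_fluxes(np.array([5.0, 10.0]), 288.0, 286.5, 0.011, 0.009)
```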

  29. SciPy2024: Facilitating scientific investigations from long-tail data with Python by Deborah Khider (4673140)

      Published 2024
      “…Although the Pandas extension and its incorporation into Pyleoclim represent a major stepping stone to allow scientists in these domains to make use of more open science code, work remains for interoperability with other open source libraries such as Matplotlib, Seaborn, Scikit-learn, and SciPy. …”
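
The “Pandas extension” mentioned in the SciPy2024 snippet presumably refers to pandas' accessor-extension mechanism. The sketch below shows that mechanism with a hypothetical accessor name and method; it is not Pyleoclim's actual accessor or API.

```python
import pandas as pd

# Hypothetical example of pandas' accessor-extension mechanism; the accessor
# name "paleo" and the detrend() method are placeholders, not Pyleoclim's API.
@pd.api.extensions.register_series_accessor("paleo")
class PaleoAccessor:
    def __init__(self, series: pd.Series):
        self._series = series

    def detrend(self) -> pd.Series:
        """Subtract the mean as a stand-in for a real detrending routine."""
        return self._series - self._series.mean()

ts = pd.Series([1.0, 2.0, 4.0, 3.0])
print(ts.paleo.detrend())   # the accessor becomes available on every Series
```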

  34. Python implementation of the Trajectory Adaptive Multilevel Sampling algorithm for rare events and improvements by Pascal Wang (10130612)

      Published 2021
      “…The `TAMS` folder contains the necessary files to run the TAMS algorithm. The `main.py` file is the file to be executed using a command of the type `python main.py`. …”
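
The TAMS entry above only explains how to launch the code (`python main.py`). For readers unfamiliar with the method, here is a minimal sketch of the trajectory adaptive multilevel splitting idea on a toy double-well diffusion: repeatedly discard the lowest-scoring trajectory, rebranch from a survivor at the discarded level, and accumulate the survival fractions into a rare-event probability estimate. The dynamics, score function, and parameters are assumptions for illustration, not the repository's `main.py`.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(x0, n_steps=500, dt=0.01, noise=0.5):
    """Overdamped double-well dynamics dx = -V'(x) dt + noise dW (toy model)."""
    xs = np.empty(n_steps + 1)
    xs[0] = x0
    for i in range(n_steps):
        drift = -4.0 * xs[i] * (xs[i] ** 2 - 1.0)
        xs[i + 1] = xs[i] + drift * dt + noise * np.sqrt(dt) * rng.normal()
    return xs

def score(traj):
    """Reaction coordinate: furthest progress toward the target state x = +1."""
    return traj.max()

def tams_estimate(n_traj=50, n_iter=300, target=1.0):
    """Minimal adaptive multilevel splitting estimate of P(trajectory reaches target)."""
    trajs = [simulate(-1.0) for _ in range(n_traj)]
    scores = np.array([score(t) for t in trajs])
    weight = 1.0
    for _ in range(n_iter):
        level = scores.min()
        if level >= target:              # every trajectory has reached the rare set
            break
        worst = int(np.argmin(scores))
        weight *= (n_traj - 1) / n_traj  # survival fraction after discarding one trajectory
        # Branch a surviving trajectory at its first crossing of the discarded level.
        donor = int(rng.choice([i for i in range(n_traj) if i != worst]))
        branch = int(np.argmax(trajs[donor] >= level))
        new = np.concatenate([trajs[donor][:branch + 1],
                              simulate(trajs[donor][branch])[1:]])
        trajs[worst], scores[worst] = new, score(new)
    return weight * np.mean(scores >= target)

print(f"estimated transition probability: {tams_estimate():.3e}")
```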

  35. Code interpreter with LLM. by Jin Lu (428513)

      Published 2025
      “…We evaluated our proposed system on five educational datasets—AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA—which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10–15 percentage points. …”
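
The last entry describes a pipeline that combines retrieval-augmented generation with a code interpreter. The sketch below illustrates that architecture generically: retrieve context, ask a model to write Python, execute the generated code, and return its printed output. The `retrieve` and `call_llm` functions are placeholders (a keyword-overlap retriever and a canned response), not the evaluated system or any specific LLM client.

```python
import contextlib
import io

def retrieve(question: str, corpus: list[str], k: int = 2) -> list[str]:
    """Toy retriever: rank passages by word overlap with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(corpus, key=lambda p: -len(q_words & set(p.lower().split())))
    return ranked[:k]

def call_llm(prompt: str) -> str:
    """Placeholder: a real system would ask an LLM to write Python code here."""
    return "print(round(9.81 * 2.0**2 / 2, 2))"   # canned response for the demo

def answer(question: str, corpus: list[str]) -> str:
    context = "\n".join(retrieve(question, corpus))
    code = call_llm(f"Context:\n{context}\n\nQuestion: {question}\n"
                    "Write Python that prints the answer.")
    buffer = io.StringIO()
    with contextlib.redirect_stdout(buffer):   # "code interpreter": run generated code
        exec(code, {})                         # NOTE: sandbox untrusted code in real use
    return buffer.getvalue().strip()

corpus = ["Free fall distance is d = g*t^2/2 with g = 9.81 m/s^2.",
          "Photosynthesis converts light energy into chemical energy."]
print(answer("How far does an object fall in 2 seconds?", corpus))
```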