Showing 41 - 60 results of 650 for search '(( python files implementation ) OR ( ((python models) OR (python code)) represent ))', query time: 0.29s
  2. 42

    Genomic view of archaeal and bacterial diversity in skeleton of coral Porites lutea and Isopora palifera by Kshitij Tandon (9398705)

    Published 2022
    “…From inner to outer ring: the innermost circle represents MAGs, color-coded at bacterial class level. …”
  3. 43

    A high-performance and highly reusable fast multipole method library and its application to solvation energy calculations at virus-scale by Tingyu Wang (12342757)

    Published 2022
    “…It empowers researchers to compute solvation energy at the scale of viruses interactively via easy-to-use Python interfaces. The 1/N convergence rate observed in mesh-refinement studies confirms code correctness, and result comparison with other trusted PB software shows agreement across a wide range of proteins. Performance results report timings and breakdowns with between 8,000 and 2 million boundary elements and confirm the linear complexity in both time and space. …”
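The 1/N rate mentioned in this entry can be checked mechanically from mesh-refinement data. Below is a minimal Python sketch, using made-up element counts and errors (not figures from the paper), of how an observed convergence order is estimated from successive refinements:

```python
import math

# Hypothetical mesh-refinement data: boundary-element counts and the
# corresponding absolute errors in solvation energy (illustrative numbers,
# not results from the paper).
N = [8_000, 32_000, 128_000, 512_000]
err = [4.1e-2, 1.0e-2, 2.6e-3, 6.4e-4]

# Observed convergence order between successive refinements:
# err ~ C * N**(-p)  =>  p = log(err_i / err_{i+1}) / log(N_{i+1} / N_i).
for (n1, e1), (n2, e2) in zip(zip(N, err), zip(N[1:], err[1:])):
    p = math.log(e1 / e2) / math.log(n2 / n1)
    print(f"{n1:>7} -> {n2:>7} elements: observed order p = {p:.2f}")
# p close to 1 is consistent with the 1/N rate the abstract reports.
```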
  4. 44

    Datasets To EVAL. by Jin Lu (428513)

    Published 2025
    “…We evaluated our proposed system on five educational datasets—AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA—which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10–15 percentage points. …”
  5. 45

    Statistical significance test results. by Jin Lu (428513)

    Published 2025
    “…We evaluated our proposed system on five educational datasets—AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA—which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10–15 percentage points. …”
  6. 46

    How RAG work. by Jin Lu (428513)

    Published 2025
    “…We evaluated our proposed system on five educational datasets—AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA—which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10–15 percentage points. …”
  7. 47

    OpenBookQA experimental results. by Jin Lu (428513)

    Published 2025
    “…We evaluated our proposed system on five educational datasets—AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA—which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10–15 percentage points. …”
  8. 48

    AI2_ARC experimental results. by Jin Lu (428513)

    Published 2025
    “…We evaluated our proposed system on five educational datasets—AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA—which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10–15 percentage points. …”
  9. 49

    TQA experimental results. by Jin Lu (428513)

    Published 2025
    “…We evaluated our proposed system on five educational datasets—AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA—which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10–15 percentage points. …”
  10. 50

    E-EVAL experimental results. by Jin Lu (428513)

    Published 2025
    “…We evaluated our proposed system on five educational datasets—AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA—which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10–15 percentage points. …”
  11. 51

    TQA Accuracy Comparison Chart on different LLM. by Jin Lu (428513)

    Published 2025
    “…We evaluated our proposed system on five educational datasets—AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA—which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10–15 percentage points. …”
  12. 52

    ScienceQA experimental results. by Jin Lu (428513)

    Published 2025
    “…We evaluated our proposed system on five educational datasets—AI2_ARC, OpenBookQA, E-EVAL, TQA, and ScienceQA—which represent diverse question types and domains. Compared to vanilla Large Language Models (LLMs), our approach combining Retrieval-Augmented Generation (RAG) with Code Interpreters achieved an average accuracy improvement of 10–15 percentage points. …”
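The nine entries above all describe the same system: retrieval-augmented generation paired with a code interpreter. As a point of reference, here is a self-contained Python sketch of that loop. The toy corpus, the keyword-overlap retriever, and the `call_llm` stub are all illustrative stand-ins, not the authors' implementation:

```python
# Minimal sketch of a RAG-plus-code-interpreter loop: retrieve context,
# ask a model to write code, execute the code, return its result.

def retrieve(question: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank passages by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(corpus, key=lambda p: -len(q_words & set(p.lower().split())))
    return ranked[:k]

def call_llm(prompt: str) -> str:
    """Stub standing in for a real LLM API call. For this sketch it
    returns a fixed Python snippet that sets `answer`."""
    return "answer = round(9.8 * 2.0, 1)"  # hypothetical model output

def answer(question: str, corpus: list[str]) -> str:
    context = "\n".join(retrieve(question, corpus))
    code = call_llm(f"Context:\n{context}\n\nQuestion: {question}\n"
                    "Write Python that sets `answer`.")
    scope: dict = {}
    exec(code, scope)  # the "code interpreter" step
    return str(scope["answer"])

corpus = [
    "Acceleration due to gravity near Earth's surface is about 9.8 m/s^2.",
    "Photosynthesis converts light energy into chemical energy.",
]
print(answer("What speed does a dropped object reach after 2 s?", corpus))  # 19.6
```

Delegating arithmetic to executed code rather than to the model's token-by-token generation is the usual rationale for the interpreter step, which matches the accuracy gains the abstract reports on quantitative question types.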
  13. 53

    Building a Federated Data Catalog with Client Implementations - Meeting Data Where It is by Mike Johnson (16616871)

    Published 2023
    “…This combination of an auto-refreshing catalog paired with multi-language implementations (R and Python) allowed the catalog to grow its data holdings from 11 to over 2,000 data providers and share the catalog as JSON and parquet files from a github.io page. …”
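A catalog published as static parquet/JSON files is straightforward to consume from a client. The sketch below assumes pandas with pyarrow installed; the URL and the `provider` column are placeholders, not the project's actual endpoint or schema:

```python
import pandas as pd

# Hypothetical illustration of consuming a catalog published as static
# parquet from a github.io page; the URL below is a placeholder.
CATALOG_URL = "https://example.github.io/catalog/catalog.parquet"

catalog = pd.read_parquet(CATALOG_URL)  # requires pyarrow or fastparquet
# A client might filter the ~2,000 providers to those matching some term;
# "provider" is an assumed column name for illustration only.
subset = catalog[catalog["provider"].str.contains("usgs", case=False, na=False)]
print(len(catalog), "records;", len(subset), "matching providers")
```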
  19. 59

    WEDAP: A Python Package for Streamlined Plotting of Molecular Simulation Data by Darian T. Yang (12321348)

    Published 2024
    “…Here, we present the WEDAP Python package for simplifying the analysis of data generated from either conventional MD simulations or the weighted ensemble (WE) path sampling method, as implemented in the widely used WESTPA software package. …”
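Without relying on WEDAP's actual interface (which the snippet does not show), the kind of plot such a package streamlines can be illustrated generically: a probability distribution over a progress coordinate rendered as a free-energy profile. The following matplotlib sketch uses synthetic data and is not WEDAP's API:

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic "progress coordinate" samples mimicking two metastable states.
rng = np.random.default_rng(0)
pcoord = np.concatenate([rng.normal(2.0, 0.4, 5000), rng.normal(5.0, 0.6, 2000)])

# Histogram the samples, then convert probability to free energy, -ln(P);
# empty bins are left as NaN so matplotlib draws gaps instead of -inf.
hist, edges = np.histogram(pcoord, bins=100, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
free_energy = -np.log(hist, out=np.full_like(hist, np.nan), where=hist > 0)

plt.plot(centers, free_energy)
plt.xlabel("progress coordinate")
plt.ylabel(r"$-\ln\,P$ (kT)")
plt.tight_layout()
plt.savefig("pdist.png", dpi=150)
```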
  20. 60

    A Python Package for the Localization of Protein Modifications in Mass Spectrometry Data by Anthony S. Barente (14035175)

    Published 2022
    “…Here we describe pyAscore, an efficient and versatile implementation of the Ascore algorithm in Python for scoring the localization of user-defined PTMs in data-dependent mass spectrometry. pyAscore can be used from the command line or imported into Python scripts and accepts standard file formats from popular software tools used in bottom-up proteomics. …”
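The underlying Ascore idea (Beausoleil et al. 2006) scores a candidate site by the binomial probability of matching its site-determining ions by chance. The sketch below shows only that core calculation; it is not pyAscore's API, and the example numbers are hypothetical:

```python
import math
from scipy.stats import binom

def ascore_like(n_trials: int, n_matched: int, peak_depth: int) -> float:
    """-10*log10 of the probability of matching >= n_matched of n_trials
    site-determining ions by chance, with p = peak_depth/100 per ion
    (the Ascore convention). Assumes the probability is nonzero."""
    p_chance = peak_depth / 100.0
    p_value = binom.sf(n_matched - 1, n_trials, p_chance)  # P(X >= n_matched)
    return -10.0 * math.log10(p_value)

# Hypothetical example: 7 of 10 site-determining ions matched at peak depth 5.
print(round(ascore_like(10, 7, 5), 1))
```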