Model comparison for question pairs detection using 10-fold cross validation.

Bibliographic Details
Main Author: Sifei Han (3747112) (author)
Other Authors: Lingyun Shi (3907438) (author), Fuchiang (Rich) Tsui (20542907) (author)
Published: 2025
Subjects:
Science Policy
Environmental Sciences not elsewhere classified
Biological Sciences not elsewhere classified
Information Systems not elsewhere classified
text similarity tasks
supervised model training
quora question pairs
natural language processing
identifying potential reviewers
fine-tuning approaches
fine-tuned LLaMA model
low-rank adaptation (LoRA)
prompt engineering
few-shot learning
GLUE benchmark
10-fold cross validation
larger pretrained models
smaller LLM
ChatGPT
7B parameters
matching resumes to job descriptions
recommending subject matter experts
grant proposals
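The record's title refers to model comparison via 10-fold cross validation. As a minimal sketch of that evaluation protocol (the fold construction and averaging only; the dataset, models, and scoring function here are hypothetical placeholders, not the authors' pipeline):

```python
import random

def k_fold_indices(n, k=10, seed=0):
    """Split n sample indices into k disjoint folds of near-equal size."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_validate(evaluate, n_samples, k=10):
    """Average the score of `evaluate(train_idx, test_idx)` over k folds,
    using each fold once as the held-out test set."""
    folds = k_fold_indices(n_samples, k)
    scores = []
    for i, test_idx in enumerate(folds):
        train_idx = [j for f, fold in enumerate(folds) if f != i for j in fold]
        scores.append(evaluate(train_idx, test_idx))
    return sum(scores) / k

# Toy usage: a dummy scoring function standing in for "train model, score on test fold".
mean_score = cross_validate(lambda tr, te: len(te) / (len(tr) + len(te)), n_samples=1000)
```

Averaging over all ten folds gives each compared model a score computed on every sample exactly once, which is what makes the comparison between models fair.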

Similar Items

  • The fine-tuning flowchart for the qLLaMA_LoRA-7B model by updating the pre-trained LLaMA-7B model’s parameters using the Low-Rank Adaptation (LoRA), a supervised learning algorithm, from 100,000 Quora question pairs.
    by: Sifei Han (3747112)
    Published: (2025)
  • External validation results between two fine-tuned models (qLLaMA_LoRA-7B and qLLaMA3.1_LoRA-70B) based on GLUE benchmark.
    by: Sifei Han (3747112)
    Published: (2025)
  • One-shot prompt for the Large Language model Meta AI (LLaMA) model with 7 billion parameters.
    by: Sifei Han (3747112)
    Published: (2025)
  • Data Sheet 1_Fine-tuning a local LLaMA-3 large language model for automated privacy-preserving physician letter generation in radiation oncology.pdf
    by: Yihao Hou (20555675)
    Published: (2025)
  • Data Sheet 2_Fine-tuning a local LLaMA-3 large language model for automated privacy-preserving physician letter generation in radiation oncology.pdf
    by: Yihao Hou (20555675)
    Published: (2025)
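The related records describe fine-tuning LLaMA-7B with Low-Rank Adaptation (LoRA). A minimal sketch of the LoRA idea, assuming illustrative dimensions rather than the LLaMA-7B configuration: the frozen weight W is augmented with a trainable low-rank product B @ A, so only r × (d_in + d_out) parameters are trained instead of d_out × d_in.

```python
import numpy as np

# Illustrative sizes only (LLaMA-7B layers are larger).
d_in, d_out, r, alpha = 512, 512, 8, 16

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, init to zero

def lora_forward(x):
    """Forward pass with the low-rank update folded into the weight:
    y = x @ (W + (alpha / r) * B @ A)^T."""
    return x @ (W + (alpha / r) * (B @ A)).T

full_params = W.size              # what full fine-tuning would update
lora_params = A.size + B.size     # what LoRA actually trains
```

Because B starts at zero, the adapted model initially reproduces the pretrained model exactly; training then moves only A and B, a tiny fraction of the full parameter count.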
