Data Sheet 1_Fine-tuning a local LLaMA-3 large language model for automated privacy-preserving physician letter generation in radiation oncology.pdf
Introduction: Generating physician letters is a time-consuming task in daily clinical practice.

Methods: This study investigates local fine-tuning of large language models (LLMs), specifically LLaMA models, for physician letter generation in a privacy-preserving manner within...
Saved in:
| Main Author: | Yihao Hou (20555675) (author) |
|---|---|
| Other Authors: | Christoph Bert (34117) (author), Ahmed Gomaa (4115773) (author), Godehard Lahmer (20555678) (author), Daniel Höfler (10512040) (author), Thomas Weissmann (9960221) (author), Raphaela Voigt (20555681) (author), Philipp Schubert (9577160) (author), Charlotte Schmitter (6449204) (author), Alina Depardon (20555684) (author), Sabine Semrau (9577172) (author), Andreas Maier (6397244) (author), Rainer Fietkau (757368) (author), Yixing Huang (16324230) (author), Florian Putz (4048246) (author) |
| Published in: | 2025 |
| Subjects: | |
| Tags: | No tags; be the first to tag this record! |
Similar Items
- Data Sheet 2_Fine-tuning a local LLaMA-3 large language model for automated privacy-preserving physician letter generation in radiation oncology.pdf
  By: Yihao Hou (20555675). Published in: (2025)
- Data Sheet 3_Fine-tuning a local LLaMA-3 large language model for automated privacy-preserving physician letter generation in radiation oncology.pdf
  By: Yihao Hou (20555675). Published in: (2025)
- One-shot prompt for the Large Language model Meta AI (LLaMA) model with 7 billion parameters.
  By: Sifei Han (3747112). Published in: (2025)
- Hook-and-Bait-Urdu
  By: Sheetal Harris (20504654). Published in: (2025)
- The fine-tuning flowchart for the qLLaMA_LoRA-7B model by updating the pre-trained LLaMA-7B model's parameters using Low-Rank Adaptation (LoRA), a supervised learning algorithm, from 100,000 Quora question pairs.
  By: Sifei Han (3747112). Published in: (2025)