Distilling Wisdom: A Review on Optimizing Learning From Massive Language Models
In the era of Large Language Models (LLMs), Knowledge Distillation (KD) enables the transfer of capabilities from proprietary LLMs to open-source models. This survey provides a detailed discussion of the basic principles, algorithms, and implementation methods of knowledge distillation.
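As a concrete anchor for the distillation idea the abstract describes, below is a minimal sketch of the classic soft-label KD objective (a temperature-scaled KL term against teacher logits blended with cross-entropy on ground-truth labels). It assumes a white-box setting where teacher logits are available; the function name, temperature, and mixing weight are illustrative and not taken from the survey.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Blend a soft KL term against the teacher with cross-entropy on hard labels."""
    # Soften both distributions with the temperature.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # KL(teacher || student), scaled by T^2 to keep gradient magnitudes comparable.
    kd_term = F.kl_div(log_soft_student, soft_teacher,
                       reduction="batchmean") * temperature ** 2
    ce_term = F.cross_entropy(student_logits, labels)
    return alpha * kd_term + (1 - alpha) * ce_term

# Toy usage: a batch of 4 examples over a 10-way output space.
student_logits = torch.randn(4, 10, requires_grad=True)
teacher_logits = torch.randn(4, 10)
labels = torch.randint(0, 10, (4,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
```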
Saved in:

| Main Author: | Dingzong Zhang (23275066) (author) |
|---|---|
| Other Authors: | Devi Listiyani (23275069) (author), Priyanka Singh (256412) (author), Manoranjan Mohanty (23275072) (author) |
| Published in: | 2025 |
| Subjects: | |
Similar Items
- Self-Distillation for Randomized Neural Networks
  By: Minghui Hu (2457952)
  Published in: (2023)
- When geoscience meets generative AI and large language models: Foundations, trends, and future challenges
  By: Hadid, Abdenour
  Published in: (2024)
- Large Language Model Enhanced Particle Swarm Optimization for Hyperparameter Tuning for Deep Learning Models
  By: Saad Hameed (6488738)
  Published in: (2025)
- Large Language Models in Medical Education: Opportunities, Challenges, and Future Directions
  By: Alaa Abd-alrazaq (17058018)
  Published in: (2023)
- The use of large language models for program repair
  By: Fida Zubair (20482610)
  Published in: (2024)