Self-Distillation for Randomized Neural Networks
Knowledge distillation (KD) is a well-established deep learning technique that transfers dark knowledge from a teacher model to a student model, thereby improving the student model's performance. In randomized neural networks, due to the...
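For context, the standard distillation objective that abstracts like this one build on is the Hinton et al. (2015) loss: a weighted sum of hard-label cross-entropy and a temperature-softened KL divergence to the teacher's outputs. The sketch below is a generic PyTorch illustration of that classic loss, not the self-distillation method proposed in this record; the temperature `T` and mixing weight `alpha` are illustrative hyperparameters.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Classic knowledge-distillation loss (Hinton et al., 2015)."""
    # Soft-target term: KL divergence between temperature-softened
    # student and teacher distributions, scaled by T^2 so its gradient
    # magnitude matches the hard-label term.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Hard-target term: ordinary cross-entropy on ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Toy usage with random logits for a 10-class problem.
student = torch.randn(8, 10)
teacher = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
print(kd_loss(student, teacher, labels))
```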
| Main Author: | |
|---|---|
| Other Authors: | |
| Published: | 2023 |
| Subjects: | |