Self-Distillation for Randomized Neural Networks

Knowledge distillation (KD) is an established technique in deep learning that transfers dark knowledge from a teacher model to a student model, thereby improving the student model's performance. In randomized neural networks, due to the...
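As background for the abstract: knowledge distillation is conventionally implemented by training the student against the teacher's temperature-softened output distribution alongside the ordinary label loss. Below is a minimal PyTorch sketch of that standard KD loss, not the self-distillation method proposed in this paper; the function name, temperature, and blending weight are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets,
                      temperature=4.0, alpha=0.5):
    """Standard KD loss: a blend of a soft (teacher) and a hard (label) term.

    Note: hypothetical helper for illustration; not from the paper.
    """
    # Soften both distributions with the temperature, then match the
    # student to the teacher via KL divergence; the T^2 factor keeps
    # gradient magnitudes comparable across temperatures.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    # Ordinary cross-entropy against the ground-truth labels.
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1.0 - alpha) * hard
```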

Bibliographic Details
Main Author: Minghui Hu
Other Authors: Ruobin Gao, Ponnuthurai Nagaratnam Suganthan
Published: 2023