Showing results 201 - 220 of 21,342 for search '(( significant decrease decrease ) OR ( significant (advancement OR advancements) decrease ))*'
  1. 201

    Confusion matrix for LastBERT model. by Ahmed Akib Jawad Karim (20427740)

    Published 2025
    “…The study emphasizes the possibilities of knowledge distillation to produce effective models fit for use in resource-limited conditions, hence advancing NLP and mental health diagnosis. Furthermore underlined by the considerable decrease in model size without appreciable performance loss is the lower computational resources needed for training and deployment, hence facilitating greater applicability. …”
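The excerpt above describes knowledge distillation compressing a BERT-style teacher into a smaller student model. As a rough illustration only — not the study's actual training code — the classic soft-target distillation loss (temperature-softened KL divergence, scaled by T², after Hinton et al.) can be sketched in plain Python:

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax: higher T flattens the distribution."""
    exps = [math.exp(x / T) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, T=2.0):
    """KL(teacher || student) on temperature-softened outputs, scaled by T^2.

    Both argument names and the temperature default are illustrative
    assumptions, not values taken from the LastBERT study.
    """
    p = softmax(teacher_logits, T)  # soft targets from the teacher
    q = softmax(student_logits, T)  # student's softened predictions
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return (T ** 2) * kl

# When the student exactly matches the teacher, the loss is zero.
print(distillation_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1]))  # 0.0
```

In practice this term is usually mixed with the ordinary cross-entropy loss on hard labels; the mixing weight and temperature are tuned per task.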
  2. 202

    Student model architecture. by Ahmed Akib Jawad Karim (20427740)

    Published 2025
  3. 203

    Configuration of the LastBERT model. by Ahmed Akib Jawad Karim (20427740)

    Published 2025
  4. 204

    Confusion matrix for DistilBERT model. by Ahmed Akib Jawad Karim (20427740)

    Published 2025
  5. 205

    ROC curve for LastBERT model. by Ahmed Akib Jawad Karim (20427740)

    Published 2025
  6. 206

    Sample Posts from the ADHD dataset. by Ahmed Akib Jawad Karim (20427740)

    Published 2025
  7. 207

    Top-level overview for ADHD classification study. by Ahmed Akib Jawad Karim (20427740)

    Published 2025