-
2521
Confusion matrix for the LastBERT model.
-
2522
Student model architecture.
-
2523
Configuration of the LastBERT model.
-
2524
Confusion matrix for the DistilBERT model.
-
2525
ROC curve for the LastBERT model.
-
2526
Sample posts from the ADHD dataset.
-
2527
Top-level overview of the ADHD classification study.
Published 2025 (same source excerpt for results 2521-2527): "…Referring to LastBERT, a customized student BERT model, we significantly lowered model parameters from 110 million (BERT base) to 29 million, resulting in a model approximately 73.64% smaller. …"
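The excerpt above reports a reduction from 110 million parameters (BERT base) to 29 million, i.e. about 73.64%. Below is a minimal sketch of how such a reduced student model could be instantiated and the quoted reduction checked, assuming the HuggingFace transformers BertConfig/BertModel API; the hidden size, layer count, and head count are illustrative assumptions, not the published LastBERT configuration.

# Sketch: verifying the reported parameter reduction for a distilled BERT student.
# The 110M (BERT-base) and 29M (LastBERT) figures are taken from the cited 2025 paper;
# the student configuration below is an illustrative assumption, not the published
# LastBERT hyperparameters.
from transformers import BertConfig, BertModel

TEACHER_PARAMS = 110_000_000   # BERT-base, as reported in the excerpt
STUDENT_PARAMS = 29_000_000    # LastBERT, as reported in the excerpt

# Reduction quoted in the abstract: (110M - 29M) / 110M ~= 73.64%
reduction = (TEACHER_PARAMS - STUDENT_PARAMS) / TEACHER_PARAMS
print(f"Reported reduction: {reduction:.2%}")  # prints 73.64%

# Hypothetical student config of roughly that size (values are assumptions).
student_config = BertConfig(
    hidden_size=512,
    num_hidden_layers=6,
    num_attention_heads=8,
    intermediate_size=2048,
)
student = BertModel(student_config)
print(f"Student parameters: {sum(p.numel() for p in student.parameters()):,}")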
-
2540
Uptake of the intervention (N = 49).
Published 2025: "…We piloted the Caregiver Wellbeing intervention in the eThekwini municipality, KwaZulu-Natal, South Africa. …"