Showing 1 - 20 results of 73 for search '(( significantly ((a decrease) OR (greater decrease)) ) OR ( significant concern decrease ))~', query time: 0.33s
  1.
  2.

    Structural equation models raw data. by K. Kanoho Hosoda (19929050)

    Published 2024
    “…Giving kindness was significantly associated with decreased stress reduction and decreased institutional identity. …”
  3.

    Summary of sample descriptive statistics. by K. Kanoho Hosoda (19929050)

    Published 2024
    “…Giving kindness was significantly associated with decreased stress reduction and decreased institutional identity. …”
  4.
  5.

    Model A: Logistic structural model. by Abigail A. Lee (19935335)

    Published 2024
    “…Latent variables concerning logistical factors and covariates were combined into a structural model. …”
  6.
  7.
  8.
  9.
  10.

    Row data. by Xiangyu Wang (341093)

    Published 2025
    “…Moreover, Life Sciences & Medicine students demonstrated a greater tendency toward negative self-perception, low psychological well-being level, and decreased creative self-efficacy, compared to peers in other disciplines.…”
  11.

    Univariate linear regression analysis of scales. by Xiangyu Wang (341093)

    Published 2025
    “…Moreover, Life Sciences & Medicine students demonstrated a greater tendency toward negative self-perception, low psychological well-being level, and decreased creative self-efficacy, compared to peers in other disciplines.…”
  12.

    S1 File - by Ahmed Akib Jawad Karim (20427740)

    Published 2025
    “…Referred to as LastBERT, a customized student BERT model, we significantly lowered the parameter count from 110 million (BERT base) to 29 million, resulting in a model approximately 73.64% smaller. …” (the quoted reduction is checked in the short sketch after this results list)
  13.

    Confusion matrix for ClinicalBERT model. by Ahmed Akib Jawad Karim (20427740)

    Published 2025
    “…Referred to as LastBERT, a customized student BERT model, we significantly lowered the parameter count from 110 million (BERT base) to 29 million, resulting in a model approximately 73.64% smaller. …”
  14.

    Confusion matrix for LastBERT model. by Ahmed Akib Jawad Karim (20427740)

    Published 2025
    “…Referred to as LastBERT, a customized student BERT model, we significantly lowered the parameter count from 110 million (BERT base) to 29 million, resulting in a model approximately 73.64% smaller. …”
  15.

    Student model architecture. by Ahmed Akib Jawad Karim (20427740)

    Published 2025
    “…Referred to as LastBERT, a customized student BERT model, we significantly lowered the parameter count from 110 million (BERT base) to 29 million, resulting in a model approximately 73.64% smaller. …”
  16.

    Configuration of the LastBERT model. by Ahmed Akib Jawad Karim (20427740)

    Published 2025
    “…Referred to as LastBERT, a customized student BERT model, we significantly lowered the parameter count from 110 million (BERT base) to 29 million, resulting in a model approximately 73.64% smaller. …”
  17.

    Confusion matrix for DistilBERT model. by Ahmed Akib Jawad Karim (20427740)

    Published 2025
    “…Referred to as LastBERT, a customized student BERT model, we significantly lowered the parameter count from 110 million (BERT base) to 29 million, resulting in a model approximately 73.64% smaller. …”
  18.

    ROC curve for LastBERT model. by Ahmed Akib Jawad Karim (20427740)

    Published 2025
    “…Referred to as LastBERT, a customized student BERT model, we significantly lowered the parameter count from 110 million (BERT base) to 29 million, resulting in a model approximately 73.64% smaller. …”
  19.

    Sample Posts from the ADHD dataset. by Ahmed Akib Jawad Karim (20427740)

    Published 2025
    “…Referred to as LastBERT, a customized student BERT model, we significantly lowered the parameter count from 110 million (BERT base) to 29 million, resulting in a model approximately 73.64% smaller. …”
  20.

    Top-level overview for ADHD classification study. by Ahmed Akib Jawad Karim (20427740)

    Published 2025
    “…Referred to as LastBERT, a customized student BERT model, we significantly lowered the parameter count from 110 million (BERT base) to 29 million, resulting in a model approximately 73.64% smaller. …”
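The LastBERT entries above (results 12 to 20) all quote a reduction from about 110 million parameters (BERT base) to 29 million for a distilled student model. As a quick illustration only, not the authors' code, the sketch below checks the arithmetic behind the quoted 73.64% figure and adds a generic parameter-counting helper; the PyTorch dependency and the helper are assumptions, since the quoted snippets give no implementation details.

```python
# Back-of-envelope check of the "approximately 73.64% smaller" figure quoted in
# the LastBERT snippets. The 110M and 29M parameter counts come from the quoted
# description; the PyTorch helper is a generic illustration (an assumption), not
# the authors' implementation.
import torch.nn as nn

BERT_BASE_PARAMS = 110_000_000   # parameter count quoted for BERT base
LASTBERT_PARAMS = 29_000_000     # parameter count quoted for the student model

reduction = 1 - LASTBERT_PARAMS / BERT_BASE_PARAMS
print(f"Relative size reduction: {reduction:.2%}")   # prints 73.64%


def count_parameters(model: nn.Module) -> int:
    """Total trainable parameters of a PyTorch model (e.g., a distilled student)."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)
```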