Showing 1 - 20 of 63 results for search '(( significant ((aspect decrease) OR (largest decrease)) ) OR ( significant attention layer ))'
  1.

    Agent Productivity Modeling in a Call Center Domain Using Attentive Convolutional Neural Networks by Ahmed, Abdelrahman

    Published 2020
    “…We explore several designs for the classifier based on convolutional neural networks (CNNs), long short-term memory networks (LSTMs), and an attention layer. The corpus consists of seven hours collected and annotated from three different call centers. …”
    Get full text
  2.

    Efficient self-attention with smart pruning for sustainable large language models by Samir Brahim Belhaouari (9427347)

    Published 2025
    “…Second, the Weight Matrix Folding method is introduced to efficiently prune the self-attention layer matrices in a simple and efficient mathematical model. …”
  3.

    An End-to-End Concatenated CNN Attention Model for the Classification of Lung Cancer With XAI Techniques by Fariha Haque (21485518)

    Published 2025
    “…This approach integrates two distinct CNNs, followed by a multi-layer perceptron (MLP) and a multi-head attention (MHA) mechanism, to enhance performance. …”
  4.

    Bi-attention DoubleUNet: A deep learning approach for carotid artery segmentation in transverse view images for non-invasive stenosis diagnosis by Najmath, Ottakath

    Published 2024
    “…This paper proposes an automated segmentation method for the carotid artery in transverse B-mode ultrasound images, using a Bi-attention DoubleUNet architecture which incorporates spatial attention and channel-wise attention using a Bottleneck Attention Module. …”
    Get full text
  18.

    FLACON: A Deep Federated Transfer Learning-Enabled Transient Stability Assessment During Symmetrical and Asymmetrical Grid Faults by Mohamed Massaoudi (16888710)

    Published 2024
    “…By introducing convolutional layers alongside multi-head attention mechanisms, the FLACON framework significantly improves learning efficiency across geographically distributed datasets. …”