
Bibliographic Details
Main Author: Supriti Sinhamahapatra (22271917) (author)
Other Authors: Jan Niehues (22272010) (author)
Published: 2025
Description
Summary:
State-of-the-art (SOTA) Automatic Speech Recognition (ASR) systems rely primarily on acoustic information while disregarding additional multi-modal context. However, visual information is essential for disambiguation and adaptation.

While most work focuses on speaker images to handle noisy conditions, this work also integrates presentation slides for the use case of scientific presentations.

In a first step, we create a benchmark for multi-modal presentations, including an automatic analysis of the transcription of domain-specific terminology. Next, we explore methods for augmenting speech models with multi-modal information.

We mitigate the lack of datasets with accompanying slides through a suitable data augmentation approach.

Finally, we train a model on the augmented dataset, yielding a relative reduction in word error rate of approximately 34% across all words and 35% for domain-specific terms, compared to the baseline model.
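For context, the reported figures follow the standard definitions of word error rate (WER) and relative reduction. The sketch below is a minimal illustration, not the authors' evaluation code: the function names and the example error rates are assumptions, chosen only so the arithmetic reproduces a roughly 34% relative reduction.

```python
# Minimal sketch of the WER arithmetic behind the abstract's figures.
# All names and numbers here are illustrative assumptions, not the
# authors' evaluation code or actual results.

def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = word-level Levenshtein distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # i deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(substitution, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

def relative_reduction(baseline_wer: float, new_wer: float) -> float:
    """Relative WER reduction: (baseline - new) / baseline."""
    return (baseline_wer - new_wer) / baseline_wer

# Invented error rates that happen to reproduce a ~34% relative reduction:
baseline, augmented = 0.206, 0.136
print(f"relative WER reduction: {relative_reduction(baseline, augmented):.1%}")
# -> relative WER reduction: 34.0%

# Sanity check of the WER function itself:
print(word_error_rate("multi modal speech recognition", "multimodal speech recognition"))
# 1 substitution + 1 deletion over 4 reference words -> 0.5
```

Per-category figures such as the 35% on domain-specific terms are typically obtained by restricting the error counts to occurrences of a term list, which requires the word alignment rather than just the distance; that detail is omitted in this sketch.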