Overall architecture of the FAR-AM model.

<p>The input text is first encoded by an LLM to generate short contexts (E1, E2, E3). The context vectors (T1, T2, T3) are then generated by BERT's multi-layer Transformer module. Finally, TextCNN convolves and pools the output vectors to extract local features and generate feature...
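The TextCNN stage described above (convolution over the context vectors followed by pooling) can be sketched roughly as follows. This is an illustrative NumPy sketch under assumed shapes, not the authors' implementation; the function name `textcnn_feature`, the window size, and the filter count are all hypothetical.

```python
import numpy as np

def textcnn_feature(T, W, b):
    """Convolve per-token context vectors and max-pool over time.

    T: (seq_len, hidden)      -- context vectors (e.g. T1, T2, T3, ...)
    W: (k, hidden, n_filters) -- convolution filters of window size k
    b: (n_filters,)           -- filter biases
    """
    k = W.shape[0]
    seq_len = T.shape[0]
    # Slide a window of k tokens; each window yields n_filters activations.
    conv = np.stack([
        np.maximum(0.0, np.einsum("kh,khf->f", T[i:i + k], W) + b)  # ReLU
        for i in range(seq_len - k + 1)
    ])                          # shape: (seq_len - k + 1, n_filters)
    return conv.max(axis=0)     # max-over-time pooling -> (n_filters,)

rng = np.random.default_rng(0)
T = rng.standard_normal((16, 8))    # hypothetical 16 tokens, hidden size 8
W = rng.standard_normal((3, 8, 4))  # window size 3, 4 filters (assumed)
b = np.zeros(4)
feat = textcnn_feature(T, W, b)
print(feat.shape)  # (4,)
```

Each filter responds to a local n-gram pattern in the context vectors, and max-over-time pooling keeps only the strongest response, yielding a fixed-size local-feature vector regardless of sequence length.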

Full description

Bibliographic details
Main author: Heng Peng (508975) (author)
Other authors: Kun Zhu (447919) (author)
Published: 2025
Subjects: