Abbreviations and notation.

Anomaly detection in attributed networks is critical for identifying threats such as financial fraud and intrusions across social, e-commerce, and cyber-physical domains. Existing graph-based methods face two limitations: (i) embedding-based approaches obscure fine-grained structural and attribute patterns, and (ii) reconstruction-based methods neglect cross-view discrepancies during training, leaving that signal underutilized. To address these gaps, we propose Dual Contrastive Learning-based Reconstruction (DCOR), a dual autoencoder with a shared graph neural network (GNN) encoder that contrasts reconstructions, rather than embeddings, between the original and augmented graph views. DCOR reconstructs both the adjacency matrix and the node attributes for the original graph and for an augmented view, then contrasts these reconstructions across views, which preserves fine-grained, view-specific information and improves the fidelity of both structure and attributes. Across six benchmarks (Enron, Amazon, Facebook, Flickr, ACM, and Reddit), DCOR achieves the best Area Under the Receiver Operating Characteristic curve (AUROC) on all six datasets. Compared with the best-performing non-DCOR baseline on each dataset, DCOR improves AUROC by 11.3% on average, with a maximum gain of 21.3% on Enron. On Amazon, ablating the reconstruction-level contrast (RLC) reduces AUROC by 25.5% relative to the full model, underscoring the necessity of reconstruction-level contrastive learning. Code and datasets are publicly available at https://github.com/Hossein1998/DCOR-Graph-Anomaly-Detection.git.
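As a rough illustration of the idea described above, the sketch below pairs a shared GCN-style encoder with attribute and structure decoders, reconstructs an original and an augmented view, and scores each node by its per-view reconstruction errors plus the discrepancy between the two views' reconstructions. This is a minimal sketch, not the authors' implementation: the encoder depth, the toy augmentation, the loss weights (alpha, beta), and the mean-squared cross-view term are illustrative assumptions, and names such as SharedGNNEncoder and dcor_like_scores are hypothetical.

```python
# Minimal sketch of the cross-view reconstruction idea summarized above.
# NOT the DCOR reference implementation: layer sizes, the augmentation, the
# weights, and the discrepancy term are all illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


def normalize_adj(adj: torch.Tensor) -> torch.Tensor:
    """Symmetrically normalize a dense adjacency matrix with self-loops."""
    a = adj + torch.eye(adj.size(0), device=adj.device)
    d_inv_sqrt = a.sum(dim=1).clamp(min=1e-12).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * a * d_inv_sqrt.unsqueeze(0)


class SharedGNNEncoder(nn.Module):
    """Two-layer dense GCN-style encoder shared by the original and augmented views."""

    def __init__(self, in_dim: int, hid_dim: int):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid_dim)
        self.lin2 = nn.Linear(hid_dim, hid_dim)

    def forward(self, x: torch.Tensor, adj_norm: torch.Tensor) -> torch.Tensor:
        h = F.relu(adj_norm @ self.lin1(x))
        return adj_norm @ self.lin2(h)


class DualDecoder(nn.Module):
    """Reconstructs node attributes (linear head) and adjacency (inner product)."""

    def __init__(self, hid_dim: int, in_dim: int):
        super().__init__()
        self.attr_head = nn.Linear(hid_dim, in_dim)

    def forward(self, z: torch.Tensor):
        x_hat = self.attr_head(z)          # attribute reconstruction
        a_hat = torch.sigmoid(z @ z.t())   # structure reconstruction
        return x_hat, a_hat


def per_node_errors(x, adj, x_hat, a_hat, alpha: float = 0.5) -> torch.Tensor:
    """Weighted per-node attribute + structure reconstruction error."""
    attr_err = ((x_hat - x) ** 2).mean(dim=1)
    struct_err = ((a_hat - adj) ** 2).mean(dim=1)
    return alpha * attr_err + (1 - alpha) * struct_err


def dcor_like_scores(x, adj, x_aug, adj_aug, enc, dec, beta: float = 0.1):
    """Score nodes from both views' reconstructions plus the cross-view
    discrepancy of those reconstructions (higher score = more anomalous)."""
    z_o = enc(x, normalize_adj(adj))
    z_v = enc(x_aug, normalize_adj(adj_aug))
    x_o, a_o = dec(z_o)
    x_v, a_v = dec(z_v)
    cross_view = ((x_o - x_v) ** 2).mean(dim=1) + ((a_o - a_v) ** 2).mean(dim=1)
    score_o = per_node_errors(x, adj, x_o, a_o)
    score_v = per_node_errors(x_aug, adj_aug, x_v, a_v)
    return score_o + score_v + beta * cross_view


if __name__ == "__main__":
    torch.manual_seed(0)
    n, d, h = 50, 16, 8
    x = torch.randn(n, d)
    adj = (torch.rand(n, n) < 0.1).float()
    adj = ((adj + adj.t()) > 0).float()      # make the toy graph symmetric
    x_aug = x + 0.05 * torch.randn_like(x)   # toy attribute augmentation
    enc, dec = SharedGNNEncoder(d, h), DualDecoder(h, d)
    scores = dcor_like_scores(x, adj, x_aug, adj, enc, dec)
    print(scores.shape)                      # torch.Size([50]): per-node anomaly scores
```

Scores from such a routine would typically be evaluated against ground-truth anomaly labels with a standard AUROC metric (e.g. sklearn.metrics.roc_auc_score); the figures quoted in the description refer to the authors' own implementation, not to this sketch.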


Bibliographic details
Main author: Hossein Rafieizadeh (22676722) (author)
Other authors: Hadi Zare (20073000) (author), Mohsen Ghassemi Parsa (22676725) (author), Hocine Cherifi (8177628) (author)
Published: 2025
Subjects: Cell Biology; Science Policy; Environmental Sciences not elsewhere classified; Biological Sciences not elsewhere classified; Information Systems not elsewhere classified
Deposited: 2025-11-24T18:37:59Z
DOI: 10.1371/journal.pone.0335135.t001
Related link: https://figshare.com/articles/dataset/Abbreviations_and_notation_/30698020
License: CC BY 4.0 (open access)
Type: Dataset (published version)
Record ID: Manara_18341f8d001f5c2286967846ad43b5b7
OAI identifier: oai:figshare.com:article/30698020
Repository: ManaraRepo (Manara)