Supplementary results for Table 3 (b).

Attack success rates (%) of adversarial attacks against twelve models. This table contains the supplementary results for the subsection Flexibility. (XLSX)

Author: Chen Lin (95910)
Co-author: Sheng Long (14795824)
Published: 2025
Deposited: 2025-11-25
DOI: 10.1371/journal.pone.0337463.s005
URL: https://figshare.com/articles/dataset/Supplementary_results_for_Table_3_b_/30715085
License: CC BY 4.0 (open access)
Subjects: Neuroscience; Biological Sciences not elsewhere classified; Information Systems not elsewhere classified
Keywords: adversarial attacks; imperceptible perturbations; deep neural networks; Nesterov accelerated gradient; adaptive step size; diffusion mechanism
Type: Dataset