Supplementary Results for Table 3 (a).

<p>Attack success rates (%) of adversarial attacks against twelve models. This table contains the supplementary results for the subsection Flexibility.</p> <p>(XLSX)</p>

Bibliographic details
First author: Chen Lin (95910) (author)
Other authors: Sheng Long (14795824) (author)
Published: 2025
_version_ 1849927625944858624
author Chen Lin (95910)
author2 Sheng Long (14795824)
author2_role author
author_facet Chen Lin (95910)
Sheng Long (14795824)
author_role author
dc.creator.none.fl_str_mv Chen Lin (95910)
Sheng Long (14795824)
dc.date.none.fl_str_mv 2025-11-25T18:43:41Z
dc.identifier.none.fl_str_mv 10.1371/journal.pone.0337463.s004
dc.relation.none.fl_str_mv https://figshare.com/articles/dataset/Supplementary_4esults_for_Table_3_a_/30715082
dc.rights.none.fl_str_mv CC BY 4.0
info:eu-repo/semantics/openAccess
dc.subject.none.fl_str_mv Neuroscience
Biological Sciences not elsewhere classified
Information Systems not elsewhere classified
two crucial metrics
nesterov's
natural data distribution
introduces nesterov's
goals often conflict
generation process toward
extensive experiments demonstrate
deep neural networks
achieving stealthy attacks
imperceptible perturbations tend
evaluating adversarial attacks
diffusion mechanism guides
imperceptible adversarial examples
imperceptible perturbations
adversarial attacks
box attacks
strong acceleration
size strategy
perform poorly
novel framework
mislead models
highly vulnerable
generalization capabilities
art methods
adding subtle
adaptive step
accelerated gradient
dc.title.none.fl_str_mv Supplementary Results for Table 3 (a).
dc.type.none.fl_str_mv Dataset
info:eu-repo/semantics/publishedVersion
dataset
description <p>Attack success rates (%) of adversarial attacks against twelve models. This table contains the supplementary results for the subsection Flexibility.</p> <p>(XLSX)</p>
eu_rights_str_mv openAccess
id Manara_965f26a2176291543d6e2149befec016
identifier_str_mv 10.1371/journal.pone.0337463.s004
network_acronym_str Manara
network_name_str ManaraRepo
oai_identifier_str oai:figshare.com:article/30715082
publishDate 2025
repository.mail.fl_str_mv
repository.name.fl_str_mv
repository_id_str
rights_invalid_str_mv CC BY 4.0
status_str publishedVersion
title Supplementary Results for Table 3 (a).
title_full Supplementary Results for Table 3 (a).
title_fullStr Supplementary Results for Table 3 (a).
title_full_unstemmed Supplementary Results for Table 3 (a).
title_short Supplementary Results for Table 3 (a).
title_sort Supplementary Results for Table 3 (a).
topic Neuroscience
Biological Sciences not elsewhere classified
Information Systems not elsewhere classified