Attack success rates (%) of adversarial attacks against twelve models. * indicates the white-box attacks.


Saved in:
Bibliographic Details
Main Author: Chen Lin (95910) (author)
Other Authors: Sheng Long (14795824) (author)
Published: 2025
Subjects: Neuroscience; Biological Sciences not elsewhere classified; Information Systems not elsewhere classified
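For context, the metric tabulated in this dataset, attack success rate (%), is conventionally the percentage of inputs whose adversarial counterpart is misclassified. A minimal sketch in PyTorch, assuming paired clean/adversarial batches; all names below are illustrative and not taken from the dataset or the underlying paper:

import torch

def attack_success_rate(model, x_clean, x_adv, y_true):
    """Percent of correctly classified inputs whose adversarial
    counterpart fools the model (a common untargeted convention)."""
    model.eval()
    with torch.no_grad():
        pred_clean = model(x_clean).argmax(dim=1)
        pred_adv = model(x_adv).argmax(dim=1)
    valid = pred_clean == y_true           # only count inputs the model got right
    fooled = (pred_adv != y_true) & valid  # adversarial version flips the label
    return 100.0 * fooled.sum().item() / max(valid.sum().item(), 1)

Note that some papers instead divide by the full batch size rather than by the correctly classified subset; the dataset record does not say which convention the authors used.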
dc.creator.none.fl_str_mv Chen Lin (95910)
Sheng Long (14795824)
dc.date.none.fl_str_mv 2025-11-25T18:43:51Z
dc.identifier.none.fl_str_mv 10.1371/journal.pone.0337463.t004
dc.relation.none.fl_str_mv https://figshare.com/articles/dataset/Attack_success_rates_of_adversarial_attacks_against_twelve_models_sup_sup_indicates_the_white-box_attacks_/30715115
dc.rights.none.fl_str_mv CC BY 4.0
info:eu-repo/semantics/openAccess
dc.subject.none.fl_str_mv Neuroscience
Biological Sciences not elsewhere classified
Information Systems not elsewhere classified
two crucial metrics
nesterov's
natural data distribution
introduces nesterov's
goals often conflict
generation process toward
extensive experiments demonstrate
deep neural networks
achieving stealthy attacks
imperceptible perturbations tend
evaluating adversarial attacks
diffusion mechanism guides
imperceptible adversarial examples
imperceptible perturbations
adversarial attacks
box attacks
strong acceleration
size strategy
perform poorly
novel framework
mislead models
highly vulnerable
generalization capabilities
art methods
adding subtle
adaptive step
accelerated gradient
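The subject keywords above reference Nesterov's accelerated gradient and an adaptive step size. For intuition only, here is the textbook NAG update; this is not the authors' actual method, whose details are not part of this record:

def nag_step(x, v, grad_fn, lr=0.1, mu=0.9):
    """One Nesterov accelerated gradient step: evaluate the gradient
    at the momentum look-ahead point, then update velocity and position."""
    g = grad_fn(x + mu * v)  # gradient at the look-ahead point
    v = mu * v - lr * g      # velocity update
    return x + v, v          # new position and new velocity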
dc.title.none.fl_str_mv Attack success rates (%) of adversarial attacks against twelve models. * indicates the white-box attacks.
dc.type.none.fl_str_mv Dataset
info:eu-repo/semantics/publishedVersion
dataset
description Attack success rates (%) of adversarial attacks against twelve models. * indicates the white-box attacks.
id Manara_935db58a4bc0e8a4346ccd2044e45117
network_acronym_str Manara
network_name_str ManaraRepo
oai_identifier_str oai:figshare.com:article/30715115
publishDate 2025