Supplementary results for Table 3 (c).

Attack success rates (%) of adversarial attacks against twelve models. This table contains the supplementary results for the subsection Flexibility. (XLSX)

Saved in:
Bibliographic details
Main author: Chen Lin (95910) (author)
Other authors: Sheng Long (14795824) (author)
Published: 2025
Subjects:
dc.date.none.fl_str_mv 2025-11-25T18:43:43Z
dc.identifier.none.fl_str_mv 10.1371/journal.pone.0337463.s006
dc.relation.none.fl_str_mv https://figshare.com/articles/dataset/Supplementary_results_for_Table_3_c_/30715088
dc.rights.none.fl_str_mv CC BY 4.0
info:eu-repo/semantics/openAccess
dc.subject.none.fl_str_mv Neuroscience
Biological Sciences not elsewhere classified
Information Systems not elsewhere classified
dc.type.none.fl_str_mv Dataset
info:eu-repo/semantics/publishedVersion
id Manara_da157a54de60feabdcff4fea986883a4
network_acronym_str Manara
network_name_str ManaraRepo
oai_identifier_str oai:figshare.com:article/30715088