Attack success rates (%) of adversarial attacks against twelve models. * indicates the white-box attacks.
Saved in:
| Main Author: | Chen Lin |
|---|---|
| Other Authors: | Sheng Long |
| Published: | 2025 |
| Subjects: | Neuroscience; Biological Sciences not elsewhere classified; Information Systems not elsewhere classified |
| Tags: | No tags, be the first to tag this record! |
| _version_ | 1849927625932275712 |
|---|---|
| author | Chen Lin (95910) |
| author2 | Sheng Long (14795824) |
| author2_role | author |
| author_facet | Chen Lin (95910) Sheng Long (14795824) |
| author_role | author |
| dc.creator.none.fl_str_mv | Chen Lin (95910) Sheng Long (14795824) |
| dc.date.none.fl_str_mv | 2025-11-25T18:43:48Z |
| dc.identifier.none.fl_str_mv | 10.1371/journal.pone.0337463.t001 |
| dc.relation.none.fl_str_mv | https://figshare.com/articles/dataset/Attack_success_rates_of_adversarial_attacks_against_twelve_models_sup_sup_indicates_the_white-box_attacks_/30715106 |
| dc.rights.none.fl_str_mv | CC BY 4.0 info:eu-repo/semantics/openAccess |
| dc.subject.none.fl_str_mv | Neuroscience Biological Sciences not elsewhere classified Information Systems not elsewhere classified two crucial metrics nesterov's natural data distribution introduces nesterov's goals often conflict generation process toward extensive experiments demonstrate deep neural networks achieving stealthy attacks imperceptible perturbations tend evaluating adversarial attacks diffusion mechanism guides imperceptible adversarial examples imperceptible perturbations adversarial attacks box attacks strong acceleration size strategy perform poorly novel framework mislead models highly vulnerable generalization capabilities art methods adding subtle adaptive step accelerated gradient |
| dc.title.none.fl_str_mv | Attack success rates (%) of adversarial attacks against twelve models. * indicates the white-box attacks. |
| dc.type.none.fl_str_mv | Dataset info:eu-repo/semantics/publishedVersion dataset |
| description | Attack success rates (%) of adversarial attacks against twelve models. * indicates the white-box attacks. |
| eu_rights_str_mv | openAccess |
| id | Manara_459a28048e5e2bc8823f5d6d101addd9 |
| identifier_str_mv | 10.1371/journal.pone.0337463.t001 |
| network_acronym_str | Manara |
| network_name_str | ManaraRepo |
| oai_identifier_str | oai:figshare.com:article/30715106 |
| publishDate | 2025 |
| repository.mail.fl_str_mv | |
| repository.name.fl_str_mv | |
| repository_id_str | |
| rights_invalid_str_mv | CC BY 4.0 |
| spelling | Attack success rates (%) of adversarial attacks against twelve models. * indicates the white-box attacks.Chen Lin (95910)Sheng Long (14795824)NeuroscienceBiological Sciences not elsewhere classifiedInformation Systems not elsewhere classifiedtwo crucial metricsnesterov'snatural data distributionintroduces nesterov'sgoals often conflictgeneration process towardextensive experiments demonstratedeep neural networksachieving stealthy attacksimperceptible perturbations tendevaluating adversarial attacksdiffusion mechanism guidesimperceptible adversarial examplesimperceptible perturbationsadversarial attacksbox attacksstrong accelerationsize strategyperform poorlynovel frameworkmislead modelshighly vulnerablegeneralization capabilitiesart methodsadding subtleadaptive stepaccelerated gradientAttack success rates (%) of adversarial attacks against twelve models. * indicates the white-box attacks.2025-11-25T18:43:48ZDatasetinfo:eu-repo/semantics/publishedVersiondataset10.1371/journal.pone.0337463.t001https://figshare.com/articles/dataset/Attack_success_rates_of_adversarial_attacks_against_twelve_models_sup_sup_indicates_the_white-box_attacks_/30715106CC BY 4.0info:eu-repo/semantics/openAccessoai:figshare.com:article/307151062025-11-25T18:43:48Z |
| spellingShingle | Attack success rates (%) of adversarial attacks against twelve models. * indicates the white-box attacks. Chen Lin (95910) Neuroscience Biological Sciences not elsewhere classified Information Systems not elsewhere classified two crucial metrics nesterov's natural data distribution introduces nesterov's goals often conflict generation process toward extensive experiments demonstrate deep neural networks achieving stealthy attacks imperceptible perturbations tend evaluating adversarial attacks diffusion mechanism guides imperceptible adversarial examples imperceptible perturbations adversarial attacks box attacks strong acceleration size strategy perform poorly novel framework mislead models highly vulnerable generalization capabilities art methods adding subtle adaptive step accelerated gradient |
| status_str | publishedVersion |
| title | Attack success rates (%) of adversarial attacks against twelve models. * indicates the white-box attacks. |
| title_full | Attack success rates (%) of adversarial attacks against twelve models. * indicates the white-box attacks. |
| title_fullStr | Attack success rates (%) of adversarial attacks against twelve models. * indicates the white-box attacks. |
| title_full_unstemmed | Attack success rates (%) of adversarial attacks against twelve models. * indicates the white-box attacks. |
| title_short | Attack success rates (%) of adversarial attacks against twelve models. * indicates the white-box attacks. |
| title_sort | Attack success rates (%) of adversarial attacks against twelve models. * indicates the white-box attacks. |
| topic | Neuroscience Biological Sciences not elsewhere classified Information Systems not elsewhere classified two crucial metrics nesterov's natural data distribution introduces nesterov's goals often conflict generation process toward extensive experiments demonstrate deep neural networks achieving stealthy attacks imperceptible perturbations tend evaluating adversarial attacks diffusion mechanism guides imperceptible adversarial examples imperceptible perturbations adversarial attacks box attacks strong acceleration size strategy perform poorly novel framework mislead models highly vulnerable generalization capabilities art methods adding subtle adaptive step accelerated gradient |