Supplementary Results for Table 2 (b) and Fig 3.
<p>Attack success rates (%) of adversarial attacks against twelve models. This table contains the supplementary results for the subsection Flexibility and Further Comparison with NI-FGSM.</p> <p>(XLSX)</p>
Saved in:
| Main creator: | |
|---|---|
| Contributors: | |
| Published / Created: | 2025 |
| Subjects: | |
| Tags: | |
| _version_ | 1849927625946955776 |
|---|---|
| author | Chen Lin (95910) |
| author2 | Sheng Long (14795824) |
| author2_role | author |
| author_facet | Chen Lin (95910) Sheng Long (14795824) |
| author_role | author |
| dc.creator.none.fl_str_mv | Chen Lin (95910) Sheng Long (14795824) |
| dc.date.none.fl_str_mv | 2025-11-25T18:43:39Z |
| dc.identifier.none.fl_str_mv | 10.1371/journal.pone.0337463.s002 |
| dc.relation.none.fl_str_mv | https://figshare.com/articles/dataset/Supplementary_Results_for_Table_2_b_and_Fig_3_/30715076 |
| dc.rights.none.fl_str_mv | CC BY 4.0 info:eu-repo/semantics/openAccess |
| dc.subject.none.fl_str_mv | Neuroscience; Biological Sciences not elsewhere classified; Information Systems not elsewhere classified; adversarial attacks; imperceptible perturbations; deep neural networks; Nesterov accelerated gradient; adaptive step size; diffusion mechanism; generalization capabilities |
| dc.title.none.fl_str_mv | Supplementary Results for Table 2 (b) and Fig 3. |
| dc.type.none.fl_str_mv | Dataset info:eu-repo/semantics/publishedVersion dataset |
| description | <p>Attack success rates (%) of adversarial attacks against twelve models. This table contains the supplementary results for the subsection Flexibility and Further Comparison with NI-FGSM.</p> <p>(XLSX)</p> |
| eu_rights_str_mv | openAccess |
| id | Manara_056a0f4f8c1ff7c53edaf843b7e60043 |
| identifier_str_mv | 10.1371/journal.pone.0337463.s002 |
| network_acronym_str | Manara |
| network_name_str | ManaraRepo |
| oai_identifier_str | oai:figshare.com:article/30715076 |
| publishDate | 2025 |
| repository.mail.fl_str_mv | |
| repository.name.fl_str_mv | |
| repository_id_str | |
| rights_invalid_str_mv | CC BY 4.0 |
| status_str | publishedVersion |
| title | Supplementary Results for Table 2 (b) and Fig 3. |
| title_full | Supplementary Results for Table 2 (b) and Fig 3. |
| title_fullStr | Supplementary Results for Table 2 (b) and Fig 3. |
| title_full_unstemmed | Supplementary Results for Table 2 (b) and Fig 3. |
| title_short | Supplementary Results for Table 2 (b) and Fig 3. |
| title_sort | Supplementary Results for Table 2 (b) and Fig 3. |
| topic | Neuroscience; Biological Sciences not elsewhere classified; Information Systems not elsewhere classified; adversarial attacks; imperceptible perturbations; deep neural networks; Nesterov accelerated gradient; adaptive step size; diffusion mechanism; generalization capabilities |