Comparison of SI-NI-FGSM and SI+Ours.
(a) Attack success rates for different model combinations as the number of attack steps increases; attacks are launched from ResNet-18 and transferred to four target models: ResNet-18 (white-box), ResNet-34, ResNet-50, and ResNet-101 (black-box settings). (b) Attack success rates for different models; the source model is ResNet-18.
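The figure summarises a standard transferability evaluation: adversarial examples are crafted against the white-box source model (ResNet-18) and then fed, unchanged, to the black-box target models. As a rough illustration of the baseline side of the comparison, the sketch below implements SI-NI-FGSM (scale-invariant Nesterov iterative FGSM, Lin et al., ICLR 2020) in PyTorch and measures transfer success rates. The function names, hyper-parameter defaults, and the assumption of NCHW inputs in [0, 1] are illustrative choices, and the "SI+Ours" method shown in the figure is not reconstructed here.

```python
import torch
import torch.nn.functional as F


def si_ni_fgsm(model, images, labels, eps=8 / 255, steps=10, mu=1.0, m=5):
    """Craft adversarial examples with SI-NI-FGSM (Lin et al., ICLR 2020).

    Combines a Nesterov look-ahead point, scale-invariant gradient averaging
    over m scaled copies x / 2**i, momentum accumulation, and a sign step
    projected back into the eps-ball. Assumes NCHW image tensors in [0, 1].
    """
    model.eval()
    alpha = eps / steps                      # per-iteration step size
    x_adv = images.clone().detach()
    g = torch.zeros_like(images)             # momentum buffer

    for _ in range(steps):
        # Nesterov look-ahead point.
        x_nes = (x_adv + alpha * mu * g).detach().requires_grad_(True)

        # Scale-invariant gradient: average the loss gradient over the
        # scaled copies x_nes / 2**i.
        grad = torch.zeros_like(images)
        for i in range(m):
            loss = F.cross_entropy(model(x_nes / (2 ** i)), labels)
            grad = grad + torch.autograd.grad(loss, x_nes)[0]
        grad = grad / m

        # Momentum update; the gradient is normalised by its mean absolute
        # value (an L1-style normalisation, as in MI-FGSM).
        g = mu * g + grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True)

        # Sign step, clipped to the eps-ball around the clean images and to
        # the valid pixel range.
        x_adv = torch.max(torch.min(x_adv + alpha * g.sign(), images + eps),
                          images - eps)
        x_adv = x_adv.clamp(0.0, 1.0).detach()

    return x_adv


def transfer_success_rate(target_model, x_adv, labels):
    """Fraction of adversarial examples misclassified by a target model."""
    target_model.eval()
    with torch.no_grad():
        preds = target_model(x_adv).argmax(dim=1)
    return (preds != labels).float().mean().item()
```

In this setup, panel (a) would correspond to re-running the attack with an increasing `steps` value and plotting the success rate measured against each target model, while the white-box entry reuses the source model as its own target.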
| _version_ | 1849927625934372864 |
|---|---|
| author | Chen Lin (95910) |
| author2 | Sheng Long (14795824) |
| author2_role | author |
| author_facet | Chen Lin (95910) Sheng Long (14795824) |
| author_role | author |
| dc.creator.none.fl_str_mv | Chen Lin (95910) Sheng Long (14795824) |
| dc.date.none.fl_str_mv | 2025-11-25T18:43:46Z |
| dc.identifier.none.fl_str_mv | 10.1371/journal.pone.0337463.g003 |
| dc.relation.none.fl_str_mv | https://figshare.com/articles/figure/Comparison_of_SI-NI-FGSM_and_SI_Ours_/30715100 |
| dc.rights.none.fl_str_mv | CC BY 4.0 info:eu-repo/semantics/openAccess |
| dc.subject.none.fl_str_mv | Neuroscience; Biological Sciences not elsewhere classified; Information Systems not elsewhere classified; adversarial attacks; imperceptible perturbations; deep neural networks; accelerated gradient |
| dc.title.none.fl_str_mv | Comparison of SI-NI-FGSM and SI+Ours. |
| dc.type.none.fl_str_mv | Image Figure info:eu-repo/semantics/publishedVersion image |
| description | (a) Attack success rates for different model combinations as the number of attack steps increases; attacks are launched from ResNet-18 and transferred to four target models: ResNet-18 (white-box), ResNet-34, ResNet-50, and ResNet-101 (black-box settings). (b) Attack success rates for different models; the source model is ResNet-18. |
| eu_rights_str_mv | openAccess |
| id | Manara_5d541734aeca10b7521315e6ecb24508 |
| identifier_str_mv | 10.1371/journal.pone.0337463.g003 |
| network_acronym_str | Manara |
| network_name_str | ManaraRepo |
| oai_identifier_str | oai:figshare.com:article/30715100 |
| publishDate | 2025 |
| repository.mail.fl_str_mv | |
| repository.name.fl_str_mv | |
| repository_id_str | |
| rights_invalid_str_mv | CC BY 4.0 |
| status_str | publishedVersion |
| title | Comparison of SI-NI-FGSM and SI+Ours. |