Visualization of adversarial samples.

(a) represents the original image and (b)-(f) represent the adversarial examples generated by five methods.

Bibliographic Details
Main Author: Chen Lin (95910) (author)
Other Authors: Sheng Long (14795824) (author)
Published: 2025
Subjects: Neuroscience; Biological Sciences not elsewhere classified; Information Systems not elsewhere classified
_version_ 1849927625931227136
author Chen Lin (95910)
author2 Sheng Long (14795824)
author2_role author
author_facet Chen Lin (95910)
Sheng Long (14795824)
author_role author
dc.creator.none.fl_str_mv Chen Lin (95910)
Sheng Long (14795824)
dc.date.none.fl_str_mv 2025-11-25T18:43:47Z
dc.identifier.none.fl_str_mv 10.1371/journal.pone.0337463.g004
dc.relation.none.fl_str_mv https://figshare.com/articles/figure/Visualization_of_adversarial_samples_/30715103
dc.rights.none.fl_str_mv CC BY 4.0
info:eu-repo/semantics/openAccess
dc.subject.none.fl_str_mv Neuroscience
Biological Sciences not elsewhere classified
Information Systems not elsewhere classified
two crucial metrics
nesterov's
natural data distribution
introduces nesterov's
goals often conflict
generation process toward
extensive experiments demonstrate
deep neural networks
achieving stealthy attacks
imperceptible perturbations tend
evaluating adversarial attacks
diffusion mechanism guides
imperceptible adversarial examples
imperceptible perturbations
adversarial attacks
box attacks
strong acceleration
size strategy
perform poorly
novel framework
mislead models
highly vulnerable
generalization capabilities
art methods
adding subtle
adaptive step
accelerated gradient
dc.title.none.fl_str_mv Visualization of adversarial samples.
dc.type.none.fl_str_mv Image
Figure
info:eu-repo/semantics/publishedVersion
image
description (a) represents the original image and (b)-(f) represent the adversarial examples generated by five methods.
eu_rights_str_mv openAccess
id Manara_83960aae82abbbe35d97304eb97eb4bc
identifier_str_mv 10.1371/journal.pone.0337463.g004
network_acronym_str Manara
network_name_str ManaraRepo
oai_identifier_str oai:figshare.com:article/30715103
publishDate 2025
repository.mail.fl_str_mv
repository.name.fl_str_mv
repository_id_str
rights_invalid_str_mv CC BY 4.0
status_str publishedVersion
title Visualization of adversarial samples.
title_full Visualization of adversarial samples.
title_fullStr Visualization of adversarial samples.
title_full_unstemmed Visualization of adversarial samples.
title_short Visualization of adversarial samples.
title_sort Visualization of adversarial samples.
topic Neuroscience
Biological Sciences not elsewhere classified
Information Systems not elsewhere classified
two crucial metrics
nesterov's
natural data distribution
introduces nesterov's
goals often conflict
generation process toward
extensive experiments demonstrate
deep neural networks
achieving stealthy attacks
imperceptible perturbations tend
evaluating adversarial attacks
diffusion mechanism guides
imperceptible adversarial examples
imperceptible perturbations
adversarial attacks
box attacks
strong acceleration
size strategy
perform poorly
novel framework
mislead models
highly vulnerable
generalization capabilities
art methods
adding subtle
adaptive step
accelerated gradient