Complete figures generated during research for the paper "The Weaponization of Imperfection: Quantifying Adversarial Vulnerability in Pre-Trained Vision Models and its Direct Implications for AGI Catastrophe"

<p dir="ltr">Figures for: This study presents a rigorous empirical quantification of adversarial vulnerability in state-of-the-art Convolutional Neural Networks (CNNs) and directly relates this measurable fragility to the looming systemic risk of Artificial General Intelligence (AGI)...

Full description

Saved in:
Bibliographic Details
Main Author: Qamar Muneer Akbar (22456075)
Published: 2025
Description
Summary: Figures for: This study presents a rigorous empirical quantification of adversarial vulnerability in state-of-the-art Convolutional Neural Networks (CNNs) and relates this measurable fragility directly to the looming systemic risk of Artificial General Intelligence (AGI) misalignment. I conducted an extensive testing campaign on three widely adopted ImageNet-pretrained architectures (ResNet-50, DenseNet-121, and VGG-16), using ImageNet samples of the same kind the models were originally trained on. My research focused exclusively on the vulnerability of these models to both targeted and untargeted gradient-based perturbations, employing the Fast Gradient Sign Method (FGSM), Basic Iterative Method (BIM), Projected Gradient Descent (PGD), and Momentum Iterative Method (MIM) across a range of perturbation budgets (ε).
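To make the attack setup concrete, below is a minimal sketch of the simplest of the named attacks, FGSM, in both untargeted and targeted form, run against an ImageNet-pretrained torchvision ResNet-50. The ε value, the random input batch, and the use of the model's own clean predictions as labels are illustrative assumptions, not the configuration behind the figures.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models, transforms

# Standard ImageNet normalization, applied inside the forward pass so the
# attack itself can operate in raw [0, 1] pixel space.
_normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                  std=[0.229, 0.224, 0.225])

class NormalizedModel(nn.Module):
    """Wraps a classifier so callers pass un-normalized [0, 1] images."""
    def __init__(self, backbone):
        super().__init__()
        self.backbone = backbone

    def forward(self, x):
        return self.backbone(_normalize(x))

model = NormalizedModel(
    models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
).eval()

def fgsm(model, x, y, epsilon, targeted=False):
    """Single-step FGSM on images in [0, 1].

    Untargeted: step along the gradient sign to increase the loss on the
    true label y. Targeted: step against it to decrease the loss on a
    chosen target label y. BIM, PGD, and MIM iterate this basic step.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    step = (-epsilon if targeted else epsilon) * grad.sign()
    return (x + step).clamp(0.0, 1.0).detach()

# Illustrative usage: a random batch stands in for real ImageNet samples,
# and the clean predictions stand in for ground-truth labels.
x = torch.rand(4, 3, 224, 224)
y = model(x).argmax(dim=1)
x_adv = fgsm(model, x, y, epsilon=8 / 255)
flip_rate = (model(x_adv).argmax(dim=1) != y).float().mean().item()
print(f"untargeted FGSM flip rate at eps=8/255: {flip_rate:.2f}")
```

Folding the normalization into the model keeps the ε budget defined in pixel space, which is how budgets for these attacks are conventionally reported.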