Date of Award: 2023
Degree: Doctor of Philosophy (PhD)
College: College of Computing and Engineering
Michael J. Laszlo
Francisco J. Mitropoulos
Keywords: adversarial robustness, data augmentation, deep learning, machine learning, neural networks, stochastic weight averaging
Deep neural networks used for image classification are highly susceptible to adversarial attacks. The de facto method for increasing adversarial robustness is to train neural networks on a mixture of adversarial and unperturbed images. However, this method leads to robust overfitting, where the network primarily learns to recognize the one specific type of attack used to generate the training images while remaining vulnerable to others after training. In this dissertation, we perform a rigorous study of whether combining state-of-the-art data augmentation methods with Stochastic Weight Averaging improves adversarial robustness and diminishes robust overfitting across a wide range of attacks and perturbation magnitudes in the imperceptible range. To our knowledge, we are the first to study the combination of FMix with Stochastic Weight Averaging and to carefully analyze its effect on robustness. Lastly, we developed a shallower custom architecture, SimpleNet, which achieved a sizeable improvement in average robust accuracy compared to the more complex ResNet-18 architecture on CIFAR-10 and Fashion-MNIST.
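The two ingredients named in the abstract can be illustrated on a toy problem. The sketch below is not the dissertation's method: it uses a logistic-regression "network" on synthetic 2-D data, FGSM as a stand-in adversarial attack, and plain weight averaging as a minimal form of Stochastic Weight Averaging. The constants `EPS`, `SWA_START`, and the data itself are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linearly separable 2-D toy data (stand-in for images).
n = 200
X = rng.normal(size=(n, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

EPS = 0.05        # FGSM perturbation magnitude (assumed, "imperceptible" scale)
LR = 0.1          # learning rate (assumed)
EPOCHS = 60
SWA_START = 40    # epoch after which weight snapshots are averaged (assumed)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(2)
b = 0.0
snapshots = []

for epoch in range(EPOCHS):
    p = sigmoid(X @ w + b)
    # FGSM: perturb each input along the sign of the input-gradient of the loss.
    grad_x = (p - y)[:, None] * w[None, :]
    X_adv = X + EPS * np.sign(grad_x)
    # Adversarial training: fit on a mixture of clean and adversarial examples.
    X_mix = np.vstack([X, X_adv])
    y_mix = np.concatenate([y, y])
    err = sigmoid(X_mix @ w + b) - y_mix
    w -= LR * X_mix.T @ err / len(y_mix)
    b -= LR * err.mean()
    if epoch >= SWA_START:
        snapshots.append((w.copy(), b))

# SWA (minimal form): the deployed model is the average of the late snapshots.
swa_w = np.mean([s[0] for s in snapshots], axis=0)
swa_b = np.mean([s[1] for s in snapshots])

acc = np.mean((sigmoid(X @ swa_w + swa_b) > 0.5) == (y == 1))
```

In a deep-learning setting the same loop shape applies, with the closed-form input gradient replaced by backpropagation and the snapshot average replaced by a running SWA average of network parameters.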
Anabetsy Termini. 2023. Adversarial Training of Deep Neural Networks. Doctoral dissertation. Nova Southeastern University. Retrieved from NSUWorks, College of Computing and Engineering. (1182)