CCE Theses and Dissertations

Date of Award

2022

Document Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Department

College of Computing and Engineering

Advisor

Wei Liu

Committee Member

Peixiang

Committee Member

Ajoy Kumar

Keywords

deep neural networks, image and voice recognition, robust adversarial training, layer-specific robust adversarial training algorithm (LSRAT)

Abstract

With advancements in computer hardware, deep neural networks outperform other methods for many applications, such as image and voice recognition. Unfortunately, existing deep neural networks are fragile at test time against adversarial examples: intentionally crafted perturbations that cause a network to misclassify an input it would otherwise label correctly. This vulnerability makes deploying neural networks risky, especially in critical applications. Extensive prior research has proposed diverse approaches to building robust neural networks, including customizing models and their parameters, pre-processing inputs to detect and remove adversarial perturbations, blocking adversarial inputs, and training the network on premade adversarial examples. Among these, robust adversarial training is the most promising approach for achieving robustness against a wide range of adversarial example attacks.
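
As an illustration of how such perturbations can be crafted, the sketch below uses the fast gradient sign method (FGSM), one standard one-step attack; the dissertation does not name a specific attack, so the choice of FGSM, the function name fgsm_example, and the budget eps are assumptions for illustration only.

import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, eps=8 / 255):
    # Hypothetical sketch (FGSM assumed): move each input feature by
    # +/- eps in the direction that increases the classification loss,
    # pushing a correctly classified input toward misclassification.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    x_adv = x + eps * grad.sign()          # one signed-gradient step
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixel values in [0, 1]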

Despite its success, robust adversarial training is challenging to apply to medium and large datasets because of its extensive training time requirements. This dissertation introduced an approach to lower the training time of robust adversarial training by modifying the adversarial input generation process and re-using the same crafted adversarial examples during the robust adversarial training process. Five experiments were conducted in this dissertation research. The results of the first two experiments guided the development of the layer-specific robust adversarial training algorithm (LSRAT).
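
A minimal sketch of the reuse idea, assuming a PyTorch-style training loop, follows; the names craft_fn and refresh_every and the caching policy are hypothetical, and the actual LSRAT algorithm is layer-specific and more involved than this illustration.

import torch.nn.functional as F

def robust_train_with_reuse(model, loader, optimizer, craft_fn,
                            epochs=10, refresh_every=5):
    # Hypothetical sketch: cache crafted adversarial batches and refresh
    # them only every `refresh_every` epochs, instead of regenerating
    # them at every step (the dominant cost of adversarial training).
    cache = {}
    for epoch in range(epochs):
        for i, (x, y) in enumerate(loader):
            if i not in cache or epoch % refresh_every == 0:
                cache[i] = craft_fn(model, x, y).detach()
            optimizer.zero_grad()
            loss = F.cross_entropy(model(cache[i]), y)
            loss.backward()
            optimizer.step()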

The third experiment demonstrated empirically that the LSRAT algorithm was more computationally efficient than the traditional approach, and the speed-up in training time had a negligible effect on the final model's robust and standard accuracy. The last two experiments showed that the efficiency of the LSRAT algorithm could be improved further by tuning the mini-batch size used during training and by correctly selecting the optimizer's learning rate values. These experiments demonstrated that models trained with the proposed cost-reduction techniques effectively reduced the computational overhead with little to no impact on accuracy compared to the baseline model.
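
The interaction between these two tuning knobs can be sketched as a simple configuration sweep; the linear learning-rate scaling heuristic and all names below (batch_lr_configs, base_lr, base_batch) are illustrative assumptions, not the dissertation's actual settings.

def batch_lr_configs(base_lr=0.1, base_batch=128,
                     batch_sizes=(128, 256, 512)):
    # Hypothetical sweep: larger mini-batches mean fewer adversarial
    # crafting steps per epoch, and the learning rate is scaled in
    # proportion (linear-scaling heuristic) to keep optimization stable.
    return [(b, base_lr * b / base_batch) for b in batch_sizes]

for batch_size, lr in batch_lr_configs():
    print(f"batch={batch_size:4d}  lr={lr:.3f}")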
