Attacking an AI Classifier in a Realistic Context

Issue Date
2009-01-05
Language
en
Abstract
In this thesis a theoretical attack on an AI classifier, as introduced by Barreno et al. (2006), is described and lifted to a more realistic setting. The dataset, classifier, and attack algorithm were updated in order to increase realism. The KDD '99 cup data on network intrusion was used as the dataset, and a combination of kMeans and Learning Vector Quantization was used for classification. The hypothesis that an increase in realism would result in a significant increase in the number of iterations needed for a successful attack was confirmed; however, in some attempts the attack still succeeded. Subsequently, the randomization defense suggested by Barreno et al. was implemented and tested in both abstract and realistic contexts. The defense was effective in all tested contexts, but appeared to be less effective in the most realistic one. Since both the realistic context and the randomization defense increase the number of iterations needed for the attack, further research could investigate how an external network that flags suspicious changes in the primary classifier may benefit from this.
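The abstract names the classification setup only briefly. As a rough illustration of what a "kMeans + Learning Vector Quantization" classifier can look like, the sketch below seeds LVQ prototypes with per-class kMeans centroids and then applies standard LVQ1 updates. This is not the thesis code: the toy data, prototype count, learning rate, and training schedule are all assumptions made for the example.

```python
# Illustrative sketch only: kMeans-initialised LVQ1 classifier.
# The KDD '99 features are replaced here by toy Gaussian data.
import numpy as np
from sklearn.cluster import KMeans


def init_prototypes(X, y, per_class=3):
    """Seed LVQ prototypes with kMeans centroids computed per class."""
    protos, labels = [], []
    for c in np.unique(y):
        km = KMeans(n_clusters=per_class, n_init=10, random_state=0).fit(X[y == c])
        protos.append(km.cluster_centers_)
        labels.extend([c] * per_class)
    return np.vstack(protos), np.array(labels)


def lvq1_train(X, y, protos, proto_labels, lr=0.05, epochs=20):
    """LVQ1: pull the winning prototype toward correctly classified
    samples, push it away from misclassified ones."""
    for _ in range(epochs):
        for x, t in zip(X, y):
            i = np.argmin(np.linalg.norm(protos - x, axis=1))  # nearest prototype
            sign = 1.0 if proto_labels[i] == t else -1.0
            protos[i] += sign * lr * (x - protos[i])
        lr *= 0.95  # decay the learning rate each epoch
    return protos


def predict(X, protos, proto_labels):
    """Assign each sample the label of its nearest prototype."""
    d = np.linalg.norm(X[:, None, :] - protos[None, :, :], axis=2)
    return proto_labels[np.argmin(d, axis=1)]


if __name__ == "__main__":
    # Toy stand-in for network-traffic features: two Gaussian blobs.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (200, 4)), rng.normal(3, 1, (200, 4))])
    y = np.array([0] * 200 + [1] * 200)
    protos, plabels = init_prototypes(X, y)
    protos = lvq1_train(X, y, protos, plabels)
    print("train accuracy:", (predict(X, protos, plabels) == y).mean())
```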
Faculty
Faculteit der Sociale Wetenschappen