Attacking an AI Classifier in a Realistic Context

dc.contributor.advisor: Sprinkhuizen-Kuyper, I.G.
dc.contributor.advisor: Heskes, T.M.
dc.contributor.author: Delft, B. van
dc.date.issued: 2009-01-05
dc.description.abstract: In this thesis a theoretical attack on an AI classifier, as introduced by Barreno et al. (2006), is described and lifted to a more realistic setting. The dataset, classifier and attack algorithm were updated to make the setting more realistic. The KDD Cup '99 data on network intrusion was used as the data set, and a combination of k-means and Learning Vector Quantization was used for classification. The hypothesis was confirmed that an increase in realism results in a significant increase in the number of iterations needed for a successful attack. However, in some attempts the attack still succeeded. Subsequently, the randomization defense suggested by Barreno et al. was implemented and tested in both abstract and realistic contexts. The defense was effective in all tested contexts, but appeared to be less effective in the most realistic one. Since both the realistic context and the randomization defense increase the number of iterations needed for the attack, further research could investigate how an external network that flags suspicious changes in the primary classifier may benefit from this.
dc.identifier.uri: http://theses.ubn.ru.nl/handle/123456789/40
dc.language.iso: en
dc.thesis.faculty: Faculteit der Sociale Wetenschappen
dc.thesis.specialisation: Bachelor Artificial Intelligence
dc.thesis.studyprogramme: Artificial Intelligence
dc.thesis.type: Bachelor
dc.title: Attacking an AI Classifier in a Realistic Context
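
The abstract above describes classification with a combination of k-means and Learning Vector Quantization on the KDD Cup '99 intrusion data. The sketch below is a minimal, hypothetical illustration of one such combination (k-means placing initial prototypes per class, refined with LVQ1); it is not the thesis's implementation, and the function names, parameters, and toy data are assumptions for illustration only.

```python
# Minimal sketch (not the thesis's implementation): k-means initializes
# class prototypes, which are then refined with standard LVQ1.
import numpy as np
from sklearn.cluster import KMeans

def init_prototypes(X, y, per_class=3, seed=0):
    """Place up to `per_class` prototypes per class via k-means on that class's samples."""
    protos, labels = [], []
    for c in np.unique(y):
        Xc = X[y == c]
        km = KMeans(n_clusters=min(per_class, len(Xc)), n_init=10, random_state=seed).fit(Xc)
        protos.append(km.cluster_centers_)
        labels.extend([c] * len(km.cluster_centers_))
    return np.vstack(protos), np.array(labels)

def lvq1_train(X, y, protos, proto_labels, lr=0.05, epochs=20, seed=0):
    """LVQ1: move the nearest prototype toward same-class samples, away from others."""
    rng = np.random.default_rng(seed)
    protos = protos.copy()
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            d = np.linalg.norm(protos - X[i], axis=1)
            j = np.argmin(d)
            sign = 1.0 if proto_labels[j] == y[i] else -1.0
            protos[j] += sign * lr * (X[i] - protos[j])
        lr *= 0.9  # simple learning-rate decay per epoch
    return protos

def lvq_predict(X, protos, proto_labels):
    """Label each sample with the class of its nearest prototype."""
    d = np.linalg.norm(X[:, None, :] - protos[None, :, :], axis=2)
    return proto_labels[np.argmin(d, axis=1)]

if __name__ == "__main__":
    # Toy stand-in data; the thesis experiments used KDD Cup '99 features instead.
    X = np.vstack([np.random.randn(100, 4), np.random.randn(100, 4) + 3])
    y = np.array([0] * 100 + [1] * 100)
    protos, plabels = init_prototypes(X, y)
    protos = lvq1_train(X, y, protos, plabels)
    print("training accuracy:", (lvq_predict(X, protos, plabels) == y).mean())
```
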
Files
Original bundle
Name: Delft, B.van BA-Thesis.pdf
Size: 673.49 KB
Format: Adobe Portable Document Format
Description: Thesis text (Scriptietekst)