Auditing Algorithms: A working example on auditing algorithms
Artificial Intelligence (AI) is becoming an ever larger part of society. Intelligent algorithms are making decisions that can have great impact on individuals, and there is a rising concern about harmful consequences of decisions that negatively affect minorities. This thesis investigates whether the SMACTR framework for internal audits can be applied to intelligent systems to detect possible harmful consequences at an early stage of model development. To that end, a case study is conducted that sets out a working example, contributing to the field of auditing algorithms. The internal audit is performed on the Dutch RobBERT model for natural language processing (NLP), fine-tuned to perform sentiment analysis. This thesis suggests that internal audits have the potential to become part of the software development process, helping the field towards greater transparency of intelligent algorithms and early detection of harmful behaviour.
Faculteit der Sociale Wetenschappen