Artificial Intelligence As the Solution to All Our Problems: Why It Is a Paradigm of Dehumanisation Instead
It goes without saying that Artificial Intelligence (AI) stands at the pinnacle of our technological advancements. We have created intelligent systems that already vastly outperform humans in certain capacities, and this gap will only widen as time passes. In many ways this is promising, but the forms that technology and AI take in our society have also sparked concerns about dehumanisation. On the one hand, attention is drawn to the power play underlying the use of technology, as if dehumanisation would disappear if only we ruled out hidden intentions or mistakes in its applications. Yet such a view glosses over the fact that dehumanisation may also occur when a seemingly positive goal is perfectly achieved, a phenomenon that I coin “nonintentional dehumanisation”. According to the philosopher Martin Heidegger, the essence of technology is “enframing”: a paradigm of efficiency and resourcification that enables the ready acceptance of technology, even though technology is inherently nonneutral. This thesis argues that this “enframing” by AI leads to intrinsic nonintentional dehumanisation. Furthermore, by drawing on Haslam’s “mechanistic dehumanisation” and Borgmann’s “device paradigm”, this nonintentional form of dehumanisation is shown to take two shapes: the denial and the deprivation of a certain humanness in those who are subjected to it. Finally, in order to evaluate the extent to which this problem concretely occurs, this two-fold model is used to investigate two contemporary domains of AI application: human resource management and healthcare. The message is that if we are to combat this paradigm of dehumanisation, we must be wary of the values inherent in AI-induced changes, and the proposed identification model could prove useful when evaluating AI applications in the future.
Faculty of Social Sciences