Balancing Access and Accountability: Ethical Challenges in Open-Source AI Deployment

Issue Date

2024-08-15

Language

en

Abstract

The release of ChatGPT by OpenAI marked a significant shift in information retrieval by providing public access to a state-of-the-art Large Language Model (LLM). Since then, the open-source community has substantially improved the quality of its models, making this advanced technology accessible to anyone. This accessibility raises ethical questions, such as: how does the trade-off between accessibility and accountability in open-source AI models affect potential misuse and safety? This thesis answers this question and offers recommendations for mitigating these risks in the short term, while urging further research into the topic. It does so by exploring the potential risks and misuse of LLMs, such as the creation of misinformation, personalized scams, and extremist and discriminatory texts, as well as the potential threat to cybersecurity. An analysis of current technological safety measures and the limitations of open-source models shows that no technical solution can keep these models accessible to anyone while also guaranteeing safe deployment. Instead, more focus should be placed on solutions around the deployment of these models to enhance safety. This thesis suggests the implementation of a Certified Access System, usage monitoring, laws or regulations ensuring that only models with adequate safety measures may be shared, and ethical training for users. Further findings are that balancing accessibility and accountability is crucial for the safe deployment of accessible open-source models, and that ethics must guide the design of AI to produce truly safe systems. This work contributes to the understanding of the ethical landscape of open-source AI models and provides recommendations for further research to mitigate the risks associated with open-source AI systems.

Faculty

Faculteit der Sociale Wetenschappen