Developing a Conformity Assessment Service for AI Systems under the EU AI Act
Issue Date
2024-12-02
Language
en
Abstract
This research investigates the development of a conformity assessment service for providers and
deployers of AI systems, in compliance with the recently adopted EU Artificial Intelligence Act (AI
Act). The AI Act marks a turning point in the field of AI laws and regulation. Following a risk-based
approach, the Act categorizes AI systems into four risk categories: prohibited, high-, limited-, and
low-risk. To mitigate potential risks and to ensure the transparency, accuracy, and robustness of AI
systems, providers of prohibited and high-risk systems must follow specified requirements.
Furthermore, voluntary application of these requirements to limited- and low-risk systems is highly
encouraged.
eagle lsp is a legal service provider that handles mass proceedings in the legal sector, using legal
tech innovations to manage its cases. In addition, eagle lsp offers several legal services to
companies, including support with compliance under the German Whistleblower Protection Act
(HinSchG) and the General Data Protection Regulation (GDPR). eagle lsp also uses AI technologies
in its own workflows. The company values the ethical and lawful development and use of AI systems
and wants to expand its services by offering conformity assessments for providers of AI systems.
This research examines how companies can comply with the AI Act, focusing on the potential advantages
of third-party assessments for limited- and low-risk AI systems to enhance those systems' quality
and safety. By applying quality standards to limited- and low-risk systems as well, this service aims
to contribute to the development of trustworthy and ethical AI. First, the AI Act will be introduced,
pointing out challenges that providers of limited- and low-risk systems might encounter. Then,
existing assessment methods will be examined and compared. An important part of AI development is
ethical evaluation, which we aim to include in the conformity assessment; for this purpose, we draw
on the ethical cycle for moral problem solving. Qualitative interviews will provide insights into the
effectiveness of third-party versus internal assessments. The study will conclude by proposing a
conformity assessment framework for such AI systems, based on the AI Act and the preceding insights,
which will then be tested on an existing system.
Faculty
Faculteit der Sociale Wetenschappen
