AI Ethics and Societal Aspects of Biomedical Informatics

The rapid growth of biomedical data and AI-driven methods raises fundamental ethical challenges. Our research addresses issues such as algorithmic bias and the responsible use of sensitive health information. By integrating ethical reflection into technical development, we strive to ensure that biomedical informatics not only advances scientific innovation but also safeguards trust, fairness, and societal responsibility.

Publications

ELSA and the Data Scientist: A Qualitative Approach

2024 - Open Access
Maria Christoforaki, Stephanie von Maltzan, Hannes Oellerich

Data Science (DS) and Artificial Intelligence (AI) are transforming research, industry and society at an unprecedented pace, enabling advances in areas such as healthcare, finance, e-commerce and beyond. Despite their potential, the rapid development and widespread use of DS and AI raise (novel) issues of reliability, accuracy, copyright and data protection, and bias and discrimination, among others; see for instance [1], [2], [3], [4]. It is therefore vital for data scientists to acknowledge the Ethical, Legal and Societal Aspects (ELSA) encountered in DS and AI projects, as this can promote critical thinking and reflection, thereby ensuring that data-driven systems and their underlying technologies are developed, deployed and used responsibly. In the framework of NFDI4DataScience, specifically in the Community and Training task area, we aim to develop ELSA guidelines for data scientists [5]. To achieve this objective, we assessed the landscape by conducting interviews with researchers and practitioners in the field, aiming to identify and analyse the most common ELSA challenges encountered in DS/AI projects and how to cope with them. We used semi-structured interviews, as this form is well suited to our purpose of collecting experiences, reflections and opinions from the participants [6]. A total of 30 interviews were conducted between November 2022 and February 2024. The participants came mainly from academia, but industry was also well represented. The application domains included, among others, healthcare, finance, engineering and digital humanities. To systematically interpret the material for manifest and latent meanings, we used qualitative content analysis [7]. A categorisation of the material was then developed to provide the basis for this interpretation [8, p. 33]. Initial categories were developed deductively, derived from the interview guide, which itself was based on existing theory and research.
Subcategories were created inductively from the interviews following initial coding with the main categories. The categories reflected the key ELSA challenges faced by data scientists, including data protection and, more specifically, fairness, transparency, consent and intellectual property, as well as data scientists' knowledge of (and attitudes towards) ELSA challenges and how these influence their decision-making in projects. The results of our analysis reveal that data scientists are generally aware of ELSA issues, some more acutely than others: legal issues, especially data protection, are prominent, particularly in application domains such as healthcare; bias is considered more during data collection and less in connection with the model used or the system deployment; and issues of transparency and explainability are also crucial, although not prevalent. Insight was also provided regarding interdisciplinary cooperation, institutionalised ELSA support, and project documentation. Additionally, we recorded critical assessments of the practices followed, spanning from issues with the application of laws to the responsibility and accountability of practitioners during a project life cycle. Finally, our findings emphasise the necessity of enhancing ELSA literacy and of providing data scientists with a strong foundational understanding of ethical and legal principles. Developing recommendations and best practices for data scientists was regarded as a positive first step towards this goal.

Community and Training in NFDI4DS

2023 - Open Access
Anna-Lena Lorenz, Maria Christoforaki, Christine Hennig, Angelie Kraft, Stephanie von Maltzan, Sonja Schimmler

Towards an ELSA Curriculum for Data Scientists

2024 - Open Access

Abstract

The use of artificial intelligence (AI) applications in a growing number of domains in recent years has put into focus the ethical, legal, and societal aspects (ELSA) of these technologies and the relevant challenges they pose. In this paper, we propose an ELSA curriculum for data scientists that aims to raise awareness about ELSA challenges in their work, provide them with a common language with the relevant domain experts in order to cooperate to find appropriate solutions, and finally, incorporate ELSA in the data science workflow. ELSA should not be seen as an impediment or a superfluous artefact but rather as an integral part of the Data Science Project Lifecycle. The proposed curriculum uses the CRISP-DM (CRoss-Industry Standard Process for Data Mining) model as a backbone to define a vertical partition expressed in modules corresponding to the CRISP-DM phases. The horizontal partition includes knowledge units belonging to three strands that run through the phases, namely ethical and societal, legal, and technical, rendering knowledge units (KUs). In addition to the detailed description of the aforementioned KUs, we also discuss their implementation, issues such as duration, form, and evaluation of participants, as well as the variance of the knowledge level and needs of the target audience.

AI Ethics—A Bird’s Eye View

Abstract

The explosion of data-driven applications using Artificial Intelligence (AI) in recent years has given rise to a variety of ethical issues regarding data collection, annotation, and processing using mostly opaque algorithms, as well as the interpretation and employment of the results of the AI pipeline. The ubiquity of AI applications negatively impacts a variety of sensitive areas, ranging from discrimination against vulnerable populations to privacy invasion and the environmental cost that these algorithms entail, and puts into focus the ever-present domain of AI ethics. In this review article we present a bird's-eye view of the AI ethics landscape, starting from a historical point of view, examining the moral issues introduced by big datasets and the application of non-symbolic AI algorithms, the normative approaches (principles and guidelines) to these issues and the ensuing criticism, as well as the actualization of these principles within the proposed frameworks. Subsequently, we focus on the concept of responsibility, both as the personal responsibility of AI practitioners and as sustainability, meaning the promotion of beneficence for both society and the domain, and on the role of professional certification and education in averting unethical choices. Finally, we conclude by indicating the multidisciplinary nature of AI ethics and suggesting future challenges.

Ethical, Legal, and Societal Aspects of Data Science as manifested via a series of Interviews conducted within the framework of NFDI4DataScience, Study guide and anonymised transcripts

2025 - Open Access
Maria Christoforaki, Stephanie von Maltzan

Dataset and Study Report

The upload consists of the two following files: 

  1. The study report on the Ethical, Legal, and Societal Aspects of Data Science, as manifested through a series of interviews conducted within the framework of NFDI4DataScience Task Area 1: Community and Training. In the interviews, data science researchers and practitioners were asked to identify and analyse the most common legal and ethical challenges encountered in DS projects and how they address them. The objective was to use the collected information to create ELSA guidelines for data scientists. The file consists of the study guide itself and the following supplementary material:

    1. Call for participants email

    2. Email to the interviewees explaining the process

    3. Consent form template

    4. Anonymity Policy Clarification email

    5. Anonymisation Protocol

  2. The anonymised transcripts of the interviews