The Trustworthy AI Lab

The Trustworthy AI Lab at Helmut Schmidt University (HSU/UniBw H) is an interdisciplinary team dedicated to pioneering research at the intersection of technology, society, and ethics. Our mission is to address the complex challenges posed by artificial intelligence (AI) and to develop sustainable solutions that comply with ethical standards. With a particular focus on trustworthiness, the Lab explores the essential elements that make AI applications ethical, just, and reliable, and it addresses governance issues related to AI to ensure its responsible use. The Lab aims to foster dialogue and critical reflection on the responsible use of AI, spanning its development, implementation, and broader implications.

Members


The Lab is affiliated with the Z-Inspection® initiative. Z-Inspection® is a holistic process for evaluating the trustworthiness of AI-based technologies at different stages of the AI lifecycle. In particular, it focuses on identifying and discussing ethical issues and tensions through the development of socio-technical scenarios. It uses the general guidelines for trustworthy AI of the European Union's High-Level Expert Group on AI (EU HLEG).

The process has been published in the IEEE Transactions on Technology and Society.

Z-Inspection® is distributed under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike (CC BY-NC-SA) license.

Z-Inspection® is listed in the new OECD Catalogue of AI Tools & Metrics.

The Trustworthy AI Lab is among the labs affiliated with the Z-Inspection® initiative.

Publications

  • Allert, H. & Hartong, S. (2021): A Future Without Alternatives? On the Necessity of Critical Engagement with the Promised Solutions of Artificial Intelligence [Alternativlose Zukunft? Zur Notwendigkeit kritischer Auseinandersetzung mit den Lösungsversprechen Künstlicher Intelligenz]. In: ON. Lernen in der digitalen Welt, 5/2021: 8-9.
  • Decuypere, M.; Alirezabeigi, S.; Grimaldi, E.; Hartong, S.; Kiesewetter, S.; Landri, P.; Masschelein, J.; Piattoeva, N.; Ratner, H.; Simons, M.; Vanermen, L. & van den Broeck, P. (2022): Laws of edu-automation? Three different approaches to deal with processes of automation and artificial intelligence in the field of education. In: Postdigital Science and Education.
  • Hartong, S. & Sander, I. (2021): Critical Data(fication) Literacy in and through Education [Critical Data(fication) Literacy in und durch Bildung]. In: Renz, A.; Etsiwah, B.; Burgueño Hopf, A. T. (eds) Whitepaper Datenkompetenz: 19-20.
  • Niggemann, O.; Zimmering, B.; Steude, H.; Augustin, J.L.; Windmann, A.; Multaheb, S. (2023): Machine Learning for Cyber-Physical Systems. In: Vogel-Heuser, B.; Wimmer, M. (eds) Digital Transformation. Springer Vieweg, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-65004-2_17
  • Schreiber, G. (2024): Reconsidering Agency in the Age of AI. In: Filozofia, Bd. 79(5), 2024, 529-537.
  • Schreiber, G. (2022): Data Toxicity. A Techno-Ethical Challenge [Datentoxikalität. Eine technikethische Herausforderung]. In: Augsberg, S. & Gehring, P. (eds) Data Sovereignty: Positions on the Debate, Frankfurt a.M./New York: Campus, 199-217.
  • Windmann, A.; Wittenberg, P.; Schieseck, M.; Niggemann, O. (2024): Artificial Intelligence in Industry 4.0: A Review of Integration Challenges for Industrial Systems. 22nd IEEE International Conference on Industrial Informatics (INDIN), Beijing, China.
  • Windmann, A.; Steude, H.; Niggemann, O. (2023): Robustness and Generalization Performance of Deep Learning Models on Cyber-Physical Systems: A Comparative Study. Workshop on Artificial Intelligence for Time Series Analysis (AI4TS), IJCAI 2023 – International Joint Conference on Artificial Intelligence, Macao, China.

Last modified: 22 November 2024