The PrEThics series has spearheaded the crucial discussion on ethical, social, legal, and privacy implications in eye tracking. Previous workshops have identified necessary adjustments for private and ethical eye tracking technology, and have resulted in the introduction of a "privacy and ethics statement" in this year's ETRA submission process. Following the successful PrEThics workshops held in conjunction with ACM ETRA 2021, 2022, and 2023 (one of the biggest workshops at the conference), we aim to continue providing the premier forum for these discussions.
The focus of the fourth edition is explainability, which has been investigated extensively in AI research under the term explainable AI (XAI). We will leverage these insights for eye tracking and explore whether and how privacy and ethics can be safeguarded using XAI methods. This hands-on workshop will actively engage researchers and practitioners to reflect on, specify, and apply XAI methods for addressing the ethical, social, legal, and privacy aspects of eye tracking systems.
This workshop brings together researchers and practitioners from
- eye tracking,
- (usable) privacy,
- human-computer interaction,
- other eye tracking-related research fields,
- and industry

to collectively
- REFLECT on social, ethical, legal and privacy aspects of eye tracking models
- SPECIFY explanatory needs from this reflection using an established AI ethics framework
- APPLY current XAI methods and assess whether they address these explanatory needs
Program (times are subject to change)

| Time | Session |
|---|---|
| 9:00 – 9:15 | Opening and Introduction |
| 9:15 – 10:15 | Input I: Keynote on "Explainable AI and Law" by Stephan Schindler and Talks on Ethical, Social and Privacy Aspects of Explainable AI in Eye Tracking by the Organizers |
| 10:15 – 10:30 | Break |
| 10:30 – 11:30 | REFLECT: Questionnaires and Discussion on Ethical, Social, Legal and Privacy Aspects of an Exemplary Eye Tracking Use Case |
| 11:40 – 12:00 | |
| 12:00 – 13:00 | Lunch Break |
| 13:00 – 13:30 | Input II: Introduction to the VCIO Framework |
| 13:30 – 14:00 | SPECIFY: Specify and Operationalise Values from the Reflections using the VCIO Framework |
| 14:00 – 14:45 | APPLY: Hands-on Exploration of XAI Methods to Verify Observables |
| 14:45 – 15:00 | Break |
| 15:00 – 15:30 | Group Presentations of Results and/or Code |
| 15:30 – 16:00 | Wrap-up and Next Steps |
Stephan Schindler - University of Kassel, Germany
Stephan Schindler will give the keynote for the first input session on "Explainable AI and Law". He is an expert in data protection law, legally compliant technology design, and the regulation of AI. He has been a research assistant at the Chair for Public Law, IT Law and Environmental Law at the University of Kassel since October 2015. Since the 2018/2019 winter semester, he has also been a lecturer in the Master of Digital Transformation at Goethe Business School, Frankfurt University.
Susanne Hindennach - University of Stuttgart, Germany
Susanne Hindennach is a third-year PhD student at the University of Stuttgart, Germany. She studied Cognitive Science at the University of Osnabrück and Neuroscience at the University of Cologne. In her PhD, she analyses mind attribution in explanations about AI systems on the one hand and, on the other, investigates new XAI methods that provide explanations about the humans involved in building these systems. The goal of her PhD is thus to redirect Theory of Mind away from AI systems and towards the human minds behind them.
Mayar Elfares - University of Stuttgart, Germany
Mayar Elfares is a second-year PhD student at the University of Stuttgart, Germany. Her research interests are in Human-Computer Interaction, Computer Vision, and Privacy. She is conducting her PhD on privacy-preserving attentive user interfaces at the Institute for Visualization and Interactive Systems as well as the Institute of Information Security.
Céline Gressel - University of Tuebingen, Germany
Céline Gressel studied sociology, psychology and education in Tübingen. Simultaneously, in 2011 she started to work at the IZEW. From March 2016 she worked as a research assistant in INTEGRAM. In December 2018 she started her new project HIVE-Lab, where she works on the sociology of technology, qualitative methods of empirical social research (in particular Grounded Theory), ethics in the sciences and humanities, and especially the integration of social, ethical, and legal aspects into technology development.
Murat Karaboga - Fraunhofer ISI, Germany
Murat Karaboga is a senior researcher and has been working in the Competence Center Emerging Technologies of the Fraunhofer ISI since January 2014. His work focuses on policy analysis and the analysis of governance and actor structures in the domain of Information and Communication Technologies.
Michael Raschke - Blickshift GmbH, Germany
Michael Raschke is Co-Founder and Managing Director of Blickshift GmbH and an expert in visualization-based eye movement analysis. From 2009 to 2015 he worked on new methods and techniques for the analysis of perceptual and cognitive processes at the Institute for Visualization and Interactive Systems at the University of Stuttgart. In 2016 he founded Blickshift GmbH together with two of his former colleagues from the institute to transfer research results in eye tracking analysis into commercial products.
Andreas Bulling - University of Stuttgart, Germany
Andreas Bulling is Full Professor of Human-Computer Interaction and Cognitive Systems at the University of Stuttgart. His research interests are in novel computational methods and systems to sense, model, and analyze everyday non-verbal human behavior, specifically gaze. He was one of the organisers of, and a panelist at, the panel on privacy in eye tracking at ACM ETRA 2019. He received his PhD in Information Technology and Electrical Engineering from ETH Zurich and his MSc in Computer Science from the Karlsruhe Institute of Technology.