
4th International Workshop on Privacy and Ethics in Eye Tracking (PrEThics)

organised in conjunction with ACM ETRA 2024

Contact: PrEThicsworkshop@gmail.com

The PrEThics series has spearheaded the crucial discussion on ethical, social, legal, and privacy implications in eye tracking. Previous workshops have identified necessary adjustments for private and ethical eye tracking technology, and have resulted in the introduction of a "privacy and ethics statement" in this year's ETRA submission process. Following the successful PrEThics workshops held in conjunction with ACM ETRA 2021, 2022, and 2023 (one of the biggest workshops at the conference), we aim to continue providing the premier forum for these discussions.

The focus of the fourth edition is explainability, which has been investigated extensively in AI research under the banner of explainable AI (XAI). We will leverage these considerations for eye tracking and explore whether and how privacy and ethics can be safeguarded using XAI methods. This hands-on workshop will actively engage researchers and practitioners in reflecting on, specifying, and applying XAI methods to address ethical, social, legal, and privacy aspects of eye tracking systems.

This workshop brings together researchers and practitioners from ... to collectively:

  1. REFLECT on social, ethical, legal and privacy aspects of eye tracking models
  2. SPECIFY explanatory needs from this reflection using an established AI ethics framework
  3. APPLY current XAI methods and assess whether they address these explanatory needs

The workshop will alternate between input and hands-on sessions. In the input sessions, expert speakers will give an overview of explainable AI (XAI), linking it to ethical, social, privacy, and legal perspectives in pervasive eye tracking, and will introduce the VCIO (Values, Criteria, Indicators, Observables) framework. In the hands-on sessions, participants will explore an exemplary eye tracking use case both conceptually, with the help of questionnaires and the VCIO framework, and programmatically, using current XAI methods.
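
As a rough illustration of what the programmatic part of the hands-on sessions could look like, the following Python sketch applies permutation feature importance (a common model-agnostic XAI method) to a toy classifier trained on synthetic gaze statistics. The feature names, the prediction task, and the data are illustrative assumptions and are not taken from the workshop's actual use case.

    # Minimal, hypothetical sketch: which gaze features does a model rely on?
    # Permutation importance measures the drop in test accuracy when one
    # feature is shuffled; large drops indicate the model depends on it,
    # which is the kind of "observable" participants could check against
    # their privacy and ethics requirements.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    feature_names = ["fixation_duration", "saccade_amplitude", "pupil_diameter"]

    # Synthetic gaze statistics for 500 recordings; the label loosely depends
    # on pupil diameter so the attribution has something to recover.
    X = rng.normal(size=(500, 3))
    y = (X[:, 2] + 0.1 * rng.normal(size=500) > 0).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    result = permutation_importance(model, X_test, y_test,
                                    n_repeats=20, random_state=0)
    for name, score in sorted(zip(feature_names, result.importances_mean),
                              key=lambda t: -t[1]):
        print(f"{name}: {score:.3f}")

In the workshop itself, such attributions would be read against the values and indicators specified with the VCIO framework rather than in isolation.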

Program (subject to change)

11:00 – 11:15 Opening and Introduction
11:15 – 11:30 Introduction to Exemplary Eye Tracking Use Case
11:30 – 12:30 REFLECT: Questionnaires and Discussion on Ethical, Social, Legal and Privacy Aspects of an Exemplary Eye Tracking Use Case
12:30 – 13:00 Plenary Discussion
13:00 – 14:00 Lunch Break
14:00 – 15:00 Input I: Keynote on "Explainable AI and Law" by Stephan Schindler
15:00 – 15:15 Input II: Introduction to VCIO Framework
15:15 – 16:00 SPECIFY: Specify and Operationalise Values in Reflections using VCIO Framework
16:00 – 16:30 Coffee Break
16:30 – 16:45 Introduction to XAI Methods
16:45 – 17:30 APPLY: Hands-on Exploration of XAI Methods to Verify Observables
17:30 – 18:00 Group Presentations of Results and/or Code
18:00 – 18:30 Wrap-up and Next Steps

Keynote Speaker

Stephan Schindler - University of Kassel, Germany
Stephan Schindler will give the keynote for the first input session on "Explainable AI and Law". He is an expert in data protection law, legally compliant technology design, and the regulation of AI. He has been a research assistant at the Chair for Public Law, IT Law and Environmental Law at the University of Kassel since October 2015. Since the 2018/2019 winter semester, he has also been a lecturer in the Master of Digital Transformation programme at Goethe Business School, Goethe University Frankfurt.

Organisers

Susanne Hindennach - University of Stuttgart, Germany
Susanne Hindennach is a third-year PhD student at the University of Stuttgart, Germany. She studied Cognitive Science at the University of Osnabrück and Neuroscience at the University of Cologne. In her PhD, she analyses mind attribution in explanations about AI systems and investigates new XAI methods that provide explanations about the humans involved in building those systems. The goal of her PhD is to redirect Theory of Mind away from the AI systems and towards the human minds behind AI.

Mayar Elfares - University of Stuttgart, Germany
Mayar Elfares is a second-year PhD student at the University of Stuttgart, Germany. Her research interests are in Human-Computer Interaction, Computer Vision, and Privacy. She is conducting her PhD on privacy-preserving attentive user interfaces at the Institute for Visualization and Interactive Systems as well as the Institute of Information Security.

Céline Gressel - University of Tübingen, Germany
Céline Gressel studied sociology, psychology, and education in Tübingen. Alongside her studies, she started to work at the IZEW in 2011. From March 2016 she worked as a research assistant in INTEGRAM, and in December 2018 she started her new project HIVE-Lab, where she works on the sociology of technology, qualitative methods of empirical social research (in particular Grounded Theory), ethics in the sciences and humanities, and especially the integration of social, ethical, and legal aspects into technology development.

Murat Karaboga - Competence Center Emerging Technologies of the Fraunhofer ISI, Germany
Murat Karaboga is a senior researcher and has been working in the Competence Center Emerging Technologies of the Fraunhofer ISI since January 2014. His work focuses on policy analysis and the analysis of governance and actor structures in the domain of Information and Communication Technologies.

Andreas Bulling - University of Stuttgart, Germany
Andreas Bulling is Full Professor of Human-Computer Interaction and Cognitive Systems at the University of Stuttgart. His research interests are in novel computational methods and systems to sense, model, and analyse everyday non-verbal human behaviour, specifically gaze. He was one of the organisers of, and a panellist on, the privacy in eye tracking panel at ACM ETRA 2019. He received his PhD in Information Technology and Electrical Engineering from ETH Zurich and his MSc in Computer Science from the Karlsruhe Institute of Technology.