EMICS

CHI'20 SIG meeting on "Eye Movements as an Interface to Cognitive State"

About

Eye movement recording is quickly becoming ubiquitous across different interfaces. The ability to draw inferences about cognitive processing from eye movements has long been used in academic settings. Here, we will explore how the connection between eye movements and cognitive processing can be used in applications. The cognitive state of a user can reveal not only whether they are staying on task, but also what type of thinking (explorative, expert, creative) is currently engaged. With the advent of A.I. in this field, eye movement recordings can be used to decode, guide, and encourage different cognitive states.

This SIG meeting will serve as a space in which researchers from different disciplines (HCI, psychology, A.I., cognitive neuroscience) can interact and strengthen this budding field. The use of eye movements to decode cognitive states could be extended to adaptive interfaces that use eye movements as feedback to guide attention, processing, learning, and memory. The development of such tools could have far-reaching implications for our society and launch a whole new form of human-computer interaction.

Meeting Agenda

To facilitate discussion, we will kick off the SIG with a panel of experts giving 5-minute lightning talks about various applications of eye movements as an interface to cognitive state.

  • Welcome & Primer: Monica Castelhano (Queen’s University)
  • Lightning Talks
    • Invited Panelist: Andrew Duchowski (Clemson University)
              “Eye Tracking Measures of Cognitive Load”
    • Invited Panelist: Alexandra Papoutsaki (Pomona College)
              “Gaze Sharing for Remote Collaboration”
    • Invited Panelist: Oleg Komogortsev (Texas State University)
              “Virtual and Augmented Reality, Sensors, Eye Movements, and Their Impact on Healthcare and Security”
    • Invited Panelist: Tilman Dingler (University of Melbourne)
              “Alertness Measures and Modeling Circadian Rhythms Based on Eye Movements for Cognition-Aware Computing”
  • Panel Discussion (all the invited panelists)
  • Closing Remarks & Next Steps: Zoya Bylinskii (Adobe Research)

Join us in the Slack group and the Zotero reference group.

Context:

The ability to leverage eye tracking data to infer perceptual and cognitive processes during activities such as reading, driving, and medical image examination (Barnes 2008, Hayhoe 2017, Liversedge & Findlay 2000) has long been used in academic settings. However, as eye tracking systems become ubiquitous across different interfaces (e.g. desktop, mobile, and head-mounted displays), new opportunities arise for applying this knowledge to human-computer interaction. This SIG will explore applications made possible by connecting eye movements, cognitive processing, and cognitive state (Duchowski 2002, Fridman 2018, Hergeth 2016, Poole & Ball 2006).

The cognitive state of a user can reveal not only whether they are staying on task, but also what type of thinking (explorative, expert, creative) is currently engaged (Eger 2007). With the advent of A.I. in this field, eye movement recordings can be used to decode, guide, and encourage different types of cognitive processing (Appel 2018, Bylinskii 2017, Duchowski 2018, Henderson 2013, Wang 2019).
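
As a purely illustrative sketch (not the method of any of the papers cited above), such a decoding pipeline often looks like this: summarize each recording into a handful of fixation and pupil features, then train an off-the-shelf classifier to separate task-difficulty conditions. The feature set, parameters, and data below are assumptions made up for the example.

# Illustrative sketch: decoding a coarse cognitive state (e.g. low vs. high load)
# from simple eye movement features with an off-the-shelf classifier.
# Features and labels are placeholders, not the methods of the cited papers.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def extract_features(fixations):
    """fixations: rows of (duration_ms, pupil_diameter_mm, saccade_amplitude_deg)."""
    durations, pupils, saccades = fixations[:, 0], fixations[:, 1], fixations[:, 2]
    return np.array([
        durations.mean(),   # longer fixations often accompany harder processing
        durations.std(),
        pupils.mean(),      # pupil dilation is a classic load correlate
        pupils.std(),
        saccades.mean(),    # shorter saccades can indicate focal scrutiny
        len(durations),     # fixation count per trial
    ])

# Synthetic stand-in for per-trial recordings labelled with a task-difficulty condition.
rng = np.random.default_rng(0)
X = np.array([extract_features(rng.normal([250, 3.5, 4.0], [80, 0.4, 1.5], size=(60, 3)))
              for _ in range(40)] +
             [extract_features(rng.normal([320, 4.1, 2.8], [90, 0.5, 1.2], size=(60, 3)))
              for _ in range(40)])
y = np.array([0] * 40 + [1] * 40)   # 0 = easy condition, 1 = hard condition

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())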

At the same time, although the use of eye tracking in HCI has appeared highly promising for many years, progress has been slow. In their 2003 review of eye tracking in HCI and usability research, Jacob and Karn state, “We see promising research work, but we have not yet seen wide use of these approaches in practice or in the marketplace,” an observation that remains as true today as it was in 2003.

The present SIG meeting will serve as a space in which researchers from different disciplines (HCI, psychology, A.I., cognitive neuroscience) can interact and strengthen this budding field. Eye movements can be, and have been, used as an input modality on mobile and head-mounted displays, for instance for text entry or search (Papoutsaki 2017, Sharif 2012). However, the main focus of this SIG is on how eye movement patterns can be used to evaluate user intentions and task difficulty by providing a window into the user’s cognitive processing and state. The use of eye movements to decode cognitive states could be extended to adaptive interfaces that use, for instance, eye fixations as feedback to guide attention (Holland 2012, Torralba 2006, Matzen 2017), affect processing (Kiefer 2017, Rayner 2009), and provide aids for learning and memory (Vitak 2012, Bylinskii 2015, Hannula & Ranganath 2009). The development of such tools could have far-reaching implications for our society and launch an innovative new approach to human-computer interaction.
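
To make the adaptive-interface idea concrete, here is a minimal, hypothetical sketch of a gaze-contingent guidance loop: the interface accumulates dwell time over regions of interest and flags regions the user has not yet attended to, which an application could then highlight. The ROI layout, dwell threshold, and guidance policy are invented for illustration and are not taken from the cited work.

# Minimal sketch of a gaze-contingent guidance loop over hypothetical regions
# of interest (ROIs). Thresholds and layout are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ROI:
    name: str
    x: float
    y: float
    w: float
    h: float
    dwell_ms: float = 0.0

    def contains(self, gx, gy):
        return self.x <= gx <= self.x + self.w and self.y <= gy <= self.y + self.h

@dataclass
class AttentionGuide:
    rois: list
    required_dwell_ms: float = 500.0  # assumed minimum dwell for an ROI to count as "attended"

    def update(self, gx, gy, dt_ms):
        # Accumulate dwell time for whichever ROI the current gaze sample falls in.
        for roi in self.rois:
            if roi.contains(gx, gy):
                roi.dwell_ms += dt_ms

    def unattended(self):
        # ROIs not yet inspected long enough; an interface could highlight these next.
        return [r.name for r in self.rois if r.dwell_ms < self.required_dwell_ms]

# Usage with simulated gaze samples arriving every ~16 ms (hypothetical screen layout).
guide = AttentionGuide([ROI("chart", 0, 0, 400, 300), ROI("caption", 0, 320, 400, 60)])
for gx, gy in [(120, 150)] * 40 + [(200, 340)] * 10:
    guide.update(gx, gy, dt_ms=16)
print("highlight next:", guide.unattended())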

Relevance:

With the increasing accessibility of eye tracking devices, the growing popularity of mobile applications and head-mounted interfaces, and the development of powerful A.I. algorithms, inferring cognitive states from eye movement recordings has become feasible across many scenarios. These advancements open the possibility of integrating users’ cognitive processing into the design of interactive systems. Many studies have been conducted in psychology to understand the correspondence between eye movements and cognitive processing; however, this connection has not yet been fully explored or implemented in HCI. So far it remains unclear how to create meaningful interactions that make full use of the information revealed by eye movements. A fundamental challenge lies in the generalizability of eye movement patterns across different tasks, users, and contexts. With the growing interest in inferring users’ cognitive states and processing, we see an increased demand for bringing together the Cognitive Science and HCI communities to share knowledge and explore the full potential of this intersection of research interests and its applications.
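
One common way to probe the generalizability challenge is to evaluate a cognitive state decoder with leave-one-subject-out cross-validation, so that test data always comes from a user unseen during training (in the spirit of cross-subject evaluations such as Appel 2018, though the data and model below are synthetic placeholders).

# Sketch of a cross-user generalizability check: leave-one-subject-out evaluation
# of a simple classifier on synthetic eye movement features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n_subjects, trials_per_subject = 10, 30
X, y, groups = [], [], []
for s in range(n_subjects):
    subject_offset = rng.normal(0, 0.5, size=4)   # idiosyncratic per-user baseline
    for label in (0, 1):                          # e.g. low vs. high load condition
        feats = rng.normal(label, 1.0, size=(trials_per_subject // 2, 4)) + subject_offset
        X.append(feats)
        y.extend([label] * (trials_per_subject // 2))
        groups.extend([s] * (trials_per_subject // 2))
X = np.vstack(X)

clf = make_pipeline(StandardScaler(), LogisticRegression())
scores = cross_val_score(clf, X, np.array(y), groups=np.array(groups), cv=LeaveOneGroupOut())
print("per-held-out-subject accuracy:", np.round(scores, 2))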

Discussion Questions:

  • Which cognitive states can be most reliably and robustly inferred from eye movements in practice?
  • Which applications of eye movements as an interface to cognitive state (EMICS) have already proven successful?
  • What challenges and limitations of current hardware, algorithms, etc. need to be addressed to facilitate future applications of EMICS?
  • How can we best align the interests between academia and industry to solve these challenges?

About the Panelists

Andrew Duchowski is a Professor in the School of Computing at Clemson University and his research and teaching interests include visual attention and perception, eye tracking, computer vision, and computer graphics.

Alexandra Papoutsaki is an Assistant Professor of Computer Science at Pomona College. Her main research focuses on eye tracking and gaze sharing in the context of remote collaboration. She received her PhD from Brown University where she developed a new approach for webcam eye tracking in the browser.

Oleg Komogortsev is currently a Professor of Computer Science at Texas State University. He conducts research in eye tracking with a focus on health assessment, cyber security (biometrics), bioengineering, human-computer interaction, and usability.

Tilman Dingler is a Research Fellow and Associated Lecturer in the School of Computing and Information Systems at The University of Melbourne. His research focuses on systems that sense, model, and adapt to users’ cognitive states to enable interfaces to support their users’ information processing capabilities.

Organizers

Monica Castelhano is an Associate Professor in Psychology at Queen’s University. Dr. Castelhano studies memory and attentional processes in complex visual environments in real and virtual settings using techniques such as VR, eye tracking, and electroencephalography (EEG). Dr. Castelhano has received numerous awards, including the Early Researcher Award from OSF, as well as grants from national and international funding agencies. She is currently an Associate Editor at the Quarterly Journal of Experimental Psychology.

Zoya Bylinskii is a Research Scientist at Adobe Inc. She received a PhD in Electrical Engineering and Computer Science from MIT in September 2018 and an Hon. B.Sc. in Computer Science and Statistics from the University of Toronto in 2012. Zoya is a 2016 Adobe Research Fellow, a 2014-2016 NSERC Postgraduate Scholar, a 2013 Julie Payette Research Scholar, and a 2011 Anita Borg Scholar. Zoya works at the interface of human vision, computer vision, and human-computer interaction: building computational models of people’s memory and attention, and applying the findings to graphic designs and data visualizations.

Xi Wang is a graduate student at the Technische Universität Berlin. She has an M.Eng. degree from Shanghai Jiao Tong University and an M.Sc. degree in computer science from Technische Universität Berlin. Her research is broadly concerned with human perception and its applications in computer graphics. She has worked on projects studying vergence eye movements in 3D space, using a mental imagery paradigm to study visual content encoded in (episodic) memory, and measuring where humans look on real three-dimensional stimuli.

James Hillis is a Research Scientist at Facebook Reality Labs. He completed a Vision Science PhD in sensory cue combination at UC Berkeley in 2002 and post-doctoral research in color vision at the University of Pennsylvania before joining the faculty at the University of Glasgow in 2006. At the University of Glasgow he headed a lab studying socio-economic decision making and worked on an ESRC-funded project on the cognitive neuroscience of social decision making. In 2013, he became a research scientist in the optical and display system division of 3M, and in 2016 he joined Oculus Research, which then became part of Facebook Reality Labs. James’ research focuses on the development of computational models of interactions between humans and their social and physical environments.

Andrew Duchowski is a professor of Computer Science at Clemson University. He received his baccalaureate (1990) from Simon Fraser University, Burnaby, Canada, and doctorate (1997) from Texas A&M University, College Station, TX, both in Computer Science. His research and teaching interests include visual attention and perception, eye tracking, computer vision, and computer graphics. He is a noted research leader in the field of eye tracking, having produced a corpus of papers and a monograph related to eye tracking research, and has delivered courses and seminars on the subject at international conferences. He maintains Clemson’s eye tracking laboratory, and teaches a regular course on eye tracking methodology attracting students from a variety of disciplines across campus.

Please feel free to add relevant works on EMICS to the Zotero reference group.

E. Abdulin, O. Komogortsev. User eye fatigue detection via eye movement behavior. CHI’EA (2015).

J. Jacobs, X. Wang, M. Alexa. Keep it simple: Depth-based dynamic adjustment of rendering for head-mounted displays decreases visual comfort. TAP (2019).

N. Abid, J. Maletic, B. Sharif. Using developer eye movements to externalize the mental model used in code summarization tasks. ETRA (2019).

T. Appel, C. Scharinger, P. Gerjets, E. Kasneci. Cross-subject workload classification using pupil-related measures. ETRA (2018).

D. Ballard, M. Hayhoe. Modelling the role of task in the control of gaze. Visual Cognition (2009).

S. Brennan, X. Chen, C. A Dickinson, M. Neider, G. Zelinsky. Coordinating cognition: The costs and benefits of shared gaze during collaborative search. Cognition (2008).

Z. Bylinskii, M. Borkin, N. Kim, H. Pfister, A. Oliva. Eye fixation metrics for large scale evaluation and comparison of information visualizations. Eye Tracking and Visualization (2017).

Z. Bylinskii, N. Kim, P. O’Donovan, S. Alsheikh, S. Madan, H. Pfister, F. Durand, B. Russell, A. Hertzmann. Learning visual importance for graphic designs and data visualizations. UIST (2017).

M. Castelhano, M. Mack, J. Henderson. Viewing task influences eye movement control during active scene perception. Journal of Vision (2009).

N. Castner, E. Kasneci, T. Kübler, K. Scheiter, J. Richter, T. Eder, F. Hüttig, C. Keutel. Scanpath comparison in medical image reading skills of dental students: distinguishing stages of expertise development. ETRA (2018).

A. Duchowski, K. Krejtz, I. Krejtz, C. Biele, A. Niedzielska, P. Kiefer, M. Raubal, I. Giannopoulos. The index of pupillary activity: measuring cognitive load vis-à-vis task difficulty with pupil oscillation. CHI (2018).

C. Holland, O. Komogortsev, D. Tamir. Identifying usability issues via algorithmic detection of excessive visual search. CHI (2012).

R. Gergely, A. Duchowski, M. Pohl. Designing online tests for a virtual learning environment - evaluation of visual behavior between tasks. International Conference on Human Behavior in Design (2014).

M. Hayhoe. Advances in relating eye movements and cognition. Infancy (2004).

M. Hayhoe. Vision and action. Annual Review of Vision Science (2017).

E. Kasneci, G. Kasneci, T. Kübler, W. Rosenstiel. Online recognition of fixations, saccades, and smooth pursuits for automated analysis of traffic hazard perception. Artificial neural networks (2015).

P. Kiefer, I. Giannopoulos, M. Raubal, A. Duchowski. Eye tracking for spatial research: Cognition, computation, challenges. Spatial Cognition & Computation (2017).

D. Kit, L. Katz, B. Sullivan, K. Snyder, D. Ballard, M. Hayhoe. Eye movements, visual search and scene memory, in an immersive virtual environment. PLoS One (2015).

G. Kütt, K. Lee, E. Hardacre, A. Papoutsaki. Eye-write: Gaze sharing for collaborative writing. CHI (2019).

M. Land, M. Hayhoe. In what ways do eye movements contribute to everyday activities? Vision research (2001).

D. Lohr, S.-H. Berndt, O. Komogortsev. An implementation of eye movement-driven biometrics in virtual reality. ETRA (2018).

L. Matzen, M. Haass, K. Divis, M. Stites. Patterns of attention: How data visualizations are read. International Conference on Augmented Cognition (2017).

A. Papoutsaki, J. Laskey, J. Huang. Searchgazer: Webcam eye tracking for remote studies of web search. CHIIR (2017).

K. Rayner, M. Castelhano, J. Yang. Eye movements when looking at unusual/weird scenes: Are there cultural differences? Journal of Experimental Psychology (2009).

B. Sharif, M. Falcone, J. Maletic. An eye-tracking study on the role of scan time in finding source code defects. ETRA (2012).

A. Torralba, A. Oliva, M. Castelhano, J. Henderson. Contextual guidance of eye movements and attention in real-world scenes: the role of global features in object search. Psychological Review (2006).

O. Mercier, Y. Sulai, K. Mackenzie, M. Zannoli, J. Hillis, D. Nowrouzezahrai, D. Lanman. Fast gaze-contingent optimal decompositions for multifocal displays. TOG (2017).

S. Vitak, J. Ingram, A. Duchowski, S. Ellis, A. Gramopadhye. Gaze-augmented think-aloud as an aid to learning. CHI (2012).

K. Yun, Y. Peng, D. Samaras, G. Zelinsky, T. Berg. Exploring the role of gaze behavior and object detection in scene understanding. Frontiers in Psychology (2013).

X. Wang, A. Ley, S. Koch, D. Lindlbauer, J. Hays, K. Holmqvist, M. Alexa. The mental image revealed by gaze tracking. CHI (2019).

G. Zelinsky, Y. Peng, D. Samaras. Eye can read your mind: Decoding gaze fixations to reveal categorical search targets. Journal of Vision (2013).

Partners