A4 Peer-reviewed article in conference proceedings
A Unified Holistic Model of Visual Perception for Code Reviews
Authors: Hauser, Florian; Ezer, Timur; Grabinger, Lisa; Mottok, Jürgen; Gruber, Hans
Editors: Mottok, Jürgen; Hagel, Georg
Established conference name: European Conference on Software Engineering Education
Publisher: ACM
Place of publication: New York
Year of publication: 2025
Proceedings title: ECSEE '25: Proceedings of the 6th European Conference on Software Engineering Education, Seeon Monastery, Germany, 02–04 June 2025
First page: 115
Last page: 124
Number of pages: 10
eISBN: 979-8-4007-1282-1
DOI: https://doi.org/10.1145/3723010.3723017
URL: https://doi.org/10.1145/3723010.3723017
Despite being a well-structured domain, software engineering lacks standardized definitions, metrics, and theories for analyzing eye movements in domain-specific tasks. This gap can be addressed by adapting models from other fields, such as radiology and psychology. In particular, holistic models of image perception provide a suitable framework for software engineering applications. This paper introduces a unified model of visual perception focused on eye movements during code reviews. It builds on prior research, findings from other studies, and cross-domain theories. Empirical studies on C and C++ code reviews confirm a phase-based process in which experts switch between global scanning and focal viewing. In addition, significant differences in fixation rate, fixation duration, number of saccades, and area-of-interest (AOI) specific metrics highlight the role of expertise in visual processing. The proposed model offers a structured framework for eye-tracking analysis in software engineering, defining relevant metrics and supporting future refinements across various software engineering tasks.
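As an illustration of the metrics named in the abstract, the following minimal Python sketch computes fixation rate, mean fixation duration, saccade count, and per-AOI dwell time under their conventional eye-tracking definitions. It is not the authors' implementation: the Fixation record, the summarize function, and the AOI labels are hypothetical and for demonstration only.

    # Illustrative sketch (not the paper's code): conventional definitions of
    # the eye-tracking metrics named in the abstract, computed from a list of
    # fixation events. The record layout and AOI labels are assumptions.
    from dataclasses import dataclass
    from collections import defaultdict

    @dataclass
    class Fixation:
        start_ms: float      # fixation onset, milliseconds from trial start
        duration_ms: float   # fixation duration in milliseconds
        aoi: str             # hypothetical area-of-interest label

    def summarize(fixations: list[Fixation], trial_ms: float) -> dict:
        """Fixation rate, mean fixation duration, saccade count, and
        per-AOI dwell time under standard definitions."""
        n = len(fixations)
        dwell = defaultdict(float)
        for f in fixations:
            dwell[f.aoi] += f.duration_ms
        return {
            "fixation_rate_per_s": n / (trial_ms / 1000.0),
            "mean_fixation_ms": (
                sum(f.duration_ms for f in fixations) / n if n else 0.0
            ),
            # Saccades separate consecutive fixations, so n - 1 is a
            # common estimate of the saccade count within one trial.
            "saccade_count": max(n - 1, 0),
            "aoi_dwell_ms": dict(dwell),
        }

    if __name__ == "__main__":
        trial = [
            Fixation(0, 220, "function_signature"),
            Fixation(260, 180, "loop_body"),
            Fixation(480, 340, "loop_body"),
        ]
        print(summarize(trial, trial_ms=1000.0))

Comparing such summaries between expert and novice reviewers is one common way expertise differences of the kind reported in the paper are quantified.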
Funding information in the publication:
The authors wish to acknowledge the use of ChatGPT and DeepL in the writing of this paper. These tools were used for translations and to assist with improving the language of the paper. The paper remains an accurate representation of the authors' underlying work and novel intellectual contributions. We thank the funding project FH-Invest (FKZ: 13FH101IN6), run by Prof. Dr. Jürgen Mottok, for providing equipment for the eye-tracking laboratory, and Prof. Dr. Christian Wolff from the University of Regensburg for arranging the laboratory areas. The present paper is based on earlier results from the EVELIN project (FKZ: 01PL12022F, project sponsor: DLR), which was supported by the Federal Ministry of Education and Research (BMBF) of the Federal Republic of Germany. Data collection and analysis were carried out in the context of the HASKI project (FKZ: 16DHBKI035), also sponsored by the German Federal Ministry of Education and Research (BMBF).