A4 Peer-reviewed article in conference proceedings

A Unified Holistic Model of Visual Perception for Code Reviews




Authors: Hauser, Florian; Ezer, Timur; Grabinger, Lisa; Mottok, Jürgen; Gruber, Hans

Editors: Mottok, Jürgen; Hagel, Georg

Established conference name: European Conference on Software Engineering Education

Publisher: ACM

Place of publication: New York

Year of publication: 2025

Title of the compilation: ECSEE '25: Proceedings of the 6th European Conference on Software Engineering Education, Seeon Monastery, Germany, 02–04 June 2025

First page: 115

Last page: 124

Number of pages: 10

eISBN: 979-8-4007-1282-1

DOI: https://doi.org/10.1145/3723010.3723017

Web address: https://doi.org/10.1145/3723010.3723017


Abstract

Despite being a well-structured domain, software engineering lacks standardized definitions, metrics, and theories for analyzing eye movements in domain-specific tasks. This gap can be addressed by adapting models from other fields, such as radiology and psychology. In particular, holistic models of image perception provide a suitable framework for software engineering applications. This paper introduces a unified model of visual perception focused on eye movements during code reviews. It is based on prior research, findings from other studies, and cross-domain theories. Empirical studies on C and C++ code reviews confirm a phase-based process in which experts switch between global scanning and focal viewing. In addition, significant differences in fixation rate, fixation duration, number of saccades, and area-of-interest (AOI) metrics highlight the role of expertise in visual processing. The proposed model offers a structured framework for eye-tracking analysis in software engineering, defining relevant metrics and supporting future refinements across various software engineering tasks.


Funding information in the publication
The authors wish to acknowledge the use of ChatGPT and DeepL in the writing of this paper. These tools were used for translations and to assist with improving the language of the paper. The paper remains an accurate representation of the authors' underlying work and novel intellectual contributions. We thank the funding project FH-Invest (FKZ: 13FH101IN6), run by Prof. Dr. Jürgen Mottok, for providing equipment for the eye-tracking laboratory, and Prof. Dr. Christian Wolff from the University of Regensburg for arranging the laboratory areas. The present paper is based on earlier results from the EVELIN project (FKZ: 01PL12022F, project sponsor: DLR), which was supported by the Federal Ministry of Education and Research (BMBF) of the Federal Republic of Germany. Data collection and analysis were conducted in the context of the HASKI project (FKZ: 16DHBKI035), also sponsored by the German Federal Ministry of Education and Research (BMBF).


Last updated on 2025-08-08 at 14:27