A4 Article in conference proceedings
Can computing academics assess the difficulty of programming examination questions?

List of Authors: Simon, Daryl Dsouza, Judy Sheard, James Harland, Angela Carbone, Mikko-Jussi Laakso
Publication year: 2012
Journal: Journal - ACM
Number of pages: 4
ISBN: 978-1-4503-1795-5
ISSN: 1557-735X

Abstract

This paper reports the results of a study investigating how accurately programming teachers can judge the difficulty of questions in introductory programming examinations. Having compiled a set of 45 questions that had been used in introductory programming exams, we obtained three measures of difficulty for each question: the expectations of the person who taught the course and set the exams; the consensus expectations of an independent group of programming teachers; and the actual performance of the students who sat the exams. Good correlations were found between all pairs of the three measures. The conclusion, which is not controversial but needed to be established, is that computing academics have a fairly good idea of the difficulty of programming exam questions, even for a course that they did not teach. However, the discussion highlights some areas where the relationships show weaknesses.



Last updated on 2019-07-20 at 08:47