A4 Refereed article in a conference publication

How difficult are exams? A framework for assessing the complexity of introductory programming exams

Authors: Sheard J, Simon, Carbone A, Chinn D, Clear T, Corney M, D'Souza D, Fenwick J, Harland J, Laakso M J, Teague D

Editors: Angela Carbone, Jacqueline Whalley

Publication year: 2013

Journal: Conferences in Research and Practice in Information Technology

Book title: Proceedings of the Fifteenth Australasian Computing Education Conference (ACE 2013), CRPIT vol. 136, Adelaide, Australia: ACS

Series title: Conferences in Research and Practice in Information Technology (CRPIT)

Volume: 136

First page: 145

Last page: 154

Number of pages: 10

ISBN: 978-1-921770-21-0

ISSN: 1445-1336

Web address: http://crpit.com/confpapers/CRPITV136Sheard.pdf


Abstract
Student performance on examinations is influenced by the level of difficulty of the questions. It seems reasonable to propose therefore that assessment of the difficulty of exam questions could be used to gauge the level of skills and knowledge expected at the end of a course. This paper reports the results of a study investigating the difficulty of exam questions using a subjective assessment of difficulty and a purpose-built exam question complexity classification scheme. The scheme, devised for exams in introductory programming courses, assesses the complexity of each question using six measures: external domain references, explicitness, linguistic complexity, conceptual complexity, length of code involved in the question and/or answer, and intellectual complexity (Bloom level). We apply the scheme to 20 introductory programming exam papers from five countries, and find substantial variation across the exams for all measures. Most exams include a mix of questions of low, medium, and high difficulty, although seven of the 20 have no questions of high difficulty. All of the complexity measures correlate with assessment of difficulty, indicating that the difficulty of an exam question relates to each of these more specific measures. We discuss the implications of these findings for the development of measures to assess learning standards in programming courses.
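To make the six-measure scheme described in the abstract concrete, below is a minimal illustrative sketch in Python. The paper does not publish numeric rating scales or an aggregation rule, so the 1-3 ratings, the averaging, and the band thresholds here are assumptions for demonstration only, not the authors' instrument.

from dataclasses import dataclass

@dataclass
class QuestionComplexity:
    """One exam question rated on the six measures named in the abstract.
    All ratings assumed on a 1 (low) .. 3 (high) scale for illustration."""
    external_domain_references: int
    explicitness: int
    linguistic_complexity: int
    conceptual_complexity: int
    code_length: int   # length of code involved in the question and/or answer
    bloom_level: int   # intellectual complexity (Bloom level)

    def band(self) -> str:
        """Map the mean rating to a low/medium/high difficulty band.
        Thresholds are arbitrary, chosen only to make the sketch run."""
        mean = (
            self.external_domain_references + self.explicitness
            + self.linguistic_complexity + self.conceptual_complexity
            + self.code_length + self.bloom_level
        ) / 6
        if mean < 1.7:
            return "low"
        if mean < 2.4:
            return "medium"
        return "high"

q = QuestionComplexity(1, 2, 1, 3, 2, 3)
print(q.band())  # "medium" under these assumed thresholds

A classifier of this shape would let each exam be summarised as a distribution of questions over the three bands, which is how the abstract reports its findings (e.g. seven of the 20 exams had no high-difficulty questions).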



Last updated on 2024-11-26 at 16:06