Do students need detailed feedback on programming exercises and can automated assessment systems provide it?

Kyrilov A, Noelle DC. Do students need detailed feedback on programming exercises and can automated assessment systems provide it? Journal of Computing Sciences in Colleges. 2016;31(4).

Abstract

This paper examines the degree to which binary instant feedback on computer programming exercises, provided by an Automated Assessment (AA) system, benefits students. It also offers an approach to providing improved feedback. Student behavior in an undergraduate computer science class was studied. Students were assigned exercises requiring the generation of programs that met given specifications. We employed an AA system that evaluated the correctness of student code by executing it on a set of test cases. Students promptly received binary (“Correct”/“Incorrect”) feedback, and they could repeatedly resubmit solutions in response. We found that more than half of the students failed to achieve correct solutions within a reasonable time. A small group of students was also found to have plagiarized solutions. These findings led us to investigate ways in which AA systems for programming exercises might provide richer, more detailed feedback. We propose the development of clustering algorithms that group solutions based on how similarly incorrect they are. For the exercises we considered, there were, on average, 64 incorrect submissions per exercise, but only 8-10 distinct logical errors. This means that, if all incorrect submissions were automatically grouped into 8-10 clusters, a human instructor would only have to produce detailed feedback once per cluster. That feedback could then be automatically delivered in response to each submission falling within that cluster. We provide evidence that such an approach would result in substantial labor savings, while providing instant, detailed feedback to students.
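As a rough illustration of the proposed feedback-routing idea, the sketch below groups incorrect submissions by which test cases they fail and assigns each group to a single instructor-written comment. The abstract does not fix a particular similarity measure or algorithm, so the pass/fail signature used here, the toy "sum two integers" exercise, and all function names are assumptions for illustration only.

from collections import defaultdict
from typing import Callable, Dict, List, Tuple

TestCase = Tuple[str, str]   # (input line, expected output)
Signature = Tuple[bool, ...]  # per-test pass/fail pattern

def failure_signature(program: Callable[[str], str], tests: List[TestCase]) -> Signature:
    """Run one submission on every test case; True marks a passed case."""
    results = []
    for test_input, expected in tests:
        try:
            results.append(program(test_input) == expected)
        except Exception:
            results.append(False)  # crashes count as failures
    return tuple(results)

def cluster_incorrect(submissions: Dict[str, Callable[[str], str]],
                      tests: List[TestCase]) -> Dict[Signature, List[str]]:
    """Group incorrect submissions that fail the same subset of tests."""
    clusters: Dict[Signature, List[str]] = defaultdict(list)
    for student, program in submissions.items():
        sig = failure_signature(program, tests)
        if not all(sig):  # fully correct submissions need no detailed feedback
            clusters[sig].append(student)
    return dict(clusters)

if __name__ == "__main__":
    # Toy exercise: read two integers from one line and print their sum.
    tests = [("2 3", "5"), ("5 0", "5"), ("2 2", "4")]
    submissions = {
        "s1": lambda line: str(sum(int(x) for x in line.split())),            # correct
        "s2": lambda line: str(int(line.split()[0]) * int(line.split()[1])),  # multiplies instead of adding
        "s3": lambda line: str(int(line.split()[0]) * int(line.split()[1])),  # same logical error as s2
        "s4": lambda line: line.split()[0],                                   # echoes the first number
    }
    # The instructor writes one detailed comment per cluster; the AA system
    # then delivers it automatically to every submission in that cluster.
    for signature, students in cluster_incorrect(submissions, tests).items():
        print(signature, students)

Under this scheme, the numbers reported in the abstract imply that an instructor would author roughly 8-10 comments per exercise instead of responding to about 64 incorrect submissions individually, a reduction in feedback-authoring effort of roughly six- to eight-fold.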
