Research & Development


My research and development focuses on online assessment, addressing gaps in the educational technology literature and practical problems I have experienced as a teacher educator in higher education.

My PhD dissertation, Facilitating Variable-Length Computerized Classification Testing In Massively Open Online Contexts Via Automatic Racing Calibration Heuristics (ARCH), developed and validated ARCH, an approach for automatically calibrating test items during live computerized classification testing that shifts to more efficient adaptive testing as calibration data accumulate. The educational technology and assessment literature does not provide specific criteria for deciding when a test item is sufficiently calibrated for use in the adaptive testing approaches most appropriate for educational contexts. I became interested in this topic while teaching educational technology courses to pre-service teachers, where I found that my students were regularly cheating on the test associated with the Indiana University (IU) Plagiarism Tutorial in order to quickly earn the confirmation certificate required for my class. An adaptive test is more difficult to cheat on because test items are drawn from a large item bank and the test ends as soon as a classification decision can be made. As part of my dissertation, and to address the practical problem of hindering cheating in massively open online assessment contexts, I created an adaptive version of the test associated with the IU Plagiarism Tutorial.
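To make the stopping logic concrete, the short Python sketch below shows how a variable-length classification test can end as soon as a decision is reached, using a simple sequential probability ratio test (SPRT) for a mastery decision. This illustrates the general technique only, not the ARCH procedure itself, and the response probabilities, error rates, and simulated examinee are hypothetical.

import math
import random

def sprt_classification(ask_item, p_master=0.8, p_nonmaster=0.5,
                        alpha=0.05, beta=0.05, max_items=30):
    """Variable-length mastery test: stop as soon as a decision can be made.
    ask_item() administers the next item from the bank and returns True if
    it was answered correctly. All parameter values here are illustrative."""
    upper = math.log((1 - beta) / alpha)   # cross this: classify as master
    lower = math.log(beta / (1 - alpha))   # cross this: classify as non-master
    llr = 0.0                              # running log-likelihood ratio
    for n in range(1, max_items + 1):
        if ask_item():
            llr += math.log(p_master / p_nonmaster)
        else:
            llr += math.log((1 - p_master) / (1 - p_nonmaster))
        if llr >= upper:
            return "master", n
        if llr <= lower:
            return "non-master", n
    return "undecided", max_items          # forced to stop at the item cap

# Simulated examinee who answers any item correctly 75% of the time.
print(sprt_classification(lambda: random.random() < 0.75))

Because the decision boundaries are checked after every response, strong examinees are classified after only a handful of items drawn from the bank, which is what makes memorizing a fixed answer key ineffective.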

Other areas of my research and development focus on approaches to measuring technological pedagogical knowledge and on factors that impact formative feedback generated during online peer assessment. This work has resulted in paper presentations at the annual conference of the American Educational Research Association and a publication in the Journal of Educational Computing Research, on which I am second author.