Newswise — CHARLOTTESVILLE, Va., June 4, 2013 — One of the obstacles to acceptance of massive open online courses, or MOOCs, is the potential for widespread cheating. Two University of Virginia researchers are offering a solution – but it may require MOOC instructors to do a little homework themselves.
In a special issue of the journal Research & Practice in Assessment on “MOOCs & Technology,” an article by Curry School of Education assistant professor J. Patrick Meyer and doctoral student Shi Zhu looks at ways to address cheating in MOOCs.
As in any course, the goal for professors teaching MOOCs is for their students to learn without cutting corners. With thousands of students potentially enrolled in a single course, instructors must use more sophisticated methods to combat cheating, Meyer and Zhu write.
The authors suggest that one strategy to reduce cheating in MOOCs is to use multiple test forms covering the same content.
“Cheating by obtaining test items or answer keys in advance of the test can be countered by the use of multiple test forms,” Meyer said. “However, this practice comes with its own complications. In order for the course to be fair, one version of the test cannot be more difficult than another. They all must have the same level of difficulty. Every test must measure the same level of learning.
“Principles of fair and equitable measurement require that all of the test forms have a common scale so that scores have the same meaning and interpretation,” he said.
In their article, “Fair and Equitable Measurement of Student Learning in MOOCs: An Introduction to Item Response Theory, Scale Linking, and Score Equating,” Meyer and Zhu discuss how “item response theory” helps counter cheating and ensure fair and equitable measurement of student learning.
“Item response theory is a type of measurement that is more complicated than methods instructors use for [standard] classroom tests,” Meyer said. “This type of measurement is used heavily in large-scale testing, such as the high-stakes testing in K-12 education.”
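To give a sense of what the method involves, here is a minimal illustration (not code from the article) of the widely used two-parameter logistic (2PL) model, which expresses the probability of a correct response as a function of an examinee's ability and two item parameters:

```python
import numpy as np

def irt_2pl(theta, a, b):
    """Two-parameter logistic (2PL) IRT model.

    Returns the probability that an examinee with ability `theta`
    answers an item correctly, given the item's discrimination `a`
    and difficulty `b`.
    """
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Example: an item of average difficulty (b = 0) and moderate
# discrimination (a = 1.2), for examinees of low, average and
# high ability. All parameter values here are made up.
for theta in (-2.0, 0.0, 2.0):
    print(f"theta={theta:+.1f}  P(correct)={irt_2pl(theta, a=1.2, b=0.0):.3f}")
```

Fitting a model like this to response data yields item parameters on a latent scale; it is these parameters, rather than raw scores, that allow different test forms to be compared.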
What makes this approach difficult to use in MOOCs is that large-scale testing is usually managed by companies that employ professionals with specialized knowledge of item response theory. MOOC instructors typically do not have this level of expertise.
In an effort to bring this theory to a larger audience, the article introduces readers to the concept and explains methods for placing test forms on a common scale. It describes the underlying theory and demonstrates how an analysis is conducted.
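One common technique from this family is the mean-sigma method of scale linking; whether the article demonstrates this particular method is an assumption, and the anchor-item difficulties below are hypothetical. The idea is to use items shared by two forms to find the slope and intercept that map one form's parameter scale onto the other's:

```python
import numpy as np

def mean_sigma_link(b_form_x, b_form_y):
    """Mean-sigma scale linking.

    Given difficulty estimates for the same anchor items calibrated
    separately on two test forms, compute the slope A and intercept B
    that place Form X parameters on the Form Y scale.
    """
    b_x = np.asarray(b_form_x, dtype=float)
    b_y = np.asarray(b_form_y, dtype=float)
    A = b_y.std(ddof=1) / b_x.std(ddof=1)
    B = b_y.mean() - A * b_x.mean()
    return A, B

# Hypothetical anchor-item difficulties from two separate calibrations.
b_x = [-1.2, -0.4, 0.1, 0.8, 1.5]
b_y = [-1.0, -0.3, 0.2, 1.0, 1.8]

A, B = mean_sigma_link(b_x, b_y)
# Rescale Form X difficulties (and abilities: theta* = A*theta + B)
# onto the Form Y scale so scores from the two forms are comparable.
b_x_on_y = A * np.asarray(b_x) + B
print(f"A={A:.3f}, B={B:.3f}")
print("Form X difficulties on Form Y scale:", np.round(b_x_on_y, 3))
```

Once the forms share a scale, scores can be equated, which is what allows an instructor to treat a result on one version of a test as interchangeable with a result on another.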
Research & Practice in Assessment’s special issue offers some of the first analyses of actual MOOC data, and showcases the scholarship of faculty from the American Council on Education, Massachusetts Institute of Technology, Harvard University, the University of Virginia, Texas A&M University, New York University, James Madison University and Tulane University.
###