Randall Stross, author and business instructor at San Jose State, wants a software program that helps every student write well. He thinks the recent contest and research on machine marking of tests, sponsored by the William and Flora Hewlett Foundation, is the first step toward this “dream” (see the New York Times, June 10, 2012, “The Algorithm Didn’t Like My Essay”).
Apparently his hope is echoed (and expanded) by the people involved in the Automated Student Assessment Prize as they stated in their May 9, 2012 press release: “The goal is for students to acquire critical thinking and communication skills that writing requires – all without the burden of added time and cost to the system.”
I was somewhat surprised to see the prestigious WFHF involved in such targeted funding with such foggy objectives – I know them mostly from their tireless support of open content and opencourseware, so this focus on stimulating private enterprise and academia to compete to demonstrate the most successful test-marking algorithms seems a bit outside their normal realm. Education Program Director Barbara Chow was quoted in the June 2012 New York Times article as explaining, “…we wanted to create a neutral and fair platform to assess the various claims of the vendors.”
The stated objective of the ASAP competition was “… to develop software that could score students’ essays used in state standardized tests that had already been individually graded by educators.” Prize money totaling $100,000 was awarded to three winning teams (press release, May 9, 2012, The Hewlett Foundation Announces Winners of Essay Scoring Technology Competition). However, the claims (suggestions? hopes?) of Education Program Director Barbara Chow seem to range far beyond what the test was evaluating. “More sophisticated assessments will drive better classroom practices because better tests support better learning,” she said.
Now I agree that better tests, especially tests that involve students actually writing thoughtful analyses of challenging questions or case examples, have a strong likelihood of increasing students’ critical thinking skills. But why is all the hoo-hah around this initiative about saving time and money? The argument goes that teachers have too many students to allow enough time to mark their attempts at writing, and this software is intended to free up more time by doing the marking for them. But machines can only mark standardized tests, unless the teachers are capable of modifying the algorithms themselves. How does that achieve anything except providing more data to spew forth? Better tests depend on better questions or activities or tasks that enable students to demonstrate deeper learning.
The terms “critical thinking” and “deeper learning” imply time to reflect, analyze, test, consider… If state exams are cutting out essay questions and moving back to multiple choice because it is more affordable, how meaningful are the state exams? Since when does the speed of marking lead to better learning? A lot more would seem to have to happen than just cutting back on the work of educators. Why bother having state exams at all if we don’t value them enough to make them fair and honest instruments of stated learning outcomes for students?
Learning to be a better writer or a better thinker takes practice and attention to the task at hand. Software that supports basic feedback for writers is already available. If we want students to be better writers, they need to learn how to utilize the many technological tools available to them and to seek the feedback and support of experienced and proven writers. Simply developing software to replicate the evaluation of experienced educators is too narrow to result in better critical thinking and communication skills in our students.
It all makes me wonder: What is the real purpose?
Note: The winners of the 2012 Automated Student Assessment Prize (ASAP), which pitted machine marking against human marking, are found here. The study of commercial providers of automated essay scoring engines can be found here.