ETS Poised to Shorten Score-Reporting Time with SpeechRater
ETS is no stranger to developing products with automated scoring capabilities.
For several years customers have successfully used CriterionSM, which incorporates the e-rater® writing analysis tool, to automatically evaluate and score essays. The company is now poised to debut its latest automated scoring capability: SpeechRater.
A cross-functional team that includes staff from Higher Education, Research & Development, and Scoring, Reporting & Technology divisions spoke extensively about the features of SpeechRater at a May 10 seminar, and unveiled plans to integrate the automated speech scoring tool into TOEFL® Practice Online (TPO) this fall.
ETS staff noted at the seminar that the capability will offer many benefits, including lowering the cost of scoring speech samples and shortening the turnaround time for reporting test results.
"From a product front, integrating SpeechRater is a good solution for us," said Linda Tyler, HED (Higher Education Division) Group Executive Director.
From days to minutes
TPO, a membership-only Web site, is a test preparation resource that helps clients improve their English-language skills. It includes assessments and test preparation products for TOEFL® iBT, which was launched in the United States in 2005. (Both the new TOEFL® test and TOEFL iBT have a speaking component along with the current reading, listening and writing sections.)
The integration of SpeechRater into TPO will give customers access to an automated speech scoring capability in a low-stakes testing environment: a capability, the presenters said, that will significantly shorten the time associated with scoring spontaneous speech samples.
When human evaluators score speech samples, customers wait five days to receive their test results. With SpeechRater, they will get them in minutes.
How does it work?
SpeechRater works as follows: an examinee's responses to prompts are recorded and transferred to ETS, where the tool's speech recognition engine processes them and computes numerous features that summarize the characteristics of the speech. These features are then combined to produce an automated score on a scale of 1 to 4. The automated score is a prediction of the rating a human evaluator, using the scoring rubric for the TOEFL test, would assign to the same response.
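As a rough illustration of the "combine features into a predicted score" step, the sketch below uses a simple weighted combination clamped to the 1-4 scale. The feature names, weights, and linear form are hypothetical assumptions for illustration only, not ETS's actual model.

```python
# Hypothetical sketch: combining speech features into a 1-4 score.
# Feature names, weights, and the linear model are illustrative assumptions.

def predict_score(features, weights, bias=1.0):
    """Combine feature values into a raw score, then clamp to the 1-4 scale."""
    raw = bias + sum(weights[name] * value for name, value in features.items())
    return max(1.0, min(4.0, round(raw, 1)))

# Example feature values such as a speech recognizer might produce (assumed).
features = {
    "speaking_rate_wps": 2.4,   # words per second
    "mean_pause_sec": 0.6,      # average pause length in seconds
    "pause_frequency": 0.15,    # pauses per word
}
# Illustrative weights: faster, less pause-heavy speech scores higher.
weights = {
    "speaking_rate_wps": 0.9,
    "mean_pause_sec": -1.2,
    "pause_frequency": -2.0,
}

score = predict_score(features, weights)
```

In practice such weights would be fit against human ratings so that the automated score tracks what a trained evaluator would assign.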
In addition to discussing the current attributes of SpeechRater, the presenters noted the many challenges ETS will face as it incorporates the tool into TPO.
One is that the capability currently measures only a limited subset of the construct of speech as defined for the TOEFL test, with an emphasis on the single aspect of fluency. Fluency includes characteristics such as the rate of speech, pacing, and the length and frequency of pauses. However, ETS staff have created a versioning strategy that provides a potential road map for how ETS could improve the capability in future years.
Those improvements could include the ability to measure test takers' topic development and language use, provide more informative feedback, and eventually replace human evaluators for some uses.
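The fluency characteristics described above can be computed from word-level timing information. The sketch below is an assumed illustration: the pause threshold, the input format (word, start, end timestamps), and the specific feature definitions are hypothetical, not how SpeechRater actually defines them.

```python
# Hypothetical sketch: deriving simple fluency features (speaking rate,
# pause length, pause frequency) from word-level timestamps such as a
# speech recognizer might emit. Threshold and definitions are assumptions.

PAUSE_THRESHOLD = 0.3  # seconds of silence counted as a pause (assumed)

def fluency_features(words):
    """words: list of (word, start_sec, end_sec) tuples in time order."""
    total_time = words[-1][2] - words[0][1]
    # Gaps between the end of one word and the start of the next.
    gaps = [nxt[1] - cur[2] for cur, nxt in zip(words, words[1:])]
    pauses = [g for g in gaps if g >= PAUSE_THRESHOLD]
    return {
        "speaking_rate_wps": len(words) / total_time,
        "mean_pause_sec": sum(pauses) / len(pauses) if pauses else 0.0,
        "pause_frequency": len(pauses) / len(words),
    }

# A short illustrative utterance with one noticeable pause.
words = [("the", 0.0, 0.2), ("test", 0.25, 0.6),
         ("begins", 1.1, 1.5), ("now", 1.55, 1.8)]
feats = fluency_features(words)
```

Features like these would feed the scoring step, and later versions could add measures of topic development and language use on top of them.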
Ida Lawrence, Senior Vice President of R&D, noted, "A versioning strategy allows us to bring an ETS capability, in an early form, into the market in a timely way: We'll be able to support the limited claims we make about speaking proficiency in this low-stakes environment, and at the same time, gather data that will help us prepare for a second version of the capability."