Pacific Metrics’ Automated Essay Evaluation Study Published

Study Suggests That a Blended Man-Machine Scoring Model Can Result in More Accurate Scores

As new demands for writing instruction and evaluation emerge, such as with the Common Core State Standards, advanced scoring methods are also being developed.

Pacific Metrics Corporation, a leading provider of education technology solutions, is pleased to announce a significant milestone in its automated scoring research with the recent publication of “The Handbook of Automated Essay Evaluation.” Pacific Metrics psychometricians Drs. Susan Lottridge, Matt Schulz, and Howard Mitzel outline their recent automated scoring research in Chapter 14, “Using Automated Scoring to Monitor Reader Performance and Detect Reader Drift in Essay Scoring.”

At a time when automated scoring is being advocated as a replacement for human scoring, the central idea behind Pacific Metrics’ research was to see whether two imperfect scoring methods, human and machine, could be mutually leveraged to improve the overall accuracy of essay scoring. Using Pacific Metrics’ automated scoring software CRASE®, the research team conducted studies to identify human rater bias and scoring drift, and considered what types of interventions could be applied within the live scoring window to improve accuracy. The analysis suggests that automated scoring can quickly detect trends in group and individual reader performance, alerting scoring operations in real time so that potential bias or drift can be corrected.
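The monitoring idea described above can be illustrated with a small sketch. This is a generic rolling-window drift monitor, not the CRASE® implementation; the class name, window size, and alert threshold are all illustrative assumptions. Each human score is paired with an automated read-behind score for the same essay, and a reader is flagged when the recent mean human-minus-machine difference drifts too far from zero.

```python
from collections import defaultdict, deque

# Illustrative sketch only: a generic rolling-window drift monitor,
# not Pacific Metrics' actual method. Window size and threshold are
# arbitrary assumptions chosen for demonstration.
class DriftMonitor:
    def __init__(self, window=50, threshold=0.5):
        self.window = window
        self.threshold = threshold
        # Per-reader rolling buffer of (human - machine) score differences.
        self.diffs = defaultdict(lambda: deque(maxlen=window))

    def record(self, reader_id, human_score, machine_score):
        """Record one read-behind: the machine rescored the same essay."""
        self.diffs[reader_id].append(human_score - machine_score)

    def drifting(self, reader_id):
        """Flag a reader whose recent scores diverge systematically
        from the automated scores."""
        d = self.diffs[reader_id]
        if len(d) < self.window:
            return False  # not enough evidence yet
        return abs(sum(d) / len(d)) > self.threshold

# Usage: a reader who consistently scores one point above the machine.
monitor = DriftMonitor(window=3, threshold=0.5)
for human, machine in [(4, 3), (4, 3), (5, 4)]:
    monitor.record("reader_1", human, machine)
print(monitor.drifting("reader_1"))  # mean difference 1.0 > 0.5 -> True
```

Because every essay gets a read-behind, the buffer fills quickly and a systematic bias can be flagged within a handful of responses, which is the "real time" property the study emphasizes.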

“The use of automated scoring as a read-behind tool has proven to be very effective and acceptable in high-stakes testing. Scores are assigned by humans, while automated scoring allows 100% read-behinds in real time. Drift and bias can be detected and corrected in real time. There is no longer a need to seed validity papers into the scoring stream or to introduce parameters into our measurement models to correct for reader drift. It can simply be monitored out of existence very quickly,” says the Pacific Metrics team authoring the study.

This research falls into the general field of man-machine interaction, in which machines are used to try to improve human performance. The authors believe that their research is an important development in our understanding of how automated scoring can assist human scoring organizations to more accurately score student writing, and they hope it will be a valuable resource for future studies and research in the advancement of essay evaluation methods.

For more information about Pacific Metrics or any of their assessment solutions, e-mail info(at)pacificmetrics(dot)com or call (831) 646-6400.

About Pacific Metrics
Pacific Metrics Corporation creates lasting improvements to the assessment and learning environment through the thoughtful use of technology. Headquartered in Monterey, California, Pacific Metrics provides specialized psychometric software and sophisticated online systems to state departments of education and to organizations that develop, administer, and score large-scale and small-scale assessments. The company has achieved recognition for its technical work and for being a leading force in the development of innovative, customized, web-based systems. For more information, visit the Pacific Metrics website.


Contact Author

Stella Gibbs