Monday, July 26, 2010

Journal of Technology, Learning, and Assessment Article: The Effectiveness of Distributed Training for Writing Assessment Raters

Read the latest published article from several of our research colleagues:

The Journal of Technology, Learning, and Assessment (JTLA)
Volume 10, Number 1, July 2010

The Effectiveness and Efficiency of Distributed Online, Regional Online, and Regional Face-to-Face Training for Writing Assessment Raters

Edward W. Wolfe, Staci Matthews, and Daisy Vickers

This study examined the influence of rater training and scoring context on training time, scoring time, qualifying rate, quality of ratings, and rater perceptions. A total of 120 raters participated in the study, each experiencing one of three training contexts: (a) online training in a distributed scoring context, (b) online training in a regional scoring context, or (c) stand-up training in a regional context. After training, raters assigned scores to qualification sets, scored 400 student essays, and responded to a questionnaire that measured their perceptions of the effectiveness of, and satisfaction with, the training and scoring process, materials, and staff. The results suggest that the only clear difference in outcomes among these three groups of raters concerned training time—online training was considerably faster. There were no clear differences between groups in qualification rate, rating quality, or rater perceptions.

Download the article (Acrobat PDF, 239 KB).

About The Journal of Technology, Learning and Assessment

The Journal of Technology, Learning and Assessment (JTLA) is a peer-reviewed, scholarly online journal addressing the intersection of computer-based technology, learning, and assessment. The JTLA promotes transparency in research and encourages authors to make their research as open, understandable, and clearly replicable as possible while making the research process – including data collection, coding, and analysis – plainly visible to all readers.

Wednesday, July 07, 2010

Impressions of 2010 CCSSO Conference

This year, the Council of Chief State School Officers (CCSSO) Large-Scale Assessment conference took place in Detroit, the world's automotive center. The weather was a pleasant contrast to the heat in Texas, where I live and work. Because the conference overlapped with River Days, it was nice to share in the festive atmosphere. The 30-minute fireworks display was splendid.

The participants in the CCSSO conference were quite different from those at NCME or in AERA Division D. The psychometrician/research scientist was a “rare animal.” However, the conference provided a good opportunity to meet assessment users from different states’ departments of education.

Most of the presentations were neither technical nor psychometric in orientation, but they provided information about trends in large-scale assessment in the United States. This year, topics like Race to the Top, Common Core Standards, and best assessment practices were especially “hot.”

Not many people showed up for my presentation. It might have been the title, “…Threats to Test Validity….” Was it too technical?

Pearson proudly sponsored a reception buffet dinner with live music, and several vice presidents greeted guests at the door. The dinner provided an enjoyable setting in which to meet and build relationships with others in the assessment field. I met a number of colleagues from global Pearson, and I felt proud to be a member of this big family.

C. Allen Lau, Ph.D.
Senior Research Scientist
Psychometrics & Research Services
Assessment & Information