Wednesday, March 19, 2008

More Pearson at AERA/NCME!

Sometimes I forget how big Pearson really is. Here are additional presentations at both the AERA and NCME national conventions.

AERA Papers and presentations
Chu, Kwang-lee, & Lin, Serena Jie
Distracter Rationale Taxonomy: A Formative Evaluation Utilizing Multiple-Choice Distracters

Jirka, Stephen
Test Accommodations and Item-Level Analyses: Mixture DIF Models to Establish Valid Test Score Inferences

Lau, Allen
Evaluating Equivalence of Test Forms in Test Equating With the Random Group Design

Lin, Serena Jie
Examining the Impact of Omitted Responses on Equating

Seo, Daeryong
Exploring the Structure of Achievement Goal Orientations Using Multidimensional Rasch Models

Stephenson, Agnes
Examining Individual Students’ Growth on Two States’ English Language Learners Proficiency Assessments

Using HLM to Examine Growth of English Abilities for ELL Students and Group Differences

Wang, Jane
Modeling Growth: A Longitudinal Study Based on a Vertical Scaled English-Language Proficiency Test

Wang, Shudong
Vertical Scaling: Design and Interpretation

The Sensitivity of Yen’s Q3 Statistics in Detecting Local Item Dependence

NCME Papers and presentations

Arce-Ferrer, Alvaro & Diaz, Ileana
An Experimental Investigation of Rating Scale Construction Guidelines: Do They Work with Spanish-Speaking Populations

Yi, Qing
Item Pool Characteristics and Test Security Control in CAT

Wang, Shudong; Zhang, Liru; Kersteter, Patsy; Bolig, Darlene; Yi, Qing
An Investigation of Linking a State Assessment to the 2003 National Assessment of Educational Progress (NAEP) for 4th and 8th Grade Reading

Arce-Ferrer, Alvaro & Shin, Seon-Hi
Three Approaches to Measuring Individual Growth

Wang, Shudong; Jiao, Hong; & He, Wei
Parameter Estimation of One-Parameter Testlet Model

Wang, Shudong & Jiao, Hong
Empirical Evidence of Construct Equivalence of Vertical Scale Across Grades in K-12 Large-Scale Standardized Reading Assessments

Tuesday, March 18, 2008

Pearson Presentations at AERA & NCME

The contingent of Pearson researchers has, once again, done an admirable job of representing our industry at the annual meetings of the American Educational Research Association (AERA) and the National Council on Measurement in Education (NCME), the week of March 24th in New York City.

The following are the AERA paper and symposium submissions:
Jason Meyers & Xiaojin Kong
An Investigation of the Changes in Item Parameter Estimates for Items Re-field Tested

Leslie Keng, Walter L. Leite, & Natasha Beretvas
Comparing Growth Mixture Models when Measuring Latent Constructs with Multiple Indicators

Leslie Keng, Edward Miller, Kimberly O'Malley, & Ahmet Turhan
Composite Score Reliability Given Correlated Measurement Errors between Subtests and Unknown Reliability for Some Subtests

Ye Tong, Sz-Shyan Wu, & Ming Xu
A Comparison of Pre-Equating and Post-Equating Using Large-Scale Assessment Data

Rob Kirkpatrick & Denny Way
Field Testing and Equating Designs for State Educational Assessments

Lei Wan & Brad Ching-Chow Wu
Person-fit of English Language Learner (ELL) Students in High-Stakes Assessments

Ellen Strain-Seymour
A User-Centered Design Approach for the Refinement of a Computer-Based Testing Interface

Jeff Wilson
A User-Centered Design Approach to Developing an Assessment Management System

Paul Nichols
The Role of User-Centered Design in Building Better Assessments

Michael Harms
An Introduction to User-Centered Design in Large-Scale Assessment

The following are NCME paper and symposium submissions:
Paul Nichols & Natasha Williams
Evidence of Test Score Use in Validity: Roles and Responsibility

Denny Way, Chow-Hong Lin, Katie McClarty, & Jadie Kong
Maintaining Score Equivalence as Tests Transition Online: Issues, Approaches and Trends

Denny Way, Paul Nichols, & Daisy Vickers
Influences of Training and Scorer Characteristics on Human Constructed Response Scoring

Ye Tong & Michael Kolen
Maintenance of Vertical Scales

Leslie Keng, Tsung-Han Ho, Tzu-An Chen, & Barbara Dodd
A Comparison of Item and Testlet Selection Procedures in Computerized Adaptive Testing

Jon S. Twing
Off-the-Shelf Tests and NCLB: Score Reporting Issues

Erika Hall & Timothy Ansley
Exploring the Use of Item Bank Information to Improve IRT Item Parameter Estimation

Canda Mueller
Response Probability Criterion and Subgroup Performance

Tony Thompson
Using CAT to Increase Precision in Growth Scores

Come see us in action. You are bound to go away smarter!