Thursday, September 16, 2010

Under Pressure

I cannot seem to get the rhythmic refrain of the famous Queen/David Bowie song “Under Pressure” out of my head when thinking about Common Core and the Race to the Top (RTTT) Assessment Consortia these days. Yes, this is a remarkable time in education, presenting us with opportunities to reform teaching, learning, assessment, and the role of data in educational decision making, and all of those opportunities come with pressure. But when Bowie’s voice starts ringing in my head, I am not thinking about those issues. I am instead worried about a relatively small source of pressure the assessment systems must bear: a group of students that makes up only about 2% of the target population of test takers.

Currently, under the Elementary and Secondary Education Act (ESEA), states are allowed to develop three categories of achievement standards: general achievement standards, alternate achievement standards, and modified achievement standards. These standards all refer to criteria students must meet on ESEA assessments to reach different proficiency levels. Modified achievement standards only became part of the No Child Left Behind Act (NCLB) reporting options in 2007* after years of pressure from states. It was felt that the general assessment and alternate assessments did not fully meet states’ needs for accurate and appropriate measurement of all students. There were many students for whom the alternate assessment was not appropriate, but neither was the general assessment. These kids were often referred to as the “grey area” or “gap” kids.

I do not think anyone would have argued that the modified achievement standards legislation fully addressed the needs of this group of kids, but it did provide several benefits. States that opted to create assessments with modified achievement standards were able to explicitly focus on developing appropriate and targeted assessments for a group of students with identifiably different needs. The legislation also drew national attention in academics, teaching, and assessment to the issue of “gap” students. This raised important questions, including:

-- Which students are not being served by our current instructional and assessment systems?
-- Is it because of the system, the students, or both?
-- What is the best way to move underperforming students forward?

In the relatively short time since legislative sanction of modified assessments, a significant amount of research and development has been undertaken. However, just as I asserted that the legislation did not fully meet the needs of “gap” kids, I also assert that the research and development efforts have yet to unequivocally answer any of the questions that the legislation raised. Though research has not yet answered those questions, this does not mean that it has failed to improve our understanding of the 2% population and how they learn. And it does not mean that we should stop pursuing this research agenda.

Now, in the context of the RTTT Assessment competition, the 2% population seems to be disappearing, or is being re-subsumed into the general assessment population. I do not think that the Education Department means to decrease attention on the needs of students with disabilities or to negatively impact them. There is still significant emphasis on meeting the needs of students with disabilities and consistently underperforming students in the RTTT Assessment RFP and in the proposals submitted by the two K-12 consortia. However, the proposals do seem to indicate that the general assessment will need to meet the needs of these populations, offering both appropriate and accurate measurement of students’ Knowledge, Skills, and Abilities (KSAs) and individual growth. I wonder how much attention these students will receive in test development, research, and validation efforts when the test developers are also taxed with creating innovative assessments, designing technologically enhanced interactive items, moving all test-takers online, and myriad other issues. The test development effort was already under significant pressure before the needs of students previously served by assessments with modified achievement standards were lumped in.

I applaud the idea of creating an assessment system that is accessible, appropriate, and accurate for the widest population of students possible. I also hope that the needs of all students will truly inform the development process from the start. However, I cannot help worrying. We are far from finished with the research agenda designed to help us better understand students who have not traditionally performed well on the general assessment. With so many questions left unanswered, and with so many new test development issues to consider, I hope that students with disabilities and under-performing students are not, once again, left in a “gap” in the comprehensive assessment system.

* April 19, 2007 Federal Register (34 C.F.R. Part 200) officially sanctioned the development of modified achievement standards.

Kelly Burling, Ph.D.
Senior Research Scientist
Psychometric and Research Services

1 comment:

Mark G. Robeck, Ph.D. said...


After my first reading of the Federal Register in April 2007, I wondered how the so-called 2% regulations—filled, as they were, with carrots and sticks—would allow assessment specialists and research scientists to address the needs of gap kids as gap kids.

Nevertheless, since that time, discriminating information concerning the progress of gap kids has indeed been provided to parents and educators as never before.

Despite that progress, I believe that the demise of the 2% (or modified) test has been due to a lack of consensus manifested in that initial set of carrots and sticks. The carrot was the possibility of improving a state’s report card proficiency results. The stick was the 2% cap on proficiency credit for participating states, for programs that were essentially self-funded. So, even though more than 2% of a state’s population could sit for a modified assessment, the political “bang for the buck” was limited.

But why was there this artificial limit? Didn’t more than 2% of each state’s population have IEPs that qualified them to participate in the assessment? Shouldn’t their progress be acknowledged?

The answer was “Yes,” but regulators were worried about the unintended consequences of an open-ended program. That is, without restraints, states might try to assign much of their below basic population to this modified assessment, thereby lowering the bar for the lowest performing students by grouping them with the gap kids with special educational needs.

I believe a modified assessment for gap kids will not be renewed until the fear of lowered expectations is made explicit, thoroughly debated, and practically addressed in a way that acknowledges, once and for all, that gap kids need special assessments to provide the information to help them learn and retain basic knowledge and skills.

To overcome the “Yes, it’s needed, but what about our fears?” logic of the past, we might consider the following: What if the information garnered from formative assessments were such that it did not matter whether a student was misplaced into the gap group (false positive) or misplaced into regular assessments (false negative)?

In other words, when the teachers of the highest performing gap kids and the teachers of the lowest performing regular assessment kids all receive information of comparable quality for facilitating student learning, we will see those fears about lowered standards dissolve. Why? Because the line between these two groups will no longer be a litmus test about our relative commitment to high standards of learning for our children. Rather, it will be about getting the most out of each of our students.

Mark G. Robeck, Ph.D., Senior Proposal Manager