Friday, August 31, 2007

IEREA Poster Submissions Deadline Soon Approaching

A reminder that the annual conference of the Iowa Educational Research and Evaluation Association (IEREA) is fast approaching. One of the most popular features of the conference is our poster presentation and paper contest, and we need plenty of poster and paper submissions to make this part of the conference a success. The conference is also a great opportunity for people to get involved in IEREA, support Iowa, and get feedback on timely topics in education.

Please mark your calendars and see conference information below, including the call for proposals.

Theme: Success For All: Access, Connections, and Transitions
Date: Friday, November 30, 2007
Location: Sheraton Hotel, Iowa City

Iowa Educational Research and Evaluation Association: 2007 Call for Proposals

Iowa educators are invited to submit proposals to present their research at IEREA's annual conference in Iowa City, IA. Faculty members, graduate students, and education professionals conducting research related to education, especially this year's theme, are invited to submit proposals. Individuals involved in school-based or university-school collaborative action research studies, innovative program evaluations, and work related to technical issues of assessment are also encouraged to submit. IEREA uses a poster presentation format designed to foster dialogue among presenters and conference attendees. To maximize interaction during the poster sessions, posters will be displayed in an open space with sufficient room to congregate, browse, and discuss.

Refreshments will also be provided during poster sessions.
Instructions for displaying research in a poster format will be sent to presenters of all accepted posters. At least one presenter per poster must register to attend the IEREA Conference, and all poster presenters qualify for reduced conference registration fees. Details are provided upon acceptance of the proposal.

The deadline to submit poster proposals is 5:00 pm, Friday, September 14, 2007. Submissions must include two copies of the proposal. One copy should list author name(s), institutional affiliation(s), and complete contact information for the coordinating presenter on a separate cover sheet. The second copy should contain no author names, titles, or contact information, in order to facilitate blind review of all proposals. The poster proposal itself should be no more than three (3) double-spaced pages (excluding references) with reasonable margins and minimum 11-point type. Each proposal must include the following:

Title of Poster
Abstract (maximum 50 words)
Goals/Objectives
Design and Methods
Results
Significance/Impact
References

E-mail submissions are strongly encouraged (please type IEREA Proposal in the email Subject line), and receipt of proposals will be acknowledged via return e-mail. Send all poster proposals to

Jan Walker
jan.walker@drake.edu

or

IEREA Conference Planning Committee
ATTN: Dr. Jan Walker
3206 University Ave.
Des Moines, IA 50311
(515) 271-3719

Tuesday, August 28, 2007

Griddable Items Get No Respect...No Respect at All!

While most people argue that you have to earn the respect you are given, this is not always the case. Take, for example, the hard-working, informative, creative, and open-ended test item type commonly known as the "griddable item." This item type gets no respect. In fact, my guess is you don't even know what I mean when I refer to a griddable item. Let me elaborate.

When criterion-referenced and mastery testing were all the rage back in the late '60s and early '70s, bashers of multiple-choice and other selected-response assessment items came crawling out of the woodwork. Now remember, this was prior to high-stakes assessment, so most tests were loved by all! In response, assessment developers looked to "enhance" objective measures by making them more "authentic." One way to do this while keeping the advantages of machine scoring was to ask an open-ended item (say, a multiple-step mathematics problem) and to place a grid on the response document, similar to how you might grid your name or date of birth. Once the student solved the math problem and presumably reached one correct answer in one format, he or she could grid the answer on the document. What a great idea! Boy, did people hate it, and, as far as I can tell, people still hate it today.

Pearson has conducted research on all manner of investigations regarding the griddable item (see Pearson Research Bulletin #3), little of which has generated much interest. For example, when Pearson was advising the Florida Department of Education in this regard, the griddable item was perceived by the program's critics as an "ineffective" attempt to "legitimize" a large-scale objective assessment as measuring "authentic" and meaningful content (i.e., including performance tasks) when it did not. This was really a policy and/or political battle that pitted the proponents of performance tasks, who wanted rich embedded assessments, against the policy makers, who wanted economical and psychometrically defensible measures. It is too bad griddable research did not carry the day.

Another issue with griddables seems to be their content classification. Multiple-step mathematics problems, for example, are likely to match more than one cell of a content classification. Furthermore, depending on how they are classified, substeps are not likely to reach a Depth of Knowledge (DOK) of 3 even if the total item does. Finally, some concerns have been raised by psychometricians using IRT to calibrate griddable items. The argument goes that under IRT, unless you are using the Rasch model, traditional multiple-choice items call for a 3PL model, but there is essentially no guessing associated with a griddable item. Hence, a 2PL model, with no pseudo-guessing parameter, will be required to calibrate these items. (We will save the argument over forcing the c-parameter to zero rather than going to a mixed model for another blog.) Add to this the inevitable sprinkling of two- and three-category open-response items, and the mixed model becomes a burden that might not be justified given the relatively few gridded items. Other attributes of the griddable item are delineated in Pearson Research Bulletin #3.
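To see why the pseudo-guessing parameter matters, here is a minimal sketch of the 2PL and 3PL item characteristic curves. The parameter values (discrimination a = 1.0, difficulty b = 0.0, guessing c = 0.20, and the conventional 1.7 scaling constant) are illustrative assumptions, not values from any Pearson calibration.

```python
import math

def icc_3pl(theta, a, b, c):
    # 3PL item characteristic curve:
    # P(correct) = c + (1 - c) / (1 + exp(-1.7 * a * (theta - b)))
    # c is the pseudo-guessing (lower-asymptote) parameter.
    return c + (1.0 - c) / (1.0 + math.exp(-1.7 * a * (theta - b)))

def icc_2pl(theta, a, b):
    # 2PL is the 3PL with the pseudo-guessing parameter fixed at zero,
    # as argued above for griddable items.
    return icc_3pl(theta, a, b, 0.0)

# A very low-ability examinee (theta = -3) on an item of average difficulty.
# Illustrative parameters: a = 1.0, b = 0.0, c = 0.20 (e.g., guessing among
# multiple-choice options) versus no guessing floor for a griddable item.
p_mc = icc_3pl(-3.0, 1.0, 0.0, 0.20)   # multiple-choice: floored near c = 0.20
p_grid = icc_2pl(-3.0, 1.0, 0.0)       # griddable: probability near zero
```

The contrast shows the crux of the calibration argument: for low-ability examinees the 3PL curve never drops below c, while the griddable item's curve can approach zero, which is why fitting both item types in one scale pushes you toward a mixed (2PL/3PL) model.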

The point of this blog (clearly a failure, given that I feel the need to remind you of the point I was making) is to get assessment specialists, psychometricians, policy makers, and teachers to objectively evaluate the merits of this item type. Another goal is to have my readers consider how the use of griddable items might help assessment become more of a driving force for good instruction. These are the goals of the blog, despite the fact that griddable items get no respect.