Monday, April 23, 2007

What I Did on my Spring Break

Psychometricians are peculiar. While most people look forward to spring break by planning family outings, going to the beach to drink beer, or simply forgetting about the grind of existence, we (psychometricians) typically spend it at our annual conference for the National Council on Measurement in Education (NCME) and the American Educational Research Association (AERA). This year was no different as I drove to Chicago on Easter Sunday (April 8th for those of you of different persuasions) looking forward to an invigorating conference with winds in excess of 20 mph and temperatures below freezing. (It even snowed, and the Cubbies were cancelled because of sleet!)

Make fun as I might (and do), I am genuinely recharged at such meetings. I am reminded, in this political world where so little really matters, that what teachers do daily is very important. As such, what psychometricians and measurement professionals do daily is also important. As my batteries consume the flow of research energy, I am also reminded that we are scientists and that our standards of best practice, measurement design and our profession in general MUST BE guided by research.

Perhaps the quality of the research in general does not seem to be at the levels it once was. Perhaps there were too many sessions lamenting the terribleness of NCLB. Perhaps some members of NCME still refuse to embrace their AERA brethren. Despite all of these, there is much to be learned from each research paper presented—if you are wise enough to understand it.

Here are some of the things I learned:

  1. Graduate students, despite how sophisticated they might seem, are very poor presenters. They need coaching on the simplest aspects of text size for overheads, how to articulate without the dreaded "umhs..." and "...ahs..." typical of nervous presenters, and most of all, they have to understand how much information they can really present in the 10 or 12 minutes they have. I don't recall struggling so much with these when I was a gradual student, but I'm sure my memory is as sharp as my presentations were.
  2. Calling the front desk or speaking to the cleaning staff is not likely to bring the elevators to the 29th floor any faster than sacrificing chickens would.
  3. While walking down 29 flights of stairs might be easier than walking up, it is still a really long journey and certainly enough to make you break a sweat.
  4. I would rather buy dinner for a large group of people than listen to ten Ph.D.s figure out how best to split the bill.
  5. If you bring a printer along, you can actually be quite productive while you work out of your hotel room.
  6. The One-Parameter Logistic Model (OPLM) is really either the Rasch model with two parameters or a 2PL model whose integer a-parameter values are fixed in advance (estimated elsewhere).
  7. Kansas, of all places, has an assessment program, and it seems very rigorous and robust.
  8. You don't need to be an alum to attend the Iowa, North Carolina or Michigan State Alumni parties.
  9. Walking home after three alumni parties in a town like Chicago is quite a challenge.
  10. There are more things in common between Thurstone, Guttman, Rasch, and Mokken than there are differences.
  11. "Just Noticeable Differences" or JNDs are alive and well when comparing self-parking at $26.00 a night to valet parking at $35.00 a night.
  12. Pearson sponsored the NATD Breakfast, the NATD Dinner, the Division H Breakfast, the RASCH SIG Dinner, and a graduate student reception, to name a few.
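A brief aside on item 6, for readers who want the algebra (this is my notation for the standard presentation of these models, not anything from the conference papers): under the two-parameter logistic (2PL) model, the probability that an examinee with ability theta answers item i correctly is

```latex
P_i(\theta) = \frac{\exp\bigl(a_i(\theta - b_i)\bigr)}{1 + \exp\bigl(a_i(\theta - b_i)\bigr)}
```

where b_i is the item difficulty and a_i the item discrimination. The Rasch model is the special case a_i = 1 for every item; the OPLM keeps the a_i in the model but fixes them at preset integer values obtained outside the estimation, rather than estimating them jointly with the b_i. That is why it can be read either way: a Rasch model with a second (but fixed) parameter, or a 2PL with its discriminations pinned down.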

OK, so maybe I should have learned more, but it was spring break after all.

Tuesday, April 03, 2007

My 15 Minutes of Fame

Education News columnist Robert Oliphant has in the past agreed with some of the notions, comments, or papers written by me or others referenced in this blog. (See for example one of his earlier posts.) While this is quite flattering and I appreciate his perspective, our mission (his and mine) seems to continuously miss the mark—or at least policy makers still don't seem to understand what we are trying to say.

One of Mr. Oliphant's more recent columns continues my call that assessment be "transparent, verifiable and not too complex." As a psychometrician, this is a "no brainer," as the scientific side of psychometrics is grounded in mathematical statistics, where proofs and reproducibility are paramount. (Most mathematical statisticians I know are still working on the communication and complexity aspects.) Mr. Oliphant applies this principle to national standards—which is just fine by me—and others have applied this principle to instruction, to education in general, as well as to the definition of what the "product" of our school systems needs to be.

While this last aspect sounds simple, the current debate about college readiness and workplace readiness, the rigor of high school (particularly the senior year), and the recent lack of mandated achievement standards for the accreditation of institutions of higher learning speak volumes: namely, that we are still thinking about education in far too complicated ways and are missing the "forest for the trees."

Here is how I think about education:
First, we need to link the curriculum (content standards) in a progressive manner that delineates what it is we want students to learn from pre-kindergarten to college—the old-fashioned notion of PreK–16 or K–20. This will allow the "compound interest" of learning to continue across the grades.

Second, we have to measure what it is we expect children to learn across the linked system. We can then use these measures not only to improve our instruction but also to manage our intervention. As novel as it sounds, the measurement data could actually inform teachers regarding what is working and what isn't.

Finally (and this is arguably the most controversial), we have to stop denying individual differences and stop allowing students without pre-requisite skills to advance. I don't mean that failing students should repeat the grade, but rather that the system should have a continuous feedback/intervention loop such that students master the pre-requisite skills before moving to the next level of content. Notice I said content level and not necessarily grade level. Students who move on in the current system—many of whom struggle with the mastery of basic skills—are destined to fail at later grades without mastering those skills.
Some people would call these ideas naive, and some would label them another example of the failed "ungraded systems" that were the rage in education in other decades. I call it a transparent, verifiable and not too complex system of education, and a simple way to focus our attention on what is important: instruction, learning, measuring, and the feedback/intervention loop.