University of Iowa News Release
Feb. 27, 2006
Ho Nets $63,000 Grant To Compare Results From State And National Assessments
When it comes to assessing K-12 academic achievement in the United States, not all tests are created equal.
It's more than an apples-and-oranges dilemma. Tests that ostensibly seek to assess the same thing (say, reading comprehension among fourth-graders) can use widely divergent methodologies or slice and dice the data in ways that make meaningful comparisons of test results difficult, if not impossible.
That presents challenges to researchers and policymakers who want to compare results from the National Assessment of Educational Progress (NAEP) with results gathered from the many varied kinds of tests administered by individual states for their own purposes.
Enter Andrew Ho, an assistant professor in the University of Iowa College of Education. Ho, an expert on educational assessment, recently received a $63,000 grant from the U.S. Department of Education to develop a methodology that provides a sound basis for comparing trends and gaps from state test scores to trends and gaps from NAEP scores.
Ho's primary objective is to develop a formula for more readily squaring results of state-sponsored tests -- like the Illinois Standards Achievement Test and the California Standards Test -- with those from NAEP tests, otherwise known as "The Nation's Report Card." NAEP allows for comparisons of student performance across all 50 states by representatively sampling schools to be tested in each state.
Although participation in NAEP is voluntary, federal law requires all states and school districts that receive Title I funds (federal money intended to improve the learning of children from low-income families) to participate in NAEP reading and mathematics assessments at fourth and eighth grades if they are asked.
"With the release of the 2005 State NAEP results in October of [last] year, researchers and policymakers will have new opportunities to compare state NAEP trends and gaps to large-scale test results from state accountability programs," Ho wrote in his grant application. "Serious technical and substantive issues arise with these comparisons. Technically, the summary statistics that are likely to be used are not well suited for cross-test comparisons, whether they are percent-above-cut measures, effect sizes, or percentiles. Substantively, if there are discrepancies between State NAEP and state testing results, there is little guidance for stakeholders about how to interpret these discrepancies."
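Ho's point about percent-above-cut statistics can be illustrated with a small, hypothetical sketch (not drawn from his project): if two tests place their proficiency cut scores at different points on the score scale, the very same underlying gain shows up as different-sized "percent proficient" changes. The normal distributions and cut scores below are assumptions for illustration only.

```python
# Hypothetical illustration: the same underlying improvement (a uniform
# 0.2 standard-deviation gain) produces different "percent above cut"
# gains depending on where a test happens to place its cut score.
from statistics import NormalDist

baseline = NormalDist(mu=0.0, sigma=1.0)   # year-1 score distribution
improved = NormalDist(mu=0.2, sigma=1.0)   # year-2: everyone gains 0.2 SD

for cut in (-1.0, 0.0, 1.0):               # three hypothetical cut placements
    before = 1 - baseline.cdf(cut)         # proportion above cut, year 1
    after = 1 - improved.cdf(cut)          # proportion above cut, year 2
    print(f"cut={cut:+.1f}: percent-above-cut gain = {after - before:.3f}")
```

A test with a cut near the middle of the distribution reports roughly twice the gain of one with a cut in the tail, even though students improved identically — which is why such statistics are poorly suited for comparing trends across tests with different cut scores.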
And that, Ho said, is bad, especially in light of growing pressure on schools under the No Child Left Behind Act to demonstrate test score improvement.
"Trend and gap statistics are at the heart of discussions about educational improvement and equity, and comparisons between NAEP and state test scores are often used to bolster or threaten the validity of state testing programs and/or NAEP itself," he said.
Ho, whose project officially begins in May, has two specific goals: to develop a "metric-free" methodology that supports cross-test comparisons of trends and gaps, and to provide new conceptual frameworks for interpreting discrepancies between state NAEP and state testing results.
Ho says the UI is ideally positioned to lead this kind of research because it is the only university that maintains an active educational testing program as an academic unit, the College of Education-based Iowa Testing Programs (ITP).
Ho, who has been at the UI since August 2005, holds a joint appointment in ITP and the UI College of Education's Department of Psychological and Quantitative Foundations. He has a Ph.D. in educational psychology and an M.S. in statistics from Stanford University, as well as a B.S. from Brown University.
STORY SOURCE: University of Iowa News Services, 300 Plaza Centre One, Suite 371, Iowa City, Iowa 52242-2500.
MEDIA CONTACT: Stephen Pradarelli, 319-384-0007, email@example.com.