Grants and Contracts Details
Description
Alternate assessment is moving more firmly into a standards-based accountability world,
due in large part to the No Child Left Behind Act of 2001 (NCLB) and the 2004
reauthorization of IDEA (Quenemoen, Rigney, and Thurlow, 2002). The NCLB
standards and assessment peer review process increased the requirements for
documenting the technical quality of all assessments, but the biggest shift was for AA-AAS.
The type of technical documentation needed to fulfill the peer review
requirements for regular education assessments had never previously been expected of
AA-AAS developers. Additionally, the alternate assessment systems in many states are
now being reviewed by the states' technical advisory committees (TACs). Many of these
traditionally trained measurement experts justifiably expect substantial documentation of
the psychometric worth of AA-AAS before considering them legitimate assessment
activities. Building a convincing case to support the technical adequacy of any large-scale
assessment is a challenging undertaking, but doing so for AA-AAS has been
daunting at both a conceptual and operational level.
The recently completed New Hampshire Enhanced Assessment Initiative
(NHEAI) and the currently funded National Alternate Assessment Center (NAAC),
particularly Goal 1, were very successful in developing a framework, conceptual papers,
and practical tools to assist states and their organizational partners in documenting the
technical quality of alternate assessments. These projects relied on the framework
presented in Knowing What Students Know: The Science and Design of Educational
Assessments (Pellegrino, Chudowsky, and Glaser, 2001) to organize the evaluation of
technical quality of AA-AAS. This approach was intentionally based on a validity
foundation to ensure that the technical documentation would be useful for supporting or
refuting the inferences about students and schools from the assessment scores. We, as
measurement and special education communities, have made tremendous strides in
helping states document the technical quality of AA-AAS through these two and other
related projects. Understandably, states focused their energy on the "nuts and bolts" of
technical documentation (e.g., administration fidelity, alignment, item development,
scoring, aspects of reliability, and standard setting) in order to meet key peer review
requirements. Within the previous projects' short time frame (18 months), however, states
were not able to devote as much attention to conducting validity studies and constructing
a coherent validity argument.
Our proposed GSEG Consortium for Priority B will work with five states, at
various stages of system "maturity," to begin the task of building validity arguments for
their alternate assessments based on alternate achievement standards. Ed Haertel (1999)
reminded us of the importance of constructing a validity argument rather than simply
conducting a variety of studies: the individual pieces of evidence do not, by themselves,
make the assessment system valid or invalid; only by weaving these pieces of evidence
together into a coherent argument can we judge the validity of the assessment
program. We intend to borrow from Ryan's (2002) approach for organizing and collecting
validity evidence within the context of high-stakes accountability systems to assist states
in weaving these study results into a defensible validity argument.
| Status | Finished |
| --- | --- |
| Effective start/end date | 10/1/07 → 3/31/12 |