Guest Opinion
Less than a month after finishing AIMS, my sixth-, seventh- and eighth-graders (my children, not my students) are now taking NWEA MAP testing.
Students will be pulled out for one to two periods at separate times to take the math and reading portions of the Northwest Evaluation Association's Measures of Academic Progress test – far less invasive than the many hours required to administer Arizona's Instrument to Measure Standards.
NWEA MAP testing is typically done three times a year to track a student's progress. Because it is an adaptive computerized test, it provides immediate results; AIMS results won't be known for a couple of months yet.
Both tests factor into the school’s label under AZ LEARNS (e.g., “Performing Plus”), but neither factors into whether a student moves on to the next grade. Schools are accountable, not students.
Next year, my son will skip testing entirely; then, in 10th grade, he'll start taking AIMS to see if he's learned enough to graduate from high school.
Teachers in my district will be meeting the week after school ends to begin planning learning priorities for each quarter of the coming year.
They still won’t know how their students did on AIMS, so they can’t use it to assess their pedagogical success.
When school ends, parents and students won't know, either. Report cards will be mailed without AIMS scores, though they will include NWEA scores.
If you think this sounds like a convoluted mess, you’re right.
We need a more coherent system of accountability for students and schools.
NWEA should have us rethinking the administration and structure of AIMS.
NWEA adapts question difficulty to how well a student responds, potentially enabling better diagnosis, whereas AIMS can’t tell us why a student is getting something wrong.
The rapid results of NWEA demonstrate that we can dramatically improve the turnaround time on AIMS math, science and reading – whether through computer-based testing or traditional testing with electronic enhancements. (In my large lecture classes, students use remote clickers to answer paper tests.)
This would enable AIMS to be administered in early May, after everything has been taught, and allow student performance to be evaluated both to assess teaching effectiveness and to determine whether particular students are ready for the next grade.
Writing, which generally has higher test scores anyway, would continue to be done early to allow for manual grading.
We should extend this system into high school. The exit-level AIMS test is so easy that some 10th-graders pass it handily, yet so demanding that others are still falling short by 12th grade.
This contradictory result tells me we’re not holding students sufficiently accountable for their learning – expecting too little of some and waiting too long before expecting too much of others.
Rather than a single graduation test, subject-area testing could continue into high school, with scores factoring into decisions about advancement and graduation.
Programs with more rigorous independent testing, like International Baccalaureate programs, could opt out.
We need to be careful when implementing expanded test-based accountability systems. In my classes, tests are neither my only means of assessment nor my preferred one.
Tests have strengths and significant limitations. We like them because they generate specific numbers and can be standardized, so all students take the same test and are graded on the same basis.
AIMS is full of multiple-choice questions because they’re cheap to grade. That should also give us pause.
Learning has multiple dimensions that go beyond what a multiple-choice test or single essay can capture. However, we do need a statewide system that helps us better evaluate student learning and school and teacher performance in a manner that helps our entire system improve.
Reconfiguring AIMS would be a good place to start.
Dave Wells teaches at Arizona State University. The views are his own. E-mail: Dave@MakeDemocracyWork.org