Aurora Institute

The Power of Deep Discussions around Student Work

CompetencyWorks Blog

Author(s): Laurie Gagnon

Issue(s): Issues in Practice, Activate Student Agency



Originally posted on September 15, 2014 for the Center for Assessment’s Reidy Interactive Lecture Series.

During the first week of August, thirteen educators from five states gathered for a three-day scoring institute as part of the Innovation Lab Network’s Performance Assessment project. The goals of the institute included attaining reliable scoring on the performance assessment the teachers had field tested in spring 2014 and informing the design of the emerging national task bank and accompanying resources to support implementation of tasks.

I had the privilege of co-facilitating the English Language Arts group. As we discussed the rubric and the annotated anchor work samples, and practiced scoring student work, the group gained a common understanding of the elements of the rubric and a level of confidence about how to apply them to student work. In the course of the three days, several themes emerged that underscore some guiding principles for implementing performance assessment.

Collaboration builds and sustains understanding of the standards. Everyone had used the rubric to score student work from the same performance assessment task, yet each teacher brought his or her own nuanced ideas of how to determine which descriptor best fit a student’s work. One shift in thinking was the realization that a solid “3” could be an “A paper.” A rubric truly aligned to core competencies and anchor standards reflects the shifts intended with the Common Core, and it was an important “ah-ha” to see how to avoid the trap of norming to the group of students. A strong paper does not necessarily go beyond the competency or standard. One participant reflected:

“I was able to see us adopt a common language for expectations in student writing and the evaluation of that work. Seeing the way we came together in agreement about what proficient looks like, as well as the other categories of the rubric, demonstrated the value in performance tasks as a measure and the benefit of teachers working together in developing common expectations for student analysis. I loved seeing essays that came out of multiple classrooms have similar characteristics of effectiveness and struggle.”


Rubrics are a teaching and learning tool, not just a scoring tool. Investing time to build understanding of what the rubric criteria look like in actual student work can illustrate learning targets and help benchmark student progress over time on different types of tasks. One participant reports that she and the other English teacher at her small school agreed to use the QPA literary analysis rubric as a common English rubric. They used a class period to discuss the rubric and explain how it would be applied to student work. The students then scored their own essays and she reports, “Interestingly, the majority gave themselves much higher scores than we would have at the institute. I have set aside a time next week to re-teach the rubric in the hope that my students will assess their own papers with more scrutiny…I am grateful for the time we spent together because I feel confident to help my students understand their essays.”

Teacher and student voice and choice are keys to engagement. All three teachers implemented the same task in different ways. Determining what is relevant to the teacher and to his or her students helped make the task engaging. In our suggested revisions to the task itself, we found that more flexibility could be allowed in how students demonstrate the standards while still eliciting evidence of the target standard.

The process of implementing a performance assessment, engaging in collaborative conversations about the results, and making decisions about next steps for instruction and future assessment design is itself a performance assessment for teachers.

It would be wonderful if every educator had the opportunity to engage in three days of discussion and scoring of student work with colleagues; however, it is not feasible to bring every teacher to Stanford for three days. This leads us to the question: What is possible as we seek to bring rich learning experiences that foster assessment literacy and the ability of all teachers to effectively link assessment and instructional practices? I will begin to address this question in a related post, where I will connect lessons from the ILN scoring institute to the past two years of significant professional development investment in New Hampshire.


Laurie Gagnon is the Director of the Quality Performance Assessment Program (QPA) at the Center for Collaborative Education (CCE) in Boston, MA. Laurie is a key designer of the QPA model, which she has worked on since its inception in 2008, and in her current role she is leading the expansion of the program, which has done work in over a dozen states.

During the research phase she authored a qualitative study about learning from performance assessment entitled Ready for the Future: The Role of Performance Assessments in Shaping Graduates’ Academic, Professional, and Personal Lives. She was also a contributing author to Quality Performance Assessment: A Guide for Schools and Districts, which describes the QPA process, designed to benefit the entire learning system by keeping student learning at the center and engaging educators in designing practices that align curriculum, instruction, and assessment based on evidence of what and how students are learning.