Aurora Institute

Summative vs. Formative Assessment: From Binary Choice to Continuum

CompetencyWorks Blog

Author(s): Don Weafer

Issue(s): Issues in Practice, Rethink Instruction, Create Balanced Systems of Assessments

This post originally appeared on the Great Schools Partnership blog on June 10, 2020.

The Covid-19 pandemic has thrown persistent educational problems into sharp relief. What many of these problems share is that they were always there, hiding in plain sight in our systems, mindsets, and practices. Remote learning favors privileged students with secure homes and reliable internet access? It already did. Grading rewards compliance and playing the educational game as much as mastery of content and skills? It already did. Our most at-risk students are hard to find, let alone support? They already were. Assessments don’t translate to environments outside of the classroom? They never did.

Assessment has been on my mind over the last year, and remote learning has highlighted the fact that much of what we do for assessment simply does not translate outside the tightly controlled environment of a traditional classroom. In particular, it has highlighted real dissonance around formative and summative assessment.

The main distinctions between formative and summative assessments are when they are given and how they are used. A quiz given before learning to find out what students already know is formative; the same quiz given at the end of the learning is summative. The purpose of the first quiz is to provide data to the teacher about what students do and don’t know so that the teacher can design or adjust instruction to improve student learning. The purpose of the second quiz is to evaluate the student’s level of learning. Educators aren’t confused about these distinctions, and students readily understand them. The problem is not understanding, but application.

The Effect of Ineffective Assessment

As schools attempt to improve their use of formative assessments in particular, some patterns emerge. Teachers who don’t use formative assessment effectively tend to make a few mistakes:

  • They think first in terms of how to grade formative assessments rather than how to use them.
  • They treat every piece of work that isn’t summative as formative.
  • They focus on how to make formative assessments “count” so that students have to do them.

The cumulative effect of these mistakes: Formative and summative become gradebook categories distinguished mostly by weight, rather than sources of information to guide learning.

A More Effective Approach

Teachers who use formative assessments effectively do a few things well:

  • They make formative assessments a routine part of learning.
  • They use information from those assessments transparently to make immediate decisions about instruction.
  • They are always clear—for themselves and for their students—about the purpose of every task they ask students to complete.

In these classrooms, students learn over time that formative assessments matter because they shape what the student will do next, and because evaluation of progress—by the teacher, by the student, or by peers—matters far more than whether the work is scored or even recorded. Teachers and students stop discussing how a task is graded and start discussing what good work looks like.

Is There an Even Better Way?

What if instead of thinking of assessment as either summative or formative, we started to think of it as a continuum? On one end of the scale, we’d have obvious formative assessments, such as informal teacher observations. On the other, some level of standardized summative testing will probably be mandated for the foreseeable future. But in between, we might focus less on those categories and more on learning. Here are five ways to make it happen:

  1. Make sure the purpose of student work is always clear both for the teacher and the student. Homework, in particular, might be assigned for practice, to prepare for class, or simply to complete work that can’t be finished during class time. None of these tasks are necessarily assessments at all, and shouldn’t be treated as if they are. And any consequence for not doing the work should depend entirely on its purpose.
  2. Count everything as formative until it isn’t. Recording student learning is good. It provides a record of progress towards knowledge and skills. Once students have built enough of a body of evidence toward mastery, teachers should be able to accurately describe each student’s current level of performance, without a separate, high-stakes assessment called “summative.”
  3. Use interim summative assessments. The fact that students practice their learning over a period of time does not mean that a teacher can only summatively assess all of their skills and knowledge at the very end of a grading period. For example, students working on argumentative writing might be assessed in an interim way on their ability to write claims, well before they write a final essay.
  4. Use summative assessment data formatively. One of the strongest practices educators can implement is to collectively evaluate summative assessment data, and then use that learning to inform future instruction. For instance, a cross-disciplinary team of teachers might come together to collaboratively score student writing and identify areas where the teachers’ expectations were unclear or where students demonstrated common areas of need.
  5. Allow for flexibility, collaboration, and student choice in how to demonstrate learning. Particularly in a time of remote learning, students will need to show what they know within the contexts of their environments, and those environments are very different.

As we move forward in schools that will include some measure of distance learning for the foreseeable future, we need to understand that all assessment is for learning. If it isn’t, then its purpose can only be for grading and sorting students. If there is one thing we should know from our experiments in remote learning, it’s that we don’t need schools to sort kids: their environments do that already.

Don Weafer Don Weafer is a senior associate with the Great Schools Partnership, where he works with high schools in Maine and New England, particularly on proficiency-based and personalized learning initiatives. His other professional interests include school transformation models, systems thinking in education, and supporting teacher capacity for change. Don has been an English teacher for sixteen years, most recently at Lake Region High School, in Naples, Maine, where he also served as a teacher-leader in the school’s improvement work. Don earned a BA in English and history from Bowdoin College and an MEd in literacy from the University of Maine.