I had a sense of dread as I flew to Colorado to join the National Center for the Improvement of Educational Assessment for its annual Colloquium on Assessment and Accountability Implications of Competency and Personalized Learning Systems. A room full of experts on measurement? I was prepared to have any ideas I might have about what assessment looks like in a fully developed competency-based system destroyed in a Terminator-like fashion.
Instead what I found was a room of incredibly thoughtful, creative, forward-thinking people who are willing to explore along with all of us how we might organize a system that keeps the focus on learning while using discrete and meaningful mechanisms to ensure rigor and equity. Along with myself, Ephraim Weisstein, founder of Schools for the Future, Maria Worthen, Vice President for Federal and State Policy at iNACOL, and Laura Hamilton of Rand were invited to the Colloquium to kick off the conversation. My brain started churning as I listened to the presentations from Kate Kazin, Southern New Hampshire University; Samantha Olson, Colorado Education Initiative; Christy Kim Boscardin, University of California, San Francisco; and Eva Baker, CRESST.
And then my brain went into overdrive listening to the insights of the team of assessment experts as they sorted through the conversation, explored different options, and identified where there was opportunity to create a system that generated consistency in determining levels of learning. It would be a system in which credentialing learning generates credibility, a system that allows us to trust when a teacher says a student is proficient, providing us with real confidence that they are, in fact, ready for the next set of challenges.
Some Big Take-Aways
Below are some of the big take-aways that Ephraim, Maria, and I came away with.
1. Get Crystal Clear on the Goal: It’s critical for the field and competency-based districts and schools to be explicit about their learning targets (however they might be defined and organized) so results can be evaluated and measured. There are a variety of ways of structuring competencies and standards, and we need to think about the ways in which we can measure them (or not).
2. Consider Applying Transparency to Designing Assessments: We all operate with the assumption that summative assessment items have to be hidden in a black box. However, we could make test items transparent – not their answers, of course – but the questions themselves. Consider the implications—lower costs, more sharing, more opportunity for the stakeholders to understand the systems of assessments. It’s worth having an open conversation about the trade-offs in introducing transparency as a key design principle in designing the system of assessments to support competency education.
3. Understand Implications of Grain Size: The grain size of learning targets needs to be better understood and defined. We want to make sure it is meaningful to students, helpful to teachers in thinking about the next instructional step, and that we understand the implications for assessment. There are always reasons to get more granular, especially in increasing the ability to measure and provide more fine-tuned feedback. However, there are trade-offs that we need to take into consideration, including the learning experience for students, cost, and determining how the data would really be used.
4. What is the Primary Job of Assessments?: Over the last decade, state summative assessments have become nearly synonymous with accountability. In reality, accountability is only one of the jobs that assessment can do. We know we need to continue to monitor the degree to which the education system is equitable in ensuring that traditionally under-served students are fully benefiting from the system. We might also consider jobs such as monitoring the degree to which teachers are calibrating proficiency, improving instruction and interventions, or determining when students are ready to move on to the next level of studies as key jobs of the assessment system. In piloting new systems of assessments to support competency education, we need to evaluate these systems based on the specific job they are supposed to do—but also not put too much pressure on them to do more than that. The flip side of that, however, is that assessments need to be part of a strong theory of action that includes the teaching and learning design, related supports, and accountability.
5. Expand the Field’s Knowledge of Formative Assessment: We really need to get a handle on how best to develop and use formative assessments. We know that competency education doesn’t work well unless students are getting timely feedback that helps them improve their learning. So we need to make sure, as a field and as districts, that we are utilizing state-of-the-art knowledge about formative assessments.
6. Incorporate the Best of What We Know about How We Can Help Students Learn: Learning progressions (the ways we can help students progress from one big concept to the next as compared to the learning continuums that are the set of standards that define what we want students to know and be able to do) are an important piece of the puzzle, but there is a lot of work yet to do to make them a useful part of the system. There are varying opinions on the evidence base and promise of this work. Funding agencies can make a difference by investing more money in understanding and assessing the validity of learning progressions. If we can really inform our instruction (and pre-service and professional development) based on how students learn, it will have huge effects for educators and students.
7. Can Ontological Maps Help Us Build the “Learn to Learn” Skills?: Eva Baker introduced us to ontological maps, which are being developed based on how all the concepts and skills in a discipline relate. They can guide both how we shape instructional learning experiences and assessments and open the door to personalized pathways based on what we know about how students learn (i.e., moving beyond linear ways of understanding and sequencing learning). The maps can be used for mathematics, but also for problem-solving and other higher order skills. Could they be a way for us to begin to build our capacity to support students in developing habits of learning, as well?
8. Measurement Experts Can Be Our Friends: At times, it feels like the measurement experts sit hidden in a castle somewhere telling us what our assessment systems can and cannot be. But the team at the Center totally busted through my assumptions. We need to make sure that we engage experts in assessment early in the conversation of thinking through what our system of assessments should look like. Assessment experts can be incredibly thoughtful partners in thinking through the opportunities and challenges for assessment for competency education. We went into this meeting with some assumptions about how comparability and validity would constrain the conversation, but found instead a much deeper and broader understanding of the elements considered in developing assessments. We all left with optimism about the opportunities for developing assessment strategies that can be useful for building the capacity of the system to help students learn as well as contribute to building a strong set of policies.
Thanks to everyone at the Colloquium—it was an incredibly insightful and challenging two days. And I feel all the more prepared to keep trying to think through what the system might look like once we help federal policymakers get past the idea that the only way to design systems of assessments starts with state summative assessments.