{"id":3350,"date":"2015-06-02T00:00:00","date_gmt":"2015-06-02T04:00:00","guid":{"rendered":"http:\/\/aurora-institute.org\/blog\/cw_post\/needed-partners-with-assessment-expertise\/"},"modified":"2020-02-05T12:55:10","modified_gmt":"2020-02-05T17:55:10","slug":"needed-partners-with-assessment-expertise","status":"publish","type":"cw_post","link":"https:\/\/aurora-institute.org\/cw_post\/needed-partners-with-assessment-expertise\/","title":{"rendered":"Needed: Partners with Assessment Expertise"},"content":{"rendered":"
I had a sense of dread as I flew to Colorado to join the National Center for the Improvement of Educational Assessment for its annual Colloquium on Assessment and Accountability Implications of Competency and Personalized Learning Systems. A room full of experts on measurement? I was prepared to have any ideas I might have about what assessment looks like in a fully developed competency-based system destroyed in Terminator-like fashion.

Instead, what I found was a room of incredibly thoughtful, creative, forward-thinking people who are willing to explore along with all of us how we might organize a system that keeps the focus on learning while using discrete and meaningful mechanisms to ensure rigor and equity. Ephraim Weisstein, founder of Schools for the Future; Maria Worthen, Vice President for Federal and State Policy at iNACOL; Laura Hamilton of RAND; and I were invited to the Colloquium to kick off the conversation. My brain started churning as I listened to the presentations from Kate Kazin, Southern New Hampshire University; Samantha Olson, Colorado Education Initiative; Christy Kim Boscardin, University of California, San Francisco; and Eva Baker, CRESST.

And then my brain went into overdrive listening to the insights of the team of assessment experts as they sorted through the conversation, explored different options, and identified where there was opportunity to create a system that generated consistency in determining levels of learning. It would be a system in which credentialing learning generates credibility, a system that allows us to trust, when a teacher says a student is proficient, that the student really is ready for the next set of challenges.

Some Big Take-Aways

Below are some of the big take-aways for Ephraim, Maria, and me.

1. Get Crystal Clear on the Goal: It's critical for the field and for competency-based districts and schools to be explicit about their learning targets (however they might be defined and organized) so results can be evaluated and measured. There are a variety of ways of structuring competencies and standards, and we need to think about the ways in which each can, or cannot, be measured.

2. Consider Applying Transparency to Designing Assessments: We all operate with the assumption that summative assessment items have to be hidden in a black box. However, we could make test items transparent: not their answers, of course, but the questions themselves. Consider the implications: lower costs, more sharing, and more opportunity for stakeholders to understand the system of assessments. It's worth having an open conversation about the trade-offs of making transparency a key design principle in the system of assessments that supports competency education.

3. Understand Implications of Grain Size: The grain size of learning targets needs to be better understood and defined. We want to make sure it is meaningful to students and helpful to teachers in thinking about the next instructional step, and that we understand the implications for assessment. There are always reasons to get more granular, especially to measure more precisely and provide more fine-tuned feedback.
However, there are trade-offs that we need to take into consideration, including the learning experience for students, cost, and determining how the data would really be used.

4. What Is the Primary Job of Assessments?: Over the last decade, state summative assessments have become nearly synonymous with accountability. In reality, accountability is only one of the jobs that assessment can do. We know we need to continue to monitor the degree to which the education system is equitable in ensuring that traditionally under-served students fully benefit from it. We might also consider monitoring how well teachers are calibrating proficiency, improving instruction and interventions, or determining when students are ready to move on to the next level of studies as key jobs of the assessment system. In piloting new systems of assessments to support competency education, we need to evaluate these systems against the specific job they are supposed to do, without putting pressure on them to do more than that. The flip side, however, is that assessments need to be part of a strong theory of action that includes the teaching and learning design, related supports, and accountability.

5. Expand the Field's Knowledge of Formative Assessment: We really need to get a handle on how best to develop and use formative assessments. We know that competency education doesn't work well unless students are getting timely feedback that helps them improve their learning. So we need to make sure, as a field and as districts, that we are drawing on state-of-the-art knowledge about formative assessment.

6. Incorporate the Best of What We Know about How We Can Help Students Learn: Learning progressions (the ways we can help students progress from one big concept to the next, as distinct from the learning continuums, the sets of standards that define what we want students to know and be able to do) are an important piece of the puzzle, but there is a lot of work yet to do to make them a useful part of the system. Opinions vary on the evidence base for, and the promise of, this work. Funding agencies can make a difference by investing more in understanding and assessing the validity of learning progressions. If we can really inform our instruction (and pre-service and professional development) based on how students learn, it will have huge effects for educators and students.