Aurora Institute

iNACOL and KnowledgeWorks Submit Public Comments on Innovative Assessment Pilot Under ESSA

Education Domain Blog

Author(s): iNACOL Center for Policy Advocacy

Issue(s): Federal Policy, Harness Opportunities in ESSA

The Every Student Succeeds Act (ESSA) presents states with a historic opportunity to redesign K-12 education around a new definition of student success. States can create space and support for personalized, competency-based learning with next generation accountability systems and innovative new systems of assessments, and build capacity with modernized educator development. (Learn more about the new flexibility in ESSA for states to redesign systems of assessments around student-centered learning here and here.) Over the next year, states will engage in critical design conversations as they craft a new vision for their education systems under ESSA.

Central to these design conversations is how systems of assessments can support personalized instruction and enable competency-based learning progressions. ESSA offers a new opportunity for states wishing to pilot new, innovative systems of assessments in a subset of districts before scaling to statewide use for accountability and reporting purposes: the Innovative Accountability and Assessment Demonstration Authority (Innovative Assessment Pilot).

The U.S. Department of Education recently issued proposed regulations for the Innovative Assessment Pilot, soliciting input and feedback from the public. In response, iNACOL and strategic partner KnowledgeWorks jointly submitted comments on the draft rules. The letter praises the Department’s thoughtful analysis of implementation challenges and offers solutions to prevent unnecessary barriers to piloting innovative new approaches that seek to improve equity.

The full text of the letter is below:

September 9, 2016

Ann Whalen
Senior Advisor to the Secretary
Delegated the Duties of Assistant Secretary for Elementary and Secondary Education
U.S. Department of Education
400 Maryland Avenue SW., Room 4W311
Washington, DC 20202

Docket ID: ED-2016-OESE-0047

NPRM: ESSA Innovative Assessment Demonstration Authority

Dear Ms. Whalen:

We are writing to provide comment on the proposed Innovative Assessment Demonstration Authority priorities for the Elementary and Secondary Education Act, as authorized by the Every Student Succeeds Act of 2015, published in the Federal Register on July 11, 2016. We appreciate the opportunity to provide input on this topic.

KnowledgeWorks and iNACOL have partnered extensively over the past few years to advance equity in education policies that enable competency-based, personalized learning. Together, we have a deep reach into the field of K-12 student-centered learning. With over 5,000 front-line innovator members, iNACOL is catalyzing the transformation of K-12 education through policy advocacy, quality standards, and communities of research and practice. KnowledgeWorks develops the capabilities of educators to implement and sustain competency-based and early college schools, works with state and federal leaders to establish aligned policy conditions and provides national thought leadership around the future of learning.

Overview of Comments

Our organizations applaud the Department for carefully balancing its commitment to flexibility and innovation with the need to ensure state assessment systems developed under this authority are of high quality. We appreciate the flexibility for states to pursue unique visions for better systems of assessments and we strongly support the Department’s set of selection criteria for evaluating state readiness for this important opportunity.

While we are pleased with the Department’s continued commitment to ensuring this opportunity is of high quality and achievable for interested states, we believe there are several improvements to the proposed regulations that are necessary to prevent unnecessary barriers to innovation. Our specific recommendations below address the following sections of the NPRM:

  • Section 200.76 – Innovative Assessment Demonstration Authority: Definition of “Demonstration Authority Period”
  • Section 200.76 – Innovative Assessment Demonstration Authority: Definition of “Innovative Assessment System”
  • Section 200.77 – Demonstration Authority Application Requirements: Comparability
  • Section 200.79 – Transition to Statewide Use: Individual vs. System of Assessments
  • Section 200.79 – Transition to Statewide Use: Transition Evaluation
Section 200.76 — Innovative Assessment Demonstration Authority: Definition of “Demonstration Authority Period”

The proposed regulations would define “demonstration authority period” to clarify that an SEA must be ready to implement the innovative assessments in at least some of its LEAs at the time of its application and must be ready to use the system for purposes of accountability and reporting.

We believe states will need significant time and resources to design and build their systems before they are ready to implement in a group of districts. Simply requiring states to be ready to implement at the time of application without the promise of extensive support will likely deter states from participating and limit the demonstration authority’s potential to build knowledge of alternative approaches to assessment. We are pleased with the Department’s proposal to provide technical assistance to interested states in the background section of the NPRM, and think this is an important first step. We strongly encourage the Department to incorporate two additional strategies for bolstering state readiness:

  • Establish a Conditional Approval Process Prior to Final Approval: The Department should offer states the opportunity to participate in a two-step peer review process that consists of a conditional approval followed by an official approval to begin implementation. States seeking conditional approval must submit a compelling state plan for the design and implementation of an innovative assessment along with evidence of state readiness aligned to the selection criteria included in the application. The peer review panel would use this opportunity to signal to the state that it is moving in the right direction and indicate what additional evidence would be necessary to grant final approval. A conditional approval would give states the confidence to invest the time and resources in the final steps of building and field testing its assessment system. Once a state completes this process, it would need to provide satisfactory evidence that the system is ready to use for purposes of accountability and reporting, as well as any other evidence required to meet all selection criteria so the peer review panel can recommend final approval. This two-step process would create a partnership between the Department and interested states, providing a workable pathway to application that increases transparency for stakeholders at all levels while raising the quality of state plans.
    • Contingency for eligible applications exceeding the 7-state cap: In order to provide conditional approval of a state’s application, we believe that it would be necessary to structure the application review as a binary determination of whether a state meets or does not meet the criteria, rather than simply granting the authority to the top 7 states on a slate of qualitative scores. However, because the Demonstration Authority is limited to 7 states for the first 3 years, a contingency plan is needed in the event that the number of high-quality applications that fully meet all selection criteria exceeds this cap. We propose that the Department assign qualitative scores to rank the applications only in the event that the number of high quality applications exceeds the cap, in order to select the 7 best applications.
  • Create a Process for Identifying Interested States: The Department should ask states, as part of their consolidated plan under ESSA, to indicate if they are interested in applying to be part of this demonstration authority and would want to receive support for planning and design work for this purpose. As part of this initial process, states should provide an overview of their preliminary thinking about how they would envision the demonstration authority operating within their state. This request for information and declaration of initial interest should not be binding since many states are still in the exploratory phase, but should encourage states to share a preliminary vision for building better systems of assessments that might require flexibility under the demonstration authority. The Department could use this information to structure targeted technical assistance and to prioritize resources to support states in the development of high quality plans aligned to their vision.
Section 200.76 — Innovative Assessment Demonstration Authority: Definition of “Innovative Assessment System”

The proposed regulations would clarify that any innovative assessment design may be used so long as it meets applicable requirements and produces an annual summative determination for each student of grade-level achievement aligned to state standards. While we support this proposal and believe it is important for driving equity and rigor of the system, we recommend the Department make the following two clarifications to ensure states have the ability to produce information on each student’s grade level and current level of performance:

  • Clarify that Assessments Approved Under the Demonstration Authority May Use Items Above or Below A Student’s Grade Level – The Department should clarify that states applying under the Demonstration Authority have the same flexibility afforded to all states in Section 1111(b)(2)(J)(II)(bb) to develop assessments that measure a student’s grade level and current performance level using items that are above or below a student’s grade level. Further, the Department should clarify that states may use information on a student’s current performance level in its state accountability system required in Section 1111(c). Without this clarification, states interested in building student-centered accountability systems that emphasize growth to proficiency would have to make a false choice between the flexibility in Section 1111(b)(2)(J)(II)(bb) and the Innovative Assessment and Accountability Demonstration Authority.
  • Clarify that this Flexibility Applies to Any Type of Innovative Assessment System, not just Computer Adaptive Assessments – The Department should clarify that the flexibility to incorporate assessment items that are above or below grade level is not limited to computer adaptive assessment designs. We believe it was Congress’ intent to encourage innovation by including this flexibility in Section 1111(b)(2)(J)(II)(bb), and as such, should also apply to all assessment systems approved under the Innovative Assessment and Accountability Demonstration Authority.
Section 200.77 — Demonstration Authority Application Requirements: Comparability

The statute requires an innovative assessment system to generate results that are valid, reliable, and comparable to the results for students on the statewide assessment. The proposed regulations would require states to submit an annual plan to determine comparability using one of the following methods: using the state assessment in the grade span; administering the state assessment to a sample of students; using a significant number of common items from the state assessment; or using an alternative method that is equally rigorous and statistically valid.

While we appreciate the Department’s intention to not constrain states to one approach for evaluating comparability, we are concerned that the fourth option would ultimately be determined by a peer review panel that must decide what qualifies as an “equally rigorous and statistically valid” standard. We believe it is important for the Department to clearly define the expectations for demonstrating comparability in its final regulation.

As such, we support the recommendations for evaluating comparability submitted by Susan Lyons, Ph.D. and Scott Marion, Ph.D., at the National Center for the Improvement of Educational Assessment (Center for Assessment) and endorsed by nearly a dozen of the nation’s leading measurement specialists with expertise in comparability. These recommendations emphasize the following points:

  • The Department should not require specific methods for evaluating comparability because such evaluations will be context dependent, but the Department should require states to provide strong evidence of comparability that will be reviewed by a peer review panel.
  • The most compelling evidence of alignment for the two assessment systems will be the alignment of each system to the content standards rather than alignment of one assessment system to the other.
  • Comparability of assessment results should focus on the achievement level classifications across the two assessment systems.
  • In the event that a state intends to pilot an innovative assessment system with a group of districts, the state should include evidence of within-pilot district comparability as well as evidence of pilot to non-pilot district comparability. This would ensure that innovative assessments are not only comparable to the statewide test but also comparable to each other, preventing local assessments that differ significantly in difficulty and content coverage.
Section 200.79 — Transition to Statewide Use: Individual vs. System of Assessments

The proposed regulations would clarify that the innovative assessment system and each assessment in the system must meet all of the requirements of Section 1111(b)(2) and the application requirements in order for a state to transition out of the demonstration authority and use its assessment system for purposes of Section 1111(b)(2). We believe this proposal is not consistent with the statute, which requires the system of assessments as a whole to meet all of the requirements of Section 1111(b)(2).

We believe it would be impractical if not impossible to require this information for each individual assessment in the system. Individual assessments that are part of a system are likely to assess portions of standards as opposed to the comprehensive set of standards covered in the state test. As such, they will not be able to demonstrate comparability to the full breadth and depth of the state tests. Furthermore, states would have to significantly increase the amount of required testing time for districts participating in the innovative assessment system in order to satisfy this requirement. This would likely discourage districts from participation.

Section 200.79 — Transition to Statewide Use: Transition Evaluation

For purposes of evaluating whether a state may use its innovative assessment system for purposes of Section 1111(b)(2) beyond the Demonstration Authority period, the proposed regulations would require an SEA to demonstrate that it has examined the statistical relationship between student performance on the innovative assessment in each subject area and on the other remaining indicators in the accountability system for each grade span in which an innovative assessment was used. Further, a state would have to demonstrate how the innovative assessment that would be used to produce results for the Academic Achievement indicator affects meaningful differentiation of schools. While we believe this information is important to consider, we encourage the Department to make the following clarification to its proposal:

Clarify that Information on Student Performance Alone Should not Determine Whether a State May Transition to Statewide Use of its Assessment System for Purposes of Section 1111(b)(2). We agree that the Department, the peer review panel, and participating states should take into account student performance on required accountability indicators as they consider the full picture of change throughout the demonstration authority period. That said, the Department must be careful not to imply that an assessment system on its own will drive improvements in student performance. The goal of the transition evaluation must be to determine whether the assessment system produces a valid and reliable inference of each student’s performance level that can inform decisions for accountability purposes. Changes in student performance, however, are the result of state and district teaching and learning strategies. We believe the Department should clarify that the Secretary and the peer review panel should consider the statistical relationship between student performance on the innovative assessment in each subject area and on the other remaining indicators in the accountability system for each grade span in which an innovative assessment was used, but that this information alone should not determine whether a state may be allowed to continue use of its innovative assessment system beyond the demonstration authority period for purposes of Section 1111.

Thank you for your consideration of these important recommendations. We look forward to working with the Department and interested states as they explore this opportunity and begin to build better assessment systems that drive student-centered learning.



Lillian Pace

Senior Director of National Policy



Maria Worthen

Vice President, Federal & State Policy