Aurora Institute

What Do We Know About the Implementation and Outcomes of Personalized, Competency-Based Learning? A Synthesis of Research from 2000 to 2019

CompetencyWorks Blog

Author(s): Carla M. Evans, Erika Landl, and Jeri Thompson

Issue(s): Issues in Practice, Evaluate Quality


Cover of the Journal of Competency-Based Education

States and districts interested in transitioning to personalized, competency-based education models often wonder what the research says about these systems-change approaches. For example, what are the factors that serve as facilitators or barriers to K-12 competency-based education implementation? What are identified academic and nonacademic outcomes for K-12 students?

We set out to examine these questions using the last twenty years of research on personalized, competency-based education systems. The resulting article, Making sense of K-12 competency-based education: A systematic literature review of implementation and outcomes research from 2000 to 2019, was recently published in the Journal of Competency-Based Education. It is the first systematic review and integration of this body of literature.

Our goal was two-fold: (1) examine the research on personalized, competency-based reforms intended to reshape traditional understandings of what, when, where, and how students learn and demonstrate academic content knowledge and skills; and (2) distill those findings for practitioners, policymakers, and researchers. Our synthesis included 25 peer-reviewed studies and unpublished reports.

Overall Comments

We found that it was difficult to isolate research on the implementation and outcomes of K-12 competency-based education (CBE) in some “pure form,” because the definitions of CBE, personalized learning, student-centered learning, and proficiency-based education are interrelated and somewhat overlapping. Rather than trying to decipher a “pure” form of personalized, competency-based education, we suggest that the field may be better served by coalescing around a common continuum of practices, spanning more traditional to more competency-based models, organized around the seven elements of CBE.

Assessment of student learning—one key element of CBE—was noticeably absent from most of the studies reviewed. This absence is striking considering that determining competence is fundamentally an assessment decision. One hypothesis as to why more studies did not describe CBE with reference to assessment is that schools and teachers may not see assessment as part of the initial wave of implementing personalized, competency-based practices. If assessment is not a primary focus of change efforts at the outset, teachers may continue to use previously established assessments to evaluate student competency.

Findings Related to Implementation Facilitators and Barriers

Much is known about the barriers and facilitators to implementing CBE in K-12 schools. The factors that most strongly facilitate CBE implementation are those that (a) provide teachers and students with clear procedures and tools for identifying where students stand along a learning progression and (b) provide supports that allow students to reach grade-level expectations at their own pace, in their own place, and with their own level of personalization. Barriers to CBE implementation across studies tend to highlight how the reform requires changes to traditional education practices, such as those found in instruction, grading, and assessment.

Findings suggest that practitioners should not simply try to avoid the barriers to implementation and focus only on the facilitators. Instead, barriers and facilitators can be seen as two sides of the same coin. Understanding why certain factors act as barriers in some contexts and facilitators in others, or in the same context at different stages of implementation, can help practitioners identify which aspects of the work can accelerate (or derail) implementation and focus attention on those key factors during planning.

Findings Related to Student Outcomes

Results from the synthesis of the literature suggest that our understanding of student outcomes in relation to CBE implementation is relatively weak, in both academic and nonacademic domains. There are several reasons for this lack of clarity, including the different ways CBE has been defined across studies and the scarcity of rigorous study designs.

For many, the promise of CBE and related practices is that they will improve student achievement and minimize equity gaps. However, in most of the studies reviewed, the evaluation of student outcomes was either absent or a secondary consideration. Some studies offered inferences or hypotheses about observed or perceived student outcomes based on survey results, focus group discussions, or the researchers’ observations, but these were not the primary focus of the research.

Implications of Research Synthesis for Advancing the Field

One key implication of the research synthesis is that CBE is not an all-or-nothing approach. CBE comprises many elements that are implemented to varying degrees. Thinking about CBE approaches along a continuum of implementation eschews a binary view of schools as either fully competency-based or not, and instead recognizes that schools typically implement a range of practices, some more aligned with competency-based elements than others. Understanding the degree of implementation and its effects on student outcomes can give practitioners a clearer sense of what is essential in different contexts to realize the equity goals at the center of reform efforts.

Second, the review identifies gaps in the existing knowledge base. For example, there is a need to better understand how assessment of, for, and as student learning fits within these models; how schools think about and approach measuring competency; and the nature of assessment in competency-based systems. If determining competence is fundamentally an assessment decision, then there is much left to explore about assessment in CBE approaches. There is also a gap in the existing literature with respect to theories of action supporting competency-based systems. Designing theories of action for CBE approaches would help elucidate the connections among the key elements of CBE, the intended role of assessment in a competency-based system, and the mechanisms by which assessment promotes student agency, equity, and other desired outcomes.

Third, this research synthesis identifies questions that need further research. For example, a fair amount of research has been conducted on the factors (facilitators and barriers) that affect CBE implementation. We know from this research that barriers can turn into facilitators that accelerate and improve the quality of implementation. What we do not yet know, however, is how barriers and facilitators relate to different profiles of implementation. If a school or district implements the key elements of CBE to varying degrees along a continuum, thereby creating a profile of implementation, are certain barriers or facilitators more relevant than others to specific profiles? Do the barriers and facilitators change or become more complex in later stages of implementation?

Additionally, much remains unknown about the effects of CBE on students’ academic and nonacademic outcomes and how those effects may vary by prior achievement, background characteristics, grade span, content area, and so on. Given that equity is a central tenet of CBE systems-change approaches, exploring the perceptions and effects of CBE approaches among marginalized student populations seems especially relevant.

Learn More


Carla M. Evans, Ph.D., is an Associate at the National Center for the Improvement of Educational Assessment (Center for Assessment). Carla’s research focuses on the implementation of assessment and accountability policies and their impacts on teaching and learning. She is interested in policy research related to innovative assessment and accountability systems, competency-based education, and performance-based assessments (email [email protected], Twitter @CarlaMEvans).

Erika Landl, Ph.D., is a Senior Associate at the National Center for the Improvement of Educational Assessment (Center for Assessment). Erika strives to develop innovative, user-friendly frameworks and procedures that operationalize best practice related to the design, implementation, and validation of assessments and accountability systems. She works nationally to help states articulate coherent, defensible theories of action aligned to state goals and policy initiatives.

Jeri Thompson, Ed.D., is a Senior Associate at the National Center for the Improvement of Educational Assessment (Center for Assessment). Jeri combines her knowledge of educational systems, including assessments, curriculum, and instruction, to offer states and districts guidance and support for both assessment and accountability purposes. She provides leadership in designing effective performance assessments and rubrics, facilitating deep understanding of cognitive rigor, scoring and analyzing student work, and deepening understanding of assessment and data literacy.