
Using Technology-Enhanced Items Effectively to Close Student Achievement Gaps

CompetencyWorks Blog

Author(s): Aditya Agarkar

Issue(s): Lead Change and Innovation, Issues in Practice



This post was originally published at Getting Smart on January 17, 2015.

Don’t you think it’s time we retired those Scantron machines? Since the 70s, they’ve been trusted in hundreds of school districts across the country to tally the scores of students who filled pink ovals with #2 pencils. The Scantron machine heralded the pervasive use of multiple-choice questions in the decades that followed. Today, with all that we know about how to assess a student’s mastery of a topic, MCQs are an anachronism — like cassette tapes and typewriters. As readers of this blog are well aware, the education sector is undergoing the same technological innovation that has swept through businesses and households — and the rate of change is accelerating.

With all this technological progress underway, why are MCQs still in use? One reason is that they are the default question format for many of the technology-assisted tools that, when introduced, made the assignment and grading process much more efficient and scalable. However, MCQs are simply not the best way to test a student's knowledge because they shed little light on the student's ability to apply, integrate, and synthesize that knowledge. Information gleaned from an MCQ test can often be misleading because students can guess the right answer or game the system by eliminating the wrong choices, even if they don't understand the question or know the correct answer.

To counteract these shortcomings, students must answer a large number of questions before their proficiency in a topic can be reliably assessed. For this reason, MCQs are adequate for summative assessments, where scores at an overall subject level are needed. Conversely, MCQs have little value when it comes to formative assessments, which strive to illuminate how students learn and apply their new knowledge.
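To put a number on that, here is a minimal sketch in plain Python (independent of any assessment platform; the four-option format and the 70% passing bar are illustrative assumptions) of how quickly the odds of passing by pure guessing shrink as the item count grows:

```python
from math import comb

def p_guess_at_least(k: int, n: int, p: float = 0.25) -> float:
    """Probability of answering at least k of n four-option MCQs
    correctly by pure guessing (a binomial tail)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

# Chance of reaching a 70% "passing" score purely by guessing:
for n in (5, 10, 20, 40):
    k = -(-7 * n // 10)  # ceil(0.7 * n)
    print(f"{n:2d} items: P(score >= 70% by guessing) = {p_guess_at_least(k, n):.2e}")
```

On a five-item quiz, a guesser clears 70% about 1.6% of the time; at 40 items the chance is vanishingly small. That is why MCQ scores become trustworthy only in bulk, which suits end-of-unit summaries far better than day-to-day formative checks.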

More importantly, MCQs tell us nothing about how students arrived at the correct answer—nor do they provide granular performance insights or a deeper understanding of students' problem-solving strategies. What steps did they follow? If they answered a question incorrectly, what was the root cause? What was the knowledge gap or misconception? And—most importantly—why is there a knowledge gap? This gets to the heart of where MCQs fail: they emphasize "knowing" over "doing."

Today, educators need to ensure students can demonstrate more than just knowing. The Common Core standards adopted by most states are meant to be evidence-based and stress students’ ability to both “know” and “do.” There is clearly a need for assignments and assessments (aligned to the CCSS) that are focused less on testing memory skills and more on testing a student’s synthesis of knowledge — ideally, providing a window into how a student employs higher-order thinking skills such as problem-solving and critical thinking.

This is where new kinds of formative assessment tools and question types can help. These tools enable teachers to raise the level of interaction and response in their formative assessments, and thereby gain insights into student mastery that classic multiple-choice formats cannot deliver. These new approaches, which feature technology-enhanced items (TEIs), simultaneously drive higher-order thinking skills and inform instruction. But how does this work?

By switching from classic MCQs to TEIs, teachers can transform their assessments, instruction, and grading process. They can craft assessments that are more challenging and engaging. And they can save time, compared to paper-based approaches. Assessments that leverage TEIs soundly prepare students by mirroring the consortia testing experience, which requires them to showcase their problem-solving and analytic skills. Technology-powered formative assessments can also play a crucial role in informing both teachers and students about progress and mastery at a time when essential tweaks can still be made to the curriculum. These adjustments can help ensure students achieve targeted standards-based learning goals every day.

When I tell teachers about this, they often say that formative assessments are gaining acceptance and momentum in the classroom. That is no surprise: pushback against end-of-year assessments and ineffective testing methods is one of the most important trends in US education.

Many of them are enthusiastic about the potential of such assessment platforms and TEIs. But they often have the same question: "How can we incorporate our most crucial learning objectives and standards into the existing platforms and the TEI format?"

At Edulastic we're creating a next-generation assessment platform that lets students perform daily practice in a learning environment that strives to replicate the online consortia testing experience. Teachers are offered a variety of question formats that test a wider range of thinking skills and provide more in-depth insight into students' thought processes. This new level of assessment includes features such as a digital scratchpad and freehand drawing tools that allow students to show their work — making the once-difficult task of measuring student mastery more transparent. Moreover, teachers can collect and analyze this student performance data to derive more meaningful intervention, remediation, and feedback as they implement improved personalized and blended learning strategies.

What do these questions look like? A good example is a graph-plotter question, where the student answers by drawing a graph online. Unlike an MCQ, it is almost impossible to arrive at the correct answer through guessing or gaming strategies such as eliminating the wrong choices. The question itself is highly interactive and tests not only the student's mathematical skills but also his or her graphing and data-visualization skills.
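What might automatic scoring of such an item look like? Here is a minimal sketch in Python; the answer key (a line y = 2x + 1), the tolerance, and the response format are assumptions for illustration, not Edulastic's actual scoring logic:

```python
# Hypothetical scoring routine for a graph-plotter item. The answer key
# (y = 2x + 1), tolerance, and response format are illustrative assumptions.

def score_line_plot(points: list[tuple[float, float]],
                    slope: float, intercept: float,
                    tol: float = 0.1) -> bool:
    """Accept the response only if at least two distinct x-values are plotted
    and every point lies on y = slope * x + intercept within tolerance."""
    if len({x for x, _ in points}) < 2:
        return False  # a single point (or vertical stack) does not determine a line
    return all(abs(y - (slope * x + intercept)) <= tol for x, y in points)

print(score_line_plot([(0, 1), (1, 3), (2, 5)], slope=2, intercept=1))  # True
print(score_line_plot([(0, 1), (1, 2)], slope=2, intercept=1))          # False: (1, 2) is off the line
```

Unlike bubbling in a letter, there is no finite set of wrong options to eliminate: the student has to produce coordinates that actually satisfy the relationship.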

Teachers can also use evidence-based items, such as a Hot Text question that students answer by highlighting words or phrases in a passage. An "information-dense" approach such as a multi-part question can help diagnose specific skill gaps. For example, the first part of the question may ask the student to enter an algebraic equation, while subsequent parts ask the student to enter its solution. You can see more examples on our blog.
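To make the diagnostic value of a multi-part item concrete, here is a small hypothetical sketch; the word problem, the answer key (2x + 1 = 11, so x = 5), and the feedback strings are mine for illustration, not taken from any product:

```python
# Hypothetical multi-part item: part (a) asks for the equation modeling
# "twice a number plus one is eleven"; part (b) asks for its solution.
# Scoring the parts separately pinpoints where the breakdown occurs.

def diagnose(equation_ok: bool, solution_ok: bool) -> str:
    if equation_ok and solution_ok:
        return "mastery: models the problem and solves it correctly"
    if equation_ok:
        return "gap: sets up 2x + 1 = 11 but slips in the algebra"
    if solution_ok:
        return "gap: right number (x = 5) without a valid model; probe for guessing"
    return "gap: cannot yet translate the word problem into an equation"

print(diagnose(equation_ok=True, solution_ok=False))
# -> gap: sets up 2x + 1 = 11 but slips in the algebra
```

A single MCQ on the same problem would record only right or wrong, collapsing three very different gaps into one indistinguishable score.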

There has been significant progress in assessment development over the past year, although not all of it has focused on technology-enhanced assessments. The K-12 assessment in i-Ready from Curriculum Associates comes with K-8 instructional units in math and reading. DreamBox Learning covers K-8 math. McGraw-Hill's formative system Acuity was updated to include 400 performance tasks (and should soon work more seamlessly with the recently purchased Engrade). MasteryConnect makes assessment tools that allow teachers to quickly assess student progress and share those quizzes with other teachers. MasteryConnect also recently extended its reach by acquiring competitor Socrative.

Last summer, the leading social learning platform Edmodo launched Snapshot, a formative assessment system, and recently added free Common Core resource recommendations. Likewise, the open resource catalog OpenEd launched a formative assessment system. The Literacy Design Collaborative offers an open set of resources for writing across the curriculum that encourages more writing and improves the quality of writing assessment. LightSide Labs, recently acquired by Turnitin, joins a handful of essay feedback and scoring systems.

Of course, the acid test is how this technology works in the classroom. Damion Frye, special assistant for curriculum and instruction at Newark Public Schools, has been at the forefront of using TEIs in the classroom, and is cautiously optimistic. "An assessment with TEIs is a good way to ensure that our teachers are ready to make the transition to PARCC instructionally, and that the students are prepared for the rigor expected from these tests. They also mitigate the problems that teachers face: correctly identifying how the learning of each individual is progressing, who requires support, and what aspects of the lesson need reinforcement."

At this point, what we see on assessments looks like TEI 1.0. As the technology matures and becomes mainstream, we will begin to see TEIs that are closer to how academic concepts are applied in real life. In short, expect assessments to become more varied, less mundane, and more interesting overall. The result: reduced achievement gaps and better learning outcomes.


Aditya Agarkar is a co-founder and VP of products at Edulastic, an educational technology startup on a mission to make formative assessments more accurate and insightful to promote learning.