The history of edtech shapes where we're going.
This is Part 1 of our Shaping the Future of Learning series, in which we explore how various innovations and trends have shaped, and will continue to shape, the educational landscape.
Edtech is having a moment.
It’s been instrumental in supporting continuous learning this past year, and experts predict it will remain a standard component of K-12 education, not to mention a critical tool to address unfinished learning.
As a cognitive scientist specializing in educational technology and artificial intelligence, I’m continuously studying how evolving edtech can best support teachers and students. To understand the exciting direction edtech is headed, let’s reflect on how far it’s come.
When gamified educational software entered schools in the 1990s and early 2000s, edtech wasn’t considered part of mainstream instruction. We had programs like Oregon Trail that were fun but only loosely aligned to learning objectives, and simplistic drill-and-kill programs like Math Blaster.
Computer technology was also nascent, and few schools had enough computers (or knowledge) to use software as a mainstream activity. Educators had to ration computer time, often using it to keep advanced or remedial students busy.
As computers got sleeker and smarter, so did the software.
Although some edtech still takes the same approach today as twenty years ago, the wide availability of hardware and the sophistication of software have made it a fundamental instructional tool for many schools.
And the best edtech now provides more than the games and glorified worksheets of the past, giving teachers immediate and actionable insight into individual students’ needs to guide their instruction.
For example, when MATHia first entered classrooms in 1992, we thought of it as software that was responsible for meeting students’ needs most of the time. When students got stuck, they called the teacher over.
We’ve found that it’s better to think of the software and the teacher as educational resources with different strengths. Teachers are better at motivating students and can respond flexibly to individual needs, but a teacher’s time is limited, so it needs to be used well. We’ve found that the students who call the teacher over are not necessarily the ones who can benefit most from the teacher’s help at that moment.
Instead, MATHia signals to the teacher (through LiveLab, our real-time dashboard) which students most need their help and what specific support they need. This signaling allows us to make the best use of the teacher’s time and helps make the software a fundamental part of the instructional approach.
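The details of LiveLab’s prioritization aren’t described here, but the underlying idea of ranking students by need can be sketched with a purely hypothetical heuristic. In this illustration, every name, field, and weight is invented: a dashboard might score each student by combining their recent error rate with how long they’ve been stuck, then surface the highest scorers first.

```python
from dataclasses import dataclass

@dataclass
class StudentState:
    name: str
    recent_error_rate: float  # fraction of recent problem steps answered incorrectly
    minutes_stuck: float      # time since the student's last productive step

def need_score(s, w_errors=1.0, w_stuck=0.2):
    # Hypothetical weighting: sustained struggle counts for more than a lone error
    return w_errors * s.recent_error_rate + w_stuck * s.minutes_stuck

def prioritize(students):
    """Return students ordered by who most needs the teacher's help right now."""
    return sorted(students, key=need_score, reverse=True)

roster = [
    StudentState("Ana", recent_error_rate=0.1, minutes_stuck=0.5),
    StudentState("Ben", recent_error_rate=0.6, minutes_stuck=4.0),
    StudentState("Cam", recent_error_rate=0.3, minutes_stuck=1.0),
]
for s in prioritize(roster):
    print(s.name, round(need_score(s), 2))
```

The key design point is simply that the system, not the hand-raiser, decides where the teacher goes next, so attention flows to quiet strugglers as well as vocal ones.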
Edtech is on the verge of changing assessment.
The current practice of giving people formal written tests in order to understand what they know is pretty recent. In order to make tests practical and easy to grade, they tend to ask questions that are disconnected from tasks that we expect students to do.
In math, a test might ask students to solve an equation, rather than ask them to model the rise in users of TikTok. This is especially true of standardized, high-stakes tests.
However, prior to the early 20th century, we determined what people knew by watching them perform tasks. In fact, school is one of the only places where tests are used to assess ability. We don’t measure how good a baseball player is by having them take a test. We watch how they play the game.
New techniques in artificial intelligence (AI) and machine learning are getting us closer to the day when we use software to understand what students can do as they do it—not through tests.
In MATHia, we’re able to analyze what students know while they complete authentic problem-solving tasks. The software tracks students’ problem-solving strategies, errors, and requests for help, giving educators a clear picture of what students have mastered and what they struggle with.
Such detailed formative assessment allows us to reliably predict how well students will do on a standardized test. But if we can predict performance on this test, then why do we need to have them take the test at all? Students have already demonstrated, by doing authentic mathematical problem-solving, that they know the content.
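One well-known family of techniques for this kind of ongoing mastery estimation is Bayesian Knowledge Tracing (BKT), which underlies many intelligent tutoring systems. The sketch below is a simplified, generic BKT update, not MATHia’s actual model, and the parameter values (slip, guess, learn rates, prior) are illustrative only: after each observed answer, the estimate of whether the student knows the skill is revised by Bayes’ rule, then bumped by the chance the student just learned it.

```python
def bkt_update(p_known, correct, p_slip=0.1, p_guess=0.2, p_learn=0.15):
    """One BKT step: revise the mastery estimate after one observed answer.

    p_slip:  probability a student who knows the skill still answers wrong
    p_guess: probability a student who doesn't know the skill answers right
    p_learn: probability the student acquires the skill on this practice step
    (All values here are illustrative, not fitted to real data.)
    """
    if correct:
        # Bayes' rule: P(known | correct answer)
        num = p_known * (1 - p_slip)
        denom = num + (1 - p_known) * p_guess
    else:
        # Bayes' rule: P(known | wrong answer)
        num = p_known * p_slip
        denom = num + (1 - p_known) * (1 - p_guess)
    p_cond = num / denom
    # Account for learning that may have happened on this step
    return p_cond + (1 - p_cond) * p_learn

# Watch mastery evolve over a short answer sequence (True = correct)
p_mastery = 0.3  # prior: probability the student already knows the skill
for answer in [True, True, False, True, True]:
    p_mastery = bkt_update(p_mastery, answer)
print(round(p_mastery, 2))
```

Run continuously over every skill a student practices, estimates like this are what let a system predict end-of-year test performance without a separate test-day event.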
Not having to sit for an exam is great, but the implications of this go further. AI-assisted formative assessment means we don’t need to wait until the middle or end of a year to know where students are in their learning.
Another benefit is that ongoing assessment lets us go further with personalized learning. More advanced students can move to the next course whenever they are ready, and those who need it get just-in-time support.
Plus, since students’ abilities are assessed every day, we eliminate test-day anxiety. And there’s no need to cram for a test, which learning science shows is an ineffective way to learn.
With cutting-edge edtech, students will not only learn by doing, they’ll show their learning by doing.
As edtech innovation continues, assessment won’t be the only thing that changes. But one thing that shouldn’t change is the ultimate goal of supporting teachers and students. Past, present, and future—that priority stays the same.
Steve Ritter is Founder and Chief Scientist at Carnegie Learning. He has been developing, analyzing, and evaluating educational technology for over 20 years. He earned his Ph.D. in Cognitive Psychology at Carnegie Mellon University and was instrumental in the development and evaluation of the Cognitive Tutors for mathematics. He is the author of numerous papers on the design, architecture, and evaluation of Intelligent Tutoring Systems and other advanced educational technology. He currently leads the research team at Carnegie Learning, focusing on improving the educational effectiveness of its products and services. Each year, over 500,000 students use Carnegie Learning’s mathematics curricula.