I subscribe to Dan Meyer’s blog because he thinks and writes with great clarity about the teaching of mathematics. I’m no mathematician, but good teaching techniques are cross-disciplinary, permitting me to apply many of them to the teaching of history. One of Dan’s recent posts has a title that would catch anybody’s attention: “Adaptive Learning Is An Infinite iPod That Only Plays Neil Diamond.” In the post, Dan laments his encounters with futurists who promote adaptive learning as the solution to a problem that belongs to the teacher-centric classroom. For Meyer, who practices a student-centered approach, the adaptive learning solution, in its current incarnation, does not apply. Rather than elaborating, Dan sends the reader to a series of excellent Education Week blog posts by Justin Reich, where the topic is explored in more detail. I recommend that you read Justin’s work because he lends some additional perspective to Meyer’s lamentations.

While we have watched technology tools blossom for consumption and content creation, enhancements to online assessment, or to any related automated learning program such as adaptive learning, have fallen woefully short of what good teachers require. Reich elaborates on the reasons why and offers a few suggestions for future software development. The operative questions are whether the lack of good adaptive assessment tools is a show-stopper for blended and online learning (clearly not), and whether the real issue is not so much assessment as who is at the center of the classroom. The answer to the latter question might shape the direction of educational technology development.

The point of adaptive learning is to focus individual student effort on the types of questions the student found difficult or impossible to answer, while minimizing questions for which the student has shown mastery. Computers can certainly “learn,” or adapt to a student based on prior results, but Reich argues that computers are only effective with specific kinds of quantitative or objective questions. Meyer says that, in a student-centered classroom, computer-based adaptive learning provides a shadow of what good peer and teacher interaction would provide to help clear up conceptual and procedural misunderstandings for students learning at different rates. Those different rates fall under the umbrella of personalized learning services, driven by the assumption that children’s brains understand things quickly or slowly for a variety of reasons, ranging from emotional and physical states to specific learning challenges.
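To make that mechanism concrete, here is a minimal sketch of the core idea, not any real adaptive-learning product: weight each question type by the student’s past error rate, so weak areas come up often and mastered ones rarely. The topic names and error rates are invented for illustration.

```python
import random

# Illustrative only: bias question selection toward topics the student
# misses most, while keeping mastered topics in occasional rotation.
error_rates = {
    "fractions": 0.60,    # frequently missed, so asked often
    "decimals": 0.25,
    "percentages": 0.05,  # near mastery, so asked rarely
}

def next_question_topic(rates, floor=0.02):
    """Pick the next topic, biased toward weak areas.

    A small floor keeps mastered topics available for periodic review.
    """
    topics = list(rates)
    weights = [max(rate, floor) for rate in rates.values()]
    return random.choices(topics, weights=weights, k=1)[0]

print(next_question_topic(error_rates))
```

Real systems layer far more modeling on top of this, but the bias toward a student’s weak areas is the essential move.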

Returning to the arguments of Meyer and Reich, one can infer that a student-centered classroom permits the teacher and student peers to provide effective personalized learning as necessary, without adaptive learning software. The software applications of the future might then be used to provide a more specific diagnosis, subject to the constraints of the data collected. Without a student-centered classroom, we are compelled to design and build curricula and pedagogy that expect students to learn at the same pace. Our current philosophy of assessment is that students move forward together and are assessed at critical junctures. Once we have the results of those assessments, we move into reactive mode for students who have fallen short of mastery. Some of our greatest teachers are those who come to the rescue of “fallen students” and effectively erase the gaps in student understanding. In a classroom where all students are expected to end up in the same place with respect to skills and content, good teachers must rescue students. However, there seems to be a flaw in this strategy: instead of building on prior successes, we are repairing failure. That is not the best method of improving student performance unless one is working with an incredibly resilient child.

How about a more proactive approach to student learning? We already see signs of that in Dan Meyer’s classroom, and in others. When students work with their peers under the right circumstances, they level the playing field by trying to reach some form of common understanding. It can work with teacher-student interaction as well, but the process is not as natural and intuitive; it is better for the teacher to intervene when the peer dynamic breaks down or gets stuck. How is a student-centered scenario more proactive? All of this interaction reveals problems with understanding, and makes corrective action possible, before the major assessment, avoiding the need for damage control after the assessment is graded. It also maximizes student learning prior to the assessment, which is what we seek in our students.

Is there a downside to this proactive and student-centered approach to learning? Well, students won’t move at the same pace (that will be a reflection of how their brains actually operate), and that has implications in our current factory model of testing and evaluation. Perhaps there is an alternative that will thrive in both the traditional culture and a more innovative environment. In a typical scenario, if a student scores 90% on a first test, we view that performance as good and move forward. If the course material is cumulative and there has been no intervention, however, then a 90% on the second test means the student understands 90% of the 90% they previously understood, or 81%. You can see where this exercise is going. Furthermore, catching the declining cumulative results of successive tests requires regular intervention with most students, and always in a reactive mode, since the intervention is triggered by the test result. This process does have an appropriate role in diagnostic testing, such as tests that determine a student’s reading level. But in a summative setting, the teacher is always chasing the tails of the students and practicing damage control.
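To see how quickly that compounding erodes understanding, here is a quick sketch of the arithmetic, under the simplifying assumption that each test score compounds on only the material previously understood:

```python
# Compounding the 90%-of-90% argument: with cumulative material and no
# intervention, cumulative understanding decays multiplicatively.
score_per_test = 0.90

for test_number in range(1, 6):
    cumulative = score_per_test ** test_number
    print(f"After test {test_number}: {cumulative:.0%} cumulative understanding")

# Prints 90%, 81%, 73%, 66%, 59% -- well below mastery by the fifth test.
```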

Try this on for size: is it better to complete 100% of the learning expectations at a rapid pace and understand 70% of them, or to complete 70% of the learning expectations at a comfortable pace and understand 100% of those? If the summative assessment covers all of the material equitably, then one would, in theory, score 70% either way. In the former case, the student lives with some misunderstanding throughout the course and brings that misunderstanding to each subsequent assessment. In the latter case, the student has experienced mastery of 70% of the material going into the final assessment, and has the confidence to apply that mastery to what will be new material for the last 30% of the learning requirement. I like those odds better.
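A tiny sketch of that comparison, under the stated assumption that the summative assessment samples all of the material equitably, so the expected score is simply coverage times understanding:

```python
# The two strategies from the paragraph above, expressed as
# (fraction of material covered, fraction of covered material understood).
rapid_pace   = {"coverage": 1.00, "understanding": 0.70}
mastery_pace = {"coverage": 0.70, "understanding": 1.00}

for name, s in [("Rapid pace", rapid_pace), ("Mastery pace", mastery_pace)]:
    expected = s["coverage"] * s["understanding"]
    print(f"{name}: {expected:.0%} expected score")

# Both print 70%. The difference the argument turns on -- confidence and
# genuine mastery going into the final 30% -- is invisible to the score.
```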

Yes, we need enhanced assessments, and computers may be able to help. While we wait for them to be developed, we might rethink how we structure our classrooms and existing assessments so that we can be more proactive in addressing gaps in student understanding.