I was recently invited to attend a presentation of a new interactive math program at the community college where I work. The designers had clearly put a lot of time and thought into the software, which boasts features such as:
- Identifies each student's Zone of Proximal Development with a pre-test
- Individualized pace of instruction
- A standards-based grading (SBG) feel to the goal-tracking system (although SBG was never mentioned by name)
- Open responses with a “high answer tolerance” rather than multiple choice questions
- Records time spent on certain subjects
- Shows instructors a summary of student activity to help with planning around problem areas
And a few other attractive options as well. The programmers were clearly looking to create an interactive system that automates as much assessing, assigning, and instructing as possible. It was an impressive program.
And totally useless for teaching mathematics.
I have no doubt that this program is capable of doing what is advertised. In fact, as far as doing what it is supposed to, I would say the program’s only flaw is its inability to ensure authentic assessment. We were told that the program would know if a student received help on the pre-test because it was smart enough to recognize if the student performed worse later on, but that does not allow for the possibility that the student receives help every time they log onto the program.
Let’s assume that all students follow the rules. The program breaks the subject down into identifiable, testable skills and then tracks student progress on those skills, but unless it upgrades to Skynet levels of awareness, it will never be able to help students with:
- Utilizing and assessing Standards for Mathematical Practice
- Creating experiences where students see math as a creative human endeavor
- Identifying student misconceptions and the sources of their mistakes
- Truly identifying the full continuum of “partially correct” answers
- Addressing student frustration and motivation
That last point is particularly important. I was annoyed by the phrasing the rep used when she talked about students’ “pies”: the progress charts that let students track how they are progressing toward learning algorithms, formulas, and steps. She said the students’ goal is to “fill the pie” rather than to learn the material. This was particularly disappointing to hear because the chart had a very SBG feel to its philosophy. A video we watched at the beginning of the presentation claimed that “student motivation would increase as the pie chart filled”.
If as human teachers we sometimes struggle to identify sources of student frustration and to motivate them, a pie chart is not going to cut it. (Real pie might help, but I wouldn’t want my students to only do math for slices of apple pie…) A program like this cheats the student out of a personal experience that comes from interacting with a real human as a teacher. No, a video of a person is not an adequate substitute.
The counterargument you could come back with is that most teachers also don’t do the things I say the program can’t. True. But humans can be taught and supported. I have serious doubts that we will ever have a program intelligent enough to do what an excellent human math teacher can do, at least not until the program comes with a free T-1000.