With all the hype about massive and adaptive and big data, it’s hard to remember sometimes that real flesh-and-blood humans do know some things about teaching and learning. In fact, we know a lot about it in some areas. We’re just not very good at disseminating and deploying it effectively across our educational systems. Massive and adaptive and big data could help solve those problems, to the degree that the people behind them pay close attention to what humans already know. But more often than not, they are not paying attention. Or they have picked up on some little sliver, such as the role of repetition in long-term memory formation, and built a whole product around it. In general, educational experts are brought in as occasional consultants at best and ignored altogether at worst.
That is why I find this critique of Sal Khan’s videos by Christopher Danielson, who teaches math and math education at a community college, so refreshing and important. This is not snarky “Mystery Teacher Theatre 2000” stuff. Nor is it a hysterical rant on why computers could NEVER replace teachers. Rather, it is a careful, research-backed analysis of what’s wrong with one aspect of Sal Khan’s math videos, with the explicit goal of helping Khan improve the value of his work. This is incredibly good stuff. And as a bonus, Danielson demonstrates just how easy it is to use a social network like Twitter to get help from a community of educators.
We really need to figure out how to get better at inviting this kind of dialogue. We shouldn’t just rely on the Christopher Danielsons of the world to beat on the doors of the great tech temples until somebody can be bothered to answer.
Bill Fitzgerald says
Do we need to get better at inviting this kind of dialogue, or do we need to get better at the planning and implementation process, so that this dialogue is part of everything we build?
A lot of the issues you describe in this post (seizing narrowly on one element to the exclusion of others, etc.) could be mitigated by making sure that the people sitting around the table are representative of the talent and perspectives needed to build quality systems (and a quality system is not necessarily the same as a quality product).
One way of looking at KA is that it has done a great job on scale, delivery, and breadth of content – I’d love to see usage metrics on various bodies of OER, but I’d gladly wager that KA is used more frequently than other sources of OER.
However, as people like Christopher Danielson, Dan Meyer, Sylvia Martinez, Karim Ani, Frank Noschese, Audrey Watters, etc. have pointed out, there are some serious issues of depth and accuracy of knowledge with KA. While the pursuit of scale doesn’t need to come at the expense of accuracy, it feels like that shortcut has been taken with KA. More inclusive planning would eliminate this – and from what I have seen (and as Danielson says), there is no shortage of people willing and able to provide fact checks on this material. The talent is there for the asking.
Michael Feldstein says
Well said, Bill.
Laura Gibbs says
The example of using Twitter contributions to gather student errors and misconceptions was really excellent – and since I “know” math but don’t teach it, I’m not in contact with the day-to-day errors that students make. For example, one item on the list that really struck me was this one: 0.3*0.3=0.9. Of course I know that answer is not right, but I’m really not sure how I would go about explaining that to a student who really thinks it is right. That is probably how a lot of professors who don’t teach language or writing feel when they see errors in their students’ writing but honestly don’t know exactly how to label the error, much less how to break it down and explain it in a way that will really click with the student.

As teachers, we can learn so much from students’ errors, which is why it is so important that as people build their automated self-tutorial systems (like KA and similar), they also build in ways to allow for and collect student errors, especially unanticipated ones. Often a teacher is unable to predict what student errors will be, simply because we are so much inside our own knowledge. Students will surprise you – and figuring out just why they make the errors they do is one of the most fascinating things about teaching, in math, in writing – and, I would suppose, in any subject area.
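One way to unpack that particular error, for what it’s worth, is to rewrite the decimals as fractions and let the denominators do the explaining:

\[ 0.3 \times 0.3 \;=\; \tfrac{3}{10} \times \tfrac{3}{10} \;=\; \tfrac{9}{100} \;=\; 0.09, \quad \text{not } 0.9. \]

My guess (and it is only a guess, not something taken from Danielson’s collection) is that a student who answers 0.9 is computing 3 × 3 = 9 and then keeping a single decimal place, not seeing that tenths times tenths gives hundredths.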
Maha Bali says
Thanks for this post. I’ve read Christopher Danielson’s critique, and even though it is spot on, I think he is pointing not necessarily to a pedagogical issue in terms of “how to teach” but to a knowledge gap in KA. He is pointing to specific places where KA fails to help students learn effectively, not because of the medium or method, but because KA is not produced by a teacher who is “in contact with” students, and therefore does not know their misconceptions, etc. Basically, I’m not sure why Sal Khan doesn’t just hire actual teachers to do the videos, rather than keeping “advisers” around while still doing (most of?) the material himself.
I love the Twitter idea, too, and I think that beyond KA, novice teachers or veterans teaching new subjects or new age groups might benefit from it…
Ted Kosan says
In the 1970s and 1980s, some members of the artificial intelligence community conducted research on how computers can be “taught” to do mathematics using the same techniques humans use. The first thing they did was to determine how mathematicians perform mathematics. They were surprised to discover that a significant number of the techniques mathematicians used to perform mathematics were not written down anywhere. The techniques were not in any textbooks, nor were they in any journals or research papers. As the researchers dug deeper, they learned the techniques did not have names, and they were not taught explicitly. The researchers concluded that the mathematicians were using these techniques unconsciously. (Alan Bundy. The Computer Modelling of Mathematical Reasoning. Academic Press, 1983, p. 164.)
This discovery led to the question: If mathematics educators don’t consciously know a significant number of the techniques that humans use to do mathematics, how do they teach students how to do math? The researchers concluded that the way persistent students “picked up” these hidden techniques was by being exposed to (and doing) mathematics over many years.
Why were AI researchers (and not education researchers) the first people to discover this? I think one reason is that computers were the first “students” that were incapable of learning mathematics that was not taught explicitly. The researchers then devoted years of effort to discovering and naming some of the hidden techniques that mathematicians used to perform mathematics. When they “taught” computers these techniques, the computers were able to perform mathematics (in the areas the techniques covered) much as a human typically would.
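To give a feel for what “teaching” a computer one of these techniques looks like, here is a toy sketch in Python (mine, not taken from the research literature; the rule names are only illustrative) of a solver that narrates each explicitly named rule as it isolates the unknown in a linear equation:

# Toy sketch: solve a*x + b = c by applying explicitly named rules,
# the way AI systems had to be taught techniques that human
# mathematicians apply without ever naming them.

def solve_linear(a, b, c):
    """Solve a*x + b = c for x, narrating each named rule as it fires."""
    print(f"goal:    {a}*x + {b} = {c}")
    # Rule "isolation": clear the constant term from the side holding
    # the unknown by subtracting it from both sides.
    rhs = c - b
    print(f"isolate: {a}*x = {rhs}   (subtracted {b} from both sides)")
    # Rule "cancellation": clear the coefficient of the unknown by
    # dividing both sides by it.
    x = rhs / a
    print(f"cancel:  x = {x}   (divided both sides by {a})")
    return x

solve_linear(3, 4, 19)  # walks through the steps and returns 5.0

The point of the exercise is that nothing happens unless a technique has been named and written down explicitly, which is exactly the constraint those researchers were working under.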
I read Christopher Danielson’s article, and I think he made some good points. However, since this AI research indicates that most mathematics educators don’t understand how humans do math, I think the educational research he references (and the advice from mathematics educators he suggests Sal make use of) is of limited value.
Ted Kosan
Maha Bali (@Bali_Maha) says
Hi Ted,
As a computer scientist (my undergrad thesis used neural networks) turned educator (my PhD thesis is on critical thinking), I wanted to respond to you.
Sal Khan is not teaching computers, but humans, nor do I think his technique is informed by AI research. Danielson’s point is not just about how people learn, but also about their misconceptions. It is also not simply about performing math, but about understanding its meaning and purpose. The first of these (misconceptions) might conceivably be learned via AI, named, and used to teach. The second, understanding meaning and purpose, needs human interaction to socially construct the knowledge… imho
Ted Kosan says
@Bali_Maha
I agree that understanding the meaning and purpose of mathematics is important. However, I think this meaning and purpose needs to be based upon a clear and accurate understanding of how mathematics works. A revolution in mathematics occurred around a century ago, but unfortunately most mathematics educators seem to be unaware of it. Frank Quinn describes this revolution in his article “A Revolution In Mathematics? What Really Happened A Century Ago, And Why It Matters Today”:
http://www.math.vt.edu/people/quinn/education/revolution.pdf
Here are two passages from the article:
“The main point of this article is not that a revolution [in mathematics] occurred [between about 1890 and 1930], but that there are penalties for not being aware of it. First, pre-college mathematics education is still based on nineteenth century methodology, and it seems to me that we will not get satisfactory outcomes until this changes.”
“The point briefly addressed here is that modern methods were adopted because they are much more effective at advanced levels. If the reasons for their success are clearly understood then some of these methods might be adaptable to elementary levels. This is the meaning of ‘brought into the twentieth century’ in the discussion above, and at the very least it would improve K-12/college articulation. But it might do far more. To be specific, consider fractions. Currently these are introduced in the old-fashioned way, through connections with physical experience. This is philosophically attractive and ‘easy’, but follows the historical pattern (see the discussion in ‘Drawbacks’) of being dysfunctional for most students. If we want students to be able to actually use fractions then core experience points a way: use a precise definition that looks obscure at first but can be internalized by working with it, and is far more effective once it is learned.”
By “core” he means the core of mathematics that was the focus of the revolution. The understanding of the meaning and purpose of math that Danielson refers to is based upon a nineteenth century methodology that Frank Quinn states is “dysfunctional for most students”.
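To make Quinn’s fractions example concrete (he doesn’t spell out the definition in the passage quoted above, so take this as the standard modern construction rather than his exact proposal): fractions are pairs of integers with an explicit equality test and explicit rules for the operations:

\[ \frac{a}{b} = \frac{c}{d} \iff ad = bc, \qquad \frac{a}{b} + \frac{c}{d} = \frac{ad + bc}{bd}, \qquad \frac{a}{b} \cdot \frac{c}{d} = \frac{ac}{bd} \qquad (b, d \neq 0) \]

Everything a student needs in order to compute with fractions follows mechanically from these rules, which is the sense in which working with the definition can eventually replace appeals to physical experience.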
The AI research that I referenced earlier is based upon the revolutionized mathematics core, and (as Quinn indicates) the modern methods that are part of this core have the potential to make the teaching of mathematics at the K-12 level much more effective than it is now. For the past few years I have been working on creating software that is based upon this AI research. Here is a link to some experimental examples of the explanations the software generates for how to solve equations that have a single unknown:
http://www.patternmatics.com/examples/expressions/stepbystep/single_unknown/
This past spring, a high school math teacher showed these examples to his students, and then he asked them to provide feedback on what they thought about the techniques the examples contained. Here are the students’ responses:
http://www.patternmatics.com/research/student_feedback/student_feedback_spring_2013.html
This feedback was mostly positive, and I think it supports Quinn’s idea that clear and precise explanations based on the revolutionized core of mathematics have the potential to be significantly more effective than the current explanations based on nineteenth-century methodology 🙂