Thursday, December 13, 2012

Comparing modes of video instruction

Derek Muller, creator of the Veritasium science videos, did his dissertation research on videos for physics education.  He describes, in his research and in this video, how students presented with a clear video explanation really liked the video, but their scores did not improve from pre-test to post-test.  Students given videos that addressed common misconceptions, on the other hand, did see improvement, even though they found the videos confusing.

I figured the same was likely true in math, and certainly Derek implies that he believes it is.  I set out, mostly for fun, to try to replicate the experiment with a math topic.  I chose "adding fractions with unlike denominators" because it's a topic that students of all levels struggle with, from my arithmetic students up through my calculus students, and one with a lot of longstanding misconceptions.  My hope was that addressing those misconceptions would help students more than simply showing them the correct method.

While I was at it, I decided to also compare a purely arithmetic approach to one supplemented with a manipulative demonstration - fraction bars in this case.  So I created three videos: one purely arithmetic, a second that builds on the first with a manipulative demonstration, and a third that builds on the first by addressing two common misconceptions up front, using fraction bars to show visually why they don't work.
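To give a sense of the kind of mistake involved (not necessarily the exact examples from the video), the most common error on a problem like 1/3 + 1/5 is to add straight across:

Common error:  1/3 + 1/5 = 2/8 = 1/4
Correct:       1/3 + 1/5 = 5/15 + 3/15 = 8/15

Fraction bars make the problem with the first line obvious: the "sum" 1/4 is smaller than 1/3, one of the pieces you started with.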

What follows is the methodology and results.  The TL;DR is "no significant difference."

I solicited subjects through Twitter and through WAMAP.org and MyOpenMath.com.  The majority of the subjects were community college students whose teachers asked them to participate, some offering extra credit or other incentives.  The survey began with some demographic questions, then launched into a pre-test of 4 questions on adding fractions (details below).  Students were also asked to rate their confidence in their answers.  They were then shown one of the three videos.  After the video, the students took a post-test, again rating their confidence.  The final page of the survey asked them to rate the clarity of the video on a Likert scale and gave a free response box for leaving feedback.  The order of the two test versions and the assignment of videos were both randomized.

I received about 270 responses.  After filtering out incomplete surveys and participants who clearly didn't watch the video, I was left with 197 usable results.  For each student I computed the improvement from pre-test to post-test (scores out of 4), then computed the mean improvement for each experimental group.  The mean improvements were 0.25, 0.242, and 0.265 respectively, each with a standard deviation of around 0.8.  Long story short, the data did not provide evidence that the video shown made a significant difference.
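For anyone who wants to run the same comparison on their own data, here's a rough sketch in Python.  The file and column names are placeholders, and a one-way ANOVA is one reasonable way to test for a difference between the three groups, not necessarily the exact test I ran.

import pandas as pd
from scipy import stats

# Hypothetical file: one row per participant, with columns
# group (which video), pre_score, and post_score (each out of 4).
df = pd.read_csv("responses.csv")

# Improvement from pre-test to post-test for each participant.
df["improvement"] = df["post_score"] - df["pre_score"]

# Mean and standard deviation of improvement for each video group.
print(df.groupby("group")["improvement"].agg(["count", "mean", "std"]))

# One-way ANOVA: any evidence that improvement differs by video shown?
samples = [g["improvement"].values for _, g in df.groupby("group")]
f_stat, p_value = stats.f_oneway(*samples)
print("F = %.3f, p = %.3f" % (f_stat, p_value))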

I must, of course, admit that my study design was not ideal.  A difference might emerge if the assessments were given in a controlled environment, if students were required to watch the entire video, or if the subject group were more targeted.  While I am disappointed in the results, since I do believe addressing misconceptions is a good idea, they also raise the question of whether any of the videos helped at all.  If I repeat the study, I may add a control group asked to watch some non-math video between the two tests.

For those curious, the pre/post tests contained these questions:
Version A:  1/9 + 4/9,  1/4 + 1/8,  1/3 + 1/5,  2/3 + 1/6
Version B:  1/7 + 2/7,  1/2 + 1/4,  1/3 + 1/4,  2/3 + 1/9
