I hadn’t planned to write another post on massive open online courses, but John Drummond posted a link to a “scathing” review of Sebastian Thrun’s introductory statistics class that grabbed my attention and forced me to take another look. The blogger at AngryMath who wrote the review pulled no punches:
In brief, here is my overall assessment: the course is amazingly, shockingly awful. It is poorly structured; it evidences an almost complete lack of planning for the lectures; it routinely fails to properly define or use standard terms or notation; it necessitates occasional massive gaps where “magic” happens; and it results in nonstandard computations that would not be accepted in normal statistical work.
The AngryMath blogger builds a fairly convincing list of specific problems with the course, but then veers into a set of claims about the superiority of face-to-face teaching that are much harder to embrace.
In normal college teaching, a truly dedicated instructor will go through a never-ending process of constant refinement and improvement for their courses, based on two-way interaction and feedback from live students…. But will that happen at Udacity, or any other massive online academic program? I strongly suspect not — likely, the entire attraction for someone like Thrun (and the business case for institutions like his) is to be able to record basic lectures once and then never have to revisit them again.
A couple of days after the AngryMath post, Thrun announced that, based on feedback from students and others, the course would receive a major update.
Our online classes are being revised frequently. We use the data and feedback in this medium to adapt and further optimize. In the next weeks we will majorly update the content of this class, making it more coherent, fixing errors, and adding missing content. I believe that Udacity owes all of our students the hardest and finest work in making amazing classes. We are very grateful for any feedback that we receive. These are the early days of online education, and sometimes our experimentation gets in the way of a coherent class.
As I’ve indicated in previous posts (here and here), I think these new methods of delivering content have great potential to replace or significantly enhance some of our most outmoded traditional practices — particularly the 100–250-person lecture delivered in fixed time blocks, in fixed seats that don’t allow for interaction. Many of our own large lecture courses at W&M cover domains of knowledge that are well understood — such as the content of introductory, intermediate, and advanced statistics courses. If we could find ways to introduce students to those domains more efficiently, we could redirect thousands of hours every year toward more creative, empathetic, and productive endeavors that apply that learning. Making that learning more effective requires institutions to be much more reflective, analytical, and intentional as we experiment with new ways of interacting with students.
From a human memory perspective, those courses are designed to help students organize and store a complex set of concepts, processes, and procedures in long-term memory so that they can be retrieved and applied to unique and creative situations. Getting that information into long-term memory requires understanding, practice, spaced repetition, and other techniques that build the appropriate knowledge structures in the brain. Computers are getting very good at providing that practice, and they will likely get much better in the near future. (See UVA cognitive psychologist Daniel Willingham’s book Why Don’t Students Like School? for a good summary of the relationship between memory, thinking, and learning. Good reading even for an unrepentant social constructivist.)
Massively Individualized Courses
I’ve watched the TED Talks by Daphne Koller and Peter Norvig, both of whom offer a vision of education that is massively individualized. (Their visions are constrained by the artificial boundaries of a “course,” but that’s a topic for a different post.) Both of them cite Benjamin Bloom’s research on mastery learning, in which he showed that students who were individually tutored scored two standard deviations higher than students who participated in traditional lecture classes. The central problem Bloom identified in 1984 was clear — as great as those learning gains were, society could never afford to tutor each student individually. Koller, Thrun, Norvig, and their colleagues at Coursera, Udacity, and edX are challenging that assumption. Maybe the time has come when cheap storage, powerful processors, better algorithms, and fast networks will allow us to come closer to that ideal than we ever could before.
If we’re going to approach that ideal, we’re going to need to make some changes, beginning with moderating the rhetoric and giving experiments a little room to grow. We need to take advantage of the transparency these open courses allow in order to understand what works and what doesn’t. Teaching has always been a largely private activity, experienced only by the professor and a few hundred students. MOOCs expose the professor’s teaching to hundreds of thousands of students and hundreds of critics and potential collaborators, which could have tremendous value for the improvement of courses. The MOOCs are opening up a new generation of learning tools, techniques, and questions to explore. Maybe this isn’t the last word after all.
If you’re interested in reading more about MOOCs on this blog, here are a few posts that you may want to take a look at: Thoughts from a MOOC Pioneer, What We Can Learn from Bryn Mawr’s Online Learning Experiment, Three Reasons MOOCs Should Include Digital Humanities Projects, Inconvenient Truths about MOOCs, and The MOOCs that (Almost) Ate UVA.