Project team: Jonathan Huang, Chris Piech, Andy Nguyen, and Leonidas Guibas,
with special thanks to Andrew Ng and Daphne Koller for making the data available to us.
Abstract Art?
Not quite! The figure above maps the landscape of roughly 40,000 student submissions to a single programming assignment from Coursera's Machine Learning course. Nodes represent submissions, and edges connect syntactically similar submissions. Colors correspond to performance on a battery of unit tests (red submissions pass all of the unit tests).
In particular, clusters of similarly colored nodes correspond to multiple similar implementations that behaved in the same way (under unit tests).
** (For those curious, this particular programming assignment asked students to implement gradient descent for linear regression in Octave).
Here's how we made it:
- We parsed each of the 40,000 student submissions into an Abstract Syntax Tree (AST) data structure by adapting the parsing module from the Octave source code. ASTs allow us to capture the structure of each student's code while ignoring irrelevant information such as whitespace and comments (as well as, to some extent, variable names). A rough sketch of this step appears just after this list.
- We next computed the tree edit distance between every pair of unique trees, which counts the minimum number of edit operations (e.g., deletions, insertions, replacements) required to transform one tree into the other.
- Reasoning that small edit distances between ASTs are meaningful while larger ones are less so, we finally dropped edges whose edit distances were above a threshold and used Gephi to visualize the resulting graph (see the second sketch below).
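To make the first two steps concrete, here is a minimal Python sketch. It is illustrative only: Python's built-in ast module stands in for our adapted Octave parser, and the third-party zss package (an implementation of the Zhang-Shasha tree edit distance) stands in for our own edit-distance code; the toy snippets and helper names are made up for the example.

```python
# Illustrative sketch only: Python's ast module stands in for the adapted
# Octave parser, and the `zss` package (Zhang-Shasha tree edit distance)
# stands in for our own edit-distance code.
import ast
from zss import Node, simple_distance

def to_zss_tree(node):
    """Convert a Python AST node into a zss.Node tree labeled by node type.

    Labeling by node type alone discards whitespace, comments, and,
    to a large extent, variable names."""
    t = Node(type(node).__name__)
    for child in ast.iter_child_nodes(node):
        t.addkid(to_zss_tree(child))
    return t

# Two toy "submissions" that differ only superficially.
src_a = "theta = theta - alpha * grad"
src_b = "t = t - lr * g  # same update, different names"

tree_a = to_zss_tree(ast.parse(src_a))
tree_b = to_zss_tree(ast.parse(src_b))

# Minimum number of node insertions/deletions/relabelings needed to turn
# one tree into the other (unit costs). Prints 0 here: by node type the
# two trees are identical, so naming and comments do not matter.
print(simple_distance(tree_a, tree_b))
```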
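And a sketch of the last step, assuming the pairwise distances have already been computed. networkx and its GEXF export are one convenient route into Gephi; the distances and the threshold value below are placeholders, not the ones we actually used.

```python
# Illustrative sketch: build the thresholded graph and hand it to Gephi.
# `distances` maps pairs of submission ids to AST edit distances; the
# threshold of 5 is an arbitrary placeholder, not the value we used.
import networkx as nx

distances = {(0, 1): 2, (0, 2): 11, (1, 2): 4}  # toy pairwise distances
THRESHOLD = 5

G = nx.Graph()
G.add_nodes_from({i for pair in distances for i in pair})
for (u, v), d in distances.items():
    if d <= THRESHOLD:          # keep only edges between very similar ASTs
        G.add_edge(u, v, weight=d)

# GEXF is one of the formats Gephi reads directly.
nx.write_gexf(G, "codeweb.gexf")
```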
See our recent paper at MOOCshop for more details:
- Syntactic and Functional Variability of a Million Code Submissions in a Machine Learning MOOC.
Jonathan Huang, Chris Piech, Andy Nguyen, and Leonidas Guibas.
In the Workshop on Massive Open Online Courses (MOOCshop) at the 16th International Conference on Artificial Intelligence in Education (AIED 2013), Memphis, TN, USA, July 2013.
Also see Ben Lorica's blog post about our work!
But what is it good for?
Well, we have a lot of ideas! One thing that we did, for example, was to apply clustering to discover the "typical" approaches to this problem. This allowed us to discover common failure modes in the class, but also gave us a way to find multiple correct approaches to the same problem. Stay tuned for more results from the codewebs team!
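For the curious, here is a hedged sketch of what such a clustering could look like, using SciPy's hierarchical clustering on a precomputed distance matrix. The matrix, linkage method, and cutoff below are toy placeholders, not necessarily what we used in the paper.

```python
# Illustrative sketch: group submissions by AST edit distance.
import numpy as np
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage, fcluster

# Toy symmetric matrix of pairwise AST edit distances for 4 submissions.
D = np.array([[0,  1,  9, 10],
              [1,  0,  8,  9],
              [9,  8,  0,  2],
              [10, 9,  2,  0]], dtype=float)

Z = linkage(squareform(D), method="average")      # condensed distances -> hierarchy
labels = fcluster(Z, t=4, criterion="distance")   # cut the dendrogram at distance 4
print(labels)  # e.g., [1 1 2 2]: two "typical" approaches
```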