Projects

Wrex: A Unified Programming-by-Example Interaction for Synthesizing Readable Code for Data Scientists

We propose a unified interaction model, based on programming-by-example, that generates readable code for a variety of useful data transformations, implemented as a Jupyter notebook extension called Wrex. User study results demonstrate that data scientists are significantly more effective and efficient at data wrangling with Wrex than with manual programming. Qualitative participant feedback indicates that Wrex was useful and lowered the barrier of having to recall or look up the usage of various data transform functions.
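As a toy illustration of the programming-by-example idea (this is not Wrex's actual synthesizer, which handles far richer data transforms; the candidate set and function names below are invented for the sketch), a synthesizer can search a small space of candidate string transforms for one consistent with a user-provided input/output example, then emit the matching transform as readable code:

```python
# Toy programming-by-example sketch: search a tiny, hand-picked space of
# string transforms and return readable code for the first candidate that
# reproduces the user's example pair. (Illustrative only, not Wrex.)

CANDIDATES = {
    "s.lower()": lambda s: s.lower(),
    "s.upper()": lambda s: s.upper(),
    "s.strip()": lambda s: s.strip(),
    "s.split('-')[0]": lambda s: s.split("-")[0],
    "s.split('-')[-1]": lambda s: s.split("-")[-1],
}

def synthesize(example_in, example_out):
    """Return readable code for a transform matching the example, or None."""
    for code, fn in CANDIDATES.items():
        try:
            if fn(example_in) == example_out:
                return f"lambda s: {code}"
        except Exception:
            pass  # candidate not applicable to this input
    return None

print(synthesize("2020-03-01", "2020"))  # lambda s: s.split('-')[0]
```

Returning source text rather than an opaque callable mirrors the emphasis on *readable* code: the user can inspect, edit, and keep the generated snippet in their notebook.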
*Best Paper Award* CHI 2020

Aiding Collaborative Reuse of Computational Notebooks with Annotated Cell Folding

We present the design and evaluation of a Jupyter Notebook extension that provides annotated cell folding. Through a lab study and a multi-week deployment, we find that cell folding aids notebook navigation and comprehension, not only for the original author but also for collaborators viewing the notebook in a meeting or revising it on their own. These findings extend our understanding of code folding’s trade-offs to a new medium and demonstrate its benefits for everyday collaboration.
CSCW 2018

Comparing developer-provided to user-provided tests for fault localization and automated program repair

We compared, both quantitatively and qualitatively, the developer-provided tests committed along with fixes (as found in the version control repository) against the user-provided tests extracted from bug reports (as found in the issue tracker). We provide evidence that developer-provided tests are more targeted toward the defect and encode more information than user-provided tests, which can skew the results of fault localization and automated program repair techniques.
ISSTA 2018

HappyFace: Identifying and predicting frustrating obstacles for learning programming at scale

HappyFace aims to discover frustrating experiences that learners encounter while programming. We performed a large-scale collection of code snippets from PythonTutor and gathered frustration ratings through a lightweight feedback mechanism. We then devised a technique that automatically identifies sources of frustration from participants’ labels of frustrating learning experiences.
VLHCC 2017