Andrew Head is a Postdoctoral Scholar in the Computer Science Division at UC Berkeley. In his research, Andrew builds IDEs for Ideas: systems to help programmers, data scientists, and scientists read and write complex information artifacts like tutorials, computational notebooks, and scientific articles. Towards this goal, he conducts systems research at the intersection of human-computer interaction, software engineering, and applied artificial intelligence.

For his Ph.D. thesis, Andrew studied with professors Björn Hartmann and Marti Hearst at UC Berkeley. His thesis proposed innovative extensions to computational notebooks, interactive code editors for authoring code examples, and mixed-initiative systems for providing feedback in massive programming classes. Each of these systems blended novel interaction design with tailored algorithms for program analysis. The research was supported by an NDSEG Fellowship and research internships at Google and Microsoft Research.

In his latest research, Andrew is redesigning the user experience of reading scientific papers with interactive tools that define confusing terms and symbols. This research is supported by the Alfred P. Sloan Foundation and the Allen Institute for AI.

Andrew has received best paper awards and nominations at premier human-computer interaction conferences such as ACM CHI. His project nbgather was adopted by Microsoft as an extension to the popular VSCode programming editor and has been installed over 4,000 times.

I'm on the job market! I'm looking for academic research positions for Fall 2021.

CV · Research Statement · Teaching Statement · Diversity Statement

Research Highlights

An interactive reading interface that exposes definitions of terms and symbols in scientific papers via tooltips, equation diagrams, and visual filters.
Under review
A new kind of notebook that supports live programming of tutorial code with flexible code organization.
CHI '20
Tools for cleaning, finding, and comparing versions of code in computational notebooks.
CHI '19
Mixed-initiative interfaces for propagating teacher feedback in massive CS classrooms.
Learning@Scale '17

News

October 2020: Invited talk at Arizona State's Digital Culture Speaker Series. The talk is "Tools for Transforming Creative Coding Messes into Helpful Example Programs." Watch the live stream here.

July 2020: The 'gather' feature from our CHI '19 paper is going mainstream! The VSCode Python team announced 'gather' as a feature in their July release of the Python extension.

May 2020: I have a Ph.D. now! See my dissertation and watch my thesis talk.

April 2020: So honored to receive an Outstanding Graduate Student Instructor award for co-teaching UC Berkeley's human-computer interaction course last summer 💙💛.

March 2020: Our CHI '20 paper on authoring programming tutorials was nominated for a best paper award!

December 2019: Our paper on tutorial authoring was accepted to CHI '20! Composing Flexibly-Organized Step-by-Step Tutorials from Linked Source Code, Snippets, and Outputs.

December 2019: Invited talk at Apple: Notebooks, Narratives, and 'Nteractions.

October 2019: Spoke on the Write the Docs podcast in the episode "Researching how developers use API docs". Listen to the podcast here, and then read more in our ICSE '18 paper.

March 2019: Managing Messes got a CHI Best Paper Award!

Publications

Dissertation

Interactive Program Distillation
UC Berkeley Doctoral Dissertation, 2020

Introduces interactive systems for transforming existing code into sample programs. Presents methods for implementing the tools and usability studies demonstrating their effectiveness.

Under review

Augmenting Scientific Papers with Just-in-Time, Position-Sensitive Definitions of Terms and Symbols
Andrew Head, Kyle Lo, Dongyeop Kang, Raymond Fok, Sam Skjonsberg, Daniel S. Weld, and Marti A. Hearst
arXiv preprint, 2020

Presents ScholarPhi, a reading interface for scientific papers that reveals definitions of terms and symbols. The design is grounded in an observational study and 4 pilot studies. A controlled study with 27 researchers found the tool to be both useful and desirable.

Peer-Reviewed Publications

Composing Flexibly-Organized Step-by-Step Tutorials from Linked Source Code, Snippets, and Outputs
Andrew Head, Jason Jiang, James Smith, Marti A. Hearst, and Björn Hartmann
ACM Conference on Human Factors in Computing Systems, 2020

Presents Torii, a new kind of computational notebook for authoring programming tutorials. The design is grounded in interviews with authors and a content analysis of 200 tutorials. In a lab study, 12 tutorial authors created flexibly-organized tutorials with the tool.

Nominated for Best Paper Award

Managing Messes in Computational Notebooks
Andrew Head, Fred Hohman, Titus Barik, Steven M. Drucker, and Robert DeLine
ACM Conference on Human Factors in Computing Systems, 2019

Presents code gathering tools, interactive extensions to computational notebooks that help analysts find, clean, recover, and compare versions of code. In a lab study, 12 data analysts quickly appropriated the tools to support exploratory data analysis.

Best Paper Award

Gamut: A Design Probe to Understand How Data Scientists Understand Machine Learning Models
Fred Hohman, Andrew Head, Rich Caruana, Robert DeLine, and Steven M. Drucker
ACM Conference on Human Factors in Computing Systems, 2019

Presents Gamut, a visual analytics system. Documents the use of Gamut as a design probe to study how interactivity can help data scientists interpret models.

Interactive Extraction of Examples from Existing Code
Andrew Head, Elena L. Glassman, Björn Hartmann, and Marti A. Hearst
ACM Conference on Human Factors in Computing Systems, 2018

Presents CodeScoop, an interactive tool for extracting executable code snippets from tangled programs using static and dynamic analysis. Grounded in an observational study of programmers creating snippets. Validated in a controlled study with 19 programmers.

Nominated for Best Paper Award

WiFröst: Bridging the Information Gap for Debugging of Networked Embedded Systems
Will McGrath, Jeremy Warner, Mitchell Karchemsky, Andrew Head, Daniel Drew, and Björn Hartmann
ACM Symposium on User Interface Software and Technology, 2018

Presents WiFröst, an interactive visualization for debugging networking issues that cut across servers, routers, and devices in embedded systems prototypes.

When Not to Comment: Questions and Tradeoffs with API Documentation for C++ Projects
Andrew Head, Caitlin Sadowski, Emerson Murphy-Hill, and Andrea Knight
ACM International Conference on Software Engineering, 2018

Describes gaps in API documentation and why they exist. Findings are grounded in an experience sampling study with hundreds of professional developers, and qualitative interviews with 18 developers and 8 API maintainers.

Writing Reusable Code Feedback at Scale with Mixed-Initiative Program Synthesis
Andrew Head, Elena Glassman, Gustavo Soares, Ryo Suzuki, Lucas Figueredo, Loris D'Antoni, and Björn Hartmann
ACM Conference on Learning at Scale, 2017

Introduces two tools for scaling feedback on code composition using mixed-initiative program synthesis. Lab studies with teachers show the tools' effectiveness.

Can Human Development be Measured with Satellite Imagery?
Andrew Head, Mélanie Manguin, Nhat Tran, and Joshua E. Blumenstock
International Conference on Information and Communication Technologies and Development, 2017

Explores the extent to which neural networks can predict human development measures from satellite imagery. Promising results are found for predicting poverty in a variety of countries, though the technique does not generalize trivially to many other measures.

TraceDiff: Debugging Unexpected Code Behavior Using Trace Divergences
Ryo Suzuki, Gustavo Soares, Andrew Head, Elena Glassman, Ruan Reis, Melina Mongiovi, Loris D'Antoni, and Björn Hartmann
IEEE Symposium on Visual Languages and Human-Centric Computing, 2017

Introduces TraceDiff, a novel interface to help novice programmers debug incorrect assignments by visualizing where execution diverges from the closest correct program.

Tutorons: Generating Context-Relevant, On-Demand Explanations and Demonstrations of Online Code
Andrew Head, Codanda Appachu, Marti A. Hearst, and Björn Hartmann
IEEE Symposium on Visual Languages and Human-Centric Computing, 2015

Introduces Tutorons, programs that generate on-demand, context-relevant explanations of code in the web browser. A quantitative study characterized the accuracy of 3 example Tutorons. In a lab study, Tutorons reduced the need to read reference documentation.

Nominated for Best Paper Award

Lamello: Passive Acoustic Sensing for Tangible Input Components
Valkyrie Savage, Andrew Head, Wilmot Li, Gautham Mysore, Dan B Goldman, and Björn Hartmann
ACM Conference on Human Factors in Computing Systems, 2015

Presents Lamello, an approach for creating tangible input components that can recognize user interaction using passive acoustic sensing.

ToneWars: Connecting Language Learners and Native Speakers through Collaborative Mobile Games
Andrew Head, Yi Xu, and Jingtao Wang
International Conference on Intelligent Tutoring Systems, 2014

Presents ToneWars, a collaborative mobile game for learning Chinese as a second language. A 24-participant usability study showed that ToneWars provides learning benefits for second-language learners and engages native speakers.

Teaching and Mentoring

In the summer of 2019, Andrew co-taught Computer Science 160, UC Berkeley's undergraduate course on human-computer interaction. His co-instructor was Sarah Sterman. The theme for the student design project was authoring tools. Working in groups, 76 students designed and implemented interactive authoring tools for users including choreographers, screenwriters, and beat-boxers on the go.

All course materials are online and public. Learn more about the course by perusing the syllabus and the showcase of authoring tools that student groups designed.

From fall 2018 through summer 2020, Andrew served as the CS area coordinator for EECS Peers, a graduate student peer support group. The group was founded in 2013 by Kristin Stephens-Martinez to help grad students navigate the tricky issues of grad school, from research setbacks to the many stresses experienced inside and outside the lab. EECS Peers is still active today, and always looking for caring members of the EECS community to join the next cohort of Peers.