Accepted TOCHI Papers


Exploring and Understanding Unintended Touch during Direct Pen Interaction

Michelle Annett, Anoop Gupta, Walter F Bischof

Devices are often unable to distinguish between intended and unintended touch. We present a data collection experiment where natural tablet and stylus behaviors were analyzed from both digitizer and behavioral perspectives.


How Can Automatic Feedback Help Students Construct Automata?

Loris D'Antoni, Dileep Kini, Rajeev Alur, Sumit Gulwani, Mahesh Viswanathan, Björn Hartmann

We describe (a) techniques for automated generation of feedback for automata construction problems and (b) the evaluation of their effectiveness by conducting a user study with 377 student participants.

Accepted Papers


Impacto: Simulating Physical Impact by Combining Tactile Stimulation with Electrical Muscle Stimulation

Pedro Lopes, Alexandra Ion, Patrick Baudisch

Impacto is a wearable device that renders impact sensations in virtual reality by decomposing the stimulus: it taps the skin with a solenoid (tactile) and moves the body via electrical muscle stimulation (impulse).


Pin-and-Cross: A Unimanual Multitouch Technique Combining Static Touches with Crossing Selection

Yuexing Luo, Daniel Vogel

Introduces a new multitouch interaction space called “pin-and-cross” where one or more static touches (“pins”) are combined with crossing a radial target, all performed with one hand.


Self-Calibrating Head-Mounted Eye Trackers Using Egocentric Visual Saliency

Yusuke Sugano, Andreas Bulling

We analyse the calibration drift issue of head-mounted eye trackers, and present a robust method for automatic self-calibration based on a computational model of bottom-up visual saliency.


Webstrates: Shareable Dynamic Media

Clemens Klokmose, James Eagan, Siemen Baader, Wendy Mackay, Michel Beaudouin-Lafon

Introduces shareable dynamic media: collaborative content that blurs the distinction between applications and documents. Presents Webstrates, web-based dynamic media that supports real-time sharing across devices and end-user malleability.


Anger-based BCI Using fNIRS Neurofeedback

Gabor Aranyi, Fred Charles, Marc Cavazza

We describe a Brain-Computer Interface (BCI) using fNIRS to measure prefrontal cortex activity, which correlates with the affective dimension of approach. Our experiments confirm its potential for affective BCI.


Procedural Modeling Using Autoencoder Networks

Mehmet Ersin Yumer, Paul Asente, Radomir Mech, Levent Burak Kara

This paper describes a method to enable intuitive exploration of high dimensional procedural modeling spaces within a lower dimensional space learned through autoencoder network training.


Candid Interaction: Revealing Hidden Mobile and Wearable Computing Activities

Barrett Ens, Tovi Grossman, Fraser Anderson, Justin Matejka, George Fitzmaurice

We introduce candid interaction, techniques for providing awareness about mobile and wearable device activity to others. We define a detailed design space which we explore with several novel prototypes.


CyclopsRing: Enabling Whole-Hand and Context-Aware Interactions Through a Fisheye Ring

Liwei Chan, Yi-Ling Chen, Chi-Hao Hsieh, Rong-Hao Liang, Bing-Yu Chen

CyclopsRing is a ring-style fisheye imaging wearable device worn on the webbing between the fingers. Benefiting from its egocentric and extremely wide field of view, CyclopsRing enables many whole-hand and context-aware interactions.


ReForm: Integrating Physical and Digital Design through Bidirectional Fabrication

Christian Weichel, John Hardy, Jason Alexander, Hans Gellersen

We introduce "bidirectional fabrication" which lets users design objects alternatingly in physical and digital space. To this end, we combine a custom CNC milling machine, clay 3D printer, 3D scanner and augmented-reality user interface.


LineFORM: Actuated Curve Interfaces for Display, Interaction, and Constraint

Ken Nakagaki, Sean Follmer, Hiroshi Ishii

We explore the design space of actuated curve interfaces, and propose applications such as mobile devices, shape changing cords, and dynamic body constraints, through prototypes based on serpentine robotics technology.


Kinetic Blocks: Actuated Constructive Assembly for Interaction and Display

Philipp Schoessler, Daniel Windham, Daniel Leithinger, Sean Follmer, Hiroshi Ishii

We demonstrate how a shape display can assemble 3D structures from passive building blocks and introduce special modules that extend a shape display's input and output capabilities.


Looking through the Eye of the Mouse: A Simple Method for Measuring End-to-end Latency using an Optical Mouse

Géry Casiez, Stéphane Conversy, Matthieu Falce, Stéphane Huot, Nicolas Roussel

Presents a simple method for measuring end-to-end latency. This method allows accurate, real-time latency measurements up to 5 times per second.


cLuster: Smart Clustering of Free-Hand Sketches on Large Interactive Surfaces

Florian Perteneder, Martin Bresler, Eva-Maria Beatrix Grossauer, Joanne Leong, Michael Haller

cLuster is a flexible, domain-independent clustering approach for free-hand sketches. It considers different user expectations based on an initial manual selection for faster structuring and rearrangement of sketched content.


FlexiBend: Enabling Interactivity of Multi-Part, Deformable Fabrications Using Single Shape-Sensing Strip

Chin-yu Chien, Rong-Hao Liang, Long-Fei Lin, Liwei Chan, Bing-Yu Chen

FlexiBend is an easily installable shape-sensing strip that enables interactivity of multi-part, deformable fabrications. The flexible strip is composed of a dense linear array of strain gauges, giving it shape-sensing capability.


Push-Push: A Drag-like Operation Overlapped with a Page Transition Operation on Touch Interfaces

Jaehyun Han, Geehyuk Lee

We propose Push-Push, a new drag-like operation overlapped with page transition operations on touch interfaces. We explore alternative design options and show the feasibility of Push-Push through two experiments.


Biometric Touch Sensing: Seamlessly Augmenting Each Touch with Continuous Authentication

Christian Holz, Marius Knaust

We seamlessly authenticate users during each touch. Our prototype Bioamp senses biometric features and modulates a signal onto the user's body, which commodity touchscreens sense upon touch and identify users.


User Interaction Models for Disambiguation in Programming by Example

Mikaël Mayer, Gustavo Soares, Maxim Grechkin, Vu Le, Mark Marron, Oleksandr Polozov, Rishabh Singh, Ben Zorn, Sumit Gulwani

FlashProg features program navigation and conversational clarification for disambiguating Programming by Example in data manipulation. It helps reduce the number of errors and increases users' confidence.


Gaze-Shifting: Direct-Indirect Input with Pen and Touch Modulated by Gaze

Ken Pfeuffer, Jason Alexander, Ming Ki Chong, Yanxia Zhang, Hans Gellersen

Gaze-shifting is a technique where a device's direct/indirect input is dynamically modulated by gaze. The paper illustrates the utility of the technique through example pen and touch interactions.


Gaze vs. Mouse: A Fast and Accurate Gaze-Only Click Alternative

Christof Lutteroth, Abdul Moiz Penkar, Gerald Weber

We present a fast and reliable interaction method that makes it possible to click small targets using only gaze.


BackHand: Sensing Hand Gestures via Back of the Hand

Jhe-Wei Lin, Chiuan Wang, Yi Yao Huang, Kuan-Ting Chou, Hsuan-Yu Chen, Wei-Luan Tseng, Mike Y. Chen

BackHand is a prototype that explores a new signal source, the back of the hand, for hand gesture recognition. We identified the most promising location for sensing multiple gestures at high accuracy.


TurkDeck: Physical Virtual Reality Based on People

Lung-Pan Cheng, Thijs Jan Roumen, Hannes Rantzsch, Sven Köhler, Patrick Schmidt, Robert Kovacs, Johannes Jasper, Jonas Kemper, Patrick Baudisch

TurkDeck allows prop-based virtual reality to scale to arbitrarily large virtual worlds in finite space and with a finite amount of props by using human actuators to re-arrange props.


Capture-Time Feedback for Recording Scripted Narration

Steve Rubin, Floraine Berthouzoz, Gautham J Mysore, Maneesh Agrawala

Our system, Narration Coach, helps novice voiceover actors record and improve narrations. It matches recordings to a script and provides feedback and automatic resyntheses for common speaking problems.


On Sounder Ground – CAAT, a Viable Widget for Affective Reaction Assessment

Bruno Cardoso, Teresa Romão, Osvaldo Santos

Our work describes the CAAT, a widget for assessing emotional reactions, and the results of a study that demonstrates its overall viability along several important dimensions.


PERCs: Persistently Trackable Tangibles on Capacitive Multi-Touch Displays

Simon Voelker, Christian Cherek, Jan Thar, Thorsten Karrer, Christian B Thoresen, Kjell Ivar Øvergård, Jan Borchers

PERCs use the touch probing signal from capacitive touch screens to determine whether the tangibles are on- or off-screen. This enables robust capacitive tangible widgets regardless of whether they are touched.


Patching Physical Objects

Alexander Teibrich, Stefanie Mueller, François V Guimbretière, Robert Kovacs, Stefan Neubert, Patrick Baudisch

To minimize material consumption and to reduce waste during design iteration, we patch existing objects rather than reprinting them from scratch.


Unravel: Rapid Web Application Reverse Engineering via Interaction Recording, Source Tracing, and Library Detection

Joshua Hibschman, Haoqi Zhang

Unravel extends the Chrome Developer Tools to help developers discover how UI features work through HTML change observation, JavaScript execution tracing, and library detection.


Codeopticon: Real-Time, One-To-Many Human Tutoring for Computer Programming

Philip J Guo

We present Codeopticon, an interface that enables a computer programming tutor to monitor and chat with dozens of learners in real time, which is useful in large classes and MOOCs.


Tactile Animation by Direct Manipulation of Grid Displays

Oliver S. Schneider, Ali Israr, Karon E MacLean

Tactile animation enables artists to quickly and easily create rich haptic sensations on body-sized tactile displays. We describe the design and implementation of our tool, evaluated with professional animators.


Tiltcasting: 3D Interaction on Large Displays using a Mobile Device

Krzysztof Pietroszek, James Wallace, Edward Lank

We develop and formally evaluate a metaphor for smartphone interaction with 3D environments: Tiltcasting. Users interact within a rotatable 2D plane that is ‘cast’ from their phone’s interactive display into 3D space.


uniMorph - Fabricating Thin Film Composites for Shape-Changing Interfaces

Felix Heibeck, Basheer Tome, Clark David Della Silva, Hiroshi Ishii

uniMorph is an enabling technology for rapid digital fabrication of customized thin film shape-changing interfaces.


Explaining Visual Changes in Web Interfaces

Brian Burg, Andrew J Ko, Michael Ernst

Scry visualizes web interface changes on a timeline; a user selects two snapshots to see the state differences between them and can jump to the responsible JavaScript code.


Improving Haptic Feedback on Wearable Devices through Accelerometer Measurements

Jeffrey R. Blum, Ilja Frissen, Jeremy R Cooperstock

We demonstrate that motion just prior to a smartwatch vibration stimulus, as measured by the watch's accelerometer, significantly improves a model that predicts whether the vibration will be perceived.


Tracko: Ad-hoc Mobile 3D Tracking Using Bluetooth Low Energy and Inaudible Signals for Cross-Device Interaction

Haojian Jin, Christian Holz, Kasper Hornbæk

Tracko is a 3D tracking system between two or more mobile devices. Tracko requires no added components or cross-device synchronization. Tracko fuses inaudible stereo signals, Bluetooth low energy, and IMU sensing to achieve 3D tracking.


RevoMaker: Enabling multi-directional and functionally-embedded 3D printing using a rotational cuboidal platform

Wei Gao, Yunbo Zhang, Diogo Nazzetta, Karthik Ramani, Raymond Cipra

We present RevoMaker, a multi-directional 3D printing process that creates direct out-of-the-printer functional prototypes.


Joint 5D Pen Input for Light Field Displays

James Tompkin, Samuel Muff, James McCann, Hanspeter Pfister, Jan Kautz, Marc Alexa, Wojciech Matusik

We demonstrate 5D pen input for light field displays with joint or through-the-lens sensing, with pen precision equal to the human hand.


DataTone: Managing Ambiguity in Natural Language Interfaces for Data Visualization

Tong Gao, Mira Dontcheva, Eytan Adar, Zhicheng Liu, Karrie G Karahalios

We explore a natural language interface for data visualization. To address language ambiguity, we use a mixed-initiative approach that combines algorithmic disambiguation and interactive ambiguity widgets.


GazeProjector: Accurate Gaze Estimation and Seamless Gaze Interaction Across Multiple Displays

Christian Lander, Sven Gehring, Antonio Krüger, Sebastian Boring, Andreas Bulling

GazeProjector is an approach for accurate gaze estimation and seamless interaction with multiple displays using mobile eye trackers. It only requires one a priori calibration performed with an arbitrary display.


Tomo: Wearable, Low-Cost Electrical Impedance Tomography for Hand Gesture Recognition

Yang Zhang, Chris Harrison

We present Tomo, a wearable, low-cost system using Electrical Impedance Tomography (EIT) to recover the interior impedance geometry of a user’s arm. We use this system for hand gesture recognition.


EM-Sense: Touch Recognition of Uninstrumented, Electrical and Electromechanical Objects

Gierad Laput, Chouchang Yang, Robert Xiao, Alanson P Sample, Chris Harrison

We propose a novel sensing approach for passive object detection. Our technique utilizes low-cost, commodity hardware and is small enough to be worn on the wrist.


These Aren't the Commands You're Looking For: Addressing False Feedforward in Feature-Rich Software

Benjamin Lafreniere, Parmit K Chilana, Adam Fourney, Michael A Terry

Presents interaction techniques for exploring alternative commands before, during, and after an incorrect command has been executed. Can help novice users of feature-rich software to avoid false-feedforward errors.


MoveableMaker: Facilitating the Design, Generation, and Assembly of Moveable Papercraft

Michelle Annett, Tovi Grossman, Daniel Wigdor, George Fitzmaurice

We present MoveableMaker, a novel software tool that assists with the design, generation, and assembly of eleven different types of moveable papercraft. Preliminary workshops illustrated that MoveableMaker supports fun and creativity.


3D Printed Hair: Fused Deposition Modeling of Soft Strands, Fibers, and Bristles

Gierad Laput, Xiang 'Anthony' Chen, Chris Harrison

We introduce a technique for furbricating 3D printed hair, fibers and bristles, by exploiting the stringing phenomena inherent in 3D printers using fused deposition modeling.


Corona: Positioning Adjacent Device with Asymmetric Bluetooth Low Energy RSSI Distributions

Haojian Jin, Cheng Xu, Kent Lyons

Corona is a novel spatial sensing technique that implicitly locates adjacent mobile devices in the same plane by examining asymmetric Bluetooth Low Energy RSSI distributions.


Projectibles: Optimizing Surface Color For Projection

Brett R Jones, Rajinder S Sodhi, Pulkit Budhiraja, Kevin Karsch, Brian P Bailey, David Forsyth

In this work, we present Projectibles which use a spatially varying display surface color for video projection, greatly increasing the contrast and resolution of the display.


GelTouch: Localized Tactile Feedback Through Thin, Programmable Gel

Viktor Miruchna, Robert Walter, David Lindlbauer, Maren Lehmann, Regine von Klitzing, Jörg Müller

GelTouch is a thin gel-based layer that can selectively transition between soft and stiff (up to 25 times stiffer) to provide multi-touch tactile feedback.


SmartTokens: Embedding Motion and Grip Sensing in Small Tangible Objects

Mathieu Le Goc, Pierre Dragicevic, Samuel Huron, Jeremy Boy, Jean-Daniel Fekete

SmartTokens are small-sized tangible tokens that can sense multiple types of motions and grips, and transmit events wirelessly. We describe their design and illustrate possible usage scenarios.


Gunslinger: Subtle Arms-down Mid-air Interaction

Mingyu Liu, Mathieu Nancel, Daniel Vogel

Introduces Gunslinger, a mid-air barehand interaction technique using thigh-mounted sensors and hand postures to explore an ‘arms down’ body stance. This makes input more subtle and more compatible with touch input on large displays.


GravitySpot: Guiding Users in Front of Public Displays Using On-Screen Visual Cues

Florian Alt, Andreas Bulling, Gino Gravanis, Daniel Buschek

We contribute a concept and implementation for guiding users to a particular location in front of large displays using on-screen visual cues and discuss implications for designers.


Improving Virtual Keyboards When All Finger Positions Are Known

Daewoong Choi, Hyeonjoong Cho, Joono Cheong

This paper discusses how to improve virtual keyboards if all finger positions are known. We then implement a novel virtual keyboard design that outperforms existing virtual keyboards.


HapticPrint: Designing Feel Aesthetics for Digital Fabrication

Cesar A Torres, Tim Campbell, Neil Kumar, Eric Paulos

HapticPrint is a haptic design tool for adding tactile cues, compliance, and weight to 3D printed artifacts.


FoveAR: Combining an Optically See-Through Near-Eye Display with Projector-Based Spatial Augmented Reality

Hrvoje Benko, Eyal Ofek, Feng Zheng, Andrew D Wilson

FoveAR is a novel augmented reality display that combines optically see-through near-eye glasses with a projection-based spatial display to extend the user's field of view and immersion in the mixed reality experience.


Foldio: Digital Fabrication of Interactive and Shape-­Changing Objects With Foldable Printed Electronics

Simon Olberding, Sergio Soto Ortega, Klaus Hildebrandt, Jürgen Steimle

Foldio is a new design and fabrication approach for custom interactive objects. The user defines a 3D model and assigns interactive controls; a fold layout containing printable electronics is auto-generated.


Foobaz: Variable Name Feedback for Student Code at Scale

Elena L Glassman, Lyla J Fischer, Jeremy Scott, Robert C. Miller

Current traditional feedback methods, such as hand-grading student code for style, are labor intensive and do not scale. Our UI lets teachers give feedback on students’ variable names at scale.


SHOCam: A 3D Orbiting Algorithm

Michael Ortega, Wolfgang Stuerzlinger, Doug Scheurich

We present a method for smoother interactive orbiting, a new way to avoid collisions with surrounding objects during orbiting, and a new method for multi-scale orbiting, i.e., orbiting of object groups.


Virtual Replicas for Remote Assistance in Virtual and Augmented Reality

Ohan Oda, Carmine Elvezio, Mengu Sukan, Steven K Feiner, Barbara Tversky

We present VR and AR techniques that let a remote expert use virtual replicas of physical objects to guide a local user to perform 6DOF tasks through demonstrations and annotations.


Encore: 3D Printed Augmentation of Everyday Objects with Printed-Over, Affixed and Interlocked Attachments

Xiang 'Anthony' Chen, Stelian Coros, Jennifer Mankoff, Scott E Hudson

Why does 3D printing have to start from scratch? Our techniques can print new attachments directly over, affixed to, or through existing objects, avoiding unnecessarily replacing and reprinting them.


Orbits: Gaze Interaction for Smart Watches using Smooth Pursuit Eye Movements

Augusto Esteves, Eduardo Velloso, Andreas Bulling, Hans Gellersen

Orbits is a hands-free, calibration-free gaze interaction technique for smart watches using smooth pursuit eye movements. We present three studies that demonstrate the technique's robustness and applicability.


NanoStylus: Enhancing Input on Ultra-Small Displays with a Finger-Mounted Stylus

Haijun Xia, Tovi Grossman, George Fitzmaurice

We present NanoStylus – a finger-mounted fine-tip stylus that enables fast and accurate pointing on a smartwatch with almost no occlusion.


Improving Automated Email Tagging with Implicit Feedback

Mohammad S Sorower, Michael Slater, Thomas G Dietterich

Tagging email helps users manage information overload. Machine learning can make this task easier by automatically predicting tags. Leveraging the user's implicit feedback produces important increases in tag prediction performance.


SceneSkim: Searching and Browsing Movies Using Synchronized Captions, Scripts and Plot Summaries

Amy Pavel, Dan B Goldman, Björn Hartmann, Maneesh Agrawala

We introduce SceneSkim, a tool for searching and browsing movies using synchronized captions, scripts and plot summaries.


Capricate: A Fabrication Pipeline to Design and 3D Print Capacitive Touch Sensors for Interactive Objects

Martin Schmitz, Mohammadreza Khalilbeigi, Matthias Balwierz, Roman Lissermann, Max Mühlhäuser, Jürgen Steimle

We present Capricate, a fabrication pipeline that enables users to easily design and 3D print customized objects with embedded capacitive multi-touch sensing. Objects are printed in a single pass using a commodity multi-material 3D printer.


ATK: Enabling Ten-Finger Freehand Typing in Air Based on 3D Hand Tracking Data

Xin Yi, Chun Yu, Mingrui Zhang, Sida Gao, Ke Sun, Yuanchun Shi

This paper proposes a novel Bayesian model to enable ten-finger freehand typing in the air based on an empirical study of users’ air typing behavior.


Protopiper: Physically Sketching Room-Sized Objects at Actual Scale

Harshit Agrawal, Udayan Umapathi, Robert Kovacs, Johannes Frohnhofen, Hsiang-Ting Chen, Stefanie Mueller, Patrick Baudisch

Protopiper is a hand-held, computer aided fabrication device for prototyping room-sized objects at actual scale. It forms adhesive tape into tubes as its main building material.


Leveraging Dual-Observable Input for Fine-Grained Thumb Interaction Using Forearm EMG

Donny Huang, Xiaoyi Zhang, T. Scott Saponas, James Fogarty, Shyamnath Gollakota

We introduce the first forearm-based EMG input system that can recognize fine-grained thumb gestures, as trained through dual-observable input during a person's normal interactions with their phone.


SensorTape: Modular and Programmable 3D-Aware Dense Sensor Network on a Tape

Artem Dementyev, Hsin-Liu (Cindy) Kao, Joseph A. Paradiso

We present SensorTape, a sensor network in the shape of a tape. SensorTape can sense its deformations and proximity, and can be cut and rejoined.


Makers’ Marks: Physical Markup for Designing and Fabricating Functional Objects

Valkyrie A Savage, Sean Follmer, Jingyi Li, Björn Hartmann

With Makers' Marks, users tangibly design interactive devices, annotating locations for functional components. Makers' Marks identifies these annotations and automatically generates printable files that include mounting geometry for the indicated components.


Codo: Fundraising with Conditional Donations

Juan Felipe Beltran, Aysha Siddique, Azza Abouzied, Jay Chen

In this work, we investigate the idea of empowering donors by allowing them to specify conditions for their crowdfunding contribution: when these conditions are met, Codo accepts their contribution.


Sensing Tablet Grasp + Micro-mobility for Active Reading

Dongwook Yoon, Ken Hinckley, Hrvoje Benko, François V Guimbretière, Pourang P Irani, Michel Pahud, Marcel Gavriliu

Sensing how users grasp a tablet, as well as its fine-grained motions—known as micro-mobility—yields a design space of interactions for both collaborative and individual tasks in active reading.


Printem: Instant Printed Circuit Boards with Standard Office Printers & Inks

Varun Perumal C, Daniel Wigdor

A fabrication tool that enables end-users to quickly produce copper PCBs using standard office printers and inks.


LaserStacker: Fabricating 3D Objects by Laser Cutting and Welding

Udayan Umapathi, Hsiang-Ting Chen, Stefanie Mueller, Ludwig Wall, Anna Seufert, Patrick Baudisch

LaserStacker allows users to fabricate 3D objects with an ordinary laser cutter through a cut-weld-heal-release process.