Devices are often unable to distinguish between intended and unintended touch. We present a data collection experiment where natural tablet and stylus behaviors were analyzed from both digitizer and behavioral perspectives.
We describe (a) techniques for automated generation of feedback for automata construction problems and (b) the evaluation of their effectiveness by conducting a user study with 377 student participants.
Impacto is a wearable device that renders impact sensations in virtual reality by decomposing the stimulus: it taps the skin with a solenoid (tactile) and moves the body via electrical muscle stimulation (impulse).
Introduces a new multitouch interaction space called “pin-and-cross” where one or more static touches (“pins”) are combined with crossing a radial target, all performed with one hand.
We analyse the calibration drift issue of head-mounted eye trackers, and present a robust method for automatic self-calibration based on a computational model of bottom-up visual saliency.
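The drift-correction idea can be sketched in a few lines. Assuming gaze estimates and a per-frame saliency peak are already available (a strong simplification of the paper's full saliency model), a constant 2D calibration offset can be recovered robustly with a per-axis median:

```python
from statistics import median

def estimate_drift(gaze_points, saliency_peaks):
    """Estimate a constant 2D calibration offset as the per-axis median of
    displacements from each gaze sample to the same frame's saliency peak.
    The median keeps the estimate robust on frames where the user was not
    actually looking at the most salient region."""
    dx = [sx - gx for (gx, gy), (sx, sy) in zip(gaze_points, saliency_peaks)]
    dy = [sy - gy for (gx, gy), (sx, sy) in zip(gaze_points, saliency_peaks)]
    return median(dx), median(dy)

def correct(gaze_points, offset):
    """Apply the estimated offset to subsequent gaze estimates."""
    ox, oy = offset
    return [(gx + ox, gy + oy) for gx, gy in gaze_points]
```

This is a minimal illustration of self-calibration by aligning gaze to saliency; the function names and the constant-offset drift model are assumptions, not the paper's implementation.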
Introduces shareable dynamic media: collaborative content that blurs the distinction between applications and documents. Presents Webstrates, web-based dynamic media that supports real-time sharing across devices and end-user malleability.
We describe a Brain-Computer Interface (BCI) that uses fNIRS to measure prefrontal cortex activity, which correlates with the affective dimension of approach. Our experiments confirm its potential for affective BCI.
This paper describes a method to enable intuitive exploration of high dimensional procedural modeling spaces within a lower dimensional space learned through autoencoder network training.
We introduce candid interaction, techniques for providing awareness about mobile and wearable device activity to others. We define a detailed design space which we explore with several novel prototypes.
CyclopsRing is a ring-style fisheye imaging wearable device worn on hand webbings. Benefiting from the egocentric and extremely wide field-of-view, CyclopsRing can enable many whole-hand and context-aware interactions.
We introduce "bidirectional fabrication," which lets users design objects alternately in physical and digital space. To this end, we combine a custom CNC milling machine, a clay 3D printer, a 3D scanner, and an augmented-reality user interface.
We explore the design space of actuated curve interfaces, and propose applications such as mobile devices, shape changing cords, and dynamic body constraints, through prototypes based on serpentine robotics technology.
We demonstrate how a shape display can assemble 3D structures from passive building blocks and introduce special modules that extend a shape display's input and output capabilities.
Presents a simple method for measuring end-to-end latency. This method allows accurate and real-time latency measures up to 5 times per second.
cLuster is a flexible, domain-independent clustering approach for free-hand sketches. It considers different user expectations based on an initial manual selection for faster structuring and rearrangement of sketched content.
FlexiBend is an easily installable shape-sensing strip that enables interactivity in multi-part, deformable fabrications. The flexible strip is composed of a dense linear array of strain gauges, giving it shape-sensing capability.
We propose Push-Push, a new drag-like operation that overlaps with page transition operations on touch interfaces. We explored alternative design options and showed the feasibility of Push-Push in two experiments.
We seamlessly authenticate users during each touch. Our prototype Bioamp senses biometric features and modulates a signal onto the user's body, which commodity touchscreens sense upon touch and identify users.
FlashProg features program navigation and conversational clarification for disambiguating Programming by Example in data manipulation. It helps reduce the number of errors and increases the user's confidence.
Gaze-shifting is a technique where a device's direct/indirect input is dynamically modulated by gaze. The paper illustrates the utility of the technique through example pen and touch interactions.
We present a fast and reliable interaction method that makes it possible to click small targets using only gaze.
BackHand is a prototype that explores a new signal source, the back of the hand, for hand gesture recognition. We identified the most promising location for sensing multiple gestures at high accuracy.
TurkDeck allows prop-based virtual reality to scale to arbitrarily large virtual worlds in finite space and with a finite amount of props by using human actuators to re-arrange props.
Our system, Narration Coach, helps novice voiceover actors record and improve narrations. The tool matches recordings to a script and provides feedback and automatic resynthesis for common speaking problems.
Our work describes the CAAT, a widget for assessing emotional reactions, and a study demonstrating its viability along several important dimensions.
PERCs use the touch-probing signal from capacitive touch screens to determine whether tangibles are on- or off-screen. This enables robust capacitive tangible widgets regardless of whether they are touched.
To minimize material consumption and to reduce waste during design iteration, we patch existing objects rather than reprinting them from scratch.
Unravel extends the Chrome Developer Tools to help developers discover how UI features work through HTML change observation, JavaScript execution tracing, and library detection.
We present Codeopticon, an interface that enables a computer programming tutor to monitor and chat with dozens of learners in real time, which is useful in large classes and MOOCs.
Tactile animation enables artists to quickly and easily create rich haptic sensations on body-sized tactile displays. We describe the design and implementation of our tool, evaluated with professional animators.
We develop and formally evaluate a metaphor for smartphone interaction with 3D environments: Tiltcasting. Users interact within a rotatable 2D plane that is ‘cast’ from their phone’s interactive display into 3D space.
uniMorph is an enabling technology for rapid digital fabrication of customized thin film shape-changing interfaces.
Scry visualizes web interface changes on a timeline; a user selects two snapshots to see state differences between them; and can jump to the responsible JavaScript code.
We demonstrate that motion just prior to a smartwatch vibration stimulus, as measured by the watch's accelerometer, significantly improves a model that predicts whether the vibration will be perceived.
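The modeling step can be illustrated with a toy classifier. Below, a plain-Python logistic regression is trained on a hypothetical pre-stimulus motion feature (e.g., accelerometer variance just before the buzz) to predict whether the vibration is perceived; the data, feature, and hyperparameters are illustrative, not the paper's:

```python
import math

def train_logreg(X, y, lr=0.5, epochs=2000):
    """Batch gradient descent on logistic loss. Each row of X is a feature
    vector (here: one hypothetical pre-stimulus motion feature); y is 1 if
    the user perceived the vibration, 0 if they missed it."""
    w = [0.0] * (len(X[0]) + 1)          # bias followed by weights
    for _ in range(epochs):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - yi
            grad[0] += err
            for j, xj in enumerate(xi):
                grad[j + 1] += err * xj
        w = [wj - lr * g / len(X) for wj, g in zip(w, grad)]
    return w

def predict(w, xi):
    """Probability that a vibration with features xi will be perceived."""
    z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
    return 1.0 / (1.0 + math.exp(-z))
```

On toy data where high pre-stimulus motion co-occurs with missed buzzes, the model learns a negative weight on the motion feature, matching the paper's observation that prior motion predicts missed vibrations.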
Tracko is a 3D tracking system between two or more mobile devices. Tracko requires no added components or cross-device synchronization. Tracko fuses inaudible stereo signals, Bluetooth low energy, and IMU sensing to achieve 3D tracking.
We present RevoMaker, a multi-directional 3D printing process that creates direct out-of-the-printer functional prototypes.
We demonstrate 5D pen input for light field displays with joint or through-the-lens sensing, with pen precision equal to the human hand.
We explore a natural language interface for data visualization. To address language ambiguity, we use a mixed-initiative approach that combines algorithmic disambiguation and interactive ambiguity widgets.
GazeProjector is an approach for accurate gaze estimation and seamless interaction with multiple displays using mobile eye trackers. It requires only a single a priori calibration, performed on an arbitrary display.
We present Tomo, a wearable, low-cost system using Electrical Impedance Tomography (EIT) to recover the interior impedance geometry of a user’s arm. We use this system for hand gesture recognition.
We propose a novel sensing approach for passive object detection. Our technique utilizes low-cost, commodity hardware and is small enough to be worn on the wrist.
Presents interaction techniques for exploring alternative commands before, during, and after an incorrect command has been executed. Can help novice users of feature-rich software to avoid false-feedforward errors.
We present MoveableMaker, a novel software tool that assists with the design, generation, and assembly of eleven different types of moveable papercraft. Preliminary workshops illustrated that MoveableMaker supports fun and creativity.
We introduce a technique for furbricating 3D printed hair, fibers, and bristles by exploiting the stringing phenomenon inherent in 3D printers that use fused deposition modeling.
Corona is a novel spatial sensing technique that implicitly locates adjacent mobile devices in the same plane by examining asymmetric Bluetooth Low Energy RSSI distributions.
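The key signal can be sketched with a simple statistic. Assuming each device has collected RSSI samples from the other's BLE advertisements, a skewness proxy (Pearson's second coefficient) quantifies the distribution asymmetry that Corona exploits; the statistic choice and any thresholds are assumptions for illustration:

```python
from statistics import mean, median

def rssi_asymmetry(samples):
    """Skewness proxy for an RSSI sample distribution:
    (mean - median) / spread. Positive values indicate a tail toward
    stronger readings, negative values a tail toward weaker ones.
    Mapping the sign and magnitude to relative device placement is
    left out here."""
    spread = max(samples) - min(samples) or 1   # avoid division by zero
    return (mean(samples) - median(samples)) / spread
```

A symmetric distribution scores 0, while tails toward strong or weak readings score positive or negative, giving a coarse directional cue.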
In this work, we present Projectibles which use a spatially varying display surface color for video projection, greatly increasing the contrast and resolution of the display.
GelTouch is a thin gel-based layer that can selectively transition between soft and stiff (up to 25 times stiffer) to provide multi-touch tactile feedback.
SmartTokens are small-sized tangible tokens that can sense multiple types of motions and grips, and transmit events wirelessly. We describe their design and illustrate possible usage scenarios.
Introduces Gunslinger, a mid-air barehand interaction technique using thigh-mounted sensors and hand postures to explore an ‘arms down’ body stance. This makes input more subtle and more compatible with touch input on large displays.
We contribute a concept and implementation for guiding users to a particular location in front of large displays using on-screen visual cues and discuss implications for designers.
This paper discusses how to improve virtual keyboards if all finger positions are known. We then implement a novel virtual keyboard design that outperforms existing virtual keyboards.
Discover HapticPrint, a haptic design tool for adding tactile cues, compliance, and weight to 3D printed artifacts.
FoveAR is a novel augmented reality display that combines optical see-through near-eye glasses with a projection-based spatial display to extend the user's field of view and immersion in the mixed reality experience.
Foldio is a new design and fabrication approach for custom interactive objects. The user defines a 3D model and assigns interactive controls; a fold layout containing printable electronics is auto-generated.
Current traditional feedback methods, such as hand-grading student code for style, are labor intensive and do not scale. Our UI lets teachers give feedback on students’ variable names at scale.
We present a method for smoother interactive orbiting, a new way to avoid collisions with surrounding objects during orbiting, and a new method for multi-scale orbiting, i.e., orbiting of object groups.
We present VR and AR techniques that let a remote expert use virtual replicas of physical objects to guide a local user to perform 6DOF tasks through demonstrations and annotations.
Why does 3D printing have to start from scratch? Our techniques can print new attachments directly over, affixed to, or through existing objects, avoiding unnecessary replacement and reprinting.
Orbits is a hands-free, calibration-free gaze interaction technique for smartwatches that uses smooth-pursuit eye movements. We present three studies that demonstrate the technique's robustness and applicability.
We present NanoStylus – a finger-mounted fine-tip stylus that enables fast and accurate pointing on a smartwatch with almost no occlusion.
Tagging email helps users manage information overload. Machine learning can make this task easier by automatically predicting tags. Leveraging the user's implicit feedback produces important increases in tag prediction performance.
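The feedback loop can be sketched with a toy model. Here a minimal multinomial Naive Bayes predictor folds the user's accepted or corrected tag back into its counts, standing in for learning from implicit feedback; the class name, smoothing, and features are illustrative assumptions, not the paper's model:

```python
import math
from collections import defaultdict

class TagPredictor:
    """Toy Naive Bayes over email words with add-one smoothing."""

    def __init__(self):
        self.tag_counts = defaultdict(int)
        self.word_counts = defaultdict(lambda: defaultdict(int))

    def feedback(self, words, tag):
        """Fold an accepted or corrected tag back into the model,
        mimicking learning from the user's implicit feedback."""
        self.tag_counts[tag] += 1
        for w in words:
            self.word_counts[tag][w] += 1

    def predict(self, words):
        """Return the most probable tag for a new email's words."""
        best, best_lp = None, -math.inf
        total = sum(self.tag_counts.values())
        for tag, n in self.tag_counts.items():
            lp = math.log(n / total)
            denom = sum(self.word_counts[tag].values()) + len(words)
            for w in words:
                lp += math.log((self.word_counts[tag][w] + 1) / denom)
            if lp > best_lp:
                best, best_lp = tag, lp
        return best
```

Each call to `feedback` sharpens subsequent predictions, which is the mechanism behind the reported gains from implicit feedback.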
We introduce SceneSkim, a tool for searching and browsing movies using synchronized captions, scripts and plot summaries.
We present Capricate, a fabrication pipeline that enables users to easily design and 3D print customized objects with embedded capacitive multi-touch sensing. Objects are printed in a single pass using a commodity multi-material 3D printer.
This paper proposes a novel Bayesian model to enable ten-finger freehand typing in the air based on an empirical study of users’ air typing behavior.
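The Bayesian decoding step can be illustrated in one dimension: the decoded key maximizes P(observation | key) x P(key), with a Gaussian landing model per key and a unigram prior. The layout, prior, and spread below are hypothetical; the paper's model is richer (per-finger 3D distributions learned from users' air-typing behavior):

```python
import math

# Hypothetical 1D key centres and unigram prior.
KEYS = {"a": 0.0, "s": 1.0, "d": 2.0, "f": 3.0}
PRIOR = {"a": 0.4, "s": 0.2, "d": 0.2, "f": 0.2}
SIGMA = 0.7  # assumed spread of fingertip landing positions

def decode(x):
    """MAP estimate: argmax over keys of Gaussian likelihood times prior."""
    def score(k):
        mu = KEYS[k]
        likelihood = math.exp(-((x - mu) ** 2) / (2 * SIGMA ** 2))
        return likelihood * PRIOR[k]
    return max(KEYS, key=score)
```

Note that a landing point midway between two keys is pulled toward the key with the higher prior, which is what lets the model tolerate the imprecision of freehand typing in the air.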
Protopiper is a hand-held, computer-aided fabrication device for prototyping room-sized objects at actual scale. It forms adhesive tape into tubes as its main building material.
We introduce the first forearm-based EMG input system that can recognize fine-grained thumb gestures, as trained through dual-observable input during a person's normal interactions with their phone.
We present SensorTape, a sensor network in the shape of a tape. SensorTape can sense its deformations and proximity, and can be cut and rejoined.
With Makers' Marks, users tangibly design interactive devices by annotating locations for functional components. Makers' Marks identifies the annotations and automatically generates printable files that include mounting geometry for the indicated components.
In this work, we investigate the idea of empowering donors by allowing them to specify conditions for their crowdfunding contribution: when these conditions are met, Codo accepts their contribution.
Sensing how users grasp a tablet, as well as its fine-grained motions—known as micro-mobility—yields a design space of interactions for both collaborative and individual tasks in active reading.
A fabrication tool that enables end-users to quickly produce copper PCBs using standard office printers and inks.
LaserStacker allows users to fabricate 3D objects with an ordinary laser cutter through a cut-weld-heal-release process.