Keywords
UIST2.0 Archive - 20 years of UIST

interaction

3d interaction

In Proceedings of UIST 1995

3-dimensional pliable surfaces: for the effective presentation of visual information (p. 217-226)

In Proceedings of UIST 2001

A framework for unifying presentation space (p. 61-70)

In Proceedings of UIST 2004

Multi-finger gestural interaction with 3d volumetric displays (p. 61-70)

In Proceedings of UIST 2006

The design and evaluation of selection techniques for 3D volumetric displays (p. 3-12)

3d interaction feedback

In Proceedings of UIST 1996

Penumbrae for 3D interactions (p. 165-166)

3d interaction technique

3d user interaction

In Proceedings of UIST 2005

Supporting interaction in augmented reality in the presence of uncertain spatial knowledge (p. 111-114)

ad hoc interaction

In Proceedings of UIST 2008

Scratch input: creating large, inexpensive, unpowered and mobile finger input surfaces (p. 205-208)

Abstract

We present Scratch Input, an acoustic-based input technique that relies on the unique sound produced when a fingernail is dragged over the surface of a textured material, such as wood, fabric, or wall paint. We employ a simple sensor that can be easily coupled with existing surfaces, such as walls and tables, turning them into large, unpowered and ad hoc finger input surfaces. Our sensor is sufficiently small that it could be incorporated into a mobile device, allowing any suitable surface on which it rests to be appropriated as a gestural input surface. Several example applications were developed to demonstrate possible interactions. We conclude with a study that shows users can perform six Scratch Input gestures at about 90% accuracy with less than five minutes of training and on a wide variety of surfaces.
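
The abstract names the signal (a fingernail dragged across a textured surface) but not the recognizer, so the following is a minimal Python sketch of one plausible approach: smooth the audio into an amplitude envelope and count bursts of energy as strokes. All function names and thresholds are hypothetical, not from the paper.

```python
# Hypothetical amplitude-envelope gesture counter in the spirit of
# Scratch Input; features and thresholds are illustrative assumptions.
from statistics import mean

def envelope(samples, window=64):
    """Rectify and smooth raw audio into an amplitude envelope."""
    rect = [abs(s) for s in samples]
    return [mean(rect[i:i + window]) for i in range(0, len(rect), window)]

def count_strokes(samples, threshold=0.1):
    """Count contiguous bursts of energy above a noise threshold;
    each burst is taken to be one scratch stroke."""
    strokes, in_burst = 0, False
    for level in envelope(samples):
        if level > threshold and not in_burst:
            strokes, in_burst = strokes + 1, True
        elif level <= threshold:
            in_burst = False
    return strokes

def classify(samples):
    """Map stroke counts to a toy gesture vocabulary."""
    return {1: "single swipe", 2: "double swipe", 3: "triple swipe"}.get(
        count_strokes(samples), "unknown")
```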

advanced interaction technique

In Proceedings of UIST 2000

The architecture and implementation of CPN2000, a post-WIMP graphical application (p. 181-190)

ambient interaction

In Proceedings of UIST 2009

Bonfire: a nomadic system for hybrid laptop-tabletop interaction (p. 129-138)

Abstract

We present Bonfire, a self-contained mobile computing system that uses two laptop-mounted laser micro-projectors to project an interactive display space to either side of a laptop keyboard. Coupled with each micro-projector is a camera to enable hand gesture tracking, object recognition, and information transfer within the projected space. Thus, Bonfire is neither a pure laptop system nor a pure tabletop system, but an integration of the two into one new nomadic computing platform. This integration (1) enables observing the periphery and responding appropriately, e.g., to the casual placement of objects within its field of view, (2) enables integration between physical and digital objects via computer vision, (3) provides a horizontal surface in tandem with the usual vertical laptop display, allowing direct pointing and gestures, and (4) enlarges the input/output space to enrich existing applications. We describe Bonfire's architecture, and offer scenarios that highlight Bonfire's advantages. We also include lessons learned and insights for further development and use.

asynchronous interaction

In Proceedings of UIST 1997

Designing and implementing asynchronous collaborative applications with Bayou (p. 119-128)

auditory interaction

bimanual interaction

In Proceedings of UIST 2003

A molecular architecture for creating advanced GUIs (p. 135-144)

In Proceedings of UIST 2006

Robust computer vision-based detection of pinching for one and two-handed gesture input (p. 255-258)

continuous interaction

In Proceedings of UIST 2005

Informal prototyping of continuous graphical interactions by demonstration (p. 221-230)

cross-modal interaction

In Proceedings of UIST 2000

Cross-modal interaction using XWeb (p. 191-200)

direct touch interaction

eye free interaction

In Proceedings of UIST 2010

SqueezeBlock: using virtual springs in mobile devices for eyes-free interaction (p. 101-104)

Abstract

Haptic feedback provides an additional interaction channel when auditory and visual feedback may not be appropriate. We present a novel haptic feedback system that changes its elasticity to convey information for eyes-free interaction. SqueezeBlock is an electro-mechanical system that can realize a virtual spring having a programmatically controlled spring constant. It also allows for additional haptic modalities by altering the Hooke's Law linear-elastic force-displacement equation, such as non-linear springs, size changes, and spring length (range of motion) variations. This ability to program arbitrary spring constants also allows for "click" and button-like feedback. We present several potential applications along with results from a study showing how well participants can distinguish between several levels of stiffness, size, and range of motion. We conclude with implications for interaction design.
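
As a worked example of the Hooke's Law relationship the abstract alters, here is a small Python model of a virtual spring with a programmable spring constant, plus a non-linear "hardening" variant of the kind the authors hint at. The class names and constants are ours, not the paper's.

```python
# Illustrative model of a programmable virtual spring. Per Hooke's law
# F = k * x, the resisting force grows linearly with displacement;
# changing k in software changes the perceived stiffness.

class VirtualSpring:
    def __init__(self, k=200.0, rest_length_mm=10.0):
        self.k = k                        # spring constant, N/m
        self.rest_length_mm = rest_length_mm

    def force(self, squeeze_mm):
        """Linear-elastic restoring force for a given squeeze depth."""
        x = min(squeeze_mm, self.rest_length_mm) / 1000.0  # mm -> m
        return self.k * x

class HardeningSpring(VirtualSpring):
    def force(self, squeeze_mm):
        """Non-linear variant: stiffness grows with displacement, one of
        the extra haptic modalities the abstract mentions."""
        x = min(squeeze_mm, self.rest_length_mm) / 1000.0
        return self.k * x + 5e4 * x ** 3

soft, stiff = VirtualSpring(k=100.0), VirtualSpring(k=800.0)
print(soft.force(5.0), stiff.force(5.0))   # 0.5 N vs 4.0 N at 5 mm
```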

eyes-free interaction

In Proceedings of UIST 2010

Sensing foot gestures from the pocket (p. 199-208)

Abstract

Visually demanding interfaces on a mobile phone can diminish the user experience by monopolizing the user's attention when they are focusing on another task and impede accessibility for visually impaired users. Because mobile devices are often located in pockets when users are mobile, explicit foot movements can be defined as eyes-and-hands-free input gestures for interacting with the device. In this work, we study the human capability associated with performing foot-based interactions which involve lifting and rotation of the foot when pivoting on the toe and heel. Building upon these results, we then developed a system to learn and recognize foot gestures using a single commodity mobile phone placed in the user's pocket or in a holster on their hip. Our system uses acceleration data recorded by a built-in accelerometer on the mobile device and a machine learning approach to recognizing gestures. Through a lab study, we demonstrate that our system can classify ten different foot gestures at approximately 86% accuracy.
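
The abstract specifies accelerometer data plus a machine-learning recognizer without naming features or model; the sketch below, assuming windowed per-axis statistics and an off-the-shelf SVM (scikit-learn), only illustrates the general shape such a pipeline could take.

```python
# Hedged sketch: window the phone's accelerometer stream, extract simple
# per-axis statistics, and train a classifier on labeled gesture windows.
# Features, model, and the stand-in data are our assumptions.
import numpy as np
from sklearn.svm import SVC

def features(window):
    """window: (n_samples, 3) accelerometer readings -> fixed-size vector."""
    return np.concatenate([window.mean(axis=0),
                           window.std(axis=0),
                           np.abs(np.diff(window, axis=0)).mean(axis=0)])

rng = np.random.default_rng(0)
# Stand-in training data: 40 windows for each of two foot gestures.
X = [features(rng.normal(loc=g, size=(50, 3))) for g in (0.0, 1.0) for _ in range(40)]
y = [g for g in ("heel rotation", "toe lift") for _ in range(40)]

clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict([features(rng.normal(loc=1.0, size=(50, 3)))]))
```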

fluid interaction

In Proceedings of UIST 2004

CrossY: a crossing-based drawing application (p. 3-12)

fluid interaction technique

In Proceedings of UIST 2003

Fluid interaction techniques for the control and annotation of digital video (p. 105-114)

freeform interaction

In Proceedings of UIST 1998

A dynamic grouping technique for ink and audio notes (p. 195-202)

gestural interaction

In Proceedings of UIST 2004

Combining crossing-based and paper-based interaction paradigms for dragging and dropping between overlapping windows (p. 193-196)

gesture-based interaction

In Proceedings of UIST 2010

Gesture search: a tool for fast mobile data access (p. 87-96)

Abstract

Modern mobile phones can store a large amount of data, such as contacts, applications and music. However, it is difficult to access specific data items via existing mobile user interfaces. In this paper, we present Gesture Search, a tool that allows a user to quickly access various data items on a mobile phone by drawing gestures on its touch screen. Gesture Search contributes a unique way of combining gesture-based interaction and search for fast mobile data access. It also demonstrates a novel approach for coupling gestures with standard GUI interaction. A real world deployment with mobile phone users showed that Gesture Search enabled fast, easy access to mobile data in their day-to-day lives. Gesture Search has been released to the public and is currently in use by hundreds of thousands of mobile users. It was rated positively by users, with a mean of 4.5 out of 5 for over 5000 ratings.

hands-free interaction

In Proceedings of UIST 2007

Blui: low-cost localized blowable user interfaces (p. 217-220)

Abstract

We describe a unique form of hands-free interaction that can be implemented on most commodity computing platforms. Our approach supports blowing at a laptop or computer screen to directly control certain interactive applications. Localization estimates are produced in real-time to determine where on the screen the person is blowing. Our approach relies solely on a single microphone, such as those already embedded in a standard laptop or one placed near a computer monitor, which makes our approach very cost-effective and easy-to-deploy. We show example interaction techniques that leverage this approach.
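
The abstract does not describe how localization from a single microphone works; one hedged guess is a per-region acoustic calibration followed by nearest-neighbor matching, sketched below in Python with purely illustrative features.

```python
# Sketch of one plausible localization scheme: calibrate a feature vector
# per screen region from training blows, then classify a new blow by
# nearest neighbor. Blui's actual estimator is not detailed here.
import math

def feature(samples):
    """Toy acoustic feature: RMS level and a crude high-frequency measure."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    diff = sum(abs(b - a) for a, b in zip(samples, samples[1:])) / len(samples)
    return (rms, diff)

def locate(blow, calibration):
    """calibration: {region_name: feature tuple} from a training pass."""
    fx, fy = feature(blow)
    return min(calibration,
               key=lambda r: (calibration[r][0] - fx) ** 2 +
                             (calibration[r][1] - fy) ** 2)
```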

In Proceedings of UIST 2010

Sensing foot gestures from the pocket (p. 199-208)

Abstract

Visually demanding interfaces on a mobile phone can diminish the user experience by monopolizing the user's attention when they are focusing on another task and impede accessibility for visually impaired users. Because mobile devices are often located in pockets when users are mobile, explicit foot movements can be defined as eyes-and-hands-free input gestures for interacting with the device. In this work, we study the human capability associated with performing foot-based interactions which involve lifting and rotation of the foot when pivoting on the toe and heel. Building upon these results, we then developed a system to learn and recognize foot gestures using a single commodity mobile phone placed in the user's pocket or in a holster on their hip. Our system uses acceleration data recorded by a built-in accelerometer on the mobile device and a machine learning approach to recognizing gestures. Through a lab study, we demonstrate that our system can classify ten different foot gestures at approximately 86% accuracy.

haptic interaction

In Proceedings of UIST 2006

Procedural haptic texture (p. 179-186)

high interaction

In Proceedings of UIST 1994

Data visualization sliders (p. 119-120)

home-network interaction

In Proceedings of UIST 2010

Eden: supporting home network management through interactive visual tools (p. 109-118)

Abstract

As networking moves into the home, home users are increasingly being faced with complex network management chores. Previous research, however, has demonstrated the difficulty many users have in managing their networks. This difficulty is compounded by the fact that advanced network management tools - such as those developed for the enterprise - are generally too complex for home users, do not support the common tasks they face, and are not a good fit for the technical peculiarities of the home. This paper presents Eden, an interactive, direct manipulation home network management system aimed at end users. Eden supports a range of common tasks, and provides a simple conceptual model that can help users understand key aspects of networking better. The system leverages a novel home network router that acts as a "drop-in" replacement for users' current router. We demonstrate that Eden not only improves the user experience of networking, but also aids users in forming workable conceptual models of how the network works.

human computer interaction (hci)

human robot interaction

In Proceedings of UIST 2008

Living better with robots (p. 209-210)

Abstract

The emerging field of Human-Robot Interaction is undergoing rapid growth, motivated by important societal challenges and new applications for personal robotic technologies for the general public. In this talk, I highlight several projects from my research group to illustrate recent research trends in developing socially interactive robots that work and learn with people as partners. An important goal of this work is to use interactive robots as a scientific tool to understand human behavior, to explore the role of physical embodiment in interactive technology, and to use these insights to design robotic technologies that can enhance human performance and quality of life. Throughout the talk I will highlight synergies with HCI and connect HRI research goals to specific applications in healthcare, education, and communication.

human-computer interaction

information interaction design

In Proceedings of UIST 1994

Galaxy of news: an approach to visualizing and understanding expansive news landscapes (p. 3-12)

input and interaction technology

In Proceedings of UIST 2004

SketchREAD: a multi-domain sketch recognition engine (p. 23-32)

instrumental interaction

In Proceedings of UIST 2000

The architecture and implementation of CPN2000, a post-WIMP graphical application (p. 181-190)

interaction

In Proceedings of UIST 1996

Aperture based selection for immersive virtual environments (p. 95-96)

In Proceedings of UIST 1996

Using the multi-layer model for building interactive graphical applications (p. 109-118)

In Proceedings of UIST 2001

TSI (teething ring sound instrument): a design of the sound instrument for the baby (p. 157-158)

In Proceedings of UIST 2001

Pop through mouse button interactions (p. 195-196)

In Proceedings of UIST 2005

Interacting with large displays from a distance with vision-tracked multi-finger gestural input (p. 43-52)

In Proceedings of UIST 2006

Content-aware scrolling (p. 155-158)

In Proceedings of UIST 2006

Interactive environment-aware display bubbles (p. 245-254)

In Proceedings of UIST 2007

Eyepatch: prototyping camera-based interaction through examples (p. 33-42)

Abstract

Cameras are a useful source of input for many interactive applications, but computer vision programming is difficult and requires specialized knowledge that is out of reach for many HCI practitioners. In an effort to learn what makes a useful computer vision design tool, we created Eyepatch, a tool for designing camera-based interactions, and evaluated the Eyepatch prototype through deployment to students in an HCI course. This paper describes the lessons we learned about making computer vision more accessible, while retaining enough power and flexibility to be useful in a wide variety of interaction scenarios.

In Proceedings of UIST 2009

Enabling always-available input with muscle-computer interfaces (p. 167-176)

Abstract

Previous work has demonstrated the viability of applying offline analysis to interpret forearm electromyography (EMG) and classify finger gestures on a physical surface. We extend those results to bring us closer to using muscle-computer interfaces for always-available input in real-world applications. We leverage existing taxonomies of natural human grips to develop a gesture set covering interaction in free space even when hands are busy with other objects. We present a system that classifies these gestures in real time, and we introduce a bimanual paradigm that enables use in interactive systems. We report experimental results demonstrating four-finger classification accuracies averaging 79% for pinching, 85% while holding a travel mug, and 88% when carrying a weighted bag. We further show generalizability across different arm postures and explore the tradeoffs of providing real-time visual feedback.
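
As a minimal sketch of the classification stage, assuming windowed forearm EMG with per-channel RMS energy as the feature (a common EMG choice) and a nearest-centroid rule standing in for the paper's unspecified real-time classifier:

```python
# Illustrative EMG gesture classifier: per-channel RMS features plus a
# nearest-centroid decision rule. Both choices are our assumptions.
import numpy as np

def rms_features(window):
    """window: (n_samples, n_channels) EMG -> per-channel RMS vector."""
    return np.sqrt((window ** 2).mean(axis=0))

def train(examples):
    """examples: {gesture: list of windows} -> {gesture: centroid}."""
    return {g: np.mean([rms_features(w) for w in ws], axis=0)
            for g, ws in examples.items()}

def classify(window, centroids):
    f = rms_features(window)
    return min(centroids, key=lambda g: np.linalg.norm(centroids[g] - f))
```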

interaction by demonstration

In Proceedings of UIST 1996

Inductive groups (p. 193-199)

interaction context

In Proceedings of UIST 1999

PeopleGarden: creating data portraits for users (p. 37-44)

interaction design

In Proceedings of UIST 2004

The MaggLite post-WIMP toolkit: draw it, connect it and run it (p. 257-266)

interaction device

In Proceedings of UIST 1997

A finger-mounted, direct pointing device for mobile computing (p. 41-42)

interaction lens

In Proceedings of UIST 2002

The missing link: augmenting biology laboratory notebooks (p. 41-50)

interaction metaphor

In Proceedings of UIST 1996

Head-tracked orbital viewing: an interaction technique for immersive virtual environments (p. 81-82)

interaction model

In Proceedings of UIST 1994

A mark-based interaction paradigm for free-hand drawing (p. 185-192)

In Proceedings of UIST 1999

Using properties for uniform interaction in the Presto document system (p. 55-64)

interaction on paper

In Proceedings of UIST 1999

Linking and messaging from real paper in the Paper PDA (p. 179-186)

interaction style

In Proceedings of UIST 1994

Extending a graphical toolkit for two-handed interaction (p. 195-204)

interaction technique

In Proceedings of UIST 1992

Progress in building user interface toolkits: the world according to XIT (p. 181-190)

In Proceedings of UIST 1993

A graphics toolkit based on differential constraints (p. 109-120)

In Proceedings of UIST 1994

Reconnaissance support for juggling multiple processing options (p. 27-28)

In Proceedings of UIST 1994

An architecture for an extensible 3D interface toolkit (p. 59-67)

In Proceedings of UIST 1994

Translucent patches---dissolving windows (p. 121-130)

In Proceedings of UIST 1995

Retrieving electronic documents with real-world objects on InteractiveDESK (p. 37-38)

In Proceedings of UIST 1996

Tilting operations for small screen interfaces (p. 167-168)

In Proceedings of UIST 1998

Path drawing for 3D walkthrough (p. 173-174)

In Proceedings of UIST 1999

The VideoMouse: a camera-based multi-degree-of-freedom input device (p. 103-112)

In Proceedings of UIST 1999

Integrated manipulation: context-aware manipulation of 2D diagrams (p. 159-160)

In Proceedings of UIST 1999

The role of kinesthetic reference frames in two-handed input performance (p. 171-178)

In Proceedings of UIST 1999

Linking and messaging from real paper in the Paper PDA (p. 179-186)

In Proceedings of UIST 2000

Sensing techniques for mobile interaction (p. 91-100)

In Proceedings of UIST 2000

The reading assistant: eye gaze triggered auditory prompting for reading remediation (p. 101-107)

In Proceedings of UIST 2000

ToolStone: effective use of the physical manipulation vocabularies of input devices (p. 109-117)

In Proceedings of UIST 2001

Voice as sound: using non-verbal voice input for interactive control (p. 155-156)

In Proceedings of UIST 2001

A suggestive interface for 3D drawing (p. 173-181)

In Proceedings of UIST 2002

StyleCam: interactive stylized 3D navigation using integrated spatial & temporal controls (p. 101-110)

In Proceedings of UIST 2002

TiltType: accelerometer-supported text entry for very small devices (p. 201-204)

In Proceedings of UIST 2002

WebThumb: interaction techniques for small-screen browsers (p. 205-208)

In Proceedings of UIST 2003

VisionWand: interaction techniques for large displays using a passive wand tracked in 3D (p. 173-182)

In Proceedings of UIST 2004

Navigating documents with the virtual scroll ring (p. 57-60)

In Proceedings of UIST 2004

A remote control interface for large displays (p. 127-136)

In Proceedings of UIST 2004

Interacting with hidden content using content-aware free-space transparency (p. 189-192)

In Proceedings of UIST 2004

The MaggLite post-WIMP toolkit: draw it, connect it and run it (p. 257-266)

In Proceedings of UIST 2006

Multi-layer interaction for digital tables (p. 269-272)

In Proceedings of UIST 2007

Shadow reaching: a new perspective on interaction for large displays (p. 53-56)

Abstract

We introduce Shadow Reaching, an interaction technique that makes use of a perspective projection applied to a shadow representation of a user. The technique was designed to facilitate manipulation over large distances and enhance understanding in collaborative settings. We describe three prototype implementations that illustrate the technique, examining the advantages of using shadows as an interaction metaphor to support single users and groups of collaborating users. Using these prototypes as a design probe, we discuss how the three components of the technique (sensing, modeling, and rendering) can be accomplished with real (physical) or computed (virtual) shadows, and the benefits and drawbacks of each approach.
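
For the computed ("virtual") shadow case, the perspective projection reduces to intersecting a ray from a point light through a tracked body point with the display plane. The toy geometry below, with coordinates and a single-light model as our simplifying assumptions, shows why a light far behind the user extends reach.

```python
# Geometry sketch of a computed shadow: cast a ray from a point light
# through a tracked body point and intersect it with the display plane
# z = 0. Coordinates are metres; all values are illustrative.

def shadow_point(light, body):
    """light, body: (x, y, z) with z = distance from the display plane.
    Returns the (x, y) shadow position on the plane z = 0."""
    lx, ly, lz = light
    bx, by, bz = body
    t = lz / (lz - bz)          # parameter where the ray meets z = 0
    return (lx + t * (bx - lx), ly + t * (by - ly))

# A hand at z = 1 m, lit from 4 m behind the display's normal axis,
# projects well beyond its physical position on the large display.
print(shadow_point(light=(0.0, 1.5, 4.0), body=(0.5, 1.2, 1.0)))
```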

In Proceedings of UIST 2007

Blui: low-cost localized blowable user interfaces (p. 217-220)

Abstract

We describe a unique form of hands-free interaction that can be implemented on most commodity computing platforms. Our approach supports blowing at a laptop or computer screen to directly control certain interactive applications. Localization estimates are produced in real-time to determine where on the screen the person is blowing. Our approach relies solely on a single microphone, such as those already embedded in a standard laptop or one placed near a computer monitor, which makes our approach very cost-effective and easy-to-deploy. We show example interaction techniques that leverage this approach.

In Proceedings of UIST 2009

Disappearing mobile devices (p. 101-110)

Abstract

In this paper, we extrapolate the evolution of mobile devices in one specific direction, namely miniaturization. While we maintain the concept of a device that people are aware of and interact with intentionally, we envision that this concept can become small enough to allow invisible integration into arbitrary surfaces or human skin, and thus truly ubiquitous use. This outcome assumed, we investigate what technology would be most likely to provide the basis for these devices, what abilities such devices can be expected to have, and whether or not devices of that size can still allow for meaningful interaction. We survey candidate technologies, drill down on gesture-based interaction, and demonstrate how it can be adapted to the desired form factors. While the resulting devices offer only the bare minimum in feedback and only the most basic interactions, we demonstrate that simple applications remain possible. We complete our exploration with two studies in which we investigate the affordance of these devices more concretely, namely marking and text entry using a gesture alphabet.

In Proceedings of UIST 2009

Abracadabra: wireless, high-precision, and unpowered finger input for very small mobile devices (p. 121-124)

Abstract

We present Abracadabra, a magnetically driven input technique that offers users wireless, unpowered, high fidelity finger input for mobile devices with very small screens. By extending the input area to many times the size of the device's screen, our approach is able to offer a high C-D gain, enabling fine motor control. Additionally, screen occlusion can be reduced by moving interaction off of the display and into unused space around the device. We discuss several example applications as a proof of concept. Finally, results from our user study indicate radial targets as small as 16 degrees can achieve greater than 92% selection accuracy, outperforming comparable radial, touch-based finger input.
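
A hedged sketch of the sensing loop one could build around this idea: a 2-axis magnetometer bearing gives the finger's angle around the device, and a control-display (C-D) gain maps motion in the large surrounding space onto the small screen. All constants here are illustrative, not the paper's.

```python
# Illustrative magnetometer-to-cursor mapping with a C-D gain.
import math

CD_GAIN = 0.25          # <1: big physical motions -> small cursor motions
SCREEN_RADIUS_PX = 72   # assumed half-extent of a watch-sized display

def finger_angle(mag_x, mag_y):
    """Bearing of the magnetic finger around the device, in radians."""
    return math.atan2(mag_y, mag_x)

def cursor_position(angle, finger_radius_mm):
    """Map radial finger position to an on-screen point via C-D gain."""
    r = min(finger_radius_mm * CD_GAIN, SCREEN_RADIUS_PX)
    return (r * math.cos(angle), r * math.sin(angle))
```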

In Proceedings of UIST 2010

PhoneTouch: a technique for direct phone interaction on surfaces (p. 13-16)

Abstract

PhoneTouch is a novel technique for integration of mobile phones and interactive surfaces. The technique enables use of phones to select targets on the surface by direct touch, facilitating, for instance, pick&drop-style transfer of objects between phone and surface. The technique is based on separate detection of phone touch events by the surface, which determines the location of the touch, and by the phone, which contributes device identity. The device-level observations are merged based on correlation in time. We describe a proof-of-concept implementation of the technique, using vision for touch detection on the surface (including discrimination of finger versus phone touch) and acceleration features for detection by the phone.
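
The abstract does spell out the fusion rule: the surface supplies the touch location, the phone supplies device identity, and observations are merged by correlation in time. A minimal Python sketch of that matching step follows; the 50 ms window is an assumed constant, not a figure from the paper.

```python
# Merge surface touch events with phone bump events by timestamp.
MATCH_WINDOW_S = 0.05  # assumed maximum clock disagreement

def merge(surface_events, phone_events):
    """surface_events: [(t, (x, y))]; phone_events: [(t, phone_id)].
    Returns [(x, y, phone_id)] for temporally correlated pairs."""
    merged, used = [], set()
    for ts, pos in surface_events:
        candidates = [(abs(ts - tp), tp, pid)
                      for tp, pid in phone_events
                      if abs(ts - tp) <= MATCH_WINDOW_S and tp not in used]
        if candidates:
            _, tp, pid = min(candidates)   # closest in time wins
            used.add(tp)
            merged.append((*pos, pid))
    return merged

print(merge([(1.00, (120, 340)), (2.50, (600, 80))],
            [(1.02, "alice-phone"), (3.90, "bob-phone")]))
# -> [(120, 340, 'alice-phone')]
```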

interaction technology

In Proceedings of UIST 1995

An experimental evaluation of transparent user interface tools and information content (p. 81-90)

In Proceedings of UIST 2000

Dual touch: a two-handed interface for pen-based PDAs (p. 211-212)

interaction with gesture

In Proceedings of UIST 2004

A gesture-based authentication scheme for untrusted public terminals (p. 157-160)

interspecies interaction

In Proceedings of UIST 2005

Supporting interspecies social awareness: using peripheral displays for distributed pack awareness (p. 253-258)

lens interaction technique

In Proceedings of UIST 1997

Debugging lenses: a new class of transparent tools for user interface debugging (p. 179-187)

mark-based interaction

In Proceedings of UIST 1994

A mark-based interaction paradigm for free-hand drawing (p. 185-192)

mobile device interaction

In Proceedings of UIST 2008

SideSight: multi-"touch" interaction around small devices (p. 201-204)

Abstract

Interacting with mobile devices using touch can lead to fingers occluding valuable screen real estate. For the smallest devices, the idea of using a touch-enabled display is almost wholly impractical. In this paper we investigate sensing user touch around small screens like these. We describe a prototype device with infra-red (IR) proximity sensors embedded along each side and capable of detecting the presence and position of fingers in the adjacent regions. When this device is rested on a flat surface, such as a table or desk, the user can carry out single and multi-touch gestures using the space around the device. This gives a larger input space than would otherwise be possible, which may be used in conjunction with or instead of on-display touch input. Following a detailed description of our prototype, we discuss some of the interactions it affords.
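
A plausible detection step for a strip of IR proximity sensors is simple thresholding followed by blob centroids; the sensor pitch and threshold values below are invented for illustration, not SideSight's.

```python
# Group adjacent above-threshold sensors into blobs and report each
# blob's weighted centroid as a finger position along the device edge.

def fingers(readings, threshold=0.2, pitch_mm=10.0):
    """readings: reflectance per sensor along one edge, 0..1.
    Returns finger positions in mm along that edge."""
    positions, blob = [], []
    for i, r in enumerate(readings + [0.0]):     # sentinel ends last blob
        if r > threshold:
            blob.append((i, r))
        elif blob:
            weight = sum(v for _, v in blob)
            centroid = sum(i * v for i, v in blob) / weight
            positions.append(centroid * pitch_mm)
            blob = []
    return positions

print(fingers([0.0, 0.1, 0.6, 0.8, 0.1, 0.0, 0.5, 0.4]))
# -> two fingers, near 26 mm and 64 mm along the edge
```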

mobile interaction

In Proceedings of UIST 2000

Sensing techniques for mobile interaction (p. 91-100)

multi-layer interaction

In Proceedings of UIST 2006

Multi-layer interaction for digital tables (p. 269-272)

multi-user interaction

In Proceedings of UIST 2007

Multi-user interaction using handheld projectors (p. 43-52)

Abstract

Recent research on handheld projector interaction has expanded the display and interaction space of handheld devices by projecting information onto the physical environment around the user, but has mainly focused on single-user scenarios. We extend this prior single-user research to co-located multi-user interaction using multiple handheld projectors. We present a set of interaction techniques for supporting co-located collaboration with multiple handheld projectors, and discuss application scenarios enabled by them.

multi-user multi-hand interaction

In Proceedings of UIST 2004

Visual tracking of bare fingers for interactive surfaces (p. 119-122)

multimodal interaction

In Proceedings of UIST 1992

Two-handed gesture in multi-modal natural dialog (p. 7-14)

In Proceedings of UIST 1994

Extending a graphical toolkit for two-handed interaction (p. 195-204)

In Proceedings of UIST 2001

Join and capture: a model for nomadic interaction (p. 131-140)

In Proceedings of UIST 2005

Dial and see: tackling the voice menu navigation problem with cross-device user experience integration (p. 187-190)

music interaction

In Proceedings of UIST 2003

SmartMusicKIOSK: music listening station with chorus-search function (p. 31-40)

network interaction

In Proceedings of UIST 2000

Cross-modal interaction using XWeb (p. 191-200)

novel interaction technique

In Proceedings of UIST 2001

Cursive: a novel interaction technique for controlling expressive avatar gesture (p. 151-152)

pen based interaction

In Proceedings of UIST 2006

ModelCraft: capturing freehand annotations and edits on physical 3D models (p. 13-22)

physical interaction

In Proceedings of UIST 2002

The actuated workbench: computer-controlled actuation in tabletop tangible interfaces (p. 181-190)

In Proceedings of UIST 2005

Moveable interactive projected displays using projector based tracking (p. 63-72)

In Proceedings of UIST 2007

Hybrid infrared and visible light projection for location tracking (p. 57-60)

Abstract

A number of projects within the computer graphics, computer vision, and human-computer interaction communities have recognized the value of using projected structured light patterns for the purposes of doing range finding, location dependent data delivery, projector adaptation, or object discovery and tracking. However, most of the work exploring these concepts has relied on visible structured light patterns, resulting in a caustic visual experience. In this work, we present the first design and implementation of a high-resolution, scalable, general purpose invisible near-infrared projector that can be manufactured in a practical manner. This approach is compatible with simultaneous visible light projection and integrates well with future Digital Light Processing (DLP) projector designs -- the most common type of projector today. By unifying both the visible and non-visible pattern projection into a single device, we can greatly simplify the implementation and execution of interactive projection systems. Additionally, we can inherently provide location discovery and tracking capabilities that are unattainable using other approaches.

screen interaction

In Proceedings of UIST 2004

C-blink: a hue-difference-based light signal marker for large screen interaction via any mobile terminal (p. 147-156)

situationally appropriate interaction

In Proceedings of UIST 2008

Lightweight material detection for placement-aware mobile computing (p. 279-282)

Abstract

Numerous methods have been proposed that allow mobile devices to determine where they are located (e.g., home or office) and in some cases, predict what activity the user is currently engaged in (e.g., walking, sitting, or driving). While useful, this sensing currently only tells part of a much richer story. To allow devices to act most appropriately to the situation they are in, it would also be very helpful to know about their placement - for example, whether they are sitting on a desk, hidden in a drawer, placed in a pocket, or held in one's hand - as different device behaviors may be called for in each of these situations. In this paper, we describe a simple, small, and inexpensive multispectral optical sensor for identifying materials in proximity to a device. This information can be used in concert with, e.g., location information to estimate, for example, that the device is "sitting on the desk at home", or "in the pocket at work". This paper discusses several potential uses of this technology, as well as results from a two-part study, which indicates that this technique can detect placement at 94.4% accuracy with real-world placement sets.
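
One simple realization of multispectral material identification is nearest-neighbor matching against enrolled reflectance signatures. The sketch below uses made-up wavelengths and values for shape only; it is not the paper's classifier.

```python
# Label a multispectral reading by its nearest enrolled signature.
import math

SIGNATURES = {          # hypothetical reflectance at four wavelengths
    "desk (wood)":    (0.60, 0.55, 0.50, 0.45),
    "pocket (denim)": (0.20, 0.25, 0.30, 0.28),
    "hand (skin)":    (0.40, 0.45, 0.55, 0.60),
}

def classify(reading):
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(SIGNATURES, key=lambda m: dist(SIGNATURES[m], reading))

print(classify((0.22, 0.26, 0.29, 0.27)))   # -> 'pocket (denim)'
```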

subtle interaction

In Proceedings of UIST 2004

Interactive public ambient displays: transitioning from implicit to explicit, public to personal, interaction with multiple users (p. 137-146)

symmetric interaction

In Proceedings of UIST 2005

Bimanual and unimanual image alignment: an evaluation of mouse-based techniques (p. 123-131)

tabletop interaction

In Proceedings of UIST 2003

Multi-finger and whole hand gestural interaction techniques for multi-user tabletop displays (p. 193-202)

In Proceedings of UIST 2010

Madgets: actuating widgets on interactive tabletops (p. 293-302)

Abstract

We present a system for the actuation of tangible magnetic widgets (Madgets) on interactive tabletops. Our system combines electromagnetic actuation with fiber optic tracking to move and operate physical controls. The presented mechanism supports actuating complex tangibles that consist of multiple parts. A grid of optical fibers transmits marker positions past our actuation hardware to cameras below the table. We introduce a visual tracking algorithm that is able to detect objects and touches from the strongly sub-sampled video input of that grid. Six sample Madgets illustrate the capabilities of our approach, ranging from tangential movement and height actuation to inductive power transfer. Madgets combine the benefits of passive, untethered, and translucent tangibles with the ability to actuate them with multiple degrees of freedom.
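
As a toy illustration of planar movement on an electromagnet grid only (the paper's actuation of multi-part tangibles, height, and power transfer is far richer), the sketch below energizes, at each step, the coil one cell closer to the target position.

```python
# Energize the grid coil adjacent to a Madget-like magnet, stepping it
# toward a target cell. Real actuation would re-measure the tracked
# position each cycle; here the step is assumed to succeed.

def step_toward(pos, target):
    """pos, target: (col, row) grid cells. Returns the coil to energize."""
    dx = (target[0] > pos[0]) - (target[0] < pos[0])   # sign of delta
    dy = (target[1] > pos[1]) - (target[1] < pos[1])
    return (pos[0] + dx, pos[1] + dy)

pos, target = (2, 7), (5, 4)
while pos != target:
    pos = step_toward(pos, target)
    print("energize coil", pos)
```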

tangible interaction

In Proceedings of UIST 2006
Article Picture

ModelCraft: capturing freehand annotations and edits on physical 3D models (p. 13-22)

two-handed interaction

In Proceedings of UIST 1994
Article Picture

Extending a graphical toolkit for two-handed interaction (p. 195-204)

In Proceedings of UIST 1996

A new direct manipulation technique for aligning objects in drawing programs (p. 157-164)

In Proceedings of UIST 2005

Bimanual and unimanual image alignment: an evaluation of mouse-based techniques (p. 123-131)

user interaction

In Proceedings of UIST 2002

An annotated situation-awareness aid for augmented reality (p. 213-216)

video interaction

In Proceedings of UIST 2008

Video object annotation, navigation, and composition (p. 3-12)

Abstract

We explore the use of tracked 2D object motion to enable novel approaches to interacting with video. These include moving annotations, video navigation by direct manipulation of objects, and creating an image composite from multiple video frames. Features in the video are automatically tracked and grouped in an off-line preprocess that enables later interactive manipulation. Examples of annotations include speech and thought balloons, video graffiti, path arrows, video hyperlinks, and schematic storyboards. We also demonstrate a direct-manipulation interface for random frame access using spatial constraints, and a drag-and-drop interface for assembling still images from videos. Taken together, our tools can be employed in a variety of applications including film and video editing, visual tagging, and authoring rich media such as hyperlinked video.

whole hand interaction

In Proceedings of UIST 2005

Distant freehand pointing and clicking on very large, high resolution displays (p. 33-42)

whole-body interaction

In Proceedings of UIST 2010

Jogging over a distance between Europe and Australia (p. 189-198)

Abstract

Exertion activities, such as jogging, require users to invest intense physical effort and are associated with physical and social health benefits. Despite the benefits, our understanding of exertion activities is limited, especially when it comes to social experiences. In order to begin understanding how to design for technologically augmented social exertion experiences, we present "Jogging over a Distance", a system in which spatialized audio based on heart rate allowed runners as far apart as Europe and Australia to run together. Our analysis revealed how certain aspects of the design facilitated a social experience, and consequently we describe a framework for designing augmented exertion activities. We make recommendations as to how designers could use this framework to aid the development of future social systems that aim to utilize the benefits of exertion.
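
The abstract's core mechanism, spatialized audio driven by heart rate, invites a worked example: normalize each runner's heart rate against their own reserve, then render the partner's audio ahead or behind by the effort difference. The mapping and constants below are our guesses at one reasonable realization, not the system's published design.

```python
# Map relative heart-rate effort to a virtual audio position so that
# "running harder" sounds like pulling ahead. All constants illustrative.

def effort(heart_rate, resting, maximum):
    """Normalize heart rate to 0..1 of the runner's own reserve."""
    return max(0.0, min(1.0, (heart_rate - resting) / (maximum - resting)))

def partner_audio_offset(my_hr, partner_hr, profile=(60, 190)):
    """>0: partner's voice is rendered ahead; <0: behind."""
    me = effort(my_hr, *profile)
    partner = effort(partner_hr, *profile)
    return (partner - me) * 10.0    # metres of virtual separation

print(partner_audio_offset(150, 165))   # partner sounds ~1.2 m ahead
```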