Keywords
UIST2.0 Archive - 20 years of UIST

computer

computer access

In Proceedings of UIST 2003

EdgeWrite: a stylus-based text entry method designed for high accuracy and stability of motion (p. 61-70)

computer algebra

In Proceedings of UIST 1993

User interfaces for symbolic computation: a case study (p. 1-10)

computer augmented environment

In Proceedings of UIST 1995

The world through the computer: computer augmented interaction with real world environments (p. 29-36)

In Proceedings of UIST 1997

Pick-and-drop: a direct manipulation technique for multiple computer environments (p. 31-39)

computer mediated communication

In Proceedings of UIST 2003

TalkBack: a conversational answering machine (p. 41-50)

computer supported collaborative work

In Proceedings of UIST 1997

A shared command line in a virtual space: the working man's MOO (p. 73-74)

computer supported cooperative work

In Proceedings of UIST 2002

The actuated workbench: computer-controlled actuation in tabletop tangible interfaces (p. 181-190)

computer vision

In Proceedings of UIST 1995

Retrieving electronic documents with real-world objects on InteractiveDESK (p. 37-38)

In Proceedings of UIST 1999

Implementing phicons: combining computer vision with infrared technology for interactive physical icons (p. 67-68)

In Proceedings of UIST 2001

The designers' outpost: a tangible interface for collaborative web site design (p. 1-10)

In Proceedings of UIST 2006

Camera phone based motion sensing: interaction techniques, applications and performance study (p. 101-110)

In Proceedings of UIST 2006

Robust computer vision-based detection of pinching for one and two-handed gesture input (p. 255-258)

In Proceedings of UIST 2007

Eyepatch: prototyping camera-based interaction through examples (p. 33-42)

Abstract

Cameras are a useful source of input for many interactive applications, but computer vision programming is difficult and requires specialized knowledge that is out of reach for many HCI practitioners. In an effort to learn what makes a useful computer vision design tool, we created Eyepatch, a tool for designing camera-based interactions, and evaluated the Eyepatch prototype through deployment to students in an HCI course. This paper describes the lessons we learned about making computer vision more accessible, while retaining enough power and flexibility to be useful in a wide variety of interaction scenarios.

In Proceedings of UIST 2009

Activity analysis enabling real-time video communication on mobile phones for deaf users (p. 79-88)

Abstract

We describe our system called MobileASL for real-time video communication on the current U.S. mobile phone network. The goal of MobileASL is to enable Deaf people to communicate with Sign Language over mobile phones by compressing and transmitting sign language video in real-time on an off-the-shelf mobile phone, which has a weak processor, uses limited bandwidth, and has little battery capacity. We develop several H.264-compliant algorithms to save system resources while maintaining ASL intelligibility by focusing on the important segments of the video. We employ a dynamic skin-based region-of-interest (ROI) that encodes the skin at higher quality at the expense of the rest of the video. We also automatically recognize periods of signing versus not signing and raise and lower the frame rate accordingly, a technique we call variable frame rate (VFR).

We show that our variable frame rate technique results in a 47% gain in battery life on the phone, corresponding to an extra 68 minutes of talk time. We also evaluate our system in a user study. Participants fluent in ASL engage in unconstrained conversations over mobile phones in a laboratory setting. We find that the ROI increases intelligibility and decreases guessing. VFR increases the need for signs to be repeated and the number of conversational breakdowns, but does not affect the users' perception of adopting the technology. These results show that our sign language sensitive algorithms can save considerable resources without sacrificing intelligibility.
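A minimal sketch of the variable frame rate (VFR) idea described in this abstract: encode at a higher frame rate while the user appears to be signing and drop the rate during pauses. The specific frame rates, the activity threshold, and the frame-difference activity measure below are illustrative assumptions, not values taken from the paper.

    import numpy as np

    SIGNING_FPS = 10           # assumed frame rate while signing
    IDLE_FPS = 1               # assumed frame rate while not signing
    ACTIVITY_THRESHOLD = 12.0  # assumed mean per-pixel difference threshold

    def frame_activity(prev_frame: np.ndarray, frame: np.ndarray) -> float:
        """Mean absolute luminance difference between consecutive frames."""
        diff = frame.astype(np.int16) - prev_frame.astype(np.int16)
        return float(np.mean(np.abs(diff)))

    def choose_frame_rate(prev_frame: np.ndarray, frame: np.ndarray) -> int:
        """Pick an encoding frame rate from a simple motion-activity measure."""
        if frame_activity(prev_frame, frame) > ACTIVITY_THRESHOLD:
            return SIGNING_FPS  # motion consistent with active signing
        return IDLE_FPS         # little motion: lower the rate to save battery and bandwidth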

In Proceedings of UIST 2009

Bonfire: a nomadic system for hybrid laptop-tabletop interaction (p. 129-138)

Abstract

We present Bonfire, a self-contained mobile computing system that uses two laptop-mounted laser micro-projectors to project an interactive display space to either side of a laptop keyboard. Coupled with each micro-projector is a camera to enable hand gesture tracking, object recognition, and information transfer within the projected space. Thus, Bonfire is neither a pure laptop system nor a pure tabletop system, but an integration of the two into one new nomadic computing platform. This integration (1) enables observing the periphery and responding appropriately, e.g., to the casual placement of objects within its field of view, (2) enables integration between physical and digital objects via computer vision, (3) provides a horizontal surface in tandem with the usual vertical laptop display, allowing direct pointing and gestures, and (4) enlarges the input/output space to enrich existing applications. We describe Bonfire's architecture, and offer scenarios that highlight Bonfire's advantages. We also include lessons learned and insights for further development and use.

In Proceedings of UIST 2009

Interactions in the air: adding further depth to interactive tabletops (p. 139-148)

Abstract

Although interactive surfaces have many unique and compelling qualities, the interactions they support are by their very nature bound to the display surface. In this paper we present a technique for users to seamlessly switch from interacting on the tabletop surface to interacting above it. Our aim is to leverage the space above the surface in combination with the regular tabletop display to allow more intuitive manipulation of digital content in three dimensions. Our goal is to design a technique that closely resembles the ways we manipulate physical objects in the real world; conceptually, allowing virtual objects to be 'picked up' off the tabletop surface in order to manipulate their three-dimensional position or orientation. We chart the evolution of this technique, implemented on two rear projection-vision tabletops. Both use special projection screen materials to allow sensing at significant depths beyond the display. Existing and new computer vision techniques are used to sense hand gestures and postures above the tabletop, which can be used alongside more familiar multi-touch interactions. Interacting above the surface in this way opens up many interesting challenges. In particular, it breaks the direct interaction metaphor that most tabletops afford. We present a novel shadow-based technique to help alleviate this issue. We discuss the strengths and limitations of our technique based on our own observations and initial user feedback, and provide various insights from comparing and contrasting our tabletop implementations.

In Proceedings of UIST 2010

Imaginary interfaces: spatial interaction with empty hands and without visual feedback (p. 3-12)

Abstract

Screen-less wearable devices allow for the smallest form factor and thus the maximum mobility. However, current screen-less devices only support buttons and gestures. Pointing is not supported because users have nothing to point at. However, we challenge the notion that spatial interaction requires a screen and propose a method for bringing spatial interaction to screen-less devices.

We present Imaginary Interfaces, screen-less devices that allow users to perform spatial interaction with empty hands and without visual feedback. Unlike projection-based solutions, such as Sixth Sense, all visual "feedback" takes place in the user's imagination. Users define the origin of an imaginary space by forming an L-shaped coordinate cross with their non-dominant hand. Users then point and draw with their dominant hand in the resulting space.

With three user studies we investigate the question: To what extent can users interact spatially with a user interface that exists only in their imagination? Participants created simple drawings, annotated existing drawings, and pointed at locations described in imaginary space. Our findings suggest that users' visual short-term memory can, in part, replace the feedback conventionally displayed on a screen.
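A minimal sketch of the coordinate convention this abstract describes: the corner of the non-dominant hand's L shape acts as the origin, the thumb and index finger define the two axes, and a point indicated by the dominant hand is expressed in that imaginary frame. The tracked 2D positions and the normalization (finger tips at coordinate 1 along their axes) are illustrative assumptions, not details from the paper.

    import numpy as np

    def imaginary_space_coords(corner, thumb_tip, index_tip, pointer):
        """Express `pointer` in the L-shaped frame formed by the non-dominant hand.

        The thumb tip maps to (1, 0) and the index-finger tip to (0, 1).
        """
        corner, thumb_tip, index_tip, pointer = map(np.asarray, (corner, thumb_tip, index_tip, pointer))
        # Columns of the basis are the thumb axis and the index-finger axis.
        basis = np.column_stack((thumb_tip - corner, index_tip - corner))
        # Solve corner + basis @ [u, v] = pointer for the frame coordinates (u, v).
        u, v = np.linalg.solve(basis, pointer - corner)
        return float(u), float(v)

    # Example: a point halfway along the thumb axis.
    print(imaginary_space_coords((0, 0), (2, 0), (0, 3), (1, 0)))  # (0.5, 0.0)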

finger tracking with computer vision

In Proceedings of UIST 2004

Visual tracking of bare fingers for interactive surfaces (p. 119-122)

hand-held computer

In Proceedings of UIST 1995

A tool to support speech and non-speech audio feedback generation in audio interfaces (p. 171-179)

handheld computer

In Proceedings of UIST 2002

That one there! Pointing to establish device identity (p. 151-160)

human computer interaction (hci)

mobile computer

In Proceedings of UIST 2000

Dual touch: a two-handed interface for pen-based PDAs (p. 211-212)

In Proceedings of UIST 2003

Tactile interfaces for small touch screens (p. 217-220)

palmtop computer

In Proceedings of UIST 1995

The world through the computer: computer augmented interaction with real world environments (p. 29-36)

In Proceedings of UIST 1996

Tilting operations for small screen interfaces (p. 167-168)

pen-based computer

In Proceedings of UIST 1998

Quikwriting: continuous stylus-based text entry (p. 215-216)

portable computer

wristwatch computer

In Proceedings of UIST 2002

TiltType: accelerometer-supported text entry for very small devices (p. 201-204)