Keywords
UIST2.0 Archive - 20 years of UIST

screen

focus plus context screen

In Proceedings of UIST 2001

Focus plus context screens: combining display technology with visualization techniques (p. 31-40)

large screen display

In Proceedings of UIST 2003

Classroom BRIDGE: using collaborative public and desktop timelines to support activity awareness (p. 21-30)

multi-touch screen

In Proceedings of UIST 2007

Two-finger input with a standard touch screen (p. 169-172)

Abstract

Most current implementations of multi-touch screens are still too expensive or too bulky for widespread adoption. To improve this situation, this work describes the electronics and software needed to collect more data than one pair of coordinates from a standard 4-wire touch screen. With this system, one can measure the pressure of a single touch and approximately sense the coordinates of two touches occurring simultaneously. Naturally, the system cannot offer the accuracy and versatility of full multi-touch screens. Nonetheless, several example applications ranging from painting to zooming demonstrate a broad spectrum of use.

on screen keyboard

In Proceedings of UIST 2000

The metropolis keyboard - an exploration of quantitative techniques for virtual keyboard design (p. 119-128)

screen capture

In Proceedings of UIST 2004

ScreenCrayons: annotating anything (p. 165-174)

screen interaction

In Proceedings of UIST 2004

C-blink: a hue-difference-based light signal marker for large screen interaction via any mobile terminal (p. 147-156)

screen layout

In Proceedings of UIST 1995

3-dimensional pliable surfaces: for the effective presentation of visual information (p. 217-226)

In Proceedings of UIST 2000

Cross-modal interaction using XWeb (p. 191-200)

In Proceedings of UIST 2001

A framework for unifying presentation space (p. 61-70)

screen reader

In Proceedings of UIST 2010

Mixture model based label association techniques for web accessibility (p. 67-76)

Abstract

An important aspect of making the Web accessible to blind users is ensuring that all important web page elements such as links, clickable buttons, and form fields have explicitly assigned labels. Properly labeled content is then correctly read out by screen readers, a dominant assistive technology used by blind users. In particular, improperly labeled form fields can critically impede online transactions such as shopping, paying bills, etc. with screen readers. Very often labels are not associated with form fields or are missing altogether, making form filling a challenge for blind users. Algorithms for associating a form element with one of several candidate labels in its vicinity must cope with the variability of the element's features, including the label's location relative to the element, its distance from the element, etc. Probabilistic models provide a natural machinery to reason with such uncertainties. In this paper we present a Finite Mixture Model (FMM) formulation of the label association problem. The variability of feature values is captured in the FMM by a mixture of random variables that are drawn from parameterized distributions. Then, the most likely label to be paired with a form element is computed by maximizing the log-likelihood of the feature data using the Expectation-Maximization algorithm. We also adapt the FMM approach for two related problems: assigning labels (from an external Knowledge Base) to form elements that have no candidate labels in their vicinity, and quickly identifying clickable elements such as add-to-cart, checkout, etc., used in online transactions even when these elements do not have textual captions (e.g., image buttons without alternative text). We provide a quantitative evaluation of our techniques, as well as a user study with two blind subjects who used an aural web browser implementing our approach.
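The abstract's core step, once the mixture parameters have been learned via Expectation-Maximization, is to pick the candidate label that maximizes the likelihood of its observed features. A minimal sketch of that scoring step, assuming independent per-feature Gaussians with made-up parameters (the feature names, values, and function names here are all hypothetical, not the paper's actual model):

```python
import math

def gaussian_logpdf(x, mean, std):
    """Log-density of a 1-D Gaussian."""
    return -0.5 * math.log(2 * math.pi * std ** 2) - (x - mean) ** 2 / (2 * std ** 2)

def score_label(features, params):
    """Sum per-feature log-likelihoods, treating features as independent.
    params maps feature name -> (mean, std); in the paper's approach these
    would be learned with EM rather than hand-set."""
    return sum(gaussian_logpdf(features[name], *params[name]) for name in params)

def best_label(candidates, params):
    """Return the candidate label with the highest log-likelihood."""
    return max(candidates, key=lambda c: score_label(c["features"], params))

# Toy parameters: labels tend to sit ~10 px to the left of the form field
# (dx ≈ -10) and close to it (dist ≈ 12 px). Values are invented.
params = {"dx": (-10.0, 5.0), "dist": (12.0, 6.0)}

candidates = [
    {"text": "Name:",  "features": {"dx": -12.0, "dist": 14.0}},
    {"text": "Search", "features": {"dx": 80.0,  "dist": 85.0}},
]
print(best_label(candidates, params)["text"])  # → Name:
```

The nearby "Name:" caption scores far higher than the distant "Search" text, so it is chosen as the field's label.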

screen space

In Proceedings of UIST 2004

Interacting with hidden content using content-aware free-space transparency (p. 189-192)

small screen device

In Proceedings of UIST 2004

Collapse-to-zoom: viewing web pages on small screen devices by interactively removing irrelevant content (p. 91-94)

small screen interface

In Proceedings of UIST 1996

Tilting operations for small screen interfaces (p. 167-168)

small screens

In Proceedings of UIST 2009

Abracadabra: wireless, high-precision, and unpowered finger input for very small mobile devices (p. 121-124)

Abstract

We present Abracadabra, a magnetically driven input technique that offers users wireless, unpowered, high-fidelity finger input for mobile devices with very small screens. By extending the input area to many times the size of the device's screen, our approach is able to offer a high control-display (C-D) gain, enabling fine motor control. Additionally, screen occlusion can be reduced by moving interaction off of the display and into unused space around the device. We discuss several example applications as a proof of concept. Finally, results from our user study indicate radial targets as small as 16 degrees can achieve greater than 92% selection accuracy, outperforming comparable radial, touch-based finger input.

touch screen

In Proceedings of UIST 2003

Tactile interfaces for small touch screens (p. 217-220)

In Proceedings of UIST 2004

Haptic pen: a tactile feedback stylus for touch screens (p. 291-294)

In Proceedings of UIST 2009

SemFeel: a user interface with semantic tactile feedback for mobile touch-screen devices (p. 111-120)

Abstract

One of the challenges with using mobile touch-screen devices is that they do not provide tactile feedback to the user. Thus, the user is required to look at the screen to interact with these devices. In this paper, we present SemFeel, a tactile feedback system that informs the user about the presence of an object where she touches the screen and can offer additional semantic information about that item. Through multiple vibration motors that we attached to the backside of a mobile touch-screen device, SemFeel can generate different patterns of vibration, such as ones that flow from right to left or from top to bottom, to help the user interact with a mobile device. Through two user studies, we show that users can distinguish ten different patterns, including linear patterns and a circular pattern, at approximately 90% accuracy, and that SemFeel supports accurate eyes-free interactions.

touch screens

In Proceedings of UIST 2004

The radial scroll tool: scrolling support for stylus- or touch-based document navigation (p. 53-56)

In Proceedings of UIST 2010

TeslaTouch: electrovibration for touch surfaces (p. 283-292)

Abstract

We present a new technology for enhancing touch interfaces with tactile feedback. The proposed technology is based on the electrovibration principle, does not use any moving parts and provides a wide range of tactile feedback sensations to fingers moving across a touch surface. When combined with an interactive display and touch input, it enables the design of a wide variety of interfaces that allow the user to feel virtual elements through touch. We present the principles of operation and an implementation of the technology. We also report the results of three controlled psychophysical experiments and a subjective user evaluation that describe and characterize users' perception of this technology. We conclude with an exploration of the design space of tactile touch screens using two comparable setups, one based on electrovibration and another on mechanical vibrotactile actuation.