UIST2.0 Archive - 20 years of UIST

UIST '10 - Proceedings of the 23rd annual ACM Symposium on User Interface Software and Technology

New York, New York, USA (2010)
General Chair: Ken Perlin
Program Chairs: Mary Czerwinski, Rob Miller
http://www.acm.org/uist/uist2010/
Table of Contents:
Papers:

Intimacy versus privacy (p. 1-2)

When you talk to a person, it's safe to assume that you both share large bodies of "common sense knowledge." But when you converse with a programmed computer, neither of you is likely to know much about what the other one knows.

Indeed, in some respects this is desirable - as when we're concerned with our privacy. We don't want strangers to know our most personal goals, or all the resources that we may control.

However, when we turn to our computers for help, we'll want that relationship to change - because now it is in our interest for those systems to understand our aims and goals, as well as our fears and phobias. Indeed, how much help they can give us will depend on the extent to which those processes "know us as individuals".

Issues like these will always arise whenever we need a new interface - and as one of my teachers wrote long ago, "The hope is that, in not too many years, human brains and computing machines will be coupled together very tightly, and that the resulting partnership will think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today."1

Indeed, the '60s and '70s saw substantial advances towards this, but it seems to me that progress then slowed down. If so, perhaps this was partly because the AI community moved from semantic and heuristic methods towards more formal (but less flexible) statistical schemes. So now I'd like to see more researchers remedy this by developing systems that use more commonsense knowledge.

Session: Freeform input

Imaginary interfaces: spatial interaction with empty hands and without visual feedback (p. 3-12)

Screen-less wearable devices allow for the smallest form factor and thus the maximum mobility. However, current screen-less devices only support buttons and gestures. Pointing is not supported because users have nothing to point at. We challenge the notion that spatial interaction requires a screen and propose a method for bringing spatial interaction to screen-less devices.

We present Imaginary Interfaces, screen-less devices that allow users to perform spatial interaction with empty hands and without visual feedback. Unlike projection-based solutions, such as Sixth Sense, all visual "feedback" takes place in the user's imagination. Users define the origin of an imaginary space by forming an L-shaped coordinate cross with their non-dominant hand. Users then point and draw with their dominant hand in the resulting space.

With three user studies we investigate the question: To what extent can users interact spatially with a user interface that exists only in their imagination? Participants created simple drawings, annotated existing drawings, and pointed at locations described in imaginary space. Our findings suggest that users' visual short-term memory can, in part, replace the feedback conventionally displayed on a screen.

PhoneTouch: a technique for direct phone interaction on surfaces (p. 13-16)

PhoneTouch is a novel technique for integration of mobile phones and interactive surfaces. The technique enables use of phones to select targets on the surface by direct touch, facilitating, for instance, pick&drop-style transfer of objects between phone and surface. The technique is based on separate detection of phone touch events by the surface, which determines the location of the touch, and by the phone, which contributes device identity. The device-level observations are merged based on correlation in time. We describe a proof-of-concept implementation of the technique, using vision for touch detection on the surface (including discrimination of finger versus phone touch) and acceleration features for detection by the phone.
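
To make the fusion step concrete, here is a minimal sketch of merging the two observation streams by timestamp correlation; the event shapes, the 50 ms window, and the function name are assumptions for illustration, not details from the paper.

```python
# Sketch of PhoneTouch-style fusion: the surface knows *where* a touch landed,
# the phone knows *which device* touched, and the two are matched in time.
WINDOW_S = 0.05  # assumed maximum timestamp difference for "the same tap"

def fuse(surface_events, phone_events, window=WINDOW_S):
    """surface_events: (timestamp, x, y, kind) with kind in {'finger', 'phone'}.
    phone_events: (timestamp, device_id) from the phone's accelerometer.
    Returns identified touches as (x, y, device_id)."""
    fused = []
    for ts, x, y, kind in surface_events:
        if kind != 'phone':                 # finger touches stay anonymous
            continue
        candidates = [(abs(ts - pt), dev)   # phone-side detections nearby in time
                      for pt, dev in phone_events if abs(ts - pt) <= window]
        if candidates:
            _, device_id = min(candidates)  # closest in time wins
            fused.append((x, y, device_id)) # location from surface, identity from phone
    return fused
```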

Hands-on math: a page-based multi-touch and pen desktop for technical work and problem solving (p. 17-26)

Students, scientists and engineers have to choose between the flexible, free-form input of pencil and paper and the computational power of Computer Algebra Systems (CAS) when solving mathematical problems. Hands-On Math is a multi-touch and pen-based system which attempts to unify these approaches by providing virtual paper that is enhanced to recognize mathematical notations as a means of providing in situ access to CAS functionality. Pages can be created and organized on a large pannable desktop, and mathematical expressions can be computed, graphed and manipulated using a set of uni- and bi-manual interactions which facilitate rapid exploration by eliminating tedious and error prone transcription tasks. Analysis of a qualitative pilot evaluation indicates the potential of our approach and highlights usability issues with the novel techniques used.

Pen + touch = new tools (p. 27-36)

We describe techniques for direct pen+touch input. We observe people's manual behaviors with physical paper and notebooks. These serve as the foundation for a prototype Microsoft Surface application, centered on note-taking and scrapbooking of materials. Based on our explorations we advocate a division of labor between pen and touch: the pen writes, touch manipulates, and the combination of pen + touch yields new tools. This articulates how our system interprets unimodal pen, unimodal touch, and multimodal pen+touch inputs, respectively. For example, the user can hold a photo and drag off with the pen to create and place a copy; hold a photo and cross it in a freeform path with the pen to slice it in two; or hold selected photos and tap one with the pen to staple them all together. Touch thus unifies object selection with mode switching of the pen, while the muscular tension of holding touch serves as the "glue" that phrases together all the inputs into a unitary multimodal gesture. This helps the UI designer to avoid encumbrances such as physical buttons, persistent modes, or widgets that detract from the user's focus on the workspace.

Session: AI and toolkits

Gestalt: integrated support for implementation and analysis in machine learning (p. 37-46)

We present Gestalt, a development environment designed to support the process of applying machine learning. While traditional programming environments focus on source code, we explicitly support both code and data. Gestalt allows developers to implement a classification pipeline, analyze data as it moves through that pipeline, and easily transition between implementation and analysis. An experiment shows this significantly improves the ability of developers to find and fix bugs in machine learning systems. Our discussion of Gestalt and our experimental observations provide new insight into general-purpose support for the machine learning process.

A framework for robust and flexible handling of inputs with uncertainty (p. 47-56)

New input technologies (such as touch), recognition based input (such as pen gestures) and next-generation interactions (such as inexact interaction) all hold the promise of more natural user interfaces. However, these techniques all create inputs with some uncertainty. Unfortunately, conventional infrastructure lacks a method for easily handling uncertainty, and as a result input produced by these technologies is often converted to conventional events as quickly as possible, leading to a stunted interactive experience. We present a framework for handling input with uncertainty in a systematic, extensible, and easy to manipulate fashion. To illustrate this framework, we present several traditional interactors which have been extended to provide feedback about uncertain inputs and to allow for the possibility that in the end that input will be judged wrong (or end up going to a different interactor). Our six demonstrations include tiny buttons that are manipulable using touch input, a text box that can handle multiple interpretations of spoken input, a scrollbar that can respond to inexactly placed input, and buttons which are easier to click for people with motor impairments. Our framework supports all of these interactions by carrying uncertainty forward all the way through selection of possible target interactors, interpretation by interactors, generation of (uncertain) candidate actions to take, and a mediation process that decides (in a lazy fashion) which actions should become final.
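
As a toy illustration of carrying uncertainty forward, a touch can be delivered as a probability distribution over candidate interactors, with a mediator committing only when one interpretation clearly dominates. The Gaussian scoring, the 0.8 threshold, and all names below are invented for the sketch; they are not the framework's API.

```python
import math

class Button:
    def __init__(self, name, cx, cy):
        self.name, self.cx, self.cy = name, cx, cy

def touch_distribution(x, y, buttons, sigma=15.0):
    """Score each button by a Gaussian of its distance from the touch point,
    normalized into a probability distribution over possible targets."""
    scores = {b: math.exp(-((b.cx - x)**2 + (b.cy - y)**2) / (2 * sigma**2))
              for b in buttons}
    total = sum(scores.values())
    return {b: s / total for b, s in scores.items()}

def mediate(dist, threshold=0.8):
    """Lazy mediation: finalize an action only once one interpretation
    dominates; otherwise keep the input uncertain (e.g., show feedback)."""
    best = max(dist, key=dist.get)
    return best.name if dist[best] >= threshold else None

buttons = [Button('save', 100, 100), Button('delete', 112, 100)]
print(mediate(touch_distribution(105, 101, buttons)))  # None: still ambiguous
```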

TurKit: human computation algorithms on mechanical turk (p. 57-66)

Mechanical Turk (MTurk) provides an on-demand source of human computation. This provides a tremendous opportunity to explore algorithms which incorporate human computation as a function call. However, various systems challenges make this difficult in practice, and most uses of MTurk post large numbers of independent tasks. TurKit is a toolkit for prototyping and exploring algorithmic human computation, while maintaining a straightforward imperative programming style. We present the crash-and-rerun programming model that makes TurKit possible, along with a variety of applications for human computation algorithms. We also present case studies of TurKit used for real experiments across different fields.
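
The crash-and-rerun model logs the result of every costly, nondeterministic step so the script can crash (for example, while waiting on workers) and later re-execute deterministically from the top. TurKit itself is JavaScript; the `once` helper below is a hypothetical Python rendering of the idea.

```python
import json, os

DB = 'trace.json'               # persistent log of completed steps (assumed format)
_trace = json.load(open(DB)) if os.path.exists(DB) else []
_pos = 0

def once(fn, *args):
    """Run fn(*args) at most once across all executions of the script.
    On re-runs, replay the recorded result instead of repeating the step
    (e.g., re-posting a paid MTurk task)."""
    global _pos
    if _pos < len(_trace):
        result = _trace[_pos]   # completed in an earlier run: replay from the log
    else:
        result = fn(*args)      # first time: actually perform the step
        _trace.append(result)
        with open(DB, 'w') as f:
            json.dump(_trace, f)
    _pos += 1
    return result
```

A script written against `once` can simply raise an exception when results are not ready and be rerun later; replay fast-forwards through the log to the first unfinished step.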

Mixture model based label association techniques for web accessibility (p. 67-76)

An important aspect of making the Web accessible to blind users is ensuring that all important web page elements such as links, clickable buttons, and form fields have explicitly assigned labels. Properly labeled content is then correctly read out by screen readers, a dominant assistive technology used by blind users. In particular, improperly labeled form fields can critically impede online transactions such as shopping, paying bills, etc. with screen readers. Very often labels are not associated with form fields or are missing altogether, making form filling a challenge for blind users. Algorithms for associating a form element with one of several candidate labels in its vicinity must cope with the variability of the element's features, including the label's location relative to the element, its distance from the element, etc. Probabilistic models provide a natural machinery to reason with such uncertainties. In this paper we present a Finite Mixture Model (FMM) formulation of the label association problem. The variability of feature values is captured in the FMM by a mixture of random variables that are drawn from parameterized distributions. Then, the most likely label to be paired with a form element is computed by maximizing the log-likelihood of the feature data using the Expectation-Maximization algorithm. We also adapt the FMM approach for two related problems: assigning labels (from an external Knowledge Base) to form elements that have no candidate labels in their vicinity, and quickly identifying clickable elements such as add-to-cart, checkout, etc., used in online transactions even when these elements do not have textual captions (e.g., image buttons without alternative text). We provide a quantitative evaluation of our techniques, as well as a user study with two blind subjects who used an aural web browser implementing our approach.
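
After EM has fit the mixture parameters, associating a label reduces to a maximum-likelihood comparison across candidates. The sketch below uses a single distance feature and a two-component Gaussian mixture with made-up parameters; the paper's model covers a richer feature set.

```python
import math

def gauss(x, mu, var):
    return math.exp(-(x - mu)**2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def loglik(features, model):
    """Log-likelihood of a candidate's feature vector under a finite mixture;
    model: one list of (weight, mu, var) components per feature (assumed layout)."""
    return sum(math.log(sum(w * gauss(x, mu, var) for w, mu, var in comps) + 1e-12)
               for x, comps in zip(features, model))

def best_label(candidates, model):
    """candidates: {label_text: [feature values]} for one form element.
    Returns the label whose features are most likely under the learned mixture."""
    return max(candidates, key=lambda lab: loglik(candidates[lab], model))

# Mixture over pixel distance to the element (parameters invented for the demo):
model = [[(0.7, 20.0, 100.0), (0.3, 80.0, 400.0)]]
print(best_label({'Name:': [18.0], 'Subscribe': [95.0]}, model))  # -> 'Name:'
```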

Session: Input

Performance optimizations of virtual keyboards for stroke-based text entry on a touch-based tabletop (p. 77-86)

Efficiently entering text on interactive surfaces, such as touch-based tabletops, is an important concern. One novel solution is shape writing - the user strokes through all the letters in the word on a virtual keyboard without lifting his or her finger. While this technique can be used with any keyboard layout, the layout does impact the expected performance. In this paper, I investigate the influence of keyboard layout on expert text-entry performance for stroke-based text entry. Based on empirical data, I create a model of stroking through a series of points based on Fitts's law. I then use that model to evaluate various keyboard layouts for both tapping and stroking input. While the stroke-based technique seems promising by itself (i.e., there is a predicted gain of 17.3% for a Qwerty layout), significant additional gains can be made by using a more-suitable keyboard layout (e.g., the OPTI II layout is predicted to be 29.5% faster than Qwerty).
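
The modeling step can be pictured as summing a Fitts's-law time over every segment of the stroke path through the word's key centers. The Shannon formulation and the constants below are placeholders, not the parameters fitted in the paper.

```python
import math

def fitts_time(distance, width, a=0.08, b=0.12):
    """Shannon formulation of Fitts's law; a, b are placeholder constants."""
    return a + b * math.log2(distance / width + 1)

def stroke_time(key_centers, key_width):
    """Predicted expert time to stroke a word: one Fitts segment per movement
    between consecutive key centers, e.g. [(0, 0), (38, 0), (12, 19)] in mm."""
    total = 0.0
    for (x0, y0), (x1, y1) in zip(key_centers, key_centers[1:]):
        d = math.hypot(x1 - x0, y1 - y0)
        if d > 0:
            total += fitts_time(d, key_width)
    return total
```

A model of this shape lets alternative layouts be compared by summing predicted stroke times over a frequency-weighted word corpus.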

Gesture search: a tool for fast mobile data access (p. 87-96)

Modern mobile phones can store a large amount of data, such as contacts, applications and music. However, it is difficult to access specific data items via existing mobile user interfaces. In this paper, we present Gesture Search, a tool that allows a user to quickly access various data items on a mobile phone by drawing gestures on its touch screen. Gesture Search contributes a unique way of combining gesture-based interaction and search for fast mobile data access. It also demonstrates a novel approach for coupling gestures with standard GUI interaction. A real world deployment with mobile phone users showed that Gesture Search enabled fast, easy access to mobile data in their day-to-day lives. Gesture Search has been released to the public and is currently in use by hundreds of thousands of mobile users. It was rated positively by users, with a mean of 4.5 out of 5 for over 5000 ratings.

MAI painting brush: an interactive device that realizes the feeling of real painting (p. 97-100)

Many digital painting systems have been proposed and their quality is improving. In these systems, graphics tablets are widely used as input devices. However, because of its rigid nib and indirect manipulation, the operational feeling of a graphics tablet is different from that of a real paint brush. We solved this problem by developing the MR-based Artistic Interactive (MAI) Painting Brush, which imitates a real paint brush, and constructed a mixed reality (MR) painting system that enables direct painting on physical objects in the real world.

SqueezeBlock: using virtual springs in mobile devices for eyes-free interaction (p. 101-104)

Haptic feedback provides an additional interaction channel when auditory and visual feedback may not be appropriate. We present a novel haptic feedback system that changes its elasticity to convey information for eyes-free interaction. SqueezeBlock is an electro-mechanical system that can realize a virtual spring having a programmatically controlled spring constant. It also allows for additional haptic modalities by altering the Hooke's Law linear-elastic force-displacement equation, such as non-linear springs, size changes, and spring length (range of motion) variations. This ability to program arbitrary spring constants also allows for "click" and button-like feedback. We present several potential applications along with results from a study showing how well participants can distinguish between several levels of stiffness, size, and range of motion. We conclude with implications for interaction design.
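
Conceptually, each control cycle reads the squeeze displacement and commands a motor force computed from a programmable force-displacement function; Hooke's law gives the linear spring, and reshaping the function yields the other modalities. The specific curves below are illustrative guesses.

```python
def linear_spring(x, k):
    """Hooke's law: restoring force proportional to displacement x."""
    return -k * x

def cubic_spring(x, k):
    """A non-linear spring: resistance stiffens as the block is squeezed."""
    return -k * x ** 3

def click_spring(x, k, detent=3.0, drop=0.3):
    """Button-like 'click': force collapses past the detent point, so the
    block snaps through like a pressed button (shaping is an assumption)."""
    return -k * x if x < detent else -k * x * drop

# Per control cycle: x = read_displacement(); motor.exert(chosen_spring(x, k))
```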

Session: Frameworks

Bringing the field into the lab: supporting capture and replay of contextual data for the design of context-aware applications (p. 105-108)

When designing context-aware applications, it is difficult for designers in the studio or lab to envision the contextual conditions that will be encountered at runtime. Designers need a tool that can create/re-create naturalistic contextual states and transitions, so that they can evaluate an application under expected contexts. We have designed and developed RePlay: a system for capturing and playing back sensor traces representing scenarios of use. RePlay contributes to research on ubicomp design tools by embodying a structured approach to the capture and playback of contextual data. In particular, RePlay supports: capturing naturalistic data through Capture Probes, encapsulating scenarios of use through Episodes, and supporting exploratory manipulation of scenarios through Transforms. Our experiences using RePlay in internal design projects illustrate its potential benefits for ubicomp design.
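
In code, capture and playback of contextual data comes down to recording timestamped sensor readings and re-emitting them on the original (or transformed) schedule. The trace format and the time-scaling transform below are guesses at what Capture Probes, Episodes, and Transforms might look like, not RePlay's actual interfaces.

```python
import time

def capture(read_sensor, duration_s, period_s=0.5):
    """Capture Probe: sample a sensor into a timestamped trace (an 'Episode')."""
    t0, trace = time.time(), []
    while time.time() - t0 < duration_s:
        trace.append((time.time() - t0, read_sensor()))
        time.sleep(period_s)
    return trace

def time_scale(trace, factor):
    """One possible Transform: replay an episode faster or slower than recorded."""
    return [(t * factor, v) for t, v in trace]

def replay(trace, emit):
    """Feed recorded readings to the application under test on schedule,
    as if they were arriving from live sensors in the field."""
    t0 = time.time()
    for t, value in trace:
        time.sleep(max(0.0, t - (time.time() - t0)))
        emit(value)
```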

Eden: supporting home network management through interactive visual tools (p. 109-118)

As networking moves into the home, home users are increasingly being faced with complex network management chores. Previous research, however, has demonstrated the difficulty many users have in managing their networks. This difficulty is compounded by the fact that advanced network management tools - such as those developed for the enterprise - are generally too complex for home users, do not support the common tasks they face, and are not a good fit for the technical peculiarities of the home. This paper presents Eden, an interactive, direct manipulation home network management system aimed at end users. Eden supports a range of common tasks, and provides a simple conceptual model that can help users understand key aspects of networking better. The system leverages a novel home network router that acts as a "dropin" replacement for users' current router. We demonstrate that Eden not only improves the user experience of networking, but also aids users in forming workable conceptual models of how the network works.

TwinSpace: an infrastructure for cross-reality team spaces (p. 119-128)

We introduce TwinSpace, a flexible software infrastructure for combining interactive workspaces and collaborative virtual worlds. Its design is grounded in the need to support deep connectivity and flexible mappings between virtual and real spaces to effectively support collaboration. This is achieved through a robust connectivity layer linking heterogeneous collections of physical and virtual devices and services, and a centralized service to manage and control mappings between physical and virtual. In this paper we motivate and present the architecture of TwinSpace, discuss our experiences and lessons learned in building a generic framework for collaborative cross-reality, and illustrate the architecture using two implemented examples that highlight its flexibility and range, and its support for rapid prototyping.

D-Macs: building multi-device user interfaces by demonstrating, sharing and replaying design actions (p. 129-138)

Multi-device user interface design mostly implies creating a suitable interface for each targeted device, using a diverse set of design tools and toolkits. This is a time-consuming activity, involving many repetitive design actions, with no support for reusing this effort in later designs. In this paper, we propose D-Macs: a design tool that allows designers to record their design actions across devices, to share these actions with other designers, and to replay their own design actions and those of others. D-Macs lowers the burden of multi-device user interface design and can reduce the necessity for manually repeating design actions.

Session: Space and time

Content-aware dynamic timeline for video browsing (p. 139-142)

When browsing a long video using a traditional timeline slider control, its effectiveness and precision degrade as a video's length grows. When browsing videos with more frames than pixels in the slider, aside from some frames being inaccessible, scrolling actions cause sudden jumps in a video's continuity as well as video frames to flash by too fast for one to assess the content. We propose a content-aware dynamic timeline control that is designed to overcome these limitations. Our timeline control decouples video speed and playback speed, and leverages video content analysis to allow salient shots to be presented at an intelligible speed. Our control also takes advantage of previous work on elastic sliders, which allows us to produce an accurate navigation control.
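
One way to read "decoupling video speed and playback speed" is that the scrub rate is damped by a per-shot salience score, so salient shots pass at an intelligible speed while dull stretches fly by. The mapping below is a guessed shape, not the paper's control law.

```python
def video_rate(scrub_speed, salience, min_rate=0.2, max_rate=5.0):
    """Frames of video to advance per frame of playback. salience in [0, 1];
    the damping constant and clamps are assumptions for illustration."""
    rate = scrub_speed / (1.0 + 4.0 * salience)
    return max(min_rate, min(max_rate, rate))

# Scrubbing at 8x: a salient shot (0.9) plays near real time (~1.7x),
# while a static shot (0.0) is clamped to the 5x ceiling.
print(video_rate(8.0, 0.9), video_rate(8.0, 0.0))
```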

Chronicle: capture, exploration, and playback of document workflow histories (p. 143-152)

We describe Chronicle, a new system that allows users to explore document workflow histories. Chronicle captures the entire video history of a graphical document, and provides links between the content and the relevant areas of the history. Users can indicate specific content of interest, and see the workflows, tools, and settings needed to reproduce the associated results, or to better understand how it was constructed to allow for informed modification. Thus, by storing the rich information regarding the document's workflow history, Chronicle makes any working document a potentially powerful learning tool. We outline some of the challenges surrounding the development of such a system, and then describe our implementation within an image editing application. A qualitative user study produced extremely encouraging results, as users unanimously found the system both useful and easy to use.

Enhanced area cursors: reducing fine pointing demands for people with motor impairments (p. 153-162)

Computer users with motor impairments face major challenges with conventional mouse pointing. These challenges are mostly due to fine pointing corrections at the final stages of target acquisition. To reduce the need for correction-phase pointing and to lessen the effects of small target size on acquisition difficulty, we introduce four enhanced area cursors, two of which rely on magnification and two of which use goal crossing. In a study with motor-impaired and able-bodied users, we compared the new designs to the point and Bubble cursors, the latter of which had not been evaluated for users with motor impairments. Two enhanced area cursors, the Visual-Motor-Magnifier and Click-and-Cross, were the most successful new designs for users with motor impairments, reducing selection time for small targets by 19%, corrective submovements by 45%, and error rate by up to 82% compared to the point cursor. Although the Bubble cursor also improved performance, participants with motor impairments unanimously preferred the enhanced area cursors.

The satellite cursor: achieving MAGIC pointing without gaze tracking using multiple cursors (p. 163-172)

We present the satellite cursor - a novel technique that uses multiple cursors to improve pointing performance by reducing input movement. The satellite cursor associates every target with a separate cursor in its vicinity for pointing, which realizes the MAGIC (manual and gaze input cascade) pointing method without gaze tracking. We discuss the problem of visual clutter caused by multiple cursors and propose several designs to mitigate it. Two controlled experiments were conducted to evaluate satellite cursor performance in a simple reciprocal pointing task and a complex task with multiple targets of varying layout densities. Results show the satellite cursor can save significant mouse movement and consequently pointing time, especially for sparse target layouts, and that satellite cursor performance can be accurately modeled by Fitts' Law.

UIMarks: quick graphical interaction with specific targets (p. 173-182)

This paper reports on the design and evaluation of UIMarks, a system that lets users specify on-screen targets and associated actions by means of a graphical marking language. UIMarks supplements traditional pointing by providing an alternative mode in which users can quickly activate these marks. Associated actions can range from basic pointing facilitation to complex sequences possibly involving user interaction: one can leave a mark on a palette to make it more reachable, but the mark can also be configured to wait for a click and then automatically move the pointer back to its original location, for example. The system has been implemented on two different platforms, Metisse and OS X. We compared it to traditional pointing on a set of elementary and composite tasks in an abstract setting. Although pure pointing was not improved, the programmable automation supported by the system proved very effective.

Session: Artist talk

Connected environments (p. 183-184)

Can new interfaces contribute to social and environmental improvement? For all the care, wit and brilliance that UIST innovations can contribute, can they actually make things better - better in the sense of public good - not merely lead to easier to use or more efficient consumer goods? This talk will explore the impact of interface technology on society and the environment, and examine engineered systems that invite participation, document change over time, and suggest alternative courses of action that are ethical and sustainable, drawing on examples from a diverse series of experimental designs and site-specific work Natalie has created throughout her career.

Session: Feet or TOE CHI

Gilded gait: reshaping the urban experience with augmented footsteps (p. 185-188)

In this paper we describe Gilded Gait, a system that changes the perceived physical texture of the ground, as felt through the soles of users' feet. Ground texture, in spite of its potential as an effective channel of peripheral information display, has so far been paid little attention in HCI research. The system is designed as a pair of insoles with embedded actuators, and utilizes vibrotactile feedback to simulate the perceptions of a range of different ground textures. The discreet, low-key nature of the interface makes it particularly suited for outdoor use, and its capacity to alter how people experience the built environment may open new possibilities in urban design.

Jogging over a distance between Europe and Australia (p. 189-198)

Exertion activities, such as jogging, require users to invest intense physical effort and are associated with physical and social health benefits. Despite the benefits, our understanding of exertion activities is limited, especially when it comes to social experiences. In order to begin understanding how to design for technologically augmented social exertion experiences, we present "Jogging over a Distance", a system in which spatialized audio based on heart rate allowed runners as far apart as Europe and Australia to run together. Our analysis revealed how certain aspects of the design facilitated a social experience, and consequently we describe a framework for designing augmented exertion activities. We make recommendations as to how designers could use this framework to aid the development of future social systems that aim to utilize the benefits of exertion.

Sensing foot gestures from the pocket (p. 199-208)

Visually demanding interfaces on a mobile phone can diminish the user experience by monopolizing the user's attention when they are focusing on another task and impede accessibility for visually impaired users. Because mobile devices are often located in pockets when users are mobile, explicit foot movements can be defined as eyes-and-hands-free input gestures for interacting with the device. In this work, we study the human capability associated with performing foot-based interactions which involve lifting and rotation of the foot when pivoting on the toe and heel. Building upon these results, we then developed a system to learn and recognize foot gestures using a single commodity mobile phone placed in the user's pocket or in a holster on their hip. Our system uses acceleration data recorded by a built-in accelerometer on the mobile device and a machine learning approach to recognizing gestures. Through a lab study, we demonstrate that our system can classify ten different foot gestures at approximately 86% accuracy.
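
At its core this is a supervised-learning loop over accelerometer windows: summarize each window into features, train on labeled gestures, classify new windows. The feature set and nearest-centroid classifier below are generic stand-ins for the paper's machine learning approach.

```python
import math

def features(window):
    """window: list of (x, y, z) accelerometer samples. Per-axis means plus
    overall signal energy; a generic feature set, assumed for the sketch."""
    n = len(window)
    mx, my, mz = (sum(v[i] for v in window) / n for i in range(3))
    energy = sum(x * x + y * y + z * z for x, y, z in window) / n
    return [mx, my, mz, energy]

def train(labeled):
    """labeled: {gesture_name: [window, ...]} -> centroid feature vector each."""
    cents = {}
    for g, windows in labeled.items():
        vecs = [features(w) for w in windows]
        cents[g] = [sum(col) / len(vecs) for col in zip(*vecs)]
    return cents

def classify(window, centroids):
    f = features(window)
    return min(centroids, key=lambda g: math.dist(f, centroids[g]))
```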

Multitoe: high-precision interaction with back-projected floors based on high-resolution multi-touch input (p. 209-218)

Tabletop applications cannot display more than a few dozen on-screen objects. The reason is their limited size: tables cannot become larger than arm's length without giving up direct touch. We propose creating direct touch surfaces that are orders of magnitude larger. We approach this challenge by integrating high-resolution multi-touch input into a back-projected floor. At the same time, we maintain the purpose and interaction concepts of tabletop computers, namely direct manipulation.

We base our hardware design on frustrated total internal reflection. Its ability to sense per-pixel pressure allows the floor to locate and analyze users' soles. We demonstrate how this allows the floor to recognize foot postures and identify users. These two functions form the basis of our system. They allow the floor to ignore users unless they interact explicitly, identify and track users based on their shoes, enable high-precision interaction, invoke menus, track heads, and allow users to control high-degree of freedom interactions using their feet. While we base our designs on a series of simple user studies, the primary contribution of this paper is in the engineering domain.

Session: Intelligence

Cosaliency: where people look when comparing images (p. 219-228)

Image triage is a common task in digital photography. Determining which photos are worth processing for sharing with friends and family and which should be deleted to make room for new ones can be a challenge, especially on a device with a small screen like a mobile phone or camera. In this work we explore the importance of local structure changes (e.g., human pose, appearance changes, object orientation) to the photographic triage task. We perform a user study in which subjects are asked to mark regions of image pairs most useful in making triage decisions. From this data, we train a model for image saliency in the context of other images that we call cosaliency. This allows us to create collection-aware crops that can augment the information provided by existing thumbnailing techniques for the image triage task.

A conversational interface to web automation (p. 229-238)

This paper presents CoCo, a system that automates web tasks on a user's behalf through an interactive conversational interface. Given a short command such as "get road conditions for highway 88," CoCo synthesizes a plan to accomplish the task, executes it on the web, extracts an informative response, and returns the result to the user as a snippet of text. A novel aspect of our approach is that we leverage a repository of previously recorded web scripts and the user's personal web browsing history to determine how to complete each requested task. This paper describes the design and implementation of our system, along with the results of a brief user study that evaluates how likely users are to understand what CoCo does for them.

Designing adaptive feedback for improving data entry accuracy (p. 239-248)

Data quality is critical for many information-intensive applications. One of the best opportunities to improve data quality is during entry. Usher provides a theoretical, data-driven foundation for improving data quality during entry. Based on prior data, Usher learns a probabilistic model of the dependencies between form questions and values. Using this information, Usher maximizes information gain. By asking the most unpredictable questions first, Usher is better able to predict answers for the remaining questions. In this paper, we use Usher's predictive ability to design a number of intelligent user interface adaptations that improve data entry accuracy and efficiency. Based on an underlying cognitive model of data entry, we apply these modifications before, during and after committing an answer. We evaluated these mechanisms with professional data entry clerks working with real patient data from six clinics in rural Uganda. The results show that our adaptations have the potential to reduce error (by up to 78%), with limited effect on entry time (varying between -14% and +6%). We believe this approach has wide applicability for improving the quality and availability of data, which is increasingly important for decision-making and resource allocation.
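
The ordering idea can be sketched with plain Shannon entropy over prior form data: ask the highest-entropy (most unpredictable) questions first. This greedy version stands in for Usher's full probabilistic model, which also conditions on the answers already entered.

```python
import math
from collections import Counter

def entropy(answers):
    """Shannon entropy (bits) of a question's answers in prior data."""
    n = len(answers)
    return -sum(c / n * math.log2(c / n) for c in Counter(answers).values())

def question_order(prior):
    """prior: {question: [past answers]}. Most unpredictable questions first,
    so later answers become easier to predict (and to double-check)."""
    return sorted(prior, key=lambda q: entropy(prior[q]), reverse=True)

prior = {'district': ['A', 'B', 'C', 'A', 'D'], 'country': ['UG'] * 5}
print(question_order(prior))  # ['district', 'country']
```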

Creating collections with automatic suggestions and example-based refinement (p. 249-258)

To create collections, like music playlists from personal media libraries, users today typically do one of two things. They either manually select items one-by-one, which can be time consuming, or they use an example-based recommendation system to automatically generate a collection. While such automatic engines are convenient, they offer the user limited control over how items are selected. Based on prior research and our own observations of existing practices, we propose a semi-automatic interface for creating collections that combines automatic suggestions with manual refinement tools. Our system includes a keyword query interface for specifying high-level collection preferences (e.g., "some rock, no Madonna, lots of U2,") as well as three example-based collection refinement techniques: 1) a suggestion widget for adding new items in-place in the context of the collection; 2) a mechanism for exploring alternatives for one or more collection items; and 3) a two-pane linked interface that helps users browse their libraries based on any selected collection item. We demonstrate our approach with two applications. SongSelect helps users create music playlists, and PhotoSelect helps users select photos for sharing. Initial user feedback is positive and confirms the need for semi-automated tools that give users control over automatically created collections.

Session: Surface

The IR ring: authenticating users' touches on a multi-touch display (p. 259-262)

Multi-touch displays are particularly attractive for collaborative work because multiple users can interact with applications simultaneously. However, unfettered access can lead to loss of data confidentiality and integrity. For example, one user can open or alter files of a second user, or impersonate the second user, while the second user is absent or not looking. Towards preventing these attacks, we explore means to associate the touches of a user with the user's identity in a fashion that is cryptographically sound as well as easy to use. We describe our current solution, which relies on a ring-like device that transmits a continuous pseudorandom bit sequence in the form of infrared light pulses. The multi-touch display receives and localizes the sequence, and verifies its authenticity. Each sequence is bound to a particular user, and all touches in the direct vicinity of the location of the sequence on the display are associated with that user.
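
One plausible instantiation of a cryptographically sound pulse sequence is to derive the ring's bits from a per-user key with an HMAC over a time-slot counter; the surface regenerates each registered user's expected bits and attributes nearby touches to the match. The construction and names below are assumptions, not the paper's exact protocol.

```python
import hmac, hashlib

def ring_bits(user_key: bytes, slot: int, n_bits: int = 32):
    """Pseudorandom bits the ring blinks as IR pulses during a time slot,
    derived from the user's secret key (HMAC construction is an assumption)."""
    digest = hmac.new(user_key, slot.to_bytes(8, 'big'), hashlib.sha256).digest()
    return [(digest[i // 8] >> (i % 8)) & 1 for i in range(n_bits)]

def identify(observed_bits, registered_keys, slot):
    """The surface localizes a received sequence, then checks it against every
    registered user's expected sequence; touches near that location are then
    associated with the matching user."""
    for user, key in registered_keys.items():
        if observed_bits == ring_bits(key, slot, len(observed_bits)):
            return user
    return None
```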

Enabling beyond-surface interactions for interactive surface with an invisible projection (p. 263-272)

This paper presents a programmable infrared (IR) technique that utilizes invisible, programmable markers to support interaction beyond the surface of a diffused-illumination (DI) multi-touch system. We combine an IR projector and a standard color projector to simultaneously project visible content and invisible markers. Mobile devices outfitted with IR cameras can compute their 3D positions based on the markers perceived. Markers are selectively turned off to support multi-touch and direct on-surface tangible input. The proposed techniques enable a collaborative multi-display multi-touch tabletop system. We also present three interactive tools: i-m-View, i-m-Lamp, and i-m-Flashlight, which consist of a mobile tablet and projectors that users can freely interact with beyond the main display surface. Early user feedback shows that these interactive devices, combined with a large interactive display, allow more intuitive navigation and are reportedly enjoyable to use.

Combining multiple depth cameras and projectors for interactions on, above and between surfaces (p. 273-282)

Instrumented with multiple depth cameras and projectors, LightSpace is a small room installation designed to explore a variety of interactions and computational strategies related to interactive displays and the space that they inhabit. LightSpace cameras and projectors are calibrated to 3D real world coordinates, allowing for projection of graphics correctly onto any surface visible by both camera and projector. Selective projection of the depth camera data enables emulation of interactive displays on un-instrumented surfaces (such as a standard table or office desk), as well as facilitates mid-air interactions between and around these displays. For example, after performing multi-touch interactions on a virtual object on the tabletop, the user may transfer the object to another display by simultaneously touching the object and the destination display. Or the user may "pick up" the object by sweeping it into their hand, see it sitting in their hand as they walk over to an interactive wall display, and "drop" the object onto the wall by touching it with their other hand. We detail the interactions and algorithms unique to LightSpace, discuss some initial observations of use and suggest future directions.

TeslaTouch: electrovibration for touch surfaces (p. 283-292)

We present a new technology for enhancing touch interfaces with tactile feedback. The proposed technology is based on the electrovibration principle, does not use any moving parts and provides a wide range of tactile feedback sensations to fingers moving across a touch surface. When combined with an interactive display and touch input, it enables the design of a wide variety of interfaces that allow the user to feel virtual elements through touch. We present the principles of operation and an implementation of the technology. We also report the results of three controlled psychophysical experiments and a subjective user evaluation that describe and characterize users' perception of this technology. We conclude with an exploration of the design space of tactile touch screens using two comparable setups, one based on electrovibration and another on mechanical vibrotactile actuation.

Madgets: actuating widgets on interactive tabletops (p. 293-302)

We present a system for the actuation of tangible magnetic widgets (Madgets) on interactive tabletops. Our system combines electromagnetic actuation with fiber optic tracking to move and operate physical controls. The presented mechanism supports actuating complex tangibles that consist of multiple parts. A grid of optical fibers transmits marker positions past our actuation hardware to cameras below the table. We introduce a visual tracking algorithm that is able to detect objects and touches from the strongly sub-sampled video input of that grid. Six sample Madgets illustrate the capabilities of our approach, ranging from tangential movement and height actuation to inductive power transfer. Madgets combine the benefits of passive, untethered, and translucent tangibles with the ability to actuate them with multiple degrees of freedom.

Session: Social

Eddi: interactive topic-based browsing of social status streams (p. 303-312)

Twitter streams are on overload: active users receive hundreds of items per day, and existing interfaces force us to march through a chronologically-ordered morass to find tweets of interest. We present an approach to organizing a user's own feed into coherently clustered trending topics for more directed exploration. Our Twitter client, called Eddi, groups tweets in a user's feed into topics mentioned explicitly or implicitly, which users can then browse for items of interest. To implement this topic clustering, we have developed a novel algorithm for discovering topics in short status updates powered by linguistic syntactic transformation and callouts to a search engine. An algorithm evaluation reveals that search engine callouts outperform other approaches when they employ simple syntactic transformation and backoff strategies. Active Twitter users evaluated Eddi and found it to be a more efficient and enjoyable way to browse an overwhelming status update feed than the standard chronological interface.

Soylent: a word processor with a crowd inside (p. 313-322)

This paper introduces architectural and interaction patterns for integrating crowdsourced human contributions directly into user interfaces. We focus on writing and editing, complex endeavors that span many levels of conceptual and pragmatic activity. Authoring tools offer help with pragmatics, but for higher-level help, writers commonly turn to other people. We thus present Soylent, a word processing interface that enables writers to call on Mechanical Turk workers to shorten, proofread, and otherwise edit parts of their documents on demand. To improve worker quality, we introduce the Find-Fix-Verify crowd programming pattern, which splits tasks into a series of generation and review stages. Evaluation studies demonstrate the feasibility of crowdsourced editing and investigate questions of reliability, cost, wait time, and work time for edits.
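
The Find-Fix-Verify pattern splits one open-ended edit into three cheaper, independently checkable crowd stages. The control flow below stubs the crowd behind a hypothetical `post_task(kind, payload)` call; the agreement threshold and payload shapes are likewise invented.

```python
def find_fix_verify(paragraph, post_task, min_agree=2):
    """1) Find: independent workers flag patches needing work; keep patches
       that at least `min_agree` workers agree on (guards against lazy work).
    2) Fix: other workers propose rewrites for each kept patch.
    3) Verify: a third group votes out bad rewrites before anything lands."""
    flags = post_task('find', paragraph)            # e.g., (start, end) ranges
    patches = [p for p in set(flags) if flags.count(p) >= min_agree]
    accepted = []
    for patch in patches:
        fixes = post_task('fix', {'text': paragraph, 'patch': patch})
        votes = post_task('verify', {'patch': patch, 'candidates': fixes})
        good = [f for f, v in zip(fixes, votes) if v == 'ok']
        if good:
            accepted.append((patch, good[0]))
    return accepted   # (patch, rewrite) pairs offered to the writer
```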

Tag expression: tagging with feeling (p. 323-332)

In this paper we introduce tag expression, a novel form of preference elicitation that combines elements from tagging and rating systems. Tag expression enables users to apply affect to tags to indicate whether the tag describes a reason they like, dislike, or are neutral about a particular item. We present a user interface for applying affect to tags, as well as a technique for visualizing the overall community's affect. By analyzing 27,773 tag expressions from 553 users entered in a 3-month period, we empirically evaluate our design choices. We also present results of a survey of 97 users that explores users' motivations in tagging and measures user satisfaction with tag expression.

VizWiz: nearly real-time answers to visual questions (p. 333-342)

The lack of access to visual information like text labels, icons, and colors can cause frustration and decrease independence for blind people. Current access technology uses automatic approaches to address some problems in this space, but the technology is error-prone, limited in scope, and quite expensive. In this paper, we introduce VizWiz, a talking application for mobile phones that offers a new alternative to answering visual questions in nearly real-time - asking multiple people on the web. To support answering questions quickly, we introduce a general approach for intelligently recruiting human workers in advance called quikTurkit so that workers are available when new questions arrive. A field deployment with 11 blind participants illustrates that blind people can effectively use VizWiz to cheaply answer questions in their everyday lives, highlighting issues that automatic approaches will need to address to be useful. Finally, we illustrate the potential of using VizWiz as part of the participatory design of advanced tools by using it to build and evaluate VizWiz::LocateIt, an interactive mobile tool that helps blind people solve general visual search problems.
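
quikTurkit's trick is keeping a pool of workers warm before a real question exists, recycling old questions so the queue never empties. Below is a toy version of the recruiting loop; `post_hit`, `active_workers`, and the pool size are hypothetical stand-ins for the MTurk plumbing.

```python
import time

def maintain_pool(pending, post_hit, active_workers,
                  target=5, poll_s=2.0, backlog=('old question 1',)):
    """Keep roughly `target` workers engaged so a new question is answered in
    near real time. post_hit(question) posts one answering task;
    active_workers() estimates workers currently on our tasks; both are
    hypothetical. Idle capacity is fed already-answered backlog questions."""
    while True:
        for _ in range(max(0, target - active_workers())):
            q = pending.pop(0) if pending else backlog[0]  # real questions first
            post_hit(q)
        time.sleep(poll_s)
```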

The engineering of personhood (p. 343-346)

Any subset of reality can potentially be interpreted as a computer, so when we speak about a particular computer, we are merely speaking about a portion of reality we can understand computationally. That means that computation is only identifiable through the human experience of it. User interface is ultimately the only grounding for the abstractions of computation, in the same way that the measurement of physical phenomena provides the only legitimate basis for physics. But user interface also changes humans. As computation is perceived, the natures of self and personhood are transformed. This process, when designers are aware of it, can be understood as an emerging form of applied philosophy or even applied spirituality.