UIST2.0 Archive - 20 years of UIST

UIST '07 - Proceedings of the 20th annual ACM Symposium on User Interface Software and Technology

2007 Proceedings cover
2007 Adjunct cover
Newport, Rhode Island, USA (2007)
General Chair: Chia Shen
General co-Chair: Robert Jacob
Program Chair: Ravin Balakrishnan
http://www.acm.org/uist/uist2007/
Table of Contents:
Papers:

Session: Keynote

Measuring how design changes cognition at work (p. 1-2)

Abstract:

The various fields associated with interactive software systems engage in design activities to enable people who would use the resulting systems to meet goals, coordinate with others, find meaning, and express themselves in myriad ways. Yet many development projects fail, and we all have contact with clumsy software-based systems that force work-arounds and impose substantial attentional, knowledge and workload burdens. On the other hand, field observations reveal people re-shaping the artifacts they encounter and interact with as resources to cope with the demands of the situations they face as they seek to meet their goals. In this process some new devices are quickly seized upon and exploited in ways that transform the nature of human activity, connections, and expression.

The software-intensive interactive systems and devices under development around us are valuable to the degree that they expand what people in various roles and organizations can achieve. How can we measure this value provided to others? Are current measures of usability adequate? Does creeping complexity wipe out incremental gains as products evolve? Do designers and developers mis-project the impact when systems-to-be-realized are fielded? Which technology changes will trigger waves of expansive adaptations that transform what people do and even why they do it?

Sponsors of projects to develop new interactive software systems are asking developers for tangible evidence of the value to be delivered to those people responsible for activities and goals in the world. Traditional measures of usability and human performance seem inadequate. Cycles of inflation in the claims development organizations make (and the legacy of disappointment and surprise) have left sponsors numb and eroded trust. Thus, we need to provide new forms of evidence about the potential of new interactive systems and devices to enhance human capability.

Luckily, this need has been accompanied by a period of innovation in ways to measure the impact of new designs on:

  • growth of expertise in roles,
  • synchronizing activities over wider scopes and ranges,
  • expanding adaptive capacities.

This talk reviews a few of the new measures being tested in each of these categories, points to some of the underlying science, and uses these examples to trigger discussion about how the design of future interactive software will provide value to stakeholders.

Capturing the user's attention: insights from the study of human vision (p. 191-192)

Abstract:

An effective user interface is a cooperative interaction between humans and their technology. For that interaction to work, it needs to recognize the limitations and exploit the strengths of both parties. In this talk, I will concentrate on the human side of the equation. What do we know about human visual perceptual abilities that might have an impact on the design of user interfaces? The world presents us with more information than we can process. Just try to read this abstract and the next piece of prose at the same time. We cope with this problem by using attentional mechanisms to select a subset of the input for further processing. An interface might be designed to "capture" attention, in order to induce a human to interact with it. Once the human is using an interface, that interface should "guide" the user's attention in an intelligent manner. In recent decades, many of the rules of attentional capture and guidance have been worked out in the laboratory. I will illustrate some of the basic principles. For example: Do some colors grab attention better than others? Are faces special? When and why do people fail to "see" things that are right in front of their eyes?

Session: Novel interaction

Eyepatch: prototyping camera-based interaction through examples (p. 33-42)

Abstract:

Cameras are a useful source of input for many interactive applications, but computer vision programming is difficult and requires specialized knowledge that is out of reach for many HCI practitioners. In an effort to learn what makes a useful computer vision design tool, we created Eyepatch, a tool for designing camera-based interactions, and evaluated the Eyepatch prototype through deployment to students in an HCI course. This paper describes the lessons we learned about making computer vision more accessible, while retaining enough power and flexibility to be useful in a wide variety of interaction scenarios.

Multi-user interaction using handheld projectors (p. 43-52)

Abstract:

Recent research on handheld projector interaction has expanded the display and interaction space of handheld devices by projecting information onto the physical environment around the user, but has mainly focused on single-user scenarios. We extend this prior single-user research to co-located multi-user interaction using multiple handheld projectors. We present a set of interaction techniques for supporting co-located collaboration with multiple handheld projectors, and discuss application scenarios enabled by them.

Shadow reaching: a new perspective on interaction for large displays (p. 53-56)

Abstract:

We introduce Shadow Reaching, an interaction technique that makes use of a perspective projection applied to a shadow representation of a user. The technique was designed to facilitate manipulation over large distances and enhance understanding in collaborative settings. We describe three prototype implementations that illustrate the technique, examining the advantages of using shadows as an interaction metaphor to support single users and groups of collaborating users. Using these prototypes as a design probe, we discuss how the three components of the technique (sensing, modeling, and rendering) can be accomplished with real (physical) or computed (virtual) shadows, and the benefits and drawbacks of each approach.
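
The core geometric step can be illustrated directly. Below is a minimal sketch, under assumed coordinate conventions (display on the plane z = 0, virtual point light behind the user), of how a computed shadow point could be obtained: a tracked body point is projected along the ray from the light onto the screen, so small physical motions map to larger on-screen reach. The specific numbers are illustrative, not values from the paper.

```python
# Minimal sketch: project a tracked body point onto a wall display
# along the ray from a virtual point light. Coordinates are assumed:
# the display occupies the plane z = 0; z increases away from it.

def project_to_screen(light, body):
    """light, body: (x, y, z) points, with the light farther from the
    screen than the body (light z > body z > 0)."""
    lx, ly, lz = light
    bx, by, bz = body
    t = lz / (lz - bz)          # ray parameter where it reaches z = 0
    return (lx + t * (bx - lx), ly + t * (by - ly))

# A hand 1 m from the display, lit from 3 m back: the shadow lands
# farther out than the hand, magnifying the user's reach.
print(project_to_screen(light=(0.0, 1.5, 3.0), body=(0.4, 1.2, 1.0)))
```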

Hybrid infrared and visible light projection for location tracking (p. 57-60)

Abstract:

A number of projects within the computer graphics, computer vision, and human-computer interaction communities have recognized the value of using projected structured light patterns for the purposes of doing range finding, location dependent data delivery, projector adaptation, or object discovery and tracking. However, most of the work exploring these concepts has relied on visible structured light patterns resulting in a caustic visual experience. In this work, we present the first design and implementation of a high-resolution, scalable, general purpose invisible near-infrared projector that can be manufactured in a practical manner. This approach is compatible with simultaneous visible light projection and integrates well with future Digital Light Processing (DLP) projector designs -- the most common type of projectors today. By unifying both the visible and non-visible pattern projection into a single device, we can greatly simplify the implementation and execution of interactive projection systems. Additionally, we can inherently provide location discovery and tracking capabilities that are unattainable using other approaches.

Session: Web

Relations, cards, and search templates: user-guided web data integration and layout (p. 61-70)

Abstract:

We present three new interaction techniques for aiding users in collecting and organizing Web content. First, we demonstrate an interface for creating associations between websites, which facilitate the automatic retrieval of related content. Second, we present an authoring interface that allows users to quickly merge content from many different websites into a uniform and personalized representation, which we call a card. Finally, we introduce a novel search paradigm that leverages the relationships in a card to direct search queries to extract relevant content from multiple Web sources and fill a new series of cards instead of just returning a list of webpage URLs. Preliminary feedback from users is positive and validates our design.

OPA browser: a web browser for cellular phone users (p. 71-80)

Abstract:

Cellular phones are widely used to access the WWW. However, most available Web pages are designed for desktop PCs. Cellular phones have only small screens and limited input interfaces, so browsing such large pages on them is inconvenient. In addition, cellular phone users browse Web pages in various situations, and the appropriate presentation style for a Web page depends on the user's situation. In this paper, we propose a novel Web browsing system for cellular phones that allocates various functions for Web browsing to each numerical key of the phone. Users can browse Web pages comfortably, selecting the functions appropriate to their situation by pushing a single button.
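
As a rough illustration of the key-to-function idea, the sketch below binds browsing actions to the numeric keypad; the particular bindings are invented for illustration and are not the paper's actual assignment.

```python
# Hypothetical key bindings for keypad-driven browsing; the real
# system assigns its own functions and lets users switch them.
ACTIONS = {
    "1": "scroll up",
    "2": "zoom in",
    "3": "scroll down",
    "4": "previous link",
    "5": "follow link",
    "6": "next link",
    "7": "back",
    "8": "toggle text-only view",
    "9": "forward",
}

def on_keypress(key):
    action = ACTIONS.get(key)
    if action:
        print(f"key {key} -> {action}")

on_keypress("5")   # key 5 -> follow link
```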

Smart bookmarks: automatic retroactive macro recording on the web (p. 81-90)

Abstract:

We present a new web automation system that allows users to create a smart bookmark, consisting of a starting URL plus a script of commands that returns to a particular web page or state of a web application. A smart bookmark can be requested for any page, and the necessary commands are automatically extracted from the user's interaction history. Unlike other web macro recorders, which require the user to start recording before navigating to the desired page, smart bookmarks are generated retroactively, after the user has already reached a page, and the starting point of the macro is found automatically. Smart bookmarks have a rich graphical visualization that combines textual commands, web page screenshots, and animations to explain what the bookmark does. A bookmark's script consists of keyword commands, interpreted without strict reliance on syntax, allowing bookmarks to be easily edited and shared.
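
To make the bookmark structure concrete, here is a minimal sketch of what a "starting URL plus keyword-command script" could look like. The command vocabulary and the browser object are hypothetical stand-ins, not the system's actual implementation.

```python
# Sketch of a smart bookmark: a start URL plus loosely interpreted
# keyword commands. `browser` is a hypothetical automation wrapper.

from dataclasses import dataclass, field

@dataclass
class SmartBookmark:
    start_url: str
    commands: list = field(default_factory=list)   # e.g. "click inbox"

    def replay(self, browser):
        browser.goto(self.start_url)
        for cmd in self.commands:
            action, _, target = cmd.partition(" ")
            if action == "click":
                browser.click_element_labeled(target)   # fuzzy match
            elif action == "type":
                label, _, text = target.partition(" ")
                browser.type_into(label, text)

bm = SmartBookmark(
    start_url="http://example.com",
    commands=["click inbox", "type search flight receipts"],
)
# bm.replay(browser)   # `browser` would wrap a real automation backend
```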

Session: Tagging, finding, and timing

Socially augmenting employee profiles with people-tagging (p. 91-100)

Abstract:

Employee directories play a valuable role in helping people find others to collaborate with, solve a problem, or provide needed expertise. Serving this role successfully requires accurate and up-to-date user profiles, yet few users take the time to maintain them. In this paper, we present a system that enables users to tag other users with key words that are displayed on their profiles. We discuss how people-tagging is a form of social bookmarking that enables people to organize their contacts into groups, annotate them with terms supporting future recall, and search for people by topic area. In addition, we show that people-tagging has a valuable side benefit: it enables the community to collectively maintain each other's interest and expertise profiles. Our user studies suggest that people tag other people as a form of contact management and that the tags they have been given are accurate descriptions of their interests and expertise. Moreover, none of the people interviewed reported offensive or inappropriate tags. Based on our results, we believe that people-tagging will become an important tool for relationship management in an organization.

Continuum: designing timelines for hierarchies, relationships and scale (p. 101-110)

Abstract:

Temporal events, while often discrete, also have interesting relationships within and across times: larger events are often collections of smaller, more discrete events (battles within wars; artists' works within a form); events at one point also have correlations with events at other points (a play written in one period is related to its performance over a period of time). Most temporal visualisations, however, only represent discrete data points or single data types along a single timeline: this event started here and ended there; this work was published at this time; this tag was popular for this period. In order to represent richer, faceted attributes of temporal events, we present Continuum. Continuum enables hierarchical relationships in temporal data to be represented and explored; it enables relationships between events across periods to be expressed; and in particular it enables user-determined control over the level of detail of any facet of interest, so that the person using the system can determine a focus point, no matter the level of zoom over the temporal space. We present the factors motivating our approach, and our evaluation and implementation of this new visualisation, which makes it easy for anyone to apply this interface to rich, large-scale datasets with temporal data.

QuME: a mechanism to support expertise finding in online help-seeking communities (p. 111-114)

Abstract:

Help-seeking communities have been playing an increasingly critical role in the way people seek and share information. However, traditional help-seeking mechanisms of these online communities have some limitations. In this paper, we describe an expertise-finding mechanism that attempts to alleviate the limitations caused by not knowing users' expertise levels. By using social network data from the online community, this mechanism can automatically infer users' expertise levels. This allows, for example, a question list to be personalized to the user's expertise level as well as to keyword similarity. We believe this expertise location mechanism will facilitate the development of next-generation help-seeking communities.
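
One simple network-style indicator in this spirit scores a user by how strongly answering outweighs asking; the z-score-like measure below is an illustrative stand-in, not necessarily the paper's exact algorithm.

```python
# Assumed illustration: score expertise from ask/answer counts by
# comparing answering behaviour to a 50/50 baseline.

import math

def expertise_z(answers: int, questions: int) -> float:
    n = answers + questions
    if n == 0:
        return 0.0
    return (answers - n / 2) / math.sqrt(n / 4)

users = {"alice": (40, 5), "bob": (3, 30)}
for name, (a, q) in users.items():
    print(name, round(expertise_z(a, q), 2))   # alice high, bob negative
```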

Rethinking the progress bar (p. 115-118)

Abstract:

Progress bars are prevalent in modern user interfaces. Typically, a linear function is employed such that the progress of the bar is directly proportional to how much work has been completed. However, numerous factors cause progress bars to proceed at non-linear rates. Additionally, humans perceive time in a non-linear way. This paper explores the impact of various progress bar behaviors on user perception of process duration. The results are used to suggest several design considerations that can make progress bars appear faster and ultimately improve users' computing experience.
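
As a concrete illustration of one non-linear behavior, the sketch below shows displayed progress as a power function of actual progress; the functional form and exponent are assumptions for illustration, not the behaviors evaluated in the paper.

```python
# Illustrative pacing function: exponent > 1 starts slow and
# accelerates toward the end, which tends to make the final moments
# of a process feel faster. The exponent value is an assumption.

def displayed_progress(actual: float, exponent: float = 1.5) -> float:
    """Map actual completion (0..1) to displayed completion (0..1)."""
    actual = min(max(actual, 0.0), 1.0)
    return actual ** exponent

for step in range(11):
    x = step / 10
    print(f"actual {x:0.1f} -> displayed {displayed_progress(x):0.2f}")
```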

Session: Efficient UI

SketchWizard: Wizard of Oz prototyping of pen-based user interfaces (p. 119-128)

Abstract:

SketchWizard allows designers to create Wizard of Oz prototypes of pen-based user interfaces in the early stages of design. In the past, designers have been inhibited from participating in the design of pen-based interfaces because of the inadequacy of paper prototypes and the difficulty of developing functional prototypes. In SketchWizard, designers and end users share a drawing canvas between two computers, allowing the designer to simulate the behavior of recognition or other technologies. Special editing features are provided to help designers respond quickly to end-user input. This paper describes the SketchWizard system and presents two evaluations of our approach. The first is an early feasibility study in which Wizard of Oz was used to prototype a pen-based user interface. The second is a laboratory study in which designers used SketchWizard to simulate existing pen-based interfaces. Both showed that end users gave valuable feedback in spite of delays between end-user actions and wizard updates.

RubberEdge: reducing clutching by combining position and rate control with elastic feedback (p. 129-138)

Abstract:

Position control devices enable precise selection, but significant clutching degrades performance. Clutching can be reduced with high control-display gain or pointer acceleration, but there are human and device limits. Elastic rate control eliminates clutching completely, but can make precise selection difficult. We show that hybrid position-rate control can outperform position control by 20% when there is significant clutching, even when using pointer acceleration. Unlike previous work, our RubberEdge technique eliminates trajectory and velocity discontinuities. We derive predictive models for position control with clutching and hybrid control, and present a prototype RubberEdge position-rate control device including initial user feedback.
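
A minimal sketch of hybrid position-rate control, assuming a circular pad with an elastic rim: inside the rim, finger motion moves the cursor positionally; past it, penetration depth drives a velocity that starts at zero, so there is no velocity jump at the transition. All constants are illustrative assumptions.

```python
import math

EDGE_RADIUS = 40.0   # mm from pad center to the elastic zone (assumed)
CD_GAIN = 2.0        # control-display gain for position control (assumed)
RATE_GAIN = 8.0      # cursor px/s per mm of edge penetration (assumed)

def cursor_update(cursor, finger_prev, finger, dt):
    """Finger positions in mm relative to the pad center; cursor in px."""
    x, y = finger
    r = math.hypot(x, y)
    if r <= EDGE_RADIUS:
        # Position control: the cursor follows finger *motion*.
        return (cursor[0] + CD_GAIN * (x - finger_prev[0]),
                cursor[1] + CD_GAIN * (y - finger_prev[1]))
    # Rate control: penetration beyond the elastic edge sets a velocity
    # along the finger direction; it is zero exactly at the edge.
    depth = r - EDGE_RADIUS
    return (cursor[0] + (x / r) * depth * RATE_GAIN * dt,
            cursor[1] + (y / r) * depth * RATE_GAIN * dt)

print(cursor_update((0, 0), (0, 0), (10, 0), dt=0.016))   # position zone
print(cursor_update((0, 0), (50, 0), (50, 0), dt=0.016))  # rate zone
```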

Enabling efficient orienteering behavior in webmail clients (p. 139-148)

Abstract:

Webmail clients provide millions of end users with convenient and ubiquitous access to electronic mail - the most successful collaboration tool ever. Web email clients are also the platform of choice for recent innovations on electronic mail and for integration of related information services into email. In the enterprise, however, webmail applications have been relegated to being a supplemental tool for mail access from home or while on the road. In this paper, we draw on recent research in the area of electronic mail to understand usage models and performance requirements for enterprise email applications. We then present an innovative architecture for a webmail client. By leveraging recent advances in web browser technology, we show that webmail clients can offer performance and responsiveness that rivals a desktop application while still retaining all the advantages of a browser-based client.

Session: Sensing and recognition

Robust, low-cost, non-intrusive sensing and recognition of seated postures (p. 149-158)

Abstract:

In this paper, we present a methodology for recognizing seated postures using data from pressure sensors installed on a chair. Information about seated postures could be used to help avoid adverse effects of sitting for long periods of time or to predict seated activities for a human-computer interface. Our system design displays accurate near-real-time classification performance on data from subjects on which the posture recognition system was not trained by using a set of carefully designed, subject-invariant signal features. By using a near-optimal sensor placement strategy, we keep the number of required sensors low, thereby reducing cost and computational complexity. We evaluated the performance of our technology using a series of empirical methods including (1) cross-validation (classification accuracy of 87% for ten postures using data from 31 sensors), and (2) a physical deployment of our system (78% classification accuracy using data from 19 sensors).
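
To illustrate the shape of such a pipeline (not the paper's actual features or classifier), the sketch below normalizes a pressure map into a weight-invariant feature vector and labels it with a nearest-centroid rule; the sensor count and centroid values are invented.

```python
import math

def features(pressures):
    """Normalize raw readings so overall body weight cancels out --
    one simple way to reduce subject dependence (an assumption here)."""
    total = sum(pressures) or 1.0
    return [p / total for p in pressures]

def classify(sample, centroids):
    """centroids: dict posture name -> mean feature vector."""
    f = features(sample)
    return min(centroids, key=lambda name: math.dist(f, centroids[name]))

centroids = {
    "upright":      [0.30, 0.30, 0.20, 0.20],
    "leaning left": [0.45, 0.15, 0.30, 0.10],
}
print(classify([120, 40, 80, 30], centroids))   # -> leaning left
```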

Gestures without libraries, toolkits or training: a $1 recognizer for user interface prototypes (p. 159-168)

Abstract:

Although mobile, tablet, large display, and tabletop computers increasingly present opportunities for using pen, finger, and wand gestures in user interfaces, implementing gesture recognition largely has been the privilege of pattern matching experts, not user interface prototypers. Although some user interface libraries and toolkits offer gesture recognizers, such infrastructure is often unavailable in design-oriented environments like Flash, scripting environments like JavaScript, or brand new off-desktop prototyping environments. To enable novice programmers to incorporate gestures into their UI prototypes, we present a "$1 recognizer" that is easy, cheap, and usable almost anywhere in about 100 lines of code. In a study comparing our $1 recognizer, Dynamic Time Warping, and the Rubine classifier on user-supplied gestures, we found that $1 obtains over 97% accuracy with only 1 loaded template and 99% accuracy with 3+ loaded templates. These results were nearly identical to DTW and superior to Rubine. In addition, we found that medium-speed gestures, in which users balanced speed and accuracy, were recognized better than slow or fast gestures for all three recognizers. We also discuss the effect that the number of templates or training examples has on recognition, the score falloff along recognizers' N-best lists, and results for individual gestures. We include detailed pseudocode of the $1 recognizer to aid development, inspection, extension, and testing.
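
A condensed sketch of a $1-style matcher follows: resample the stroke to a fixed number of points, normalize translation and scale, then score against stored templates by mean point-wise distance over a coarse set of candidate rotations. The published recognizer additionally rotates strokes to an indicative angle and refines rotation with a golden-section search, so treat this as a simplified approximation rather than the paper's exact pseudocode.

```python
import math

def path_length(pts):
    return sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))

def resample(pts, n=64):
    """Resample a stroke to n equidistantly spaced points."""
    interval = path_length(pts) / (n - 1)
    pts = list(pts)
    out, acc, i = [pts[0]], 0.0, 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if d > 0 and acc + d >= interval:
            t = (interval - acc) / d
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)   # continue measuring from the new point
            acc = 0.0
        else:
            acc += d
        i += 1
    while len(out) < n:        # guard against floating-point shortfall
        out.append(pts[-1])
    return out[:n]

def normalize(pts, n=64):
    """Resample, move the centroid to the origin, scale into a unit box."""
    pts = resample(pts, n)
    cx = sum(x for x, _ in pts) / n
    cy = sum(y for _, y in pts) / n
    pts = [(x - cx, y - cy) for x, y in pts]
    s = max(max(abs(x) for x, _ in pts), max(abs(y) for _, y in pts)) or 1.0
    return [(x / s, y / s) for x, y in pts]

def rotated(pts, theta):
    c, s = math.cos(theta), math.sin(theta)
    return [(x * c - y * s, x * s + y * c) for x, y in pts]

def score(candidate, template):
    """Lower is better: best mean distance over coarse rotations."""
    return min(
        sum(math.dist(a, b)
            for a, b in zip(rotated(candidate, t), template)) / len(template)
        for t in (math.radians(deg) for deg in range(-45, 50, 5))
    )

def recognize(stroke, templates):
    """templates: dict name -> point list normalized with the same n."""
    cand = normalize(stroke)
    return min(templates, key=lambda name: score(cand, templates[name]))
```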

Two-finger input with a standard touch screen (p. 169-172)

Abstract:

Most current implementations of multi-touch screens are still too expensive or too bulky for widespread adoption. To improve this situation, this work describes the electronics and software needed to collect more data than one pair of coordinates from a standard 4-wire touch screen. With this system, one can measure the pressure of a single touch and approximately sense the coordinates of two touches occurring simultaneously. Naturally, the system cannot offer the accuracy and versatility of full multi-touch screens. Nonetheless, several example applications ranging from painting to zooming demonstrate a broad spectrum of use.
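
The two-touch estimate can be shown with a simplified model: with two contacts, a 4-wire panel reports roughly a point between the fingers, so if the first finger stays anchored at a known position, the second finger can be recovered from the reported point. The equal weighting below (a pure midpoint) is an assumption; in practice contact pressures skew the reported point.

```python
# Assumed midpoint model for two simultaneous touches on a 4-wire
# resistive panel: reported = (first + second) / 2, so the unknown
# second touch is 2 * reported - first.

def estimate_second_touch(anchor, reported):
    """anchor: known first touch (x, y); reported: panel reading."""
    return (2 * reported[0] - anchor[0], 2 * reported[1] - anchor[1])

anchor = (100, 120)     # finger 1, captured before finger 2 lands
reported = (160, 150)   # panel output with both fingers down
print(estimate_second_touch(anchor, reported))   # -> (220, 180)
```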

Session: Manipulation

Bubble clusters: an interface for manipulating spatial aggregation of graphical objects (p. 173-182)

Abstract:

Spatial layout is frequently used for managing loosely organized information, such as desktop icons and digital ink. To help users organize this type of information efficiently, we propose an interface for manipulating spatial aggregations of objects. The aggregated objects are automatically recognized as a group, and the group structure is visualized as a two-dimensional bubble surface that surrounds the objects. Users can drag, copy, or delete a group by operating on the bubble. Furthermore, to help pick out individual objects in a dense aggregation, the system spreads the objects to avoid overlapping when requested. This paper describes the design of this interface and its implementation. We tested our technique in icon grouping and ink relocation tasks and observed improvements in user performance.
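
The implicit-surface idea behind the bubbles can be sketched as follows: each object emits a radially decaying field, the bubble is the region where the summed field exceeds a threshold, and two objects fall in the same group when a path between them stays inside the bubble. The Gaussian field, threshold, and straight-line connectivity test are assumptions for illustration.

```python
import math

THRESHOLD = 1.0   # iso-value defining the bubble boundary (assumed)
SIGMA = 40.0      # field spread in pixels (assumed)

def field(p, objects):
    """Summed Gaussian field contributed by all objects at point p."""
    return sum(math.exp(-((p[0] - ox) ** 2 + (p[1] - oy) ** 2)
                        / (2 * SIGMA ** 2))
               for ox, oy in objects)

def same_bubble(a, b, objects, samples=32):
    """True if the straight path from a to b never leaves the bubble."""
    return all(
        field((a[0] + (b[0] - a[0]) * i / samples,
               a[1] + (b[1] - a[1]) * i / samples), objects) >= THRESHOLD
        for i in range(samples + 1)
    )

icons = [(100, 100), (140, 110), (400, 300)]
print(same_bubble(icons[0], icons[1], icons))   # close pair -> True
print(same_bubble(icons[0], icons[2], icons))   # distant pair -> False
```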

Dirty desktops: using a patina of magnetic mouse dust to make common interactor targets easier to select (p. 183-186)

Abstract:

A common task in graphical user interfaces is controlling onscreen elements using a pointer. Current adaptive pointing techniques require applications to be built using accessibility libraries that reveal information about interactive targets, and most do not handle path/menu navigation. We present a pseudo-haptic technique that is OS and application independent, and can handle both dragging and clicking. We do this by associating a small force with each past click or drag. When a user frequently clicks in the same general area (e.g., on a button), the patina of past clicks naturally creates a pseudo-haptic magnetic field with an effect similar to that of snapping or sticky icons. Our contribution is a bottom-up approach to make targets easier to select without requiring prior knowledge of them.
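
A minimal sketch of the patina mechanic, with assumed constants: each click deposits an attractor, and subsequent cursor positions are nudged toward nearby deposits, loosely imitating sticky icons.

```python
import math

class Patina:
    """Accumulated 'mouse dust': past clicks gently attract the cursor.
    Radius and gain values are assumptions for illustration."""

    def __init__(self, radius=30.0, gain=4.0):
        self.clicks = []        # deposited click positions
        self.radius = radius    # influence radius in pixels
        self.gain = gain        # maximum pull in pixels per update

    def deposit(self, x, y):
        self.clicks.append((x, y))

    def adjust(self, x, y):
        """Return the cursor position nudged toward nearby deposits."""
        fx = fy = 0.0
        for cx, cy in self.clicks:
            d = math.hypot(cx - x, cy - y)
            if 0 < d < self.radius:
                w = 1.0 - d / self.radius     # nearer deposits pull harder
                fx += (cx - x) / d * w        # unit direction times weight
                fy += (cy - y) / d * w
        return (x + self.gain * fx, y + self.gain * fy)

patina = Patina()
patina.deposit(100, 100)        # the user has clicked a button here
print(patina.adjust(110, 104))  # a nearby cursor is pulled toward it
```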

Boomerang: suspendable drag-and-drop interactions based on a throw-and-catch metaphor (p. 187-190)

Abstract:

We present the boomerang technique, which makes it possible to suspend and resume drag-and-drop operations. A throwing gesture while dragging an object suspends the operation, anytime and anywhere. A drag-and-drop interaction, enhanced with our technique, allows users to switch windows, invoke commands, and even drag other objects during a drag-and-drop operation without using the keyboard or menus. We explain how a throwing gesture can suspend drag-and-drop operations, and describe other features of our technique, including grouping, copying, and deleting dragged objects. We conclude by presenting prototype implementations and initial feedback on the proposed technique.
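
Detecting the suspending throw can be sketched as a speed threshold on pointer motion during a drag; the threshold and session structure below are illustrative assumptions, not the paper's implementation.

```python
import math

THROW_SPEED = 1500.0   # px/s flick threshold (assumed)

class DragSession:
    def __init__(self, payload):
        self.payload = payload
        self.suspended = False
        self.last = None       # (x, y, t) of the previous pointer sample

    def on_move(self, x, y, t):
        if self.last is not None:
            px, py, pt = self.last
            dt = t - pt
            if dt > 0 and math.hypot(x - px, y - py) / dt > THROW_SPEED:
                self.suspended = True   # "thrown": park the payload
        self.last = (x, y, t)

drag = DragSession(payload="report.pdf")
drag.on_move(100, 100, 0.00)
drag.on_move(180, 100, 0.02)   # 4000 px/s: fast enough to be a throw
print(drag.suspended)          # True
```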

Session: Wither the GUI

Gui --- phooey!: the case for text input (p. 193-202)

Abstract:

Information cannot be found if it is not recorded. Existing rich graphical application approaches interfere with user input in many ways, forcing complex interactions to enter simple information, requiring complex cognition to decide where the data should be stored, and limiting the kind of information that can be entered to what can fit into specific applications' data models. Freeform text entry suffers from none of these limitations but produces data that is hard to retrieve or visualize. We describe the design and implementation of Jourknow, a system that aims to bridge these two modalities, supporting lightweight text entry and weightless context capture that produces enough structure to support rich interactive presentation and retrieval of the arbitrary information entered.

Graphstract: minimal graphical help for computers (p. 203-212)

Abstract:

We explore the use of abstracted screenshots as part of a new help interface. Graphstract, an implementation of a graphical help system, extends the ideas of textually oriented Minimal Manuals to the use of screenshots, allowing multiple small graphical elements to be shown in a limited space. This allows a user to get an overview of a complex sequential task as a whole. The ideas have been developed by three iterations of prototyping and evaluation. A user study shows that Graphstract helps users perform tasks faster on some but not all tasks. Due to their graphical nature, it is possible to construct Graphstracts automatically from pre-recorded interactions. A second study shows that automated capture and replay is a low-cost method for authoring Graphstracts, and the resultant help is as understandable as manually constructed help.

Gaze-enhanced scrolling techniques (p. 213-216)

Abstract:

Scrolling is an essential part of our everyday computing experience. Contemporary scrolling techniques rely on the explicit initiation of scrolling by the user. The act of scrolling is tightly coupled with the user's ability to absorb information via the visual channel. The use of eye gaze information is therefore a natural choice for enhancing scrolling techniques. We present several gaze-enhanced scrolling techniques for manual and automatic scrolling which use gaze information as a primary input or as an augmented input. We also introduce the use of off-screen gaze-actuated buttons for document navigation and control.
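
One of the automatic techniques can be sketched as gaze-driven scrolling whose speed ramps up as the gaze nears the bottom of the viewport; the dead zone and maximum speed below are assumed values, not the paper's parameters.

```python
def scroll_speed(gaze_y, viewport_height, dead_zone=0.6, max_speed=400.0):
    """gaze_y: vertical gaze position in px from the viewport top.

    Returns a scroll speed in px/s: zero while the gaze stays in the
    upper `dead_zone` fraction of the viewport, ramping linearly to
    `max_speed` at the bottom edge.
    """
    frac = min(max(gaze_y / viewport_height, 0.0), 1.0)
    if frac <= dead_zone:
        return 0.0
    return max_speed * (frac - dead_zone) / (1.0 - dead_zone)

print(scroll_speed(gaze_y=700, viewport_height=800))   # gaze near bottom
```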

BLUI: low-cost localized blowable user interfaces (p. 217-220)

Abstract:

We describe a unique form of hands-free interaction that can be implemented on most commodity computing platforms. Our approach supports blowing at a laptop or computer screen to directly control certain interactive applications. Localization estimates are produced in real-time to determine where on the screen the person is blowing. Our approach relies solely on a single microphone, such as those already embedded in a standard laptop or one placed near a computer monitor, which makes our approach very cost-effective and easy-to-deploy. We show example interaction techniques that leverage this approach.

Session: Adaptation & examples

Specifying label layout style by example (p. 221-230)

Abstract:

Creating high-quality label layouts in a particular visual style is a time-consuming process. Although automated labeling algorithms can aid the layout process, expert design knowledge is required to tune these algorithms so that they produce layouts which meet the designer's expectations. We propose a system which can learn a label layout style from a single example layout and then apply this style to new labeling problems. Because designers find it much easier to create example layouts than tune algorithmic parameters, our system provides a more natural workflow for graphic designers. We demonstrate that our system is capable of learning a variety of label layout styles from examples.

Automatically generating user interfaces adapted to users' motor and vision capabilities (p. 231-240)

Abstract:

Most of today's GUIs are designed for the typical, able-bodied user; atypical users are, for the most part, left to adapt as best they can, perhaps using specialized assistive technologies as an aid. In this paper, we present an alternative approach: SUPPLE++ automatically generates interfaces which are tailored to an individual's motor capabilities and can be easily adjusted to accommodate varying vision capabilities. SUPPLE++ models users' motor capabilities based on a one-time motor performance test and uses this model in an optimization process, generating a personalized interface. A preliminary study indicates that while there is still room for improvement, SUPPLE++ allowed one user to complete tasks that she could not perform using a standard interface, while for the remaining users it resulted in an average time savings of 20%, ranging from a slowdown of 3% to a speedup of 43%.

Programming by a sample: rapidly creating web applications with d.mix (p. 241-250)

Abstract:

Source-code examples of APIs enable developers to quickly gain a gestalt understanding of a library's functionality, and they support organically creating applications by incrementally modifying a functional starting point. As an increasing number of web sites provide APIs, significant latent value lies in connecting the complementary representations between site and service - in essence, enabling sites themselves to be the example corpus. We introduce d.mix, a tool for creating web mashups that leverages this site-to-service correspondence. With d.mix, users browse annotated web sites and select elements to sample. d.mix's sampling mechanism generates the underlying service calls that yield those elements. This code can be edited, executed, and shared in d.mix's wiki-based hosting environment. This sampling approach leverages pre-existing web sites as example sets and supports fluid composition and modification of examples. An initial study with eight participants found d.mix to enable rapid experimentation, and suggested avenues for improving its annotation mechanism.

Evaluating user interface systems research (p. 251-258)

Abstract:

The development of user interface systems has languished with the stability of desktop computing. Future systems, however, that are off-the-desktop, nomadic or physical in nature will involve new devices and new software systems for creating interactive applications. Simple usability testing is not adequate for evaluating complex systems. The problems with evaluating systems work are explored and a set of criteria for evaluating new UI systems work is presented.

Session: Novel displays and interaction

ThinSight: versatile multi-touch sensing for thin form-factor displays (p. 259-268)

Abstract:

ThinSight is a novel optical sensing system, fully integrated into a thin form factor display, capable of detecting multiple fingers placed on or near the display surface. We describe this new hardware in detail, and demonstrate how it can be embedded behind a regular LCD, allowing sensing without degradation of display capability. With our approach, fingertips and hands are clearly identifiable through the display. The approach of optical sensing also opens up the exciting possibility for detecting other physical objects and visual markers through the display, and some initial experiments are described. We also discuss other novel capabilities of our system: interaction at a distance using IR pointing devices, and IR-based communication with other electronic devices through the display. A major advantage of ThinSight over existing camera and projector based optical systems is its compact, thin form factor, making such systems even more deployable. We therefore envisage using ThinSight to capture rich sensor data through the display which can be processed using computer vision techniques to enable both multi-touch and tangible interaction.

Lucid touch: a see-through mobile device (p. 269-278)

Abstract:

Touch is a compelling input modality for interactive devices; however, touch input on the small screen of a mobile device is problematic because a user's fingers occlude the graphical elements he wishes to work with. In this paper, we present LucidTouch, a mobile device that addresses this limitation by allowing the user to control the application by touching the back of the device. The key to making this usable is what we call pseudo-transparency: by overlaying an image of the user's hands onto the screen, we create the illusion of the mobile device itself being semi-transparent. This pseudo-transparency allows users to accurately acquire targets while not occluding the screen with their fingers and hand. LucidTouch also supports multi-touch input, allowing users to operate the device simultaneously with all 10 fingers. We present initial study results that indicate that many users found touching on the back to be preferable to touching on the front, due to reduced occlusion, higher precision, and the ability to make multi-finger input.

E-conic: a perspective-aware interface for multi-display environments (p. 279-288)

Abstract:

Multi-display environments compose displays that can be at different locations from, and different angles to, the user; as a result, it can become very difficult to manage windows, read text, and manipulate objects. We investigate the idea of perspective as a way to solve these problems in multi-display environments. We first identify basic display and control factors that are affected by perspective, such as visibility, fracture, and sharing. We then present the design and implementation of E-conic, a multi-display multi-user environment that uses location data about displays and users to dynamically correct perspective. We carried out a controlled experiment to test the benefits of perspective correction in basic interaction tasks like targeting, steering, aligning, pattern-matching and reading. Our results show that perspective correction significantly and substantially improves user performance in all these tasks.