Keywords
UIST2.0 Archive - 20 years of UIST

annotation

In Proceedings of UIST 1996

A viewer for PostScript documents (p. 31-32)

In Proceedings of UIST 1997

Supporting cooperative and personal surfing with a desktop assistant (p. 129-138)

In Proceedings of UIST 2001

View management for virtual and augmented reality (p. 101-110)

In Proceedings of UIST 2002

Moving markup: repositioning freeform annotations (p. 21-30)

In Proceedings of UIST 2002

Boom chameleon: simultaneous capture of 3D viewpoint, voice and gesture annotations on a spatially-aware display (p. 111-120)

In Proceedings of UIST 2003

Fluid interaction techniques for the control and annotation of digital video (p. 105-114)

In Proceedings of UIST 2004

Who cares?: reflecting who is reading what on distributed community bulletin boards (p. 109-118)

In Proceedings of UIST 2004

ScreenCrayons: annotating anything (p. 165-174)

freeform ink annotation

In Proceedings of UIST 2009

Perceptual interpretation of ink annotations on line charts (p. 233-236)

Abstract

Asynchronous collaborators often use freeform ink annotations to point to visually salient perceptual features of line charts, such as peaks or humps, valleys, rising slopes, and declining slopes. We present a set of techniques for interpreting such annotations to algorithmically identify the corresponding perceptual parts. Our approach is first to apply a parts-based segmentation algorithm that identifies the visually salient perceptual parts in the chart. Our system then analyzes the freeform annotations to infer the corresponding peaks, valleys, or sloping segments. Once the system has identified the perceptual parts, it can highlight them to draw further attention and reduce ambiguity of interpretation in asynchronous collaborative discussions.
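
The segmentation step described above can be illustrated with a minimal sketch. This is not the paper's algorithm (which handles humps and sloping segments more robustly); it simply finds local extrema as stand-ins for "peak" and "valley" perceptual parts, then maps a hypothetical annotation's x position to the nearest detected part.

```python
def find_extrema(ys):
    """Identify indices of local peaks and valleys in a data series.

    A minimal stand-in for the parts-based segmentation step: local
    extrema capture the 'peak' and 'valley' perceptual parts, though
    the paper's actual segmentation is more sophisticated.
    """
    peaks, valleys = [], []
    for i in range(1, len(ys) - 1):
        if ys[i] > ys[i - 1] and ys[i] > ys[i + 1]:
            peaks.append(i)
        elif ys[i] < ys[i - 1] and ys[i] < ys[i + 1]:
            valleys.append(i)
    return peaks, valleys


def nearest_part(annotation_x, parts):
    """Map an annotation's x position to the closest detected part."""
    return min(parts, key=lambda i: abs(i - annotation_x))


series = [0, 2, 5, 3, 1, 4, 2]
peaks, valleys = find_extrema(series)   # peaks at indices 2 and 5, valley at 4
target = nearest_part(3, peaks)         # an ink stroke near x=3 snaps to the peak at 2
```

Once a stroke is resolved to a part this way, the system can highlight that segment of the chart for other collaborators.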

image annotation

In Proceedings of UIST 2008

Annotating gigapixel images (p. 33-36)

Abstract

Panning and zooming interfaces for exploring very large images containing billions of pixels (gigapixel images) have recently appeared on the internet. This paper addresses issues that arise when creating and rendering auditory and textual annotations for such images. In particular, we define a distance metric between each annotation and any view resulting from panning and zooming on the image. The distance then informs the rendering of audio annotations and text labels. We demonstrate the annotation system on a number of panoramic images.
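
The idea of driving rendering from an annotation-to-view distance can be sketched as follows. The paper defines its own metric; the one below is only an illustrative assumption, combining pan offset (normalized by the view's scale) with zoom-level difference, then using the distance to fade an audio annotation.

```python
import math

def view_distance(ann, view):
    """Illustrative distance between an annotation and the current view.

    ann and view are (cx, cy, zoom) tuples: a center point in image
    coordinates plus a log-scale zoom level. This is NOT the paper's
    exact metric, just one plausible combination of pan and zoom terms.
    """
    pan = math.hypot(ann[0] - view[0], ann[1] - view[1])
    zoom_gap = abs(ann[2] - view[2])
    # Normalize pan by the view's scale so the distance shrinks as we zoom in
    return pan / (2 ** view[2]) + zoom_gap


def audio_gain(distance, falloff=1.0):
    """Fade an audio annotation out as the view moves away from it."""
    return 1.0 / (1.0 + falloff * distance)
```

The same distance could scale text-label opacity, so annotations surface smoothly as the user pans and zooms toward them.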

repositioning annotation

In Proceedings of UIST 2002

Moving markup: repositioning freeform annotations (p. 21-30)

video annotation

In Proceedings of UIST 2008

Video object annotation, navigation, and composition (p. 3-12)

Abstract

We explore the use of tracked 2D object motion to enable novel approaches to interacting with video. These include moving annotations, video navigation by direct manipulation of objects, and creating an image composite from multiple video frames. Features in the video are automatically tracked and grouped in an off-line preprocess that enables later interactive manipulation. Examples of annotations include speech and thought balloons, video graffiti, path arrows, video hyperlinks, and schematic storyboards. We also demonstrate a direct-manipulation interface for random frame access using spatial constraints, and a drag-and-drop interface for assembling still images from videos. Taken together, our tools can be employed in a variety of applications including film and video editing, visual tagging, and authoring rich media such as hyperlinked video.
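
The moving-annotation idea above can be sketched in a few lines. The track format here (a per-frame dictionary of object positions) is a hypothetical stand-in for the output of the off-line tracking preprocess, not the paper's actual data structure; a speech balloon simply follows the tracked object at a fixed screen-space offset.

```python
def balloon_position(track, frame, offset=(20, -30)):
    """Place a speech-balloon annotation relative to a tracked object.

    track maps frame numbers to (x, y) object positions, as might be
    produced by an off-line tracker (hypothetical format). The balloon
    follows the object with a fixed screen-space offset.
    """
    x, y = track[frame]
    return (x + offset[0], y + offset[1])


# A toy three-frame track of one object drifting right and down
track = {0: (50, 80), 1: (55, 82), 2: (61, 85)}
positions = [balloon_position(track, f) for f in sorted(track)]
```

Path arrows, graffiti, and hyperlink hotspots can reuse the same lookup, which is what makes the off-line tracking pass pay off across so many annotation types.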