Advance Program
The Final Program is now available.
Day-by-day session information is
available in our convenient, two-page advance
program.
Opening Keynote
Ben Shneiderman will give the
opening keynote address, titled: Creativity Support Tools:
A Grand Challenge for Interface Designers.
The
challenge of supporting creative work is pushing user interface designers and
human-computer interaction researchers to develop improved models of creative
processes. This talk begins with a comparison of creativity models and focuses on Csikszentmihalyi's framework of domain, field, and individual as a basis for software requirements.
These requirements lead to eight creative activities that could be
facilitated by improved interfaces:
- searching and browsing digital libraries,
- visualizing data and processes,
- consulting with peers and mentors,
- thinking by free associations,
- exploring solutions with what-if tools,
- composing artifacts and performances,
- reviewing and replaying session histories, and
- disseminating results.
These
activities can be supported in existing software applications, built into web
services, or used to inspire novel tools. However,
rapid performance, minimal interface distraction, and scalable solutions are
necessary for success. Smoother coordination across multiple windows and better integration of tools are vital. A
second facilitating goal is compatible actions with consistent terminology, such
as the widely used cut-copy-paste or open-save-close. Higher-level actions that are closer to the task domain are candidates, such as
annotate-consult-revise, initiate-compose-evaluate, or
collect-explore-visualize. Adding to the challenge of research in this area is the difficulty of evaluation. Benchmark tasks reveal little about efficacy for creative work and discovery.
While case studies or ethnographic observations are useful as formative
design studies, they are weak in their capacity to provide rigorous validation.
BEN SHNEIDERMAN is a Professor in the Department of Computer Science, Founding Director (1983-2000) of the Human-Computer Interaction Laboratory (http://www.cs.umd.edu/hcil/), and Member of the Institute for Advanced Computer Studies and the Institute for Systems Research, all at the University of Maryland at College Park.
He was elected as a Fellow of the Association for Computing Machinery (ACM) in
1997 and a Fellow of the American Association for the Advancement of Science
(AAAS) in 2001. He received the ACM
SIGCHI Lifetime Achievement Award in 2001.
Ben
is the author of Software Psychology: Human Factors in Computer and
Information Systems (1980) and Designing the User Interface: Strategies
for Effective Human-Computer Interaction (4th ed., 2004; http://www.awl.com/DTUI/). He pioneered the highlighted textual link in 1983, and it became part of
Hyperties, a precursor to the web. His
move into information visualization helped spawn the successful company Spotfire (http://www.spotfire.com/). He is a technical advisor for the HiveGroup, ILOG,
and Clockwise3D. With S. Card and J.
Mackinlay, he co-authored "Readings in Information Visualization: Using
Vision to Think" (1999). Leonardo's
Laptop: Human Needs and the New Computing Technologies (MIT Press) appeared
in October 2002, and his newest book, with B. Bederson, The Craft of Information Visualization (Morgan Kaufmann), was published in April 2003.
Closing Keynote
Sandra Marshall will give the closing keynote address, which is held jointly with ICMI-PUI's opening session, titled: New Techniques for Evaluating Innovative Interfaces with Eye Tracking.
Computer interfaces are changing rapidly, as are the
cognitive demands on the operators using them.
Innovative applications of new technologies such as multimodal and
multimedia displays, haptic and pen-based interfaces, and natural language
exchanges bring exciting changes to conventional interface usage.
At the same time, their complexity may place overwhelming cognitive
demands on the user. As novel interfaces and software applications are
introduced into operational settings, it is imperative to evaluate them from a
number of different perspectives. One
important perspective examines the extent to which a new interface changes the
cognitive requirements for the operator.
The presentation describes a new approach to measuring
cognitive effort using metrics based on eye movements and pupil dilation.
It is well known that effortful cognitive processing is accompanied by increased pupil dilation, but techniques that could supply results in real time or handle data from long-lasting interactions were not previously available. We now have a metric, the Index of Cognitive Activity (ICA), that is computed in real time as the operator interacts with the interface. The Index can be used to
examine extended periods of usage or to assess critical events on an
individual-by-individual basis.
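The Index of Cognitive Activity itself is a proprietary metric whose mathematical basis will be described in the talk; as a purely hypothetical illustration of the general idea, the sketch below counts abrupt increases in pupil diameter per second from a sampled trace. The sample rate, threshold, and function name are invented for this example, not taken from Dr. Marshall's method.

```python
def abrupt_dilation_rate(pupil_mm, sample_rate_hz=60, threshold_mm=0.02):
    """Events per second where pupil diameter jumps by more than a threshold."""
    events = sum(
        1 for prev, curr in zip(pupil_mm, pupil_mm[1:])
        if curr - prev > threshold_mm  # sudden dilation between samples
    )
    duration_s = len(pupil_mm) / sample_rate_hz
    return events / duration_s

# Synthetic 2-second trace at 60 Hz containing two abrupt dilation events.
trace = [3.0] * 30 + [3.1] * 30 + [3.2] * 60
rate = abrupt_dilation_rate(trace)  # 1.0 event per second
```

Because each sample is compared only with its predecessor, a measure of this shape can run incrementally as data arrive, which is the property that makes real-time assessment possible.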
While dilation reveals when cognitive effort is highest,
eye movements provide evidence of why. Especially
during critical events, one wants to know whether the operator is confused by
the presentation or location of specific information, whether he is attending to
key information when necessary, or whether he is distracted by irrelevant
features of the display. Important
details of confusion, attention, and distraction are revealed by traces of his
eye movements and statistical analyses of time spent looking at various features
during critical events.
Together, the Index of Cognitive Activity and the various
analyses of eye movements provide essential information about how users interact
with new interface technologies. Their
use can aid designers of innovative hardware and software products by
highlighting those features that increase rather than decrease users’
cognitive effort.
In the presentation, the underlying mathematical basis of
the Index of Cognitive Activity will be described together with validating
research results from a number of experiments.
Eye movement analyses from the same studies give clues to the sources of
increases in cognitive workload. To
illustrate interface evaluation with the ICA and eye movement analysis, several
extended examples will be presented using commercial and military displays.
SANDRA
MARSHALL is President & CEO of EyeTracking, Inc. and Professor of Psychology
at San Diego State
University. Her research in
cognition and assessment has received federal funding for the past twenty years
and has had important theoretical and practical impact.
Early research on problem solving culminated in the book “Schemas in
Problem Solving.” Her recent work has focused on the use of eye tracking to
understand cognitive activity in training and performance on military
simulations. In research sponsored by the U.S. Office of Naval Research, the
U.S. Air Force Office of Scientific Research, and the U.S. Defense Advanced
Research Projects Agency (DARPA), Dr. Marshall developed new methods for
assessing cognitive strategies and cognitive workload based on eye measures.
The techniques are now being used to evaluate interfaces for military and
non-military applications.
Invited Surveys
UIST is pleased to have two exciting surveys. They
will be held in parallel sessions on Tuesday morning.
Computer Audition: A Survey of Techniques, Standards,
and Applications
Michael Casey, Department of Computing, City University
of London, UK
Computer Audition is concerned with capturing, processing,
and interpreting arbitrary sound, such as music, sports events, industrial machine noises, and environmental audio. Such technology is now being used
as an alternate or additional input modality in many applications. In this
survey I will summarize the key technologies behind computer audition, discuss
their inclusion in standards such as MPEG-7, and describe several practical
applications of this technology.
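To make the survey's subject concrete, here is a minimal sketch of one low-level feature common in computer-audition systems, the spectral centroid (a rough measure of a sound's "brightness"), which is close in spirit to MPEG-7's AudioSpectrumCentroid descriptor though not its exact definition. Frame length, sample rate, and the test tone are arbitrary choices for illustration.

```python
import numpy as np

def spectral_centroid(frame, sample_rate_hz):
    """Magnitude-weighted mean frequency of one audio frame, in Hz."""
    mags = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate_hz)
    return float(np.sum(freqs * mags) / np.sum(mags))

# A 440 Hz tone with an integer number of cycles per frame (which avoids
# spectral leakage) should place the centroid at roughly 440 Hz.
sr = 8000
n = 2000                    # 440 * n / sr = 110 full cycles
t = np.arange(n) / sr
tone = np.sin(2 * np.pi * 440.0 * t)
centroid = spectral_centroid(tone, sr)
```

Recognition systems typically compute features like this over a sliding sequence of short frames and feed the resulting vectors to a classifier.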
MICHAEL CASEY'S research includes general sound
recognition, auditory scene analysis, blind signal separation, acoustic fault
diagnosis, and multimedia information systems. He is a member of the Moving Picture Experts Group (MPEG) committee of the International Organization for Standardization (ISO) and an editor for the MPEG-7 International Standard for
Multimedia Content Description, for which he has contributed several of the
standardized audio descriptors and description schemes.
Chemical Sensors: Linking Interactive Systems with the
Real World
Dermot Diamond, National Centre for Sensor Research, Dublin City University, Ireland
Leveraging recent advances in analytical chemistry,
materials science, and electronics, cost-effective chemical sensing is now
becoming available for a bewildering array of applications. These devices are
typically built on transducer platforms, with the key issue being how to couple
the variation of a chemically (or biologically) important parameter with the
signal-generation capability of the transducer. This is often achieved by
depositing a chemically sensitive film directly on the device, or by using the
transducer to indirectly probe the chemically sensitive film. The variety of
transducer platforms, and the increasing range of materials for generating the
chemically sensitive films have generated a wide range of routes to accessing
signals containing chemical and/or biological information.
The merging of chemical and biological sensing with digital
communications technologies is one of the most exciting opportunities for the
global research community today. In this survey I will summarize recent
technical trends, discuss several sample applications, and provide pointers for
UI researchers who would like to incorporate chemical sensors in their
interactive systems.
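As one concrete instance of coupling a chemical parameter to a transducer signal, a potentiometric ion-selective electrode produces a potential that follows the Nernst equation, E = E0 + (RT/zF) ln(a); the sketch below inverts that relation to recover ion activity from a measured voltage. The calibration constant E0 and the example reading are invented values for illustration, not drawn from the survey.

```python
import math

GAS_CONSTANT = 8.314   # J/(mol*K)
FARADAY = 96485.0      # C/mol

def ion_activity(e_measured_v, e0_v, charge, temp_k=298.15):
    """Invert the Nernst equation E = E0 + (RT/zF) * ln(a) for activity a."""
    slope_v = GAS_CONSTANT * temp_k / (charge * FARADAY)  # ~25.7 mV for z = 1
    return math.exp((e_measured_v - e0_v) / slope_v)

# Example: with an assumed E0 of 0.200 V, a reading of 0.0817 V from a
# monovalent-ion electrode corresponds to an activity of about 0.01.
activity = ion_activity(0.0817, 0.200, 1)
```

For a UI researcher, the practical point is that such a sensor reduces to a voltage plus a calibration curve, so it can be sampled and interpreted like any other analog input channel.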
DERMOT DIAMOND received his Ph.D. from Queen’s University
Belfast (Chemical Sensors, 1987) and is currently Vice President for Research at Dublin City University, Ireland. He has published over 100 peer-reviewed papers in international science journals, and is co-author and editor of two books, ‘Spreadsheet Applications in Chemistry using Microsoft Excel’ (1997) and ‘Principles of Chemical and Biological Sensors’ (1998), both published by Wiley.
Papers & Technotes
Session: Collaborative Software
Extensible Interface Widgets for Augmented Collaboration in SCAPE
Leonard D. Brown (Beckman Institute, University of Illinois), Hong Hua
(University of Hawaii at Manoa), Chunyu Gao (Beckman Institute, University of
Illinois)
Rhythm Modeling, Visualizations and Applications
James “Bo” Begole, John C. Tang (Sun Microsystems Laboratories), Rosco
Hill (University of Waterloo)
Classroom BRIDGE: using collaborative public and desktop timelines to
support activity awareness
Craig H. Ganoe, Jacob P. Somervell, Dennis C. Neale, Philip L. Isenhour, John
M. Carroll, Mary Beth Rosson, D. Scott McCrickard (Virginia Polytechnic
Institute)
Session: Audio and Paper
SmartMusicKiosk: Music Listening Station with Chorus-Search Function
Masataka Goto (National Institute of Advanced Industrial Science and
Technology)
TalkBack: a conversational answering machine
Vidya Lakshmipathy, Chris Schmandt, Natalia Marmasse (MIT Media Lab)
Paper Augmented Digital Documents
François Guimbretière (University of Maryland)
Session: Input
EdgeWrite: A Stylus-Based Text Entry Method Designed for High Accuracy
and Stability of Motion
Jacob O. Wobbrock, Brad A. Myers, John A. Kembel (Carnegie Mellon University)
Tracking Menus
George Fitzmaurice, Azam Khan, Robert Pieké, Bill Buxton, Gordon Kurtenbach (Alias|wavefront)
TiltText: Using Tilt for Text Input to Mobile Phones
Daniel Wigdor, Ravin Balakrishnan (University of Toronto)
Considering the Direction of Cursor Movement for Efficient Traversal of
Cascading Menus (TechNote)
Masatomo Kobayashi, Takeo Igarashi (University of Tokyo)
Session: Images and Video
Automatic Thumbnail Cropping and its Effectiveness
Bongwon Suh, Haibin Ling, Benjamin B. Bederson, David W. Jacobs (University of
Maryland)
Fluid Interaction Techniques for the Control and Annotation of Digital
Video
Gonzalo Ramos, Ravin Balakrishnan (University of Toronto)
Rapid Serial Visual Presentation Techniques for Consumer Digital Video
Devices
Kent Wittenburg, Clifton Forlines, Tom Lanning, Alan Esenther (Mitsubishi
Electric Research Laboratories), Shigeo Harada, Taizo Miyachi (Mitsubishi
Electric Corporation -- Industrial Design Center)
Session: Architectures and Toolkits
GADGET: A Toolkit for Optimization-Based Approaches to Interface and
Display Generation
James Fogarty, Scott E. Hudson (Carnegie Mellon University)
A molecular architecture for creating advanced GUIs
Eric Lecolinet (GET / ENST and CNRS LTCI)
User Interface Continuations
Dennis Quan, David Huynh, David R. Karger, Robert Miller (MIT Computer Science
and Artificial Intelligence Laboratory)
Session: Public and Multi-Screen Displays
Synchronous Gestures for Multiple Persons and Computers
Ken Hinckley (Microsoft Research)
Dynamo: A public interactive surface supporting the cooperative sharing
and exchange of media
Shahram Izadi*, Harry Brignull†, Tom Rodden*,
Yvonne Rogers†, Mia Underwood† (*University
of Nottingham, †University of Sussex)
A fast, interactive 3D paper-flier metaphor for digital bulletin boards
(TechNote)
Laurent Denoue, Les Nelson, Elizabeth Churchill (FX Palo Alto Laboratory)
Session: Joint Session with ICMI-PUI
VisionWand: Interaction Techniques for Large Displays using a Passive
Wand Tracked in 3D
Xiang Cao, Ravin Balakrishnan (University of Toronto)
Perceptually-Supported Image Editing of Text and Graphics
Eric Saund, David Fleet, Daniel Larner, James Mahoney (Palo Alto Research
Center)
Session: Novel Interaction
Multi-Finger and Whole Hand Gestural Interaction Techniques for
Multi-User Tabletop Displays
Mike Wu, Ravin Balakrishnan (University of Toronto)
PreSense: Interaction Techniques for Finger Sensing Input Devices
Jun Rekimoto (Sony Computer Science Laboratories), Takaaki Ishizawa (Keio
University), Carsten Schwesig, Haruo Oba (Sony Computer Science Laboratories)
Stylus Input and Editing Without Prior Selection of Mode (TechNote)
Eric Saund (Palo Alto Research Center), Edward Lank (San Francisco State
University)
Tactile Interfaces for Small Touch Screens (TechNote)
Ivan Poupyrev (Sony CSL), Shigeaki Maruyama (Micro Device Center, Sony EMCS)