Final Program
Feel free to download and print the PDF final program -- it is designed to print two-sided (landscape, flip on short side) and then to fold into a 12-page booklet. Of course, you'll get a copy on-site at the conference. If you prefer, all session information
and times are available in this HTML version:
Day-by-Day Schedule
All sessions in Pinnacle Ballroom unless otherwise noted.
Sunday, November 2, 2003
  8:00 am - 4:30 pm    Doctoral Symposium (invitation only), Dundarave Room
  5:00 pm - 9:00 pm    Conference Registration
  6:00 pm - 8:00 pm    Welcoming Reception (including Doctoral posters)

Monday, November 3, 2003
  7:45 am - 8:30 am    Continental Breakfast
  8:30 am - 10:00 am   Opening Keynote Address (Ben Shneiderman)
  10:00 am - 10:30 am  Break
  10:30 am - 12:00 pm  Session: Collaborative Software
  12:00 pm - 2:00 pm   Lunch (on your own)
  2:00 pm - 3:30 pm    Session: Audio and Paper
  3:30 pm - 4:00 pm    Break (posters on display in Shaughnessy)
  4:00 pm - 5:45 pm    Session: Input
  6:30 pm -            Conference Banquet, Imperial Chinese Seafood Restaurant

Tuesday, November 4, 2003
  7:45 am - 8:30 am    Continental Breakfast
  8:30 am - 10:00 am   Session: Images and Video
  10:00 am - 10:30 am  Break (posters on display in Shaughnessy)
  10:30 am - 12:00 pm  Session: Architectures and Toolkits
  12:00 pm - 2:00 pm   Lunch (on your own)
  2:00 pm - 3:00 pm    Invited Surveys (in parallel tracks): Computer Audition (Pinnacle III); Chemical Sensors (Pinnacle I/II)
  3:00 pm - 3:30 pm    Break (posters on display in Shaughnessy)
  3:30 pm - 4:45 pm    Session: Public and Multi-Screen Displays
  6:00 pm - 9:00 pm    Demo Reception at NewMIC -- the New Media Innovation Centre

Wednesday, November 5, 2003
  7:45 am - 8:30 am    Continental Breakfast
  8:30 am - 10:00 am   UIST/ICMI Joint Keynote Address (Sandra Marshall)
  10:00 am - 10:30 am  Break (posters on display in Shaughnessy)
  10:30 am - 12:10 pm  Joint Paper Session with ICMI-PUI 2003
  12:10 pm - 2:00 pm   Poster Lunch (Pinnacle Foyer, Shaughnessy)
  2:00 pm - 3:30 pm    Session: Novel Interaction
  3:30 pm - 4:00 pm    Break
  4:00 pm - 5:30 pm    ICMI-PUI Paper Session: Attention and Integration
  7:00 pm - 10:00 pm   UBC Demo Reception (optional, additional fee)
Detailed Program Information:
Opening Keynote
Ben Shneiderman will give the opening
keynote address, titled: Creativity Support Tools: A Grand Challenge
for Interface Designers.
The
challenge of supporting creative work is pushing user interface designers and
human-computer interaction researchers to develop improved models of creative
processes. This talk begins with a
comparison of creativity models and focuses on Csikszentmihalyi’s domain, field, and individual as a basis for software requirements.
These requirements lead to eight creative activities that could be
facilitated by improved interfaces:
- searching and browsing digital libraries,
- visualizing data and processes,
- consulting with peers and mentors,
- thinking by free associations,
- exploring solutions with what-if tools,
- composing artifacts and performances,
- reviewing and replaying session histories, and
- disseminating results.
These activities can be supported in existing software applications, built into web services, or can inspire novel tools. However, rapid performance, minimal interface distraction, and scalable solutions are necessary for success. Smoother coordination across multiple windows and better integration of tools are vital. A second facilitating goal is compatible actions with consistent terminology, such as the widely used cut-copy-paste or open-save-close. Higher-level actions that are closer to the task domain are candidates, such as annotate-consult-revise, initiate-compose-evaluate, or collect-explore-visualize. Adding to the challenge of doing research in this area is the difficulty of evaluation. Benchmark tasks can hardly reveal the efficacy of tools for creative work and discovery. While case studies or ethnographic observations are useful as formative design studies, they are weak in their capacity to provide rigorous validation.
BEN SHNEIDERMAN is a Professor in the Department of Computer Science, Founding Director (1983-2000) of the Human-Computer Interaction Laboratory (http://www.cs.umd.edu/hcil/), and Member of the Institutes for Advanced Computer Studies & for Systems Research, all at the University of Maryland at College Park. He was elected a Fellow of the Association for Computing Machinery (ACM) in 1997 and a Fellow of the American Association for the Advancement of Science (AAAS) in 2001. He received the ACM SIGCHI Lifetime Achievement Award in 2001.
Ben is the author of Software Psychology: Human Factors in Computer and Information Systems (1980) and Designing the User Interface: Strategies for Effective Human-Computer Interaction (4th ed. 2004, http://www.awl.com/DTUI/). He pioneered the highlighted textual link in 1983, and it became part of Hyperties, a precursor to the web. His move into information visualization helped spawn the successful company Spotfire (http://www.spotfire.com/). He is a technical advisor for the HiveGroup, ILOG, and Clockwise3D. With S. Card and J. Mackinlay, he co-authored "Readings in Information Visualization: Using Vision to Think" (1999). Leonardo's Laptop: Human Needs and the New Computing Technologies (MIT Press) appeared in October 2002, and his newest book with B. Bederson, The Craft of Information Visualization (Morgan Kaufmann), was published in April 2003.
Closing Keynote
Sandra Marshall will give the closing keynote address, held jointly with ICMI-PUI's opening keynote, titled: New Techniques for Evaluating Innovative Interfaces with Eye Tracking
Computer interfaces are changing rapidly, as are the
cognitive demands on the operators using them.
Innovative applications of new technologies such as multimodal and
multimedia displays, haptic and pen-based interfaces, and natural language
exchanges bring exciting changes to conventional interface usage.
At the same time, their complexity may place overwhelming cognitive
demands on the user. As novel interfaces and software applications are
introduced into operational settings, it is imperative to evaluate them from a
number of different perspectives. One
important perspective examines the extent to which a new interface changes the
cognitive requirements for the operator.
The presentation describes a new approach to measuring
cognitive effort using metrics based on eye movements and pupil dilation.
It is well known that effortful cognitive processing is accompanied by
increases in pupil dilation, but measurement techniques were not previously
available that could supply results in real time or deal with data collected in
long-lasting interactions. We now have a metric—the Index of Cognitive
Activity—that is computed in real time as the operator interacts with the
interface. The Index can be used to
examine extended periods of usage or to assess critical events on an
individual-by-individual basis.
While dilation reveals when cognitive effort is highest,
eye movements provide evidence of why. Especially
during critical events, one wants to know whether the operator is confused by
the presentation or location of specific information, whether he is attending to
key information when necessary, or whether he is distracted by irrelevant
features of the display. Important
details of confusion, attention, and distraction are revealed by traces of his
eye movements and statistical analyses of time spent looking at various features
during critical events.
Together, the Index of Cognitive Activity and the various
analyses of eye movements provide essential information about how users interact
with new interface technologies. Their
use can aid designers of innovative hardware and software products by
highlighting those features that increase rather than decrease users’
cognitive effort.
In the presentation, the underlying mathematical basis of
the Index of Cognitive Activity will be described together with validating
research results from a number of experiments.
Eye movement analyses from the same studies give clues to the sources of
increases in cognitive workload. To
illustrate interface evaluation with the ICA and eye movement analysis, several
extended examples will be presented using commercial and military displays.
SANDRA
MARSHALL is President & CEO of EyeTracking, Inc. and Professor of Psychology
at San Diego State
University. Her research in
cognition and assessment has received federal funding for the past twenty years
and has had important theoretical and practical impact.
Early research on problem solving culminated in the book “Schemas in
Problem Solving.” Her recent work has focused on the use of eye tracking to
understand cognitive activity in training and performance on military
simulations. In research sponsored by the U.S. Office of Naval Research, the
U.S. Air Force Office of Scientific Research, and the U.S. Defense Advanced
Research Projects Agency (DARPA), Dr. Marshall developed new methods for
assessing cognitive strategies and cognitive workload based on eye measures.
The techniques are now being used to evaluate interfaces for military and
non-military applications.
Invited Surveys
UIST is pleased to have two exciting surveys. They will be held in parallel sessions on Tuesday afternoon.
Computer Audition: A Survey of
Techniques, Standards, and Applications
Michael Casey, Department of Computing, City University
of London, UK
Computer Audition is concerned with capturing, processing, and interpreting arbitrary sound, such as music, sports events, industrial machine noises, and environmental audio. Such technology is now being used as an alternate or additional input modality in many applications. In this survey I will summarize the key technologies behind computer audition, discuss their inclusion in standards such as MPEG-7, and describe several practical applications of this technology.
MICHAEL CASEY'S research includes general sound recognition, auditory scene analysis, blind signal separation, acoustic fault diagnosis, and multimedia information systems. He is a member of the Moving Picture Experts Group (MPEG) committee of the International Organization for Standardization (ISO) and an editor for the MPEG-7 International Standard for Multimedia Content Description, for which he has contributed several of the standardized audio descriptors and description schemes.
Chemical Sensors: Linking
Interactive Systems with the Real World
Dermot Diamond, National Centre for Sensor Research, Dublin City University, Ireland
Leveraging recent advances in analytical chemistry,
materials science, and electronics, cost-effective chemical sensing is now
becoming available for a bewildering array of applications. These devices are
typically built on transducer platforms, with the key issue being how to couple
the variation of a chemically (or biologically) important parameter with the
signal-generation capability of the transducer. This is often achieved by
depositing a chemically sensitive film directly on the device, or by using the
transducer to indirectly probe the chemically sensitive film. The variety of
transducer platforms, and the increasing range of materials for generating the
chemically sensitive films have generated a wide range of routes to accessing
signals containing chemical and/or biological information.
The merging of chemical and biological sensing with digital
communications technologies is one of the most exciting opportunities for the
global research community today. In this survey I will summarize recent
technical trends, discuss several sample applications, and provide pointers for
UI researchers who would like to incorporate chemical sensors in their
interactive systems.
DERMOT DIAMOND received his Ph.D. from Queen’s University Belfast (Chemical Sensors, 1987), and is currently Vice President for Research at Dublin City University, Ireland. He has published over 100 peer-reviewed papers in international science journals, and is co-author and editor of two books, ‘Spreadsheet Applications in Chemistry using Microsoft Excel’ (1997) and ‘Principles of Chemical and Biological Sensors’ (1998), both published by Wiley.
Papers & TechNotes
Session: Collaborative Software
A Widget Framework for Augmented Interaction in SCAPE
Leonard D. Brown (Beckman Institute, University of Illinois), Hong Hua
(University of Hawaii at Manoa), Chunyu Gao (Beckman Institute, University of
Illinois)
Rhythm Modeling, Visualizations and Applications
James “Bo” Begole, John C. Tang (Sun Microsystems Laboratories), Rosco
Hill (University of Waterloo)
Classroom BRIDGE: using collaborative public and desktop timelines to
support activity awareness
Craig H. Ganoe, Jacob P. Somervell, Dennis C. Neale, Philip L. Isenhour, John
M. Carroll, Mary Beth Rosson, D. Scott McCrickard (Virginia Polytechnic
Institute)
Session: Audio and Paper
SmartMusicKiosk: Music Listening Station with Chorus-Search Function
Masataka Goto (PRESTO, JST / National Institute of Advanced Industrial Science and
Technology)
TalkBack: a conversational answering machine
Vidya Lakshmipathy, Chris Schmandt, Natalia Marmasse (MIT Media Lab)
Paper Augmented Digital Documents
François Guimbretière (University of Maryland)
Session: Input
EdgeWrite: A Stylus-Based Text Entry Method Designed for High Accuracy
and Stability of Motion
Jacob O. Wobbrock, Brad A. Myers, John A. Kembel (Carnegie Mellon University)
Tracking Menus
George Fitzmaurice, Azam Khan, Robert Pieké, Bill Buxton, Gordon Kurtenbach (Alias|wavefront)
TiltText: Using Tilt for Text Input to Mobile Phones
Daniel Wigdor, Ravin Balakrishnan (University of Toronto)
Considering the Direction of Cursor Movement for Efficient Traversal of
Cascading Menus (TechNote)
Masatomo Kobayashi, Takeo Igarashi (University of Tokyo)
Session: Images and Video
Automatic Thumbnail Cropping and its Effectiveness
Bongwon Suh, Haibin Ling, Benjamin B. Bederson, David W. Jacobs (University of
Maryland)
Fluid Interaction Techniques for the Control and Annotation of Digital
Video
Gonzalo Ramos, Ravin Balakrishnan (University of Toronto)
Rapid Serial Visual Presentation Techniques for Consumer Digital Video
Devices
Kent Wittenburg, Clifton Forlines, Tom Lanning, Alan Esenther (Mitsubishi
Electric Research Laboratories), Shigeo Harada, Taizo Miyachi (Mitsubishi
Electric Corporation -- Industrial Design Center)
Session: Architectures and Toolkits
GADGET: A Toolkit for Optimization-Based Approaches to Interface and
Display Generation
James Fogarty, Scott E. Hudson (Carnegie Mellon University)
A molecular architecture for creating advanced GUIs
Eric Lecolinet (GET / ENST and CNRS LTCI)
User Interface Continuations
Dennis Quan, David Huynh, David R. Karger, Robert Miller (MIT Computer Science and Artificial Intelligence Laboratory)
Session: Public and Multi-Screen Displays
Synchronous Gestures for Multiple Persons and Computers
Ken Hinckley (Microsoft Research)
Dynamo: A public interactive surface supporting the cooperative sharing
and exchange of media
Shahram Izadi*, Harry Brignull†, Tom Rodden*, Yvonne Rogers†, Mia Underwood† (*University of Nottingham, †University of Sussex)
A fast, interactive 3D paper-flier metaphor for digital bulletin boards
(TechNote)
Laurent Denoue, Les Nelson, Elizabeth Churchill (FX Palo Alto Laboratory)
Joint Session with ICMI-PUI
VisionWand: Interaction Techniques for Large Displays using a Passive
Wand Tracked in 3D
Xiang Cao, Ravin Balakrishnan (University of Toronto)
Perceptually-Supported Image Editing of Text and Graphics
Eric Saund, David Fleet, Daniel Larner, James Mahoney (Palo Alto Research
Center)
ICMI-PUI Paper: A System for
Fast Full-Text Entry for Small Electronic Devices
Saied Nesbat (ExIdeas, Inc.)
ICMI-PUI Paper: Mutual
Disambiguation of 3D Multimodal Interaction in Augmented and Virtual Reality
Ed Kaiser, Alex Olwal, David McGee, Hrvoje Benko, Andrea Corradini,
Xiaoguang Li, Phil Cohen, Steven Feiner (Oregon Health and Science University/OGI
School of Science & Engineering, Columbia University, and Pacific
Northwest National Laboratory)
Session: Novel Interaction
Multi-Finger and Whole Hand Gestural Interaction Techniques for
Multi-User Tabletop Displays
Mike Wu, Ravin Balakrishnan (University of Toronto)
PreSense: Interaction Techniques for Finger Sensing Input Devices
Jun Rekimoto (Sony Computer Science Laboratories), Takaaki Ishizawa (Keio
University), Carsten Schwesig, Haruo Oba (Sony Computer Science Laboratories)
Stylus Input and Editing Without Prior Selection of Mode (TechNote)
Eric Saund (Palo Alto Research Center), Edward Lank (San Francisco State
University)
Tactile Interfaces for Small Touch Screens (TechNote)
Ivan Poupyrev (Sony CSL), Shigeaki Maruyama (Micro Device Center, Sony EMCS)
ICMI-PUI Session: Attention and Integration
ICMI-PUI Paper: Learning and
Reasoning about Interruption
Eric Horvitz and Johnson Apacible (Microsoft Research)
ICMI-PUI Paper: Providing the
Basis for Human-Robot-Interaction: A Multimodal Attention System for a Mobile
Robot
Sebastian Lang, Marcus Kleinehagenbrock, Sascha Hohenner, Jannik Fritsch,
Gernot A. Fink, and Gerhard Sagerer (Bielefeld University, Faculty of
Technology)
ICMI-PUI Paper: Selective
Perception Policies for Limiting Computation in Multimodal Systems: A
Comparative Analysis
Nuria Oliver and Eric Horvitz (Microsoft Research)
ICMI-PUI Paper: Toward a Theory
of Organized Multimodal Integration Patterns during Human-Computer Interaction
Sharon Oviatt, Rachel Coulston, Stefanie Tomko, Benfang Xiao, Rebecca
Lunsford, Matt Wesson and Lesley Carmichael (Oregon Health and Science
University/OGI School of Science & Engineering, Carnegie Mellon University
and University of Washington)
Doctoral Symposium
Understanding Images of Graphical User Interfaces: A new approach to
activity recognition for visual surveillance
Li Yu (Lehigh University)
Interaction in Context-Aware Mobile Handheld Devices
Jonna Häkkilä (University of Oulu)
Papier-Mâché: Toolkit support for tangible interaction
Scott Klemmer (UC Berkeley)
Intelligent groupware to support communication and persona management
Joe Tullio (Georgia Tech)
Territory-Based Interaction Techniques for Tabletop Collaboration
Stacey Scott (University of Calgary)
INCA: An Infrastructure to Support Novel Explorations of the Capture
& Access Design Space
Khai Truong (Georgia Institute of Technology)
A Multiscale Workspace for Managing and Exploring Personal Digital
Libraries
Daniel Bauer (University of California, San Diego)
Damask: A Tool for Early-Stage Design and Prototyping of Cross-Device
User Interfaces
James Lin (UC Berkeley)
Peer-Reviewed Demonstrations
Halo: Supporting Spatial Cognition on Small Screens
Patrick Baudisch (Microsoft Research)
Around the World in Seconds with Speed-Dependent Automatic Zooming
Andy Cockburn, Julian Looser, and Joshua Savage (University of Canterbury)
Programming for Multiple Touches and Multiple Users: A Toolkit for the DiamondTouch Hardware
Rob Diaz, Edward Tse, and Saul Greenberg (University of Calgary)
The InfoVis Toolkit
Jean-Daniel Fekete (INRIA Futurs/LRI)
MouseHaus Table, a Physical Interface for Urban Design
Chen-Je Huang, Ellen Yi-Luen Do, and Mark Gross (University of Washington)
FEEL Phone: Manipulating Endpoints of Audio, Video and Data Sessions
Michimune Kohno, Yuji Ayatsuka, and Jun Rekimoto (Sony Computer Science Laboratories)
Favorite Folders: A Configurable, Scalable File Browser
Bongshin Lee and Benjamin B. Bederson (University of Maryland at College Park)
DART: The Designers Augmented Reality Toolkit
Blair MacIntyre, Maribeth Gandy, Jay Bolter, Steven Dow, and Brendan Hannigan (Georgia Tech)
Haystack: Metadata-Enabled Information Management
Dennis Quan (MIT Artificial Intelligence Laboratory) and David Karger (MIT Laboratory for Computer Science)
Two-handed interaction in a tool-based environment
Robert St. Amant and Colin Butler (North Carolina State University)
EyePliances and EyeReason: Using Attention to Drive Interactions with Ubiquitous Appliances
Jeffrey S. Shell, Roel Vertegaal, Aadil Mamuji, Thanh Pham, Changuk Sohn, and Alexander Skaburskis (Human Media Lab Queen's University)
Calendar Navigator Agent and Dialog Tabs Demonstration
Cornelis Snoeck and Thad Starner (Georgia Institute of Technology)
The Vis-a-Vid Transparent Video Facetop
David Stotts, Jason Smith, and Dennis Jen (University of North Carolina)
ActiveInk
Hiroaki Tobita (Sony CSL Interaction Lab)
Animated Chat
Hua Wang and Takeo Igarashi (University of Tokyo)
WorldCursor: Pointing in Intelligent Environments with a Tele-operated Laser Pointer
Andy Wilson (Microsoft Research)
StoryTable: Computer Supported Collaborative Storytelling
Massimo Zancanaro, A. Cappelletti, and O. Stock (ITC-irst)
Peer-Reviewed Posters
Body Mnemonics: Portable device interaction design concept
Jussi Angesleva, Ian Oakley, Stephen Hughes, and Sile O'Modhrain (Media Lab Europe)
Natural Gesture in Descriptive Monologues
Jacob Eisenstein and Randall Davis (MIT)
Video editing based on motion recognition using temporal templates
Kensaku Fujii and Kenichi Arakawa (NTT)
Form Interaction for Pen-based Devices
Jeffrey Green (Columbia University)
Free-Space Transparency: Exposing Hidden Content Through Unimportant Screen Space
Edward Ishak and Steven Feiner (Columbia University)
Digital Video Processing To Enhance ClearBoard: A Technique and Possibilities
Minoru Kobayashi (NTT Cyber Space Laboratories)
CoolPaint: Direct Interaction Painting
Dustin Lang, Leah Findlater, and Michael Shaver (University of British Columbia)
The Flexible Pointer: An Interaction Technique for Selection in Augmented and Virtual Reality
Alex Olwal and Steven Feiner (Columbia University)
Rubbing the Fisheye: Precise Touch-Screen Interaction with Gestures and Fisheye Views
Alex Olwal and Steven Feiner (Columbia University)
Better Transparent Overlays by Applying Illustration Techniques and Vision Findings
W. Bradford Paley (Digital Image Design; Columbia University)
Modelling non-Expert Text Entry Speed on Phone Keypads
Andriy Pavlovych and Wolfgang Stuerzlinger (York University)
A Re-Interpretation of Marking Menus: The Usage of Gestalt Principles as Cognitive Tools
Eva Soliz (Columbia University), W. Bradford Paley (Digital Image Design; Columbia University)
Spatial Layer Display Technique to Perceive What Remote Partner is interested in
Yoshihiro Shimada, Minoru Kobayashi, and Takashi Yagi (NTT Cyber Space Laboratories)
Arrayed Air Jet Based Haptic Display: Implementing An Untethered Interface
Yuriko Suzuki and Minoru Kobayashi (NTT Cyber Space Laboratories)
Tracking Multiple Laser Pointers for Large Screen Interaction
Florian Vogt, Justin Wong, Sid Fels (University of British Columbia), and Duncan Cavens (Swiss Federal Institute of Technology Zurich)
A Study of Semantics Synchronous Understanding on Speech Interface Design
Kuansan Wang (Microsoft Research)
Playing Well with Others: Applying Board Game Design to Tabletop Display Interfaces
Tara Whalen (Dalhousie University)