Keywords
UIST2.0 Archive - 20 years of UIST

evaluation

In Proceedings of UIST 1997

Immersion in desktop virtual reality (p. 11-19)

In Proceedings of UIST 2008

Iterative design and evaluation of an event architecture for pen-and-paper interfaces (p. 111-120)

Abstract

This paper explores architectural support for interfaces combining pen, paper, and PC. We show how the event-based approach common to GUIs can apply to augmented paper, and describe additions to address paper's distinguishing characteristics. To understand the developer experience of this architecture, we deployed the toolkit to 17 student teams for six weeks. Analysis of the developers' code provided insight into the appropriateness of events for paper UIs. The usage patterns we distilled informed a second iteration of the toolkit, which introduces techniques for integrating interactive and batched input handling, coordinating interactions across devices, and debugging paper applications. The study also revealed that programmers created gesture handlers by composing simple ink measurements. This desire for informal interactions inspired us to include abstractions for recognition. This work has implications beyond paper - designers of graphical tools can examine API usage to inform iterative toolkit development.

performance evaluation

In Proceedings of UIST 1996

Efficient distributed implementation of semi-replicated synchronous groupware (p. 1-10)

tool support for evaluation

In Proceedings of UIST 1999

A tool for creating predictive performance models from user interface demonstrations (p. 93-102)

usability evaluation

In Proceedings of UIST 1995

GLEAN: a computer-based tool for rapid GOMS model usability evaluation of user interface designs (p. 91-100)

user interface system evaluation

In Proceedings of UIST 2007

Evaluating user interface systems research (p. 251-258)

Abstract

The development of user interface systems has languished with the stability of desktop computing. Future systems, however, that are off-the-desktop, nomadic or physical in nature will involve new devices and new software systems for creating interactive applications. Simple usability testing is not adequate for evaluating complex systems. The problems with evaluating systems work are explored and a set of criteria for evaluating new UI systems work is presented.