Keywords
UIST2.0 Archive - 20 years of UIST

optimization

numerical optimization

In Proceedings of UIST 2003

GADGET: a toolkit for optimization-based approaches to interface and display generation (p. 125-134)

optimization

In Proceedings of UIST 1995

Amortizing 3D graphics optimization across multiple frames (p. 13-19)

In Proceedings of UIST 1996

Using the multi-layer model for building interactive graphical applications (p. 109-118)

In Proceedings of UIST 2001

Aesthetic information collages: generating decorative displays that contain information (p. 141-150)

In Proceedings of UIST 2005

Preference elicitation for interface optimization (p. 173-182)

In Proceedings of UIST 2007

Automatically generating user interfaces adapted to users' motor and vision capabilities (p. 231-240)

Abstract

Most of today's GUIs are designed for the typical, able-bodied user; atypical users are, for the most part, left to adapt as best they can, perhaps using specialized assistive technologies as an aid. In this paper, we present an alternative approach: SUPPLE++ automatically generates interfaces that are tailored to an individual's motor capabilities and can be easily adjusted to accommodate varying vision capabilities. SUPPLE++ models users' motor capabilities based on a one-time motor performance test and uses this model in an optimization process, generating a personalized interface. A preliminary study indicates that while there is still room for improvement, SUPPLE++ allowed one user to complete tasks that she could not perform using a standard interface, while for the remaining users it resulted in an average time savings of 20%, ranging from a slowdown of 3% to a speedup of 43%.
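The abstract describes choosing a personalized interface via optimization over a model of the user's motor performance. A minimal, hypothetical sketch of that idea (not the actual SUPPLE++ cost model or search, which are more sophisticated): score each candidate rendering of a control by predicted pointing time under a Fitts'-law-style model, whose coefficients `a` and `b` would come from the user's one-time performance test, and keep the cheapest candidate.

```python
import math

def fitts_time(a, b, distance, width):
    """Predicted pointing time: MT = a + b * log2(distance / width + 1)."""
    return a + b * math.log2(distance / width + 1)

def best_rendering(candidates, a, b):
    """Pick the candidate whose total predicted pointing time is lowest.

    Each candidate is (name, [(distance, target_width), ...]), one pair
    per pointing action the rendering requires."""
    return min(
        candidates,
        key=lambda c: sum(fitts_time(a, b, d, w) for d, w in c[1]),
    )

# A user with a high b coefficient (slow, effortful pointing) is better
# served by large targets, even at the cost of longer travel distances.
candidates = [
    ("small slider", [(300, 4)]),   # long reach to a narrow drag target
    ("big buttons",  [(200, 40)]),  # shorter reach, much wider target
]
print(best_rendering(candidates, a=0.1, b=0.6)[0])  # → big buttons
```

The candidate names, geometry, and coefficients here are invented for illustration; the point is only that a per-user motor model turns interface generation into a well-defined minimization.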

In Proceedings of UIST 2009

A screen-space formulation for 2D and 3D direct manipulation (p. 69-78)

Abstract

Rotate-Scale-Translate (RST) interactions have become the de facto standard when interacting with two-dimensional (2D) contexts in single-touch and multi-touch environments. Because the use of RST has thus far focused almost entirely on 2D, there are not yet standard techniques for extending these principles into three dimensions. In this paper we describe a screen-space method which fully captures the semantics of the traditional 2D RST multi-touch interaction, but also allows us to extend these same principles into three-dimensional (3D) interaction. Just like RST allows users to directly manipulate 2D contexts with two or more points, our method allows the user to directly manipulate 3D objects with three or more points. We show some novel interactions, which take perspective into account and are thus not available in orthographic environments. Furthermore, we identify key ambiguities and unexpected behaviors that arise when performing direct manipulation in 3D and offer solutions to mitigate the difficulties each presents. Finally, we show how to extend our method to meet application-specific control objectives, as well as show our method working in some example environments.
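The 2D core of the RST interaction the abstract builds on can be written down directly: two touch points before and after a frame fully determine a rotation, a uniform scale, and a translation. A minimal sketch of that solve (the paper's screen-space 3D formulation goes well beyond this; the function name and conventions here are our own):

```python
import math

def rst_from_two_touches(p1, p2, q1, q2):
    """Solve for the rotate-scale-translate transform mapping touch
    points p1, p2 (previous frame) onto q1, q2 (current frame).

    Returns (angle_radians, scale, (tx, ty)) such that rotating, then
    uniformly scaling, then translating p1 yields q1 (likewise for p2)."""
    # Vectors between the two touch points in each frame.
    vx, vy = p2[0] - p1[0], p2[1] - p1[1]
    wx, wy = q2[0] - q1[0], q2[1] - q1[1]
    # Uniform scale: ratio of inter-touch distances.
    scale = math.hypot(wx, wy) / math.hypot(vx, vy)
    # Rotation: change in the inter-touch angle.
    angle = math.atan2(wy, wx) - math.atan2(vy, vx)
    # Translation: whatever remains after rotating and scaling p1.
    c, s = math.cos(angle), math.sin(angle)
    rx = scale * (c * p1[0] - s * p1[1])
    ry = scale * (s * p1[0] + c * p1[1])
    return angle, scale, (q1[0] - rx, q1[1] - ry)

# Two fingers move from a horizontal pair to a vertical pair twice as
# far apart: a 90-degree rotation with a 2x scale.
a, sc, t = rst_from_two_touches((0, 0), (1, 0), (1, 1), (1, 3))
print(round(math.degrees(a)), sc, t)  # → 90 2.0 (1.0, 1.0)
```

Extending this to 3D is exactly where the ambiguities the abstract mentions arise: three or more contact points over-constrain a rigid-plus-scale transform, so the screen-space formulation solves for the object motion that best keeps each contact point under its finger.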

space optimization