A common task in graphical user interfaces is controlling onscreen elements using a pointer. Current adaptive pointing techniques require applications to be built using accessibility libraries that reveal information about interactive targets, and most do not handle path/menu navigation. We present a pseudo-haptic technique that is OS- and application-independent and can handle both dragging and clicking. We do this by associating a small force with each past click or drag. When a user frequently clicks in the same general area (e.g., on a button), the patina of past clicks naturally creates a pseudo-haptic magnetic field with an effect similar to that of snapping or sticky icons. Our contribution is a bottom-up approach that makes targets easier to select without requiring prior knowledge of them.
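One way such a click patina could be realized is sketched below; this is a minimal illustration under my own assumptions, not the paper's implementation, and the grid-based history, the CELL, RADIUS, and STRENGTH parameters, and the linear falloff are all hypothetical choices. Past clicks accumulate per grid cell, and nearby accumulation gently deflects incoming pointer motion toward well-worn spots, approximating a magnetic or sticky-icon effect.

```python
# Minimal sketch (illustrative assumptions, not the authors' implementation) of a
# "patina" of past clicks that biases pointer motion toward frequently clicked areas.
import math
from collections import defaultdict

CELL = 8          # grid cell size (pixels) used to accumulate click history
RADIUS = 24       # pixels within which past clicks exert a pull
STRENGTH = 0.15   # scale of the pseudo-haptic force

click_counts = defaultdict(int)   # (cell_x, cell_y) -> number of past clicks

def record_click(x, y):
    """Deposit one unit of 'patina' at the clicked location."""
    click_counts[(int(x) // CELL, int(y) // CELL)] += 1

def pseudo_haptic_offset(x, y):
    """Return a small (dx, dy) pulling the cursor toward nearby past clicks."""
    fx = fy = 0.0
    cx, cy = int(x) // CELL, int(y) // CELL
    reach = RADIUS // CELL + 1
    for i in range(cx - reach, cx + reach + 1):
        for j in range(cy - reach, cy + reach + 1):
            n = click_counts.get((i, j), 0)
            if n == 0:
                continue
            tx, ty = (i + 0.5) * CELL, (j + 0.5) * CELL   # cell centre
            dx, dy = tx - x, ty - y
            d = math.hypot(dx, dy)
            if 0 < d <= RADIUS:
                # Stronger pull when the spot is close and heavily used.
                w = STRENGTH * n * (1.0 - d / RADIUS)
                fx += w * dx / d
                fy += w * dy / d
    return fx, fy

# Usage: add the offset to each incoming pointer delta before moving the cursor.
record_click(100, 100); record_click(102, 98)     # two clicks on the same button
print(pseudo_haptic_offset(110, 105))             # a nearby cursor is gently attracted
```

Because the history is built purely from observed clicks and drags, nothing in this sketch needs to query the application or OS about where its targets are, which is the bottom-up property the abstract describes.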
In this paper we present novel input devices that combine the standard capabilities of a computer mouse with multi-touch sensing. Our goal is to enrich traditional pointer-based desktop interactions with touch and gestures. To chart the design space, we present five different multi-touch mouse implementations. Each explores a different touch-sensing strategy, which leads to differing form factors and hence interactive possibilities. In addition to the detailed description of hardware and software implementations of our prototypes, we discuss the relative strengths, limitations and affordances of these novel input devices as informed by the results of a preliminary user study.
We describe OctoPocus, an example of a dynamic guide that combines on-screen feedforward and feedback to help users learn, execute and remember gesture sets. OctoPocus can be applied to a wide range of single-stroke gestures and recognition algorithms and helps users progress smoothly from novice to expert performance. We provide an analysis of the design space and describe the results of two experiments that show that OctoPocus is significantly faster and improves learning of arbitrary gestures, compared to conventional Help menus. It can also be adapted to a mark-based gesture set, significantly improving input time compared to a two-level, four-item Hierarchical Marking menu.
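As a rough illustration of how a dynamic guide of this kind could combine feedforward and feedback, the sketch below (my assumptions, not the OctoPocus code; the resample and dynamic_guide helpers and the rejection threshold are hypothetical) scores the stroke drawn so far against the matching prefix of each gesture template, prunes implausible candidates, and returns the remaining path of each surviving gesture with a weight for display.

```python
# Minimal sketch (assumptions, not the OctoPocus implementation) of a dynamic guide:
# match the drawn prefix against each template prefix, then return the remaining
# portion of every still-plausible gesture for feedforward rendering.
import math

def path_len(path):
    """Total arc length of a polyline."""
    return sum(math.dist(a, b) for a, b in zip(path, path[1:]))

def resample(path, n=32):
    """Resample a polyline to n evenly spaced points so prefixes are comparable."""
    total = path_len(path) or 1e-9
    segs = [math.dist(a, b) for a, b in zip(path, path[1:])]
    out, acc, i = [path[0]], 0.0, 0
    for k in range(1, n):
        target = total * k / (n - 1)
        while i < len(segs) and acc + segs[i] < target:
            acc += segs[i]
            i += 1
        if i >= len(segs):
            out.append(path[-1])
            continue
        t = (target - acc) / segs[i]
        a, b = path[i], path[i + 1]
        out.append((a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1])))
    return out

def dynamic_guide(prefix, templates, reject=40.0):
    """Return [(gesture name, remaining template points, display weight)]."""
    drawn = path_len(prefix)
    guides = []
    for name, tpl in templates.items():
        frac = min(1.0, drawn / (path_len(tpl) or 1e-9))
        tpl_r = resample(tpl, 64)
        cut = max(2, int(frac * 64))                   # template prefix of comparable length
        a, b = resample(prefix, cut), tpl_r[:cut]
        err = sum(math.dist(p, q) for p, q in zip(a, b)) / cut
        if err < reject:                               # feedback: prune implausible gestures
            guides.append((name, tpl_r[cut:], 1.0 - err / reject))   # feedforward: what remains
    return guides

# Usage: call on every pointer sample and render each remaining path with a
# thickness proportional to its weight; an empty list means no gesture matches.
templates = {"L": [(0, 0), (0, 100), (60, 100)], "line": [(0, 0), (120, 0)]}
print(dynamic_guide([(0, 0), (0, 30)], templates))
```

Early in a stroke several candidates remain visible with similar weights; as the stroke commits to one shape, the others fall below the rejection threshold and disappear, which is the kind of novice-to-expert transition the abstract describes.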