Keywords
UIST2.0 Archive - 20 years of UIST

audio

In Proceedings of UIST 1997

Audio aura: light-weight audio augmented reality (p. 211-212)

In Proceedings of UIST 1998

A dynamic grouping technique for ink and audio notes (p. 195-202)

In Proceedings of UIST 2000

The AHI: an audio and haptic interface for contact interactions (p. 149-158)

In Proceedings of UIST 2010

Jogging over a distance between Europe and Australia (p. 189-198)

Abstract

Exertion activities, such as jogging, require users to invest intense physical effort and are associated with physical and social health benefits. Despite the benefits, our understanding of exertion activities is limited, especially when it comes to social experiences. In order to begin understanding how to design for technologically augmented social exertion experiences, we present "Jogging over a Distance", a system in which spatialized audio based on heart rate allowed runners as far apart as Europe and Australia to run together. Our analysis revealed how certain aspects of the design facilitated a social experience, and consequently we describe a framework for designing augmented exertion activities. We make recommendations as to how designers could use this framework to aid the development of future social systems that aim to utilize the benefits of exertion.

audio interface

In Proceedings of UIST 2009

User guided audio selection from complex sound mixtures (p. 89-92)

Abstract

In this paper we present a novel interface for selecting sounds in audio mixtures. Traditional interfaces in audio editors provide a graphical representation of sounds, which is either a waveform or some variation of a time/frequency transform. Although with these representations a user might be able to visually identify elements of sounds in a mixture, they do not facilitate object-specific editing (e.g. selecting only the voice of a singer in a song). This interface uses audio guidance from a user in order to select a target sound within a mixture. The user is asked to vocalize (or otherwise sonically represent) the desired target sound, and an automatic process identifies and isolates the elements of the mixture that best relate to the user's input. This way of pointing to specific parts of an audio stream allows a user to perform audio selections that would have been infeasible otherwise.

audio server

In Proceedings of UIST 1992

Tools for building asynchronous servers to support speech and audio applications (p. 71-78)

audio user interface

audio visualization

In Proceedings of UIST 2003

SmartMusicKIOSK: music listening station with chorus-search function (p. 31-40)

non-speech audio

In Proceedings of UIST 1993

SpeechSkimmer: interactively skimming recorded speech (p. 187-196)

In Proceedings of UIST 1994

ENO: synthesizing structured sound spaces (p. 49-57)

In Proceedings of UIST 1995

Hands-on demonstration: interacting with SpeechSkimmer (p. 71-72)

In Proceedings of UIST 1995

A tool to support speech and non-speech audio feedback generation in audio interfaces (p. 171-179)

real-time audio buffering

In Proceedings of UIST 2001

Real-time audio buffering for telephone applications (p. 193-194)

spatial audio

In Proceedings of UIST 1998

Audio hallway: a virtual acoustic environment for browsing (p. 163-170)