Sangho Suh (University of Toronto)
Hai Dang (University of Bayreuth)
Ryan Yen (Massachusetts Institute of Technology)
Josh M. Pollock (Massachusetts Institute of Technology)
Ian Arawjo (University of Montréal)
Rubaiat Habib Kazi (Adobe Research)
Hariharan Subramonyam (Stanford University)
Jingyi Li (Pomona College)
Nazmus Saquib (Tero Labs)
Arvind Satyanarayan (Massachusetts Institute of Technology)
This workshop aims to provide a forum for discussion, brainstorming, and prototyping of the next generation of interfaces that leverage the dynamic experiences enabled by recent advances in AI and the generative capabilities of foundation models. These models simplify complex tasks by generating outputs in various representations (e.g., text, images, videos) through natural language interactions and diverse input modalities such as voice and sketch. They interpret user intent and generate and transform representations, potentially changing how we interact with information and express ideas. Inspired by this potential, technologists, theorists, and researchers are exploring new forms of interaction by building demos and communities dedicated to concretizing and advancing this vision. Our workshop at UIST provides a timely space to discuss AI’s impact on the dynamic design and use of cognitive tools (e.g., languages, notations, diagrams). We will explore use cases and implications across various domains, along with the associated challenges and opportunities.
Cedric Honnet (Massachusetts Institute of Technology)
Catherine Yu (Cornell)
Irmandy Wicaksono (Massachusetts Institute of Technology)
Tingyu Cheng (Georgia Tech)
Andreea Danielescu (Accenture Labs)
Cheng Zhang (Cornell)
Stefanie Mueller (Massachusetts Institute of Technology)
Joe Paradiso (Massachusetts Institute of Technology)
Yiyue Luo (Massachusetts Institute of Technology and University of Washington)
Wearables have long been integral to human culture and daily life. Recent advances in intelligent soft wearables have dramatically transformed how we interact with the world, enhancing our health, productivity, and overall well-being. These innovations, which combine advanced sensor design, fabrication, and computational power, offer unprecedented opportunities for monitoring, assistance, and augmentation. However, the benefits of these advancements are not yet universally accessible: economic and technical barriers often limit the reach of these technologies to domain-specific experts. There is a growing need to democratize intelligent wearables so that they are scalable, seamlessly integrated, customized, and adaptive. By bringing together researchers from the relevant disciplines, this workshop aims to identify the challenges and investigate the opportunities for democratizing intelligent soft wearables within the HCI community through interactive demos, invited keynotes, and focused panel discussions.
Pragathi Praveena (Carnegie Mellon University)
Arissa J. Sato (University of Wisconsin-Madison)
Amy Koike (University of Wisconsin-Madison)
Ran Zhou (KTH Royal Institute of Technology)
Nathan White (University of Wisconsin-Madison)
Ken Nakagaki (University of Chicago)
Human-Robot Interaction (HRI) is a field of study that focuses on the understanding, design, and evaluation of interactions between humans and robots. This workshop aims to bring together researchers interested in exploring the intersection of UIST and HRI. Our goal is to provide attendees with a deeper understanding of the synergies between the two research communities and to inspire better alignment between technical advancements in UIST and their application to social HRI contexts. The workshop will feature interactive demos, prototyping sessions, and discussions to explore key HRI concepts and considerations for designing robot interfaces that facilitate social interactions with humans.
Alexandra Ion (Carnegie Mellon University)
Carmel Majidi (Carnegie Mellon University)
Lining Yao (University of California, Berkeley)
Amir H. Alavi (University of Pittsburgh)
Physical AI extends models from the digital world to performing tasks in the real world. Robots and autonomous cars are common examples of physical AI agents. In this workshop, we aim to go beyond confining physical AI to such agents and ask: what if all the objects and materials we interact with were intelligent? We aim to form an understanding of the opportunities and challenges of extending agents to include objects and materials that can adapt to users’ needs, i.e., change shape, firmness, color, tactile properties, etc. This broad vision, which is challenging to achieve, relates to many active research areas, e.g., programmable matter, modular robotics, soft robotics, smart materials, shape-changing interfaces, and radical atoms, and has homes in many disciplines, including mechanical engineering, robotics, materials science, and computer science. Many new approaches are being developed within these individual disciplines that, taken together, may mark the start of a new era for what we like to call extended physical AI. In this workshop, we bring perspectives from these disciplines together to exchange new approaches to longstanding challenges (e.g., actuation, computational design, fabrication, control), share tacit knowledge, discuss visions for future applications, map the new grand challenges, and inspire the next generation of physical AI research.