My research in Human-Computer Interaction focuses on predictive user interfaces, gesture-based interaction, and novel tools and methods for creating mobile interaction, including emerging input modalities (such as gestures and cameras) and cross-device interaction. I develop software tool support and recognition methods by drawing insights from user behaviors and current practice, and by leveraging techniques such as machine learning, computer vision, and crowdsourcing to make complex tasks simple and intuitive. My work has made a real-world impact, benefiting millions of end users and developers.
Bio · Yang is a Senior Research Scientist at Google, and an affiliate faculty member in Computer Science & Engineering at the University of Washington. He earned a Ph.D. in Computer Science from the Chinese Academy of Sciences and conducted postdoctoral research in EECS at the University of California, Berkeley. He has led the development of app launch prediction on Android, which is in use by tens of millions of users, and wrote Gesture Search, a popular Google-branded app on the Play store. He has published over 50 papers in the field of Human-Computer Interaction, including 29 publications at CHI, UIST and TOCHI. He regularly serves on the program committees of top-tier HCI and mobile computing conferences.
Enhancing Cross-Device Interaction Scripting with Interactive Illustrations
Contributes a cross-device storyboard and interactive illustration mechanisms for scripting.
Weave: Scripting Cross-Device Wearable Interaction
Provides a set of high-level APIs, based on JavaScript, and integrated tool support for developers to easily distribute UI output and combine user input and sensing events across devices for cross-device interaction.
Gesture On: Always-On Touch Gestures for Fast Mobile Access from Device Standby Mode
Contributes a system that overrides the mobile platform kernel behavior to enable touchscreen gesture shortcuts in standby mode. A user can issue a gesture on the touchscreen before the screen is even turned on.
Optimistic Programming of Touch Interaction
Integrated tool and inference support that allows developers to easily create touch behaviors in their apps.
Reflection: Enabling Event Prediction As an On-Device Service for Mobile Interaction
An on-device infrastructure that provides event prediction as a service to mobile applications.
Detecting Tapping Motion on the Side of Mobile Devices By Probabilistically Combining Hand Postures
A method for detecting finger taps on the different sides of a smartphone, using the built-in motion sensors of the device.
HOBS: Head Orientation-Based Selection in Physical Spaces
Presented the iterative design and evaluation of a head orientation-based selection technique, which augments Google Glass with an infrared (IR) emitter for selecting IR-equipped smart appliances at a distance. (acceptance rate: 29%)
Gesturemote: Interacting with Remote Displays through Touch Gestures
A technique for interacting with remote displays through touch gestures on a handheld touch surface, which supports a wide range of interaction behaviors, from low pixel-level interaction such as pointing, to medium-level interaction such as structured navigation, to high-level interaction such as shortcuts.
Gesture Script: Recognizing Gestures and Their Structure Using Rendering Scripts and Interactively Trained Parts
Best Paper Honorable Mention Award
Recognizing gestures and their properties using examples and parts-based scripting.
InkAnchor: Enhancing Informal Ink-Based Note Taking on Touchscreen Mobile Phones
Presented a tool for informal note-taking on the touchscreen by sketching.
GestKeyboard: Enabling Gesture-Based Interaction on Ordinary Physical Keyboard
Enabled gesturing on an ordinary physical keyboard.
Hierarchical Route Maps for Efficient Navigation
Best Paper Award
Interactive optimization of route map visualization.
Teaching Motion Gestures via Recognizer Feedback
Explored mechanisms to teach end users motion gestures.
CrowdLearner: Rapidly Creating Mobile Recognizers Using Crowdsourcing
Presented a crowdsourcing platform for automatically generating recognizers that leverage built-in sensors on mobile devices, e.g., producing a usable stroke gesture recognizer within a few hours for about $10.
Open Project: A Lightweight Framework for Remote Sharing of Mobile Applications
Discussed an end-to-end framework that allows a user to project a native mobile application onto a display using a phone camera. Any display can become projectable instantaneously by accessing the Open Project web service.
Gesture Studio: Authoring Multi-Touch Interactions through Demonstration and Composition
Presented a tool that combines programming by demonstration and declaration, via a video-editing metaphor for creating multi-touch interaction.
FFitts Law: Modeling Finger Touch with Fitts' Law
Proposed and experimented with a new model that extends Fitts' law with a dual-Gaussian distribution for modeling finger touch behaviors.
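As a rough sketch of the dual-Gaussian idea behind FFitts law (my paraphrase in the effective-width form of Fitts' law; the notation below is mine, not copied from the paper), the observed endpoint variance is modeled as the sum of a speed-accuracy component and an absolute finger-precision component, and only the former enters the index of difficulty:

% Hedged sketch (notation mine):
%   \sigma^2   observed variance of touch endpoints for a target
%   \sigma_a^2 absolute precision of the finger, independent of the task
%   \sigma_r^2 speed-accuracy trade-off component governed by Fitts' law
\sigma^2 = \sigma_r^2 + \sigma_a^2,
\qquad
MT = a + b \,\log_2\!\left(\frac{A}{\sqrt{2\pi e\,(\sigma^2 - \sigma_a^2)}} + 1\right)

where A is the target distance and the usual effective width W_e = \sqrt{2\pi e}\,\sigma is replaced by \sqrt{2\pi e\,(\sigma^2 - \sigma_a^2)}.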
Gesture-Based Interaction: A New Dimension for Mobile User Interfaces
Investigated various aspects of gesture-based interaction on mobile devices, including gesture-based applications, recognition and tools for creating gesture-based behaviors.
Gesture Coder: A Tool for Programming Multi-Touch Gestures by Demonstration
Best Paper Honorable Mention Award
Presents a tool that automatically generates code for recognizing each state of multi-touch gestures and invoking corresponding application actions, based on a few gesture examples given by the developer.
Bootstrapping Personal Gesture Shortcuts with the Wisdom of the Crowd and Handwriting Recognition
Contributes approaches for bootstrapping a user's personal gesture library, alleviating the need to manually define most gestures.
Gesture Search: Random Access to Smartphone Content
Presents a tool for random access to smartphone content by drawing touchscreen gestures. It flattens the UI hierarchy of smartphone interfaces.
Tap, Swipe, or Move: Attentional Demands for Distracted Smartphone Input
Investigated attention demands of motion gestures in comparison with traditional interaction techniques for mobile devices.
Gesture Avatar: A Technique for Operating Mobile User Interfaces Using Gestures
Presents Gesture Avatar, a novel interaction technique that allows users to operate arbitrary existing user interfaces using gestures. It leverages the visibility of graphical user interfaces and the casual interaction of gestures, and it outperformed prior techniques, especially when users were on the go.
Deep Shot: A Framework for Migrating Tasks Across Devices Using Mobile Phone Cameras
Presents a framework for migrating tasks across devices using mobile cameras. It supports two interaction techniques, deep shooting and deep posting, which enable direct manipulation of information and work states in a multi-device environment.
DoubleFlip: A Motion Gesture Delimiter for Mobile Interaction
Designed a motion gesture for separating intended motion input from the ambient motion of mobile phones, and built a DTW-based recognizer that detects the gesture with high precision and recall.
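For illustration, below is a minimal Python sketch of a DTW-based matcher of this kind, assuming three-axis accelerometer samples; the template and the threshold are hypothetical placeholders, not the recognizer or the tuning reported in the paper.

import math

def dtw_distance(seq_a, seq_b):
    # Dynamic time warping distance between two sequences of (x, y, z)
    # accelerometer samples, using per-sample Euclidean distance.
    n, m = len(seq_a), len(seq_b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = math.dist(seq_a[i - 1], seq_b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

def is_delimiter_gesture(window, template, threshold=15.0):
    # Flag the incoming sensor window as the delimiter gesture when its DTW
    # distance to a pre-recorded template falls below a threshold; both the
    # template and the threshold here are illustrative placeholders.
    return dtw_distance(window, template) < threshold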
User-Defined Motion Gestures for Mobile Interaction
Presents the results of a guessability study that elicited end-user motion gestures for invoking commands on a smartphone, leading to a taxonomy of motion gestures and an end-user-inspired motion gesture set.
Experimental Analysis of Touch-Screen Gesture Designs in Mobile Environments
Investigates the impact of situational impairments on touchscreen interaction. Reveals that in the presence of environmental distractions, gestures can offer significant performance gains and reduced attentional load, while performing just as well as soft buttons when the user's attention is fully focused on the phone.
Gesture Search: A Tool for Fast Mobile Data Access
Describes a tool that allows users to access mobile phone data using touch screen gestures. Gesture Search flattens the deep UI hierarchy of mobile user interfaces and learns the mapping from gestures to data items.
FrameWire: A Tool for Automatically Extracting Interaction Logic from Paper Prototyping Tests
Presents a tool for automatically extracting interaction logic from the video recording of paper prototype tests. FrameWire generates interactive prototypes from extracted interaction logic.
Protractor: A Fast and Accurate Gesture Recognizer
Pseudo code and a Java implementation in the Android core framework are available.
Presents an algorithm for recognizing drawn gestures. Protractor employs a closed-form solution to find the best match of an unknown gesture given a set of templates.
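For illustration, below is a compact Python sketch of this closed-form matching step (a paraphrase of the published idea, not the Android implementation; it omits the optional orientation-invariance preprocessing, and the helper names are mine).

import math

N = 16  # number of equidistant points each stroke is resampled to

def resample(points, n=N):
    # Resample a stroke, given as a list of (x, y) points, to n equidistant points.
    path = sum(math.dist(points[i - 1], points[i]) for i in range(1, len(points)))
    interval = path / (n - 1)
    resampled = [points[0]]
    pts = list(points)
    accumulated = 0.0
    i = 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if accumulated + d >= interval:
            t = (interval - accumulated) / d
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            resampled.append(q)
            pts.insert(i, q)   # the interpolated point becomes the next segment start
            accumulated = 0.0
        else:
            accumulated += d
        i += 1
    while len(resampled) < n:  # guard against falling short due to rounding
        resampled.append(points[-1])
    return resampled[:n]

def vectorize(points):
    # Translate the points to their centroid and normalize them into a unit vector.
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    flat = []
    for x, y in points:
        flat.extend((x - cx, y - cy))
    norm = math.sqrt(sum(v * v for v in flat))
    return [v / norm for v in flat]

def similarity(v1, v2):
    # Closed-form maximum cosine similarity over all rotation angles between two
    # vectorized gestures; it ranks candidates the same way as Protractor's
    # inverse-angular-distance score.
    a = sum(v1[i] * v2[i] + v1[i + 1] * v2[i + 1] for i in range(0, len(v1), 2))
    b = sum(v1[i] * v2[i + 1] - v1[i + 1] * v2[i] for i in range(0, len(v1), 2))
    return math.hypot(a, b)

def recognize(gesture, templates):
    # templates: a dict mapping labels to already-vectorized template gestures.
    v = vectorize(resample(gesture))
    return max(templates, key=lambda label: similarity(v, templates[label]))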
Beyond Pinch and Flick: Enriching Mobile Gesture Interaction
Presents the design of a toolkit for gesture-based interaction for touchscreen mobile phones. Introduces the concept of gesture overlays.
Activity-Based Prototyping of Ubicomp Applications for Long-Lived, Everyday Human Activities
Best Paper Honorable Mention Award
Presents a tool that allows designers to incorporate large-scale, long-term human activities as a basis for design, and speeds up ubicomp design by providing integrated support for modeling, prototyping, deployment and in situ testing.
Cascadia: A System for Specifying, Detecting, and Managing RFID Events
Cascadia is a system that provides RFID-based pervasive computing applications with an infrastructure for specifying, extracting and managing meaningful high-level events from raw RFID data.
Design Challenges and Principles for Wizard of Oz Testing of Ubicomp Applications
Discusses the design challenges and principles for conducting Wizard of Oz testing of ubiquitous computing applications.
Gestures without Libraries, Toolkits or Training: A $1 Recognizer for User Interface Prototypes
Invited to the SIGGRAPH UIST Reprise Session
Presents the $1 algorithm for gesture recognition and a comprehensive study that evaluates $1 against two other popular gesture recognition algorithms: Dynamic Time Warping and the Rubine recognizer. The study indicated that the $1 recognizer, though simple, outperformed its peers in both accuracy and learnability.
BrickRoad: A Light-Weight Tool for Spontaneous Design of Location-Enhanced Applications
Presents a tool for testing location-based behaviors without specifying interaction logic. The tool explores the extreme of Wizard of Oz approaches for designing field-oriented applications, i.e., testing with zero effort beforehand.
Design and Experimental Analysis of Continuous Location Tracking Techniques for Wizard of Oz Testing
Presents various Wizard of Oz techniques for continuously tracking user locations.
Informal Prototyping of Continuous Graphical Interactions by Demonstration
Invited to the SIGGRAPH UIST Reprise Session
Presents a tool for creating continuous interactions using examples. Discusses the algorithms for learning continuous interaction behaviors from discrete examples, without using any domain knowledge.
Experimental Analysis of Mode Switching Techniques in Pen-based User Interfaces
Conducted a study to compare different mode switching techniques for pen-based user interfaces. The study revealed that bimanual mode switching outperformed the other techniques.
Topiary: A Tool for Prototyping Location-Enhanced Applications
Topiary is a tool for rapidly prototyping location-based applications. It introduces a Wizard of Oz approach for testing location-based applications in the field, without requiring a location infrastructure.