Bio · Yang is a Staff Research Scientist at Google Research & Machine Intelligence, and an affiliate faculty member in Computer Science & Engineering at the University of Washington. He earned a Ph.D. in Computer Science from the Chinese Academy of Sciences and conducted postdoctoral research in EECS at the University of California, Berkeley. He led the development of next-app prediction on Android, which is used by tens of millions of users and pioneered on-device interactive ML on Android. He wrote Gesture Search, a popular Google-branded app on the Play Store used by millions of users. He has published over 50 papers in the field of Human-Computer Interaction, including 38 publications at CHI, UIST, and TOCHI, 1 CHI Best Paper Award and 3 Honorable Mentions, and 1 IUI Best Paper Award. He has served regularly on the program committees of top-tier HCI and mobile computing venues.
Adaptively modifies the landmark points via online k-means for kernel approximation, and adjusts the model accordingly by solving a least-squares problem.
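A minimal sketch of the recipe this summary describes, assuming an RBF kernel, a growing data buffer, and a regularized least-squares refit after each landmark update; the class and parameter names are illustrative, not from the paper:

```python
import numpy as np

def rbf_features(X, landmarks, gamma=1.0):
    # Map inputs to kernel features against the current landmarks.
    d2 = ((X[:, None, :] - landmarks[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class OnlineKernelLS:
    """Sketch: online k-means landmarks + least-squares readout."""
    def __init__(self, n_landmarks, dim, gamma=1.0, lam=1e-3, seed=0):
        rng = np.random.default_rng(seed)
        self.landmarks = rng.normal(size=(n_landmarks, dim))
        self.counts = np.ones(n_landmarks)
        self.gamma, self.lam = gamma, lam
        self.X_buf, self.y_buf = [], []

    def partial_fit(self, x, y):
        # Online k-means step: move the nearest landmark toward x.
        j = np.argmin(((self.landmarks - x) ** 2).sum(-1))
        self.counts[j] += 1
        self.landmarks[j] += (x - self.landmarks[j]) / self.counts[j]
        self.X_buf.append(x)
        self.y_buf.append(y)
        # Refit the linear readout by regularized least squares.
        # (A buffered refit for illustration; a true online method
        # would update this solution incrementally.)
        Phi = rbf_features(np.array(self.X_buf), self.landmarks, self.gamma)
        A = Phi.T @ Phi + self.lam * np.eye(Phi.shape[1])
        self.w = np.linalg.solve(A, Phi.T @ np.array(self.y_buf))

    def predict(self, X):
        return rbf_features(np.atleast_2d(X), self.landmarks, self.gamma) @ self.w
```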
Introduced two time-dependent event representations and two time-based regularization methods for RNNs that model event sequences in continuous time.
Presents a deep neural net that models and predicts human performance on a sequence of UI tasks.
Devised a predictive model for human performance on 2D grids using both analytical and machine learning methods.
A variant of the marking menu that uses a constant, stationary screen space.
Generates video illustrations of UI behavior for a code snippet by executing it.
Low-overhead collection of performance data for mobile app designs using crowdsourcing.
Harvests a dataset for advancing data-driven design of mobile apps from over 9.7K existing Android apps, using a combination of human-guided and automated crawling.
Composes cross-device input by example, based on the degrees of freedom (DOF) of the target interaction behavior.
Recognizes motion gestures by combining an offline-learned representation with template-based online learning.
Enables prototyping and testing multi-touch interactions based on video recordings of target application behaviors, without any programming.
Contributes a cross-device storyboard and interactive illustration mechanisms for scripting.
Investigates techniques for using audio characteristics to provide feedback on the system interpretation of user motion gesture input.
Contributes a system that overrides the mobile platform kernel behavior to enable touchscreen gesture shortcuts in standby mode. A user can issue a gesture on the touchscreen before the screen is even turned on.
Integrated tool and inference support that allows developers to easily create touch behaviors in their apps.
An on-device infrastructure that provides event prediction as a service to mobile applications.
A method for detecting finger taps on the different sides of a smartphone, using the built-in motion sensors of the device.
Presented the iterative design and evaluation of a head orientation-based selection technique, which augments Google Glass with an infrared (IR) emitter for selecting IR-equipped smart appliances at a distance. (acceptance rate: 29%)
A technique for interacting with remote displays through touch gestures on a handheld touch surface, which supports a wide range of interaction behaviors, from low-level pixel interaction such as pointing, to medium-level interaction such as structured navigation, to high-level interaction such as shortcuts.
Best Paper Honorable Mention Award. Recognizing gestures and their properties using examples and parts-based scripting.
Presented a tool for informal note-taking on the touchscreen by sketching.
Enabled gesturing on an ordinary physical keyboard.
Best Paper Award. Interactive optimization of map route visualization.
Explored mechanisms to teach end users motion gestures.
Presented a crowdsourcing platform for automatically generating recognizers that leverage built-in sensors on mobile devices, e.g., paying $10 to create a usable stroke gesture recognizer in a few hours.
Discussed an end-to-end framework that allows a user to project a native mobile application onto a display using a phone camera. Any display can become projectable instantaneously by accessing the Open Project web service.
Presented a tool that combines programming by demonstration and declaration, via a video-editing metaphor for creating multi-touch interaction.
Proposed and experimented with a new model that extends Fitts' law with a dual-Gaussian distribution for modeling finger touch behaviors.
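A sketch of the dual-Gaussian idea: observed touch endpoint variance decomposes into an absolute component (finger imprecision) and a relative component that scales with the target, and the relative component defines the effective target width in a Fitts-style index of difficulty. The coefficients below are illustrative, not fitted values from the paper:

```python
import numpy as np

def dual_gaussian_id(D, sigma_obs, sigma_abs):
    """Index of difficulty under a dual-Gaussian touch model.

    Observed endpoint variance is modeled as sigma_obs**2 =
    sigma_abs**2 + sigma_rel**2: an absolute component from finger
    imprecision plus a relative component that scales with the target.
    The relative component yields the effective target width.
    """
    sigma_rel_sq = max(sigma_obs**2 - sigma_abs**2, 1e-12)
    w_eff = np.sqrt(2 * np.pi * np.e * sigma_rel_sq)  # effective width
    return np.log2(D / w_eff + 1.0)

def predict_time(D, sigma_obs, sigma_abs, a=0.2, b=0.15):
    # Fitts-style movement time; a and b are illustrative coefficients.
    return a + b * dual_gaussian_id(D, sigma_obs, sigma_abs)
```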
Investigated various aspects of gesture-based interaction on mobile devices, including gesture-based applications, recognition and tools for creating gesture-based behaviors.
Best Paper Honorable Mention Award. Presents a tool that automatically generates code for recognizing each state of multi-touch gestures and invoking corresponding application actions, based on a few gesture examples given by the developer.
Contributes approaches for bootstrapping a user's personal gesture library, alleviating the need to define most gestures manually.
Presents a tool for random access to smartphone content by drawing touchscreen gestures; it flattens the UI hierarchy of smartphone interfaces.
Investigated attention demands of motion gestures in comparison with traditional interaction techniques for mobile devices.
Presents Gesture Avatar, a novel interaction technique that allows users to operate existing arbitrary user interfaces using gestures. It leverages the visibility of graphical user interfaces and the casual interaction of gestures. It outperformed prior techniques, especially when users are on the go.
Presents a framework for migrating tasks across devices using mobile cameras. It supports two interaction techniques, deep shooting and deep posting, which enable direct manipulation of information and work states in a multi-device environment.
Designed a motion gesture for separating intended motion input from the ambient motion of mobile phones. A DTW-based recognizer was built to recognize the gesture with high precision and recall.
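The recognizer is DTW-based; below is a generic dynamic time warping sketch over 1-D sensor signals, with an illustrative acceptance threshold rather than the paper's tuned value:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D signals."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def is_intended_gesture(signal, template, threshold=5.0):
    # Accept when the warped distance to the template is small enough;
    # the threshold here is illustrative, not from the paper.
    return dtw_distance(signal, template) < threshold
```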
Presents the results of a guessability study that elicits end-user motion gestures to invoke commands on a smartphone device, which led to the design of a taxonomy for motion gestures and an end-user-inspired motion gesture set.
Investigates the impact of situational impairments on touchscreen interaction. Reveals that in the presence of environmental distractions, gestures can offer significant performance gains and reduced attentional load, while performing just as well as soft buttons when the user's attention is fully focused on the phone.
Describes a tool that allows users to access mobile phone data using touch screen gestures. Gesture Search flattens the deep UI hierarchy of mobile user interfaces and learns the mapping from gestures to data items.
Presents a tool for automatically extracting interaction logic from the video recording of paper prototype tests. FrameWire generates interactive prototypes from extracted interaction logic.
Presents an algorithm for recognizing drawn gestures. Protractor employs a closed-form solution to find the best match of an unknown gesture given a set of templates.
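A minimal sketch of Protractor-style matching, assuming 16 resampled points and omitting the paper's orientation-normalization options: each gesture is vectorized to unit length, and the minimal angular distance to a template under an optimal rotation has a closed form.

```python
import numpy as np

def resample(points, n=16):
    """Resample a stroke to n equidistant points."""
    pts = np.asarray(points, dtype=float)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    cum = np.concatenate([[0.0], np.cumsum(seg)])
    t = np.linspace(0.0, cum[-1], n)
    return np.stack([np.interp(t, cum, pts[:, 0]),
                     np.interp(t, cum, pts[:, 1])], axis=1)

def vectorize(points, n=16):
    """Translate to the centroid and scale to a unit vector."""
    p = resample(points, n)
    p -= p.mean(axis=0)
    v = p.ravel()  # interleaved as [x0, y0, x1, y1, ...]
    return v / np.linalg.norm(v)

def protractor_distance(v1, v2):
    """Closed-form minimal angular distance under the optimal rotation."""
    a = np.dot(v1, v2)
    b = np.sum(v1[0::2] * v2[1::2] - v1[1::2] * v2[0::2])
    return np.arccos(np.clip(np.hypot(a, b), -1.0, 1.0))

def recognize(gesture, templates):
    # templates: dict mapping names to vectors prepared via vectorize().
    v = vectorize(gesture)
    return min(templates, key=lambda name: protractor_distance(v, templates[name]))
```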
Presents the design of a toolkit for gesture-based interaction for touchscreen mobile phones. Introduces the concept of gesture overlays.
Presents a tool that allows designers to incorporate large-scale, long-term human activities as a basis for design, and speeds up ubicomp design by providing integrated support for modeling, prototyping, deployment and in situ testing.
Cascadia is a system that provides RFID-based pervasive computing applications with an infrastructure for specifying, extracting and managing meaningful high-level events from raw RFID data.
Invited to the SIGGRAPH UIST Reprise Session. Presents the $1 algorithm for gesture recognition and a comprehensive study that evaluates $1 against two other popular gesture recognition algorithms: Dynamic Time Warping and the Rubine recognizer. The study indicated that the $1 recognizer, though simple, outperformed its peers in both accuracy and learnability.
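For contrast with the closed-form Protractor matching above, a self-contained sketch of the $1 pipeline: resample, rotate the indicative angle to zero, scale to a reference square, translate to the origin, then golden-section search over a bounded rotation range for the smallest average point-wise distance. The constants follow the paper's commonly cited defaults but are illustrative here:

```python
import numpy as np

PHI = 0.5 * (np.sqrt(5) - 1)  # golden ratio used by the search

def resample(points, n=64):
    """Resample a stroke into n equidistant points."""
    pts = np.asarray(points, dtype=float)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    d = np.concatenate([[0.0], np.cumsum(seg)])
    t = np.linspace(0.0, d[-1], n)
    return np.stack([np.interp(t, d, pts[:, 0]),
                     np.interp(t, d, pts[:, 1])], axis=1)

def rotate(p, theta):
    c, s = np.cos(theta), np.sin(theta)
    return p @ np.array([[c, s], [-s, c]])

def normalize(points, n=64, size=250.0):
    """Resample, zero the indicative angle, scale to a square, center."""
    p = resample(points, n)
    c = p.mean(axis=0)
    theta = np.arctan2(p[0, 1] - c[1], p[0, 0] - c[0])
    p = rotate(p - c, -theta)
    p *= size / np.maximum(p.max(axis=0) - p.min(axis=0), 1e-9)
    return p - p.mean(axis=0)

def path_distance(a, b):
    return np.mean(np.linalg.norm(a - b, axis=1))

def best_angle_distance(p, t, lo=-np.pi / 4, hi=np.pi / 4, tol=np.radians(2.0)):
    """Golden-section search over rotation for the minimum path distance."""
    x1, x2 = PHI * lo + (1 - PHI) * hi, (1 - PHI) * lo + PHI * hi
    f1, f2 = path_distance(rotate(p, x1), t), path_distance(rotate(p, x2), t)
    while hi - lo > tol:
        if f1 < f2:
            hi, x2, f2 = x2, x1, f1
            x1 = PHI * lo + (1 - PHI) * hi
            f1 = path_distance(rotate(p, x1), t)
        else:
            lo, x1, f1 = x1, x2, f2
            x2 = (1 - PHI) * lo + PHI * hi
            f2 = path_distance(rotate(p, x2), t)
    return min(f1, f2)

def recognize(stroke, templates):
    # templates: dict mapping names to strokes prepared via normalize().
    p = normalize(stroke)
    return min(templates, key=lambda name: best_angle_distance(p, templates[name]))
```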
Presents a tool for testing location-based behaviors without specifying interaction logic. The tool explores the extreme of Wizard of Oz approaches for designing field-oriented applications, i.e., testing with zero effort beforehand.
Presents various Wizard of Oz techniques for continuously tracking user locations.
Invited to the SIGGRAPH UIST Reprise Session. Presents a tool for creating continuous interactions using examples. Discusses the algorithms for learning continuous interaction behaviors from discrete examples, without using any domain knowledge.
Conducted a study to compare different mode-switching techniques for pen-based user interfaces. The study revealed that bimanual mode switching outperformed the other techniques.
Topiary is a tool for rapidly prototyping location-based applications. It introduces a Wizard of Oz approach for testing location-based applications in the field, without requiring a location infrastructure.