Inclusive Spatial Computing for Everyone

ABOUT

As the co-founder and lead designer, I developed this project with two team members (one UX researcher and one engineer). We created the world's first customizable and inclusive interface for AR/VR.

AR/VR
HCI
Product Design
WHAT I DID

◆ Led product & interaction design

◆ Led the user research and experiments

◆ Led the project video

◆ Created prototypes with Unity/MRTK

◆ Delivered executive presentations

◆ Created and verified the product vision and roadmap

SELECTED AWARDS & PUBLICATIONS

[Logos: Best Inventions seal, Dezeen Awards 2020, Fast Company, Core77, and others]

DESIGN FILM

“Dots can bring real impacts on the disabled community globally, as it is the first time an XR product considers their unique conditions and help them get into the future digital world.”

-- Alex Lewis, founder of the Alex Lewis Trust

FUTURE SIGNALS OF HAND INPUT

Over the last few years, the development of mixed reality has revealed the possibility of a screen-free future, marking our entry into an era of spatial computing. There is a clear trend for mid-air hand-gesture input to become the core input model in spatial computing.


CHALLENGES

Challenge 1

Spatial interaction demands a greater range of body movement. However, most existing technologies rely on only a limited set of body parts, mainly the hands, for spatial interaction, which essentially reduces the technology's accessibility.

Challenge 2

Traditional gesture recognition is based on supervised machine learning, which must be trained on a massive amount of similar data. It can fail in the inclusive-interface space because body conditions differ widely among disabled people.

- Can you imagine a disabled person with prosthetics using AR glasses? -

ezgif.com-gif-maker-2.gif

How can we make future spatial interaction inclusive and adaptable to different body conditions without relying on hand-gesture recognition?

ELICITATION USER STUDY

Hypothesis

There exist specific interaction patterns in how people use body gestures to control digital products or convey their intentions to digital systems.

‘Wizard of Oz’ Experiment

We recruited 20 participants, including 3 disabled people, and set four tasks for them. We allowed them to freely use their bodies to perform 3D object manipulation: selecting, scaling, rotating, and moving a cube in the computer interface. Since we found that people still tended to use their fingers or hands, we iterated the experiment and added a list of constraints.

Experiment Iteration

To better explore whether people would create their own unique ways of interacting with body gestures, we randomly assigned each participant two body parts, such as the head and the elbow, and asked them to perform the same manipulation tasks as in the first round.


KEY INSIGHTS

Two Points System

Any 3D interaction can be described as the relative movement of two points in 3D space. We can infer people's intentions just by tracking the relative motion of two points, and this finding held equally for disabled and non-disabled participants.
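As a rough illustration of this insight (a minimal sketch under my own simplifying assumptions, not the project's actual recognition code), the relative motion of two tracked points can be turned into move, scale, and rotate parameters: translation from the midpoint, scale from the change in inter-point distance, and rotation from the change in direction of the vector connecting the points.

import numpy as np

def manipulation_from_two_points(p1_prev, p2_prev, p1, p2):
    # Derive move / scale / rotate parameters from the relative
    # motion of two tracked 3D points (illustrative sketch only).
    p1_prev, p2_prev, p1, p2 = map(np.asarray, (p1_prev, p2_prev, p1, p2))

    # Move: translation of the midpoint between the two points.
    translation = (p1 + p2) / 2 - (p1_prev + p2_prev) / 2

    # Scale: ratio of the current to the previous inter-point distance.
    scale = np.linalg.norm(p2 - p1) / np.linalg.norm(p2_prev - p1_prev)

    # Rotate: axis and angle between the previous and current inter-point vectors.
    v_prev = (p2_prev - p1_prev) / np.linalg.norm(p2_prev - p1_prev)
    v_now = (p2 - p1) / np.linalg.norm(p2 - p1)
    axis = np.cross(v_prev, v_now)
    angle = np.arccos(np.clip(np.dot(v_prev, v_now), -1.0, 1.0))  # radians

    return translation, scale, axis, angle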

Customizable Design

To let disabled people with various body conditions benefit from our design, the system should be controllable through combinations of different body parts, so that everybody can find the way that works best for them to interact with spatial technologies.

[GIFs: Select · Move · Scale · Rotate]

DESIGN GOAL

Can we design a spatial computing input system that lets users choose different combinations of body parts and adapts to different body conditions?

MEET DOTS

Dots is a two-point body-gesture recognition system composed of two attachable slices and one wireless charger. Each piece contains an IMU sensor, a Bluetooth radio, and a battery.

Wear on Any Body Part

Users can affix two “dots”, or sensors, to any parts of the body they feel comfortable moving, depending on their unique body conditions and the task they wish to perform. The relative motion between those points is captured and calculated from the IMU sensors, allowing the person to control the software. The surrounding environment can also be used, for example by attaching one dot to a table and another to the arm to perform an AR drawing.
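One plausible way the per-dot IMU readings could feed that relative-motion calculation (a hedged sketch; treating each dot's output as a unit quaternion is my assumption, not the documented firmware format) is to compute the orientation of one dot relative to the other, which stays the same regardless of how the whole body is turned.

import numpy as np

def quat_conjugate(q):
    # Conjugate of a unit quaternion [w, x, y, z] (its inverse).
    w, x, y, z = q
    return np.array([w, -x, -y, -z])

def quat_multiply(a, b):
    # Hamilton product of two quaternions in [w, x, y, z] order.
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return np.array([
        aw*bw - ax*bx - ay*by - az*bz,
        aw*bx + ax*bw + ay*bz - az*by,
        aw*by - ax*bz + ay*bw + az*bx,
        aw*bz + ax*by - ay*bx + az*bw,
    ])

def relative_orientation(q_dot1, q_dot2):
    # Orientation of dot 2 expressed in dot 1's frame. Only the movement
    # *between* the two attachment points matters, not the whole-body pose.
    return quat_multiply(quat_conjugate(q_dot1), q_dot2)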


DOTS INPUT MODEL CONCEPTS

A Hybrid Targeting Experience

Eye tracking is the default targeting technique, as it is more inclusive. Users who are able to move an arm can still use ray-casting for smaller targets.

Select & Move Experience

The user shakes one dot to select / deselect and moves the other one to move / drag.
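A simple way such a shake could be detected from a dot's accelerometer stream (an illustrative sketch; the window size and variance threshold are assumed values, not the product's tuned parameters) is to flag the gesture when the acceleration magnitude varies strongly over a short sliding window.

from collections import deque
import numpy as np

class ShakeDetector:
    # Flags a shake when recent acceleration magnitudes vary strongly.
    # window_size and threshold are illustrative guesses, not tuned values.
    def __init__(self, window_size=20, threshold=4.0):
        self.samples = deque(maxlen=window_size)
        self.threshold = threshold  # variance threshold on |accel| (assumed)

    def update(self, accel_xyz):
        # Feed one accelerometer sample [ax, ay, az]; return True on shake.
        self.samples.append(np.linalg.norm(accel_xyz))
        if len(self.samples) < self.samples.maxlen:
            return False
        return np.var(self.samples) > self.threshold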

USER DEMO


SCENARIO 1

Using Dots to control IoT devices in AR; the working prototype was made by connecting a HoloLens to Philips Hue smart lighting.
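For a sense of what the light-control side of such a prototype might look like (a minimal sketch; the bridge address, API username, and light ID are placeholders, and this is not the project's actual code), a Dots gesture event could trigger a call to the Philips Hue bridge's local REST API.

import requests

BRIDGE_IP = "192.168.1.2"      # placeholder: your Hue bridge address
USERNAME = "<api-username>"    # placeholder: authorized bridge user
LIGHT_ID = 1                   # placeholder: target light

def set_light(on, brightness=200):
    # Turn a Hue light on or off via the bridge's local REST API.
    url = f"http://{BRIDGE_IP}/api/{USERNAME}/lights/{LIGHT_ID}/state"
    payload = {"on": on}
    if on:
        payload["bri"] = brightness  # brightness range 1-254
    return requests.put(url, json=payload, timeout=2)

# e.g. a "shake to select" event detected on a dot could call:
# set_light(True)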

SCENARIO 2

Using Dots for 3D manipulation on HoloLens; the working prototype was made with MRTK and Unity.


SCENARIO 3

Using Dots to type in AR, with eye-gaze targeting and mouth movement for selecting.
