Inclusive Spatial Computing for Everyone
ABOUT ▶︎
As the co-founder and lead designer, I developed this project with two team members (one UXR and one engineer). We created the world's first customizable and inclusive interface for AR/VR.
WHAT I DID ▶︎
◆ Led product & interaction design
◆ Led user research and experiments
◆ Led project video production
◆ Created prototypes with Unity/MRTK
◆ Delivered executive presentations
◆ Created and validated the product vision and roadmap
DESIGN FILM
“Dots can have a real impact on the disabled community globally, as it is the first time an XR product has considered their unique conditions and helped them get into the future digital world.”
-- Alex Lewis, founder of the Alex Lewis Trust
FUTURE SIGNAL OF HANDS INPUT
Over the last few years, the development of Mixed Reality has revealed the possibility of a screen-free future, marking our entry into an era of spatial computing. There is a clear trend toward mid-air hand gestures becoming the core input model of spatial computing.
CHALLENGES
Challenge 1
Spatial interaction demands greater physical mobility. However, most existing technologies rely on a limited set of body parts, mainly the hands, for spatial interaction, which essentially reduces the technology's accessibility.
Challenge 2
Traditional gesture recognition is based on supervised machine learning, which must be trained on massive amounts of similar data. This approach can fail for inclusive interfaces because body conditions vary widely among disabled people.
- Can you imagine a disabled person with prosthetics using AR glasses? -
How can we make future spatial interaction inclusive and fit different body conditions without hand-gesture recognition?
ELICITATION USER STUDY
Hypothesis
There exist specific interaction patterns in how people use body gestures to control digital products or convey their intentions to digital systems.
‘Wizard of Oz’ Experiment
We recruited 20 participants, including 3 disabled people, and set four tasks for them. We allowed them to use their bodies freely to perform 3D object manipulation: selecting, scaling, rotating, and moving a cube in a computer interface. Since we discovered that people still tended to use their fingers or hands, we iterated the experiment and added a list of limitations.
Experiment Iteration
To better explore whether people would invent their own ways to interact with body gestures, we randomly assigned every participant two body parts, such as the head and the elbow, and asked them to perform the same manipulation tasks as in the first round.
KEY INSIGHTS
Two Points System
Any 3D interaction can be described as the relative movement of two points in 3D space, so we can infer people's intentions just by tracking the relative motion of two points. This result held equally for disabled and non-disabled participants.
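The two-point idea can be made concrete with a small sketch. Given the positions of two tracked points in two consecutive frames, the midpoint displacement yields a translation, the change in inter-point distance yields a scale, and the angle between the inter-point vectors yields a rotation. This is an illustrative reconstruction, not the project's actual tracking code; the function name and frame representation are assumptions.

```python
import math

def relative_motion(p1_prev, p2_prev, p1_curr, p2_curr):
    """Describe a 3D interaction as the relative movement of two tracked points.

    Returns (translation, scale_factor, rotation_deg):
      - translation: displacement of the midpoint between the two points (move)
      - scale_factor: ratio of current to previous inter-point distance (scale)
      - rotation_deg: angle between previous and current inter-point vectors (rotate)
    """
    def sub(a, b):  return tuple(x - y for x, y in zip(a, b))
    def mid(a, b):  return tuple((x + y) / 2 for x, y in zip(a, b))
    def norm(v):    return math.sqrt(sum(x * x for x in v))
    def dot(a, b):  return sum(x * y for x, y in zip(a, b))

    translation = sub(mid(p1_curr, p2_curr), mid(p1_prev, p2_prev))
    v_prev, v_curr = sub(p2_prev, p1_prev), sub(p2_curr, p1_curr)
    scale_factor = norm(v_curr) / norm(v_prev)
    # Clamp to guard against floating-point drift outside [-1, 1]
    cos_a = max(-1.0, min(1.0, dot(v_prev, v_curr) / (norm(v_prev) * norm(v_curr))))
    rotation_deg = math.degrees(math.acos(cos_a))
    return translation, scale_factor, rotation_deg
```

For example, if the second point moves from (1, 0, 0) to (2, 0, 0) while the first stays at the origin, the sketch reports a doubled scale and no rotation, which matches the intuition behind the "scale" gesture above.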
Customizable Design
To allow disabled people with various body conditions to benefit from our design, the system should be controlled through combinations of different body parts, so that everyone can find the way that works best for them to interact with spatial technologies.
Select
Move
Scale
Rotate
DESIGN GOAL
Can we design a spatial computing input system that lets users choose different combinations of body parts and adapts to different body conditions?
MEET DOTS
Dots is a two-point body-gesture recognition system composed of two attachable pieces and one wireless charger. Each piece contains an IMU sensor, a Bluetooth module, and a battery.
Wear on Any Body Parts
Users can affix two “dots”, or sensors, to any body parts they feel comfortable moving, depending on their unique body conditions and the task they wish to perform. The relative motion between those points is captured and computed via the IMU sensors, allowing the person to control the software. The surrounding environment can also be used, for example attaching one dot to a table and another to the arm to perform AR drawing.
DOTS INPUT MODEL CONCEPTS
A Hybrid Targeting Experience
We use eye tracking as the default targeting technique because it is more inclusive; users who are able to move an arm can still use ray casting for smaller targets.
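The hybrid targeting rule described above can be sketched as a simple selection policy. Everything here is an assumption for illustration: the function name, the 5 cm "small target" cutoff, and the idea that both techniques report a hit object (or None).

```python
def pick_target(gaze_hit, ray_hit, target_size_m, can_raycast, small_target_m=0.05):
    """Hybrid targeting policy: eye gaze is the inclusive default;
    ray casting takes over for small targets when the user can move an arm.

    gaze_hit / ray_hit: object id hit by each technique, or None.
    target_size_m: approximate size of the candidate target, in meters.
    """
    if can_raycast and ray_hit is not None and target_size_m < small_target_m:
        return ray_hit   # ray casting is more precise on small targets
    return gaze_hit      # eye gaze works regardless of arm mobility
```

A user who cannot use ray casting always gets the gaze result, so the default path never depends on arm movement.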
Select & Move Experience
Users shake one dot to select or deselect, and move the other dot to move or drag objects.
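One way to recognize the "shake" trigger is to watch a short window of accelerometer readings from a single dot and fire when enough samples deviate strongly from resting gravity (about 1 g). This is a minimal sketch; the window size, thresholds, and class name are hypothetical, not the shipped recognizer.

```python
from collections import deque

class ShakeDetector:
    """Detect a 'shake' on one dot from its IMU accelerometer magnitudes.

    Thresholds are illustrative placeholders; a real system would tune
    them per user, which fits the customizable design goal.
    """
    def __init__(self, window=10, threshold_g=1.5, min_peaks=3):
        self.samples = deque(maxlen=window)  # sliding window of |accel| in g
        self.threshold_g = threshold_g       # deviation from 1 g that counts as a peak
        self.min_peaks = min_peaks           # peaks needed inside the window

    def update(self, accel_magnitude_g):
        """Feed one sample; returns True when a shake is detected."""
        self.samples.append(accel_magnitude_g)
        peaks = sum(1 for a in self.samples if abs(a - 1.0) > self.threshold_g)
        return peaks >= self.min_peaks  # True would toggle select/deselect
```

At rest the magnitude hovers near 1 g and nothing fires; a burst of large spikes within the window toggles selection.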
USER DEMO
SCENARIO 1
Use Dots to control IoT devices in AR; the working prototype was built by connecting a HoloLens to Philips Hue smart lighting.
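A Dots gesture in this scenario ultimately resolves to a call against the Hue bridge's local REST API, which accepts a PUT to a light's `state` resource. The sketch below builds that request with the standard library; the bridge IP, username, and light id are placeholders, and the project's actual HoloLens-to-Hue plumbing is not shown.

```python
import json
from urllib import request

def hue_command(bridge_ip, username, light_id, on, brightness=254):
    """Build the Philips Hue REST request a Dots gesture would trigger.

    bridge_ip, username, and light_id are placeholders for a real
    Hue bridge setup; brightness is in the Hue range 1-254.
    """
    url = f"http://{bridge_ip}/api/{username}/lights/{light_id}/state"
    body = json.dumps({"on": on, "bri": brightness}).encode()
    return request.Request(url, data=body, method="PUT")

# Sending it requires a reachable Hue bridge on the local network:
# with request.urlopen(hue_command("192.168.1.2", "dots-app", 1, True)) as resp:
#     print(resp.read())
```

Keeping request construction separate from sending makes the gesture-to-command mapping easy to test without hardware.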
SCENARIO 2
Use Dots for 3D manipulation on HoloLens; the working prototype was built with MRTK and Unity.
SCENARIO 3
Use Dots to type in AR, with eye-gaze targeting and mouth movement for selection.