Manipulate 3D objects using Google Daydream VR Headset
Design Problem
Virtual Reality is quickly becoming the next groundbreaking technology, but designing VR experiences is difficult: it requires extensive knowledge of 3D modeling software, game engines, and computer programming. This makes quick design iteration for VR much harder than for 2D user interfaces on mobile and the web. There is a host of design tools for 2D (Sketch, Figma, Origami, Framer, etc.), but no comparable tool for designing for VR.


Role
Design and code a design tool to enable quick design iteration for Virtual Reality experiences inside Virtual Reality.
 · User Research
 · Interaction Design
 · Unity/ C# programming
 · 3D Modeling
 · User Testing


Background Research
Literature Review
I conducted a holistic review of the existing literature on 3D interfaces in VR environments. I investigated input devices developed for 3D interfaces and the interaction techniques that were used to accomplish different manipulation tasks. The literature review is divided into three sections.
 · 3D interfaces using handheld props
 · 3D interfaces without handheld props
 · Tangible user interfaces
Conclusion: Handheld controllers are tiring to hold and use for longer periods of time.

(a) Drawing tablet and tracked pen (b) 6 DOF tracked hand-held prop (c) 6 DOF tracked pen and transparent pad (d) Hand held window (e) Tangible interface with vertical and horizontal display (f) Digital workbench with pen based input (g) Clipboard and Buttonball
Process Analysis
I analyzed the existing workflow of designers designing for Virtual Reality. The workflow consisted of three stages: 3D modeling and image asset creation, setting up the 3D environment, and adding interactivity using a game engine. This workflow is not well suited to creating wireframes and mockups for VR experiences, for three main reasons:
1. 3D modeling tools like 3ds Max and Blender and game engines like Unity and Unreal have steep learning curves
2. Desktop-based software tools carry a large up-front time cost for 3D modeling and for setting up the 3D environment before you see any result in a VR headset
3. Making changes to the VR experience requires the user to take the VR headset off, make adjustments on a desktop screen, and put the headset back on to confirm that the changes were correct. This slows down design iteration.
Desktop-based tools do not allow interactive editing of a VR environment.
Competitive Analysis
There are some tools that support creating a 3D environment in VR, such as Unreal VR, Unity VR, NeosVR, and Tilt Brush. But they either offer game-engine-like functionality in VR or support creating 3D sketches in VR; none of them focus on the quick iteration a VR designer needs to mock up experiences. Furthermore, all of these tools require a powerful PC and an expensive VR headset, which reduces their accessibility. Mobile VR headsets are cheaper and more accessible, so I decided to develop a design tool for mobile VR using the Daydream headset.
Notice that all the tools use a combination of floating 2D UI and 3D manipulation of primitives to create 3D assets
(a) Unity Editor VR (b) NeosVR (c) StoryBoard VR (d) UnReal Editor in VR
Concept Exploration
I investigated the basic functions required to create a 3D mockup of a VR environment. I took inspiration from 3D modeling tools and 3D game engines and settled on a small list of supported functions:
1. Create 3D objects by spawning primitive 3D objects
2. Select a 3D object to perform an operation on
3. Move, rotate, or scale the 3D object in three dimensions
4. Delete a 3D object
5. Clone a 3D object
6. Change the color of a 3D object
I started investigating the capabilities and limitations of the Daydream controller. One of its main limitations is that it has no positional tracking, which, as will become clear later, led to some creative interaction designs. I also investigated multi-modal user interfaces for creating VR experiences:
 · Gaze-Based interfaces
 · Hand Controllers
 · Speech Commands
 · Hand Gestures and tracking
(a) Gaze based interface (b) Hand tracking and gestures (c) Daydream 3DOF hand held Controller (d) Speech based UI
Gaze-based interfaces are suitable for discrete input, like selecting and clicking on objects. Users click on an object by holding their gaze on it for a pre-determined amount of time. This dwell-based paradigm makes gaze interaction inherently slow, so it is not suitable for fast manipulation of 3D objects.
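To make the dwell mechanic concrete, here is a minimal Unity/C# sketch of gaze selection (illustrative only, not part of the tool); it assumes the headset's view is the main camera and stands in a log message for the real select action.

```csharp
using UnityEngine;

// Minimal sketch of dwell-based gaze selection: an object counts as "clicked"
// once the user's gaze has rested on it for dwellTime seconds.
public class GazeDwellSelector : MonoBehaviour
{
    public float dwellTime = 2.0f;   // seconds of steady gaze required (illustrative value)

    private GameObject currentTarget;
    private float gazeTimer;

    void Update()
    {
        // Cast a ray straight out of the headset (the main camera's forward direction).
        var cam = Camera.main.transform;
        if (Physics.Raycast(cam.position, cam.forward, out RaycastHit hit))
        {
            if (hit.collider.gameObject == currentTarget)
            {
                gazeTimer += Time.deltaTime;
                if (gazeTimer >= dwellTime)
                {
                    Debug.Log("Gaze-selected " + currentTarget.name);  // stand-in for the real select action
                    gazeTimer = 0f;   // reset so the selection fires once per dwell
                }
            }
            else
            {
                currentTarget = hit.collider.gameObject;   // gaze moved to a new object
                gazeTimer = 0f;
            }
        }
        else
        {
            currentTarget = null;
            gazeTimer = 0f;
        }
    }
}
```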
I also investigated speech/voice interfaces. There are several limitations to this mode of interaction:
1. Voice recognition is not highly accurate; its accuracy depends on ambient noise and on the training data used to build the recognition models
2. Speech commands are not well suited to work environments. VR designers would end up interrupting or distracting their co-workers if they are using speech commands in the workplace.
3. Discoverability of speech commands for complex interactions is one of the major problems that needs to be tackled before the interaction paradigm becomes commonplace.
From the earlier literature review, I had concluded that it is tiring to use a hand-held controller for long periods of time. So I decided to use hand gestures as the input modality, tracking hands using Leap Motion. I had previous experience using Leap Motion and designing for hand gestures from the project PLAY3D.
I quickly discovered that hand gestures alone would not work well for this project because of the extensive list of operations required for 3D manipulation (move, rotate, scale, select, delete, clone, etc.).


Design Iteration 1 - Hand tracking and Daydream Controller
I decided to investigate a multi-modal interface that combined the Daydream controller with the hand tracking provided by Leap Motion. I used the touchpad on the Daydream controller as a selector between the different operations: move, rotate, and scale. I divided the touchpad into three zones, arranged so that none of the dividing lines lay along the longitudinal axis of the controller, because users have a strong sense of that axis and tend to rest their thumb somewhere along the middle of the touchpad. Instead of clicking down on the touchpad, I experimented with simply tapping or touching the appropriate zone. To limit the complexity of the prototype, I tracked only the position of the palm, not the fingers, and used the hand tracking to control the scale, rotation, and position of the 3D object.
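As an illustration of the zone split, here is a rough Unity/C# sketch (not the project's actual code). It assumes a centered touch position with x and y in [-1, 1] is already available from the controller SDK, and simply rotates the zone boundaries away from the longitudinal axis.

```csharp
using UnityEngine;

public enum ManipulationMode { Move, Rotate, Scale }

// Maps a touch position on the Daydream touchpad to one of three manipulation zones.
public static class TouchpadZones
{
    // Offset the zone boundaries by 30 degrees so that no boundary lies on the
    // longitudinal (y) axis of the touchpad, where the thumb naturally rests.
    private const float boundaryOffsetDeg = 30f;

    public static ManipulationMode ZoneForTouch(Vector2 touchPos)
    {
        // Angle of the touch point measured from the touchpad's +y axis, remapped to [0, 360).
        float angle = Mathf.Atan2(touchPos.x, touchPos.y) * Mathf.Rad2Deg;
        angle = (angle + boundaryOffsetDeg + 360f) % 360f;

        // Three equal 120-degree zones.
        if (angle < 120f) return ManipulationMode.Move;
        if (angle < 240f) return ManipulationMode.Rotate;
        return ManipulationMode.Scale;
    }
}
```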
Hardware Design
The Daydream controller has no native hand-tracking hardware, so I used a Leap Motion for hand tracking. One of the major complaints about the Leap Motion is the limited field of view within which it can reliably and accurately track your hands. I wanted to experiment with a design to solve this problem, having encountered it before while working on the GhostbustersVR project, which also used hand tracking as the input modality. The solution was to tilt the Leap Motion 45 degrees with respect to the vertical plane, since people almost invariably keep their hands below their heads. In the exceptional case where the user interacts with something above head level, they would be looking up at the interactive element anyway, which rotates the headset-mounted sensor and brings the hands back into view.
Daydream View Headset with a Leap Motion attached using a 3D printed mount
Prototype
Leap Motion does not have a native Android SDK or a way to connect directly to an Android smartphone. So I connected the Leap Motion to a MacBook Pro and sent the hand-tracking data to the Google Pixel over a network connection. Initially I used Photon to send the data over the internet, but the latency was very high. Sending the data over a local area network connection reduced the latency slightly.
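A minimal sketch of how such a relay could be built, assuming a plain UDP socket on the LAN; the address, port, and class name are placeholders, and the actual prototype may have used a different transport.

```csharp
using System.Net.Sockets;
using UnityEngine;

// Streams the tracked palm position from the laptop to the phone as a small UDP packet.
public class PalmPositionSender : MonoBehaviour
{
    private UdpClient udp;

    void Start()
    {
        udp = new UdpClient();
        udp.Connect("192.168.1.42", 9050);   // the phone's LAN IP and port (placeholder values)
    }

    // Called each frame with the palm position read from the Leap Motion.
    public void SendPalm(Vector3 palmPosition)
    {
        var buffer = new byte[12];
        System.BitConverter.GetBytes(palmPosition.x).CopyTo(buffer, 0);
        System.BitConverter.GetBytes(palmPosition.y).CopyTo(buffer, 4);
        System.BitConverter.GetBytes(palmPosition.z).CopyTo(buffer, 8);
        udp.Send(buffer, buffer.Length);     // UDP keeps latency low at the cost of dropped packets
    }

    void OnDestroy() => udp?.Close();
}
```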
User Testing (One user)
The user was unable to use the hand tracking because of the limited field of view of the Leap Motion. I decided not to pursue hand gestures, as it will likely be a while before mobile VR headsets include reliable and accurate hand tracking.
 · The limited field of view of the Leap Motion constantly led to loss of hand tracking
 · The cable connecting the Leap Motion to the MacBook Pro was cumbersome
 · The latency of transmitting hand-tracking data from the MacBook Pro to the Google Pixel led to a poor user experience
On the controller side, dividing the touchpad into zones didn’t work because:
 · Users didn't look at the Daydream controller, which sat in their peripheral vision. This made it difficult to change between modes (move, scale, rotate) without shifting attention from the task at hand.
 · There was no haptic feedback when moving the thumb between the different zones on the touchpad


Design Iteration 2 - Daydream Controller
The next step was to use only the Daydream controller as the interaction method. I started by implementing the positioning and rotation of a 3D object using the controller. The controller has five degrees of continuous input: the pitch, yaw, and roll of the controller and the x and y coordinates of the touchpad. I mapped the pitch and yaw of the controller to moving the 3D object around a sphere centered on the controller, and the y-axis of the touchpad to bringing the object closer to or farther from the user. This interaction paradigm was implemented as a laser pointer metaphor. I also mapped rotation of the 3D object to the x-axis of the touchpad, with the axis of rotation specified by the roll of the controller. To give a visual indicator of the axis of rotation, I used an always-visible black arrow.
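The move mapping can be sketched in Unity/C# as follows. This is a simplified illustration, not project code: it assumes the controller orientation and the per-frame touchpad delta are supplied by the input layer, and it omits the rotation mapping.

```csharp
using UnityEngine;

// Laser-pointer move mapping: controller pitch/yaw sweep the object around a sphere
// centered on the controller, and vertical touchpad swipes change the sphere's radius.
public class LaserPointerMove : MonoBehaviour
{
    public Transform selectedObject;          // the 3D object currently grabbed with the pointer
    public float distance = 2f;               // current distance from the controller, in meters
    public float distanceSensitivity = 3f;    // meters moved per full touchpad swipe

    // controllerOrientation: the Daydream controller's orientation from the SDK.
    // touchDelta: frame-to-frame change of the touch position on the touchpad.
    public void UpdateObject(Quaternion controllerOrientation, Vector2 touchDelta)
    {
        // Swiping up/down on the touchpad pushes the object away or pulls it closer.
        distance = Mathf.Max(0.5f, distance + touchDelta.y * distanceSensitivity);

        // Pitch and yaw of the controller move the object around a sphere whose
        // center is approximated here by this transform's position.
        Vector3 pointerDirection = controllerOrientation * Vector3.forward;
        selectedObject.position = transform.position + pointerDirection * distance;
    }
}
```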
The VR design tool also required a set of 3D primitives that the user can manipulate to create the mockup of the 3D environment. I chose the following primitives: cube, capsule, cylinder, sphere, and plane. I considered floating primitives attached to the controller that could be selected and spawned using the touchpad, but that would have interfered with the move, scale, and rotate operations that already used the touchpad. So I placed the primitives in the environment, where they are always visible and the user can make copies of them to build the mockup.

Concept Exploration - Ways to move, scale and rotate 3D objects
Prototype
User Testing (3 users)
The problems identified during user testing were:
 · Manipulating the axis of rotation of the 3D object with the roll of the Daydream controller confused users.
 · Users could not hold their hands steady in the uncomfortable positions that specifying certain axes of rotation required.
 · The tremor in the user's hand led to an unstable axis of rotation and undesired rotation of the 3D object.
 · Primitives were very close to the users, leaving little working space.
 · The visual cues for the state change of the 3D objects were not clear.


Design Iteration 3
The previous prototype didn't include a way to scale 3D objects, and I needed an easy-to-understand way to rotate, scale, and move objects. The pitch and yaw of the Daydream controller and the y-axis of the touchpad could be used to specify the amount of rotation, scale, or movement along the three axes, but I needed a way to switch between these three modes. I could not divide the touchpad into zones for mode switching because the touchpad was already being used to specify the rotation, scale, and movement values.
I also did not want the user to choose between these three modes every time they acted on a 3D object, so the tool needed to remember the last operation performed, which speeds up building the VR environment. Two further requirements were that the user should always be aware of which mode they are in, and that changing modes should be quick. I considered using the app button on the Daydream controller to cycle between the three modes, but it did not satisfy these requirements. Instead I came up with a floating 2D UI that also acts as a mode indicator: when the user points at it, they can swipe on the touchpad to change the mode.
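A minimal sketch of this mode-switching logic, assuming the input layer reports whether the laser is pointing at the floating UI and the horizontal swipe delta; the names and threshold are illustrative.

```csharp
using UnityEngine;

// The current mode persists between objects; a horizontal swipe on the touchpad
// cycles it, but only while the laser is pointing at the floating mode UI.
public class ModeSwitcher : MonoBehaviour
{
    public enum Mode { Move, Rotate, Scale }
    public Mode CurrentMode { get; private set; } = Mode.Move;

    private float accumulatedSwipe;
    private const float swipeThreshold = 0.4f;   // fraction of the touchpad width per mode step

    // Called by the input handler with the per-frame horizontal touch delta.
    public void OnTouchpadMoved(bool pointingAtModeUI, float swipeDeltaX)
    {
        if (!pointingAtModeUI) { accumulatedSwipe = 0f; return; }

        accumulatedSwipe += swipeDeltaX;
        if (Mathf.Abs(accumulatedSwipe) >= swipeThreshold)
        {
            int step = accumulatedSwipe > 0 ? 1 : -1;
            CurrentMode = (Mode)(((int)CurrentMode + step + 3) % 3);   // wrap around the three modes
            accumulatedSwipe = 0f;
        }
    }
}
```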
I also removed the 2D plane as a primitive: it can be replicated by scaling a cube along one axis, and because of back-face culling a plane is not very useful in VR, where the user can move around it in the environment. I also considered whether scaling and rotation should operate on global or local axes, and found that local axes were more intuitive than global axes.
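In Unity terms this distinction maps to Space.Self versus Space.World; the short illustrative component below (not project code) shows the difference for rotation.

```csharp
using UnityEngine;

// Rotating about a local axis (Space.Self) versus a world axis (Space.World).
// The tool's rotate mode uses the local variant, which users found more intuitive.
public class AxisComparison : MonoBehaviour
{
    public Transform selectedObject;
    public float degreesPerSecond = 45f;

    void Update()
    {
        float angle = degreesPerSecond * Time.deltaTime;
        selectedObject.Rotate(Vector3.up, angle, Space.Self);     // spins about the object's own y axis
        // selectedObject.Rotate(Vector3.up, angle, Space.World); // would spin about the world y axis instead
    }
}
```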

Prototype
Move, Rotate and Scale functionality is separated using a scrollable 3D UI
User Testing
 · Users did not realize what the icons for changing the modes meant
 · There wasn’t a clear distinction between primitives and the working space
 · Users needed instructions to learn how to use it due to the complexity.


Design Iteration 4
At the time, Google Daydream had no live preview to mirror what the headset was rendering onto an external display, which made it harder to give users instructions on how to use the tool. So I included a 2D plane with instructions inside the scene for people to read.
I moved the primitives to the left side and stacked them behind each other to create a visual grouping.
In the previous prototype I used a multiple-copy mechanism for spawning primitives: each 3D primitive had a fixed number of copies, and the user could pull copies away until all of them had been moved from their original position. I also changed the 3D object's material to blue when the laser pointed at it, to convey a state change, but the multiple overlapping copies of the primitives made the blue highlight render confusingly. So, for this iteration, I changed how primitives are spawned: a new copy of the primitive is created at the original position as soon as one is moved away. This made the blue hover highlight much clearer.
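A simplified Unity/C# sketch of this respawn scheme (illustrative only; the movement threshold is an arbitrary value):

```csharp
using UnityEngine;

// Attached to each palette primitive: the moment it is pulled away from its home
// position, a fresh copy appears in its place, so the palette never runs out.
public class PrimitiveSpawner : MonoBehaviour
{
    public float moveThreshold = 0.05f;   // meters the primitive must move before a new copy spawns

    private Vector3 homePosition;
    private bool replaced;

    void Start() => homePosition = transform.position;

    void Update()
    {
        if (!replaced && Vector3.Distance(transform.position, homePosition) > moveThreshold)
        {
            replaced = true;   // this instance is now a regular scene object
            // Leave a fresh palette copy behind at the original position.
            var copy = Instantiate(gameObject, homePosition, transform.rotation);
            copy.GetComponent<PrimitiveSpawner>().replaced = false;   // the copy becomes the new palette item
        }
    }
}
```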

Interaction Design
(a) Swipe to change the operation between move, rotate and scale (b) Rotate, (c) Move, (d) Scale objects
User testing
The findings from the user testing were:
 · People had trouble reading the instructions
 · Moving, scaling, and rotating objects was delightful
 · The color changes used to represent object state were still confusing
 · People expressed a need to delete objects
 · Users understood the concept of scaling primitives and turning them into something else
 · The UI for move, scale, and rotate obstructed the working area

Design Iteration 5
The design tool had implemented a good foundation of features to move, scale and rotate 3D objects, so I decided to explore other features that would be useful to create VR environments. I investigated common features in software currently used by VR designers (Unity, 3DS Max, Photoshop) and digital design tools (Sketch, Figma, Origami). I decided on the following features: change color, delete, clone, group 3D objects, save and load VR environments.
For the next iteration, I decided to implement the change-color, delete, and clone features, surfaced through floating 2D menus. I investigated two kinds of menus: environmental menus and contextual menus. An environmental menu is always visible in the 3D environment and lets the user select an operation at any time. A contextual menu appears only when the user performs an action to activate it.

Concept Exploration (Contextual and Environmental Menus)

Blender - a 3D modeling tool popular among VR professionals

Concept exploration of different kinds of menu systems for surfacing 3D operations
Problem - How to position the UI in the scene?

Idea 1. The UI appears right where the object is at the moment the app button is pressed down
    Edge cases 
      · the object is moved after the UI appears
      · the user just forgets about the UI, should it disappear after a while? or how to exit the UI?
      · the UI intersects with the object, it should appear a little bit to the left, how much?
      · the UI would look smaller if the game object is far off

Idea 2. The UI appears at a fixed position in the environment when the app button is clicked while pointing at the object
    Edge cases 
      · deleting the object and other operations may become confusing if the UI is not near the object
      · how do you handle the case where the user activates the UI for one object, then points at another object and performs an operation in the UI? Which object gets affected?

Idea 3. A hybrid of the two approaches above might work: the UI appears in the same spherical direction as the object, but its distance is capped at a maximum, say a radius of 2 meters
     · this works for far-off objects where UI might have become too small
     · also works for objects closer than the maximum radius specified
     · solves the problem of the user not being able to see the object and UI at the same time
    Edge cases
      · the UI needs to rotate depending on where the game object is
      · it will be difficult to determine how far to the left of the game object the UI should be so that the object stays fully visible

Idea 4. Another approach might be to place the UI at a fixed place but use the app button to specify which function to perform, then point at the object and press the app button again to apply that function (delete, copy, or color) to the particular object
     · solves the edge cases of idea 3
    Edge cases
      · none that I could think of

With an environmental menu, the user selects the operation from the menu and then specifies the 3D object to operate on. Its advantage is that the operation can be selected once and then applied to multiple 3D objects one after the other. I decided to forgo contextual menus in favor of the environmental menu.
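A rough sketch of how this environmental-menu flow could be wired up in Unity/C#; the tool names and the way input reaches ApplyToPointedObject are assumptions for illustration, not the tool's actual code.

```csharp
using UnityEngine;

// The chosen tool persists, and each app-button press applies it to whatever
// 3D object the laser is pointing at.
public class EnvironmentalMenu : MonoBehaviour
{
    public enum Tool { None, Delete, Clone, ChangeColor }
    public Tool currentTool = Tool.None;
    public Color paintColor = Color.red;   // placeholder for the (not yet built) color system

    // Called when a tool panel in the environmental menu is clicked.
    public void SelectTool(Tool tool) => currentTool = tool;

    // Called by the controller input handler when the app button is pressed
    // while the laser hits a scene object.
    public void ApplyToPointedObject(GameObject target)
    {
        switch (currentTool)
        {
            case Tool.Delete:
                Destroy(target);
                break;
            case Tool.Clone:
                // Offset the clone slightly so it does not overlap the original.
                Instantiate(target, target.transform.position + Vector3.right * 0.3f,
                            target.transform.rotation);
                break;
            case Tool.ChangeColor:
            {
                var renderer = target.GetComponent<Renderer>();
                if (renderer != null) renderer.material.color = paintColor;
                break;
            }
        }
    }
}
```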

Prototype
Deleting an object using the delete tool
Cloning an object using the clone tool
Next Steps
I have not yet worked on the color system, but I could already see the problem of the UI taking up a lot of space in the world and reducing the working area. I also looked at other features the tool would need to support for creating mockups of 3D environments. One of the major ones is moving around in the environment, for which I do not yet have a good solution. Other features such as attaching and detaching 3D objects, saving and loading the scene, and snap grids would need menu space as well. I investigated multiple options, but could not come up with one that repositions the menus efficiently when the user moves and in other conditions. I realized that satisfying all these requirements would need a complete overhaul of the design, and I am working on that redesign now.

You can follow project updates on Twitter :)
