Design Goals
Explore how we can interact with map data and 3D buildings considering:
1. An Augmented Reality headset with a decent enough field of view (FOV)
2. Fully articulated hand tracking as the input mechanism
Tools used

Software and APIs: Mapbox, Unity, GitHub

Hardware: Oculus Rift, ZED Mini, Leap Motion

Interaction Design
(a) Bi-manual hand interaction (b) Single-handed interaction
Bi-Manual Hand Interaction
Both of the user's hands need to be pinching in order to activate the rotate, zoom, and pan operations (a sketch of this mapping follows the list below). There are two main limitations with the bi-manual hand interaction method:
1. It gets tiring to interact with the system over longer periods of time.
2. It occupies both of the user's hands, which raises accessibility issues and excludes users facing situational disabilities.
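To make the bi-manual mapping concrete, here is a minimal sketch of how two pinch points can be turned into per-frame pan, zoom, and rotate deltas. It is plain Python with a 2D simplification; the function name and the 2D positions are illustrative assumptions, not the project's actual Unity code.

```python
import math

def two_hand_deltas(prev_left, prev_right, left, right):
    """Derive pan, zoom, and rotate deltas from two pinch points.

    Each argument is an (x, y) pinch position on the map plane.
    This is a 2D simplification; the real system works with 3D
    hand positions from the Leap Motion.
    """
    # Pan: how far the midpoint between the two hands moved.
    prev_mid = ((prev_left[0] + prev_right[0]) / 2,
                (prev_left[1] + prev_right[1]) / 2)
    mid = ((left[0] + right[0]) / 2, (left[1] + right[1]) / 2)
    pan = (mid[0] - prev_mid[0], mid[1] - prev_mid[1])

    # Zoom: ratio of current hand separation to the previous one
    # (multiplicative, so 1.0 means no change).
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    zoom = dist(left, right) / dist(prev_left, prev_right)

    # Rotate: change in the angle of the line joining the hands.
    def angle(a, b):
        return math.atan2(b[1] - a[1], b[0] - a[0])
    rotate = angle(left, right) - angle(prev_left, prev_right)

    return pan, zoom, rotate
```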
This led me to investigate single-handed interaction.
Single-Handed Interaction
The user pinches with either hand to activate the rotate, zoom, and pan operations (sketched after the list below). Benefits of single-handed interaction:
1. It takes less energy to interact with the system over longer periods of time.
2. The user needs only one hand to interact with the system, resulting in a more inclusive design.
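A minimal sketch of the single-handed mapping, under the same caveats: lateral pinch motion pans the map, and motion toward or away from the eyes changes the zoom level (a choice revisited in the zoom-indicator section below). The tuning constant is an assumed value, not one from the project.

```python
def one_hand_deltas(prev_pos, pos, zoom_per_meter=4.0):
    """Map a single pinch-drag to pan and zoom deltas.

    prev_pos / pos are (x, y, z) pinch positions in head-relative
    space, with z pointing away from the eyes. zoom_per_meter is an
    assumed tuning constant.
    """
    # Lateral motion pans the map.
    pan = (pos[0] - prev_pos[0], pos[1] - prev_pos[1])
    # Motion toward/away from the eyes changes the zoom level.
    zoom_delta = (pos[2] - prev_pos[2]) * zoom_per_meter
    return pan, zoom_delta
```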
Vertical vs. Horizontal (Zoom-dependent surface orientation)
Exploring a 3D representation of an urban area works best on a horizontal surface, since humans are used to seeing the ground as horizontal; sci-fi movies like "Inception" break this rule precisely to evoke wonder and communicate dream worlds. When looking at a text-heavy map, on the other hand, people tend to prefer reading it while holding it in their hands at an angle. Taking a cue from this, I decided to automatically change the orientation of the map depending on the zoom level (sketched below). The user can explore the 3D model of a city with the map in its horizontal orientation, and when the zoom level becomes low enough that the map turns text-rich, the map gradually tilts to 30 degrees from the vertical for maximum readability. The map also automatically rotates to orient north upward, as people intuitively associate north with up while reading a map. Rotation is disabled while the map is in the vertical mode.
(a) Miniature model of an urban area (b) Person reading a map (Photo: Alamy)
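A sketch of the zoom-dependent orientation logic. Only the two target orientations (horizontal, and 30 degrees from the vertical, i.e. 60 degrees from horizontal) come from the design above; the threshold zoom levels, blend range, and animation speed are illustrative assumptions.

```python
def target_tilt(zoom_level, city_zoom=15.0, text_zoom=12.0):
    """Pick a surface tilt (degrees from horizontal) from the zoom level.

    0 = flat/horizontal, for 3D city exploration.
    60 = 30 degrees from the vertical, the reading angle used for
    text-rich zoom levels. The threshold zoom values are assumptions.
    """
    if zoom_level >= city_zoom:
        return 0.0
    if zoom_level <= text_zoom:
        return 60.0
    # Blend between the two orientations across the transition range.
    t = (city_zoom - zoom_level) / (city_zoom - text_zoom)
    return 60.0 * t

def smooth_tilt(current, target, dt, speed=90.0):
    """Move the tilt toward its target at `speed` degrees/second each
    frame, so the change reads as a gradual animation, not a snap."""
    step = speed * dt
    if abs(target - current) <= step:
        return target
    return current + step if target > current else current - step
```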
Visual Indicator for pinch
The system recognized a pinch when the tips of the user's index finger and thumb came closer than 3.5 cm. I added a visual signal to communicate to the user that the pinch had been recognized by the system. I also added a trail behind the 3D cursor to give the user a visual history of their finger's motion. This was helpful, especially when frame rates were low, for reorienting the user about how far and in which direction they had moved their hands.
(a) Green dot that scales in and out when a pinch was detected (b) Trail rendered behind the dot
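A sketch of the pinch detection and trail bookkeeping. The 3.5 cm threshold comes from the design above; the fingertip input format, cursor placement, and trail length are assumptions.

```python
from collections import deque

PINCH_THRESHOLD_CM = 3.5  # index-thumb distance that counts as a pinch

class PinchIndicator:
    """Track pinch state and keep a short trail of cursor positions.

    Fingertip positions are assumed to arrive as (x, y, z) tuples in
    centimeters from hand tracking (Leap Motion in the project); the
    trail length is an illustrative choice.
    """
    def __init__(self, trail_length=30):
        self.trail = deque(maxlen=trail_length)
        self.pinching = False

    def update(self, index_tip, thumb_tip):
        # Distance between the two fingertips.
        dist = sum((a - b) ** 2 for a, b in zip(index_tip, thumb_tip)) ** 0.5
        self.pinching = dist < PINCH_THRESHOLD_CM
        # Place the 3D cursor midway between the fingertips.
        cursor = tuple((a + b) / 2 for a, b in zip(index_tip, thumb_tip))
        self.trail.append(cursor)  # visual history, useful at low frame rates
        return self.pinching, cursor
```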
Edge of the world
The map needed a boundary so that users could get familiar with the interactive surface and anchor themselves to the map. A large map with constantly shifting boundaries and rotating edges quickly becomes a distraction from the main task: exploring the world.
(a) 3D model of a plane with a hole in the middle (b) Unity's "RenderQueue" functionality to render a boundary for the map
Visual Indicator for zoom level
I noticed a few problems at the previous stage:
1. It is hard to judge how far your hand has traveled when it moves toward or away from your eyes. Try holding your index finger in front of your eyes at a comfortable distance, then move it 1 cm farther away. You will have a hard time gauging how far your finger moved, while this is reasonably easy if you move it vertically or horizontally. Because the zoom level was controlled this way, users often moved their hands toward or away from their eyes, which made it very hard to gauge how much zooming in or out they had already done.
2. Due to the network cost of loading map tiles at different zoom levels and intensive rendering tasks on the GPU, there was an occasional frame drop during use. If the user was in the middle of a zoom operation when the frame drop happened, they would lose context of how far they had zoomed in the map.
To solve these problems, I added a visual cue in the form of a square that scaled up or down with the change in zoom level. The square gradually animated back to the original boundaries of the map at the end of the zoom operation, when the pinch was released. This visual had the unexpected benefit of communicating, during the zoom operation, where the map's boundaries had been at the start of the zoom. I also included a translucent square at the original boundaries of the map to communicate that the boundaries had not changed and that the blue square would return to them at the end of the zoom.

3D model of the square in Blender
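A sketch of how the feedback square's scale can track the zoom operation. The doubling-per-zoom-level mapping (standard in web-map tiling) and the ease-back rate are assumptions; the project's exact mapping may differ.

```python
import math

def zoom_square_scale(zoom_at_pinch_start, current_zoom):
    """Scale of the feedback square relative to the map boundary.

    Each zoom level doubles the map scale in typical web-map tiling,
    so the square grows or shrinks by 2 ** (zoom delta). At no change
    it stays at 1.0, i.e. exactly on the map boundary.
    """
    return 2.0 ** (current_zoom - zoom_at_pinch_start)

def relax_to_boundary(scale, dt, rate=6.0):
    """Called per frame after the pinch ends: ease the square back
    toward the map boundary (scale 1.0) with an exponential decay."""
    return 1.0 + (scale - 1.0) * math.exp(-rate * dt)
```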

Where is north?
We have all seen that person using Google Maps on their phone: they walk a few steps, reverse direction, and walk again, trying to figure out which way they are supposed to go. Not knowing which direction you are facing on a map can be one of the most frustrating experiences. I decided to add a compass to the interface, since a compass is an instrument people already know. Taking a cue from mobile map interfaces, touching the compass reorients the map to north. The compass also animates to rise a little from its resting position to signal that it can be touched to reorient the map.
(a) 3D model of a compass in Blender (b) Animation states (rest and active) of the compass in Unity
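A sketch of the compass behavior: the needle counter-rotates against the map heading so it always points north, and after a touch the map heading animates back to zero. The animation speed and the heading convention are assumed values.

```python
def compass_needle_angle(map_heading_deg):
    """The needle counter-rotates against the map so it always points
    at true north, mirroring mobile map compasses."""
    return -map_heading_deg

def reorient_to_north(map_heading_deg, dt, speed_deg_per_s=180.0):
    """Called each frame after the compass is touched: animate the map
    heading back to 0 (north up). Assumes the heading is expressed in
    the range [-180, 180] degrees."""
    step = speed_deg_per_s * dt
    if abs(map_heading_deg) <= step:
        return 0.0
    return map_heading_deg - step if map_heading_deg > 0 else map_heading_deg + step
```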
Rendering optimizations
I had to make a few optimizations in order to keep an acceptable frame rate:
1. Render only those buildings with a height greater than 40 meters (see the sketch after this list).
2. Turn off shadows, which were taking a huge amount of GPU time.
3. Turn off the depth calculations done by the ZED Mini camera.
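As an illustration of the first optimization, here is a sketch of filtering building features by height before instantiating their meshes. The feature schema is an assumption loosely modeled on Mapbox building tile data; the project's actual data layout may differ.

```python
MIN_BUILDING_HEIGHT_M = 40.0  # cutoff from the optimization above

def buildings_to_render(features):
    """Keep only buildings tall enough to matter visually.

    `features` is assumed to be an iterable of dicts carrying a
    'height' value in meters; everything at or below the cutoff is
    skipped before any mesh is created, saving both CPU and GPU time.
    """
    return [f for f in features
            if f.get("height", 0.0) > MIN_BUILDING_HEIGHT_M]
```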
