Friendlier VR User Experiences: Rethinking Navigation
This post is the first in our Friendlier VR User Experiences series, which will focus on our experiences developing Virtual Reality (VR) applications and our solutions to the technical and usability challenges posed by this emerging platform. The code for the solutions in this post can be found here.
VR is on everyone’s lips at the moment, especially among the large tech companies clamouring for their slice of the pie. With the unrelenting stream of new VR headsets and motion-tracking gadgetry comes a fresh set of exciting usability challenges.
We recently shipped a Unity-based VR app that was targeted towards a slightly older demographic that had likely never experienced VR, let alone a conventional video game. Although we expected a slight initial hurdle for these users to acclimatise, we were surprised at the extent to which we needed to continue revisiting our user interfaces to make them immediately intuitive for our audience. For instance, users at the extreme end of the scale didn’t even realise that they could turn their head to look around and simply stood there motionless upon donning the Gear VR headset.
In this post, we are going to focus on one particular recurring difficulty that we had to address: the tendency for users to get ‘lost’ in menu screens.
Getting Lost In Menu Screens
One of the major issues we noticed when developing our VR menu framework was that users frequently ended up looking away from the UI into the void, becoming disoriented and unable to locate it again. This often happened because the user turned their head as the menu loaded, and sometimes simply out of curiosity.
One solution might have been to replace the void with a simple scene to help ground the user’s point of view; however, we felt that this might further distract users from the task at hand. Instead, we set out to find an elegant solution that could cope with the user looking away, or with the menu scene loading while the user was facing an odd direction.
First Solution: Unity’s GUIArrows Prefab
Our first port of call was to look at what Unity had to offer in their VR Samples package, which is where we found the GUIArrows solution.
This prefab detects when the user is looking away from the desired direction and shows a series of arrows to guide them back to where they should be looking:
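For illustration, the gist of this behaviour can be sketched in a few lines. This is not Unity’s actual GUIArrows code (the class and field names below are our own), but it captures the core check: compare the camera’s gaze against the desired direction and fade a set of arrows in when the angle gets too large.

using UnityEngine;

// Illustrative take on the GUIArrows idea: fade a set of arrow graphics in whenever
// the camera looks too far away from the direction it should be facing.
public class LookBackArrows : MonoBehaviour
{
    [SerializeField] Camera viewCamera;           // The headset camera.
    [SerializeField] Transform desiredDirection;  // Its forward is where the user should be looking.
    [SerializeField] CanvasGroup arrows;          // The arrow graphics to fade in and out.
    [SerializeField] float showAngle = 45.0f;     // Angle beyond which the arrows appear.
    [SerializeField] float fadeSpeed = 3.0f;

    void Update()
    {
        // How far the user's gaze has strayed from the desired direction.
        float angle = Vector3.Angle(viewCamera.transform.forward, desiredDirection.forward);

        // Fade the arrows in when the gaze is outside the angle, and back out once it returns.
        float targetAlpha = angle > showAngle ? 1.0f : 0.0f;
        arrows.alpha = Mathf.MoveTowards(arrows.alpha, targetAlpha, fadeSpeed * Time.deltaTime);
    }
}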
Although this solution proved adequate in most situations, we still encountered the odd user who didn’t quite grasp the intent of the arrows. We decided to take some inspiration from Unity’s default VR splash screen in coming up with a more robust solution.
Second Solution: Unity’s Camera-Following UI
When Unity’s Personal edition launches a game in VR, it shows a floating splash screen that follows the user’s gaze with a small amount of input lag. This makes it impossible for the user to look in the wrong direction, since the UI always ends up back in the centre of their view:
Although Unity does not provide the code for this splash screen, its functionality was trivial to replicate in a single line of code by simply Slerping the UI’s rotation to match that of the desired camera:
using UnityEngine;

public class OverlyAttachedUI : MonoBehaviour
{
    [Tooltip("Which camera the UI should follow.")]
    [SerializeField] Camera followCamera;

    [Tooltip("The speed at which the UI should follow the camera.")]
    [SerializeField] float followSpeed = 1.5f;

    // Update is called once per frame.
    void Update()
    {
        transform.rotation = Quaternion.Slerp(transform.rotation, followCamera.transform.rotation, followSpeed * Time.deltaTime);
    }
}
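Because the Slerp is fed followSpeed * Time.deltaTime as its interpolation factor each frame, the UI eases towards the camera’s rotation rather than snapping to it, which is what produces that slight lag behind the user’s gaze.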
Whilst this made it impossible for the user to lose the UI, it also made it impossible for them to interact with any part of the UI besides the centre. We needed to go one step further.
Final Solution: ClingyUI
The custom-built solution that we ultimately adopted introduces a ‘dead zone’ in which the UI stays put as long as the user continues to look in its general direction:
The Code
To implement this functionality, we define a new class called ClingyUI with the following configurable properties that the developer can tweak in the inspector:
public class ClingyUI : MonoBehaviour
{
    [Tooltip("Which camera the UI should follow.")]
    [SerializeField] Camera followCamera;

    [Tooltip("How far to the left or right of centre the UI transform must be before it starts following the camera's gaze. Defines the 'dead zone' in which the UI does not move.")]
    [SerializeField] float thresholdAngle = 45.0f;

    [Tooltip("The speed at which the UI should follow the camera.")]
    [SerializeField] float followSpeed = 1.5f;
These properties let the developer choose which followCamera the UI should follow, how far (thresholdAngle, in degrees) the camera can look away from the UI before the UI starts to move, and the followSpeed at which the UI arrives back in the camera’s periphery.
Most of the work done by this script is performed in the Update method, which is called every frame:
    void Update()
    {
The most likely cause of a player becoming ‘lost’ in the menu void is that they are facing the wrong way, as opposed to looking up at the sky or down at their feet. We can thus throw out the vertical information in the forward vectors for both the UI and the camera it is following, ending up with two simple vectors that we can work with:
        // Throw out the Y axes to only consider a horizontal plane.
        Vector3 uiForwardXZ = new Vector3(transform.forward.x, 0.0f, transform.forward.z);
        Vector3 cameraForwardXZ = new Vector3(followCamera.transform.forward.x, 0.0f, followCamera.transform.forward.z);
From here, we need to calculate the angle between these vectors so that we can determine whether the player is looking at the UI or away from it. We could of course use a dot product for this, but Unity provides a convenient Vector3.Angle function that returns the angle in degrees:
        // Calculate the angle between the UI and the camera.
        float angle = Vector3.Angle(uiForwardXZ, cameraForwardXZ);
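For reference, a rough sketch of the dot product approach (the variable names here are just for illustration) boils down to the same angle:

        // The same angle via the dot product: cos(angle) = dot(a, b) / (|a| * |b|).
        float cosine = Vector3.Dot(uiForwardXZ.normalized, cameraForwardXZ.normalized);
        float manualAngle = Mathf.Acos(Mathf.Clamp(cosine, -1.0f, 1.0f)) * Mathf.Rad2Deg;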
If the angle between these vectors is above our thresholdAngle, we know that the player is looking away from the UI and we should rotate it back towards the player. Because we want to have a ‘dead zone’ where the UI doesn’t move at all, we only want to move it back to the edge of the dead zone as opposed to the centre of the screen.
In order to do this, we first need to know whether the UI is on the player’s left or right. Outside of our Update method, we have another method called RelativeDirection that returns a value we can use as a direction multiplier for our edge vector:
    // Returns the direction of targetDir in relation to forward.
    Direction RelativeDirection(Vector3 forward, Vector3 targetDir, Vector3 up)
    {
        Vector3 perp = Vector3.Cross(forward, targetDir);
        float dir = Vector3.Dot(perp, up);

        if (dir > 0.0f)
            return Direction.Left;
        else if (dir < 0.0f)
            return Direction.Right;
        else
            return Direction.Centre;
    }
Note that the multiplier returned by RelativeDirection is represented as an enum value. This avoids a loss of context when it isn’t being used as a multiplier (in an if statement, for example), and is defined as follows:
// Enumerates a direction one vector might be in relation to another. The corresponding
// integer value can also be used as a multiplier for direction-dependent calculations.
public enum Direction
{
    Left = -1,
    Centre = 0,
    Right = 1
}
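As a quick, hypothetical illustration of that point, the same value reads naturally in an if statement and still works as a signed multiplier:

    // Hypothetical usage of the Direction enum (the variable names are just for illustration).
    Direction side = RelativeDirection(uiForwardXZ, cameraForwardXZ, Vector3.up);

    // Readable as a named value in a branch...
    if (side == Direction.Left)
        Debug.Log("The UI is off to the player's left.");

    // ...and usable directly as a -1/0/+1 multiplier.
    float signedThreshold = thresholdAngle * (int)side;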
Back in our Update method, now that we know which side the UI is on, we can project a vector in that direction at our thresholdAngle:
        // If the UI is outside the threshold, smoothly move it back into the camera's periphery.
        if (Mathf.Abs(angle) > thresholdAngle)
        {
            // Determine whether the UI is to the left or right of the camera and calculate
            // a vector pointing in that direction at thresholdAngle degrees.
            Direction direction = RelativeDirection(uiForwardXZ, cameraForwardXZ, Vector3.up);
            Vector3 threshold = Quaternion.AngleAxis(thresholdAngle * (int)direction, Vector3.up) * cameraForwardXZ;
The following video should help to visualise the vectors that we have calculated so far:
In the video, the blue vector is cameraForwardXZ, the red one is uiForwardXZ, and the green one is threshold, which always points at thresholdAngle degrees to the left or right of cameraForwardXZ.
We can now use threshold as a target for our rotation adjustment by calculating a Quaternion rotation from it:
            // Calculate the desired look rotation, adjusting only the yaw and leaving the UI's existing pitch and roll untouched.
            Quaternion desiredRotation = Quaternion.LookRotation(threshold, Vector3.up);
            desiredRotation = Quaternion.Euler(transform.rotation.eulerAngles.x, desiredRotation.eulerAngles.y, transform.rotation.eulerAngles.z);
From here, we can use Quaternion.Slerp to interpolate smoothly towards the target rotation. The speed at which the UI arrives back in the player’s periphery is governed by our followSpeed property:
            // Interpolate to the desired rotation.
            transform.rotation = Quaternion.Slerp(transform.rotation, desiredRotation, followSpeed * Time.deltaTime);
        }
    }
}
Using The Script
To use the ClingyUI script, simply drop it on the top-level GameObject of your UI hierarchy. Note that the world-space position of this GameObject should match that of the main camera so that the UI can pivot around the player. You can even attach it as a child of the player if you like, but you will need to ensure that your input doesn’t rotate both the camera and the UI at the same time.
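If you would rather not parent the UI to the player, a small helper along these lines (the class name is just for illustration) can keep the UI root’s position pinned to the camera without inheriting its rotation:

using UnityEngine;

// Keeps this GameObject's position (but not rotation) matched to a target camera,
// so a ClingyUI hierarchy can pivot around the player's head.
public class FollowCameraPosition : MonoBehaviour
{
    [SerializeField] Camera targetCamera;

    void LateUpdate()
    {
        // Copy only the position; ClingyUI takes care of the rotation.
        transform.position = targetCamera.transform.position;
    }
}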
If that’s all a bit of a mouthful, this GitHub repository contains a Unity project that demonstrates all three user interface implementations – one per scene in the _Scenes folder.