It’s uncomfortable to have something in view that you cannot look away from. Think of what it’s like when there’s a smear on your glasses.
Keep the camera moving at a constant speed. Acceleration and deceleration will make users feel uncomfortable.
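A minimal sketch of what constant speed means in practice: scale movement by real elapsed time, so a dropped frame never reads as a lurch. `camera` and `forward` here are stand-ins for whatever your engine provides.

```ts
// Frame-rate-independent constant-velocity motion: scale movement by
// elapsed time so speed stays constant even when frame times vary.
type Vec3 = { x: number; y: number; z: number };

const camera = { position: { x: 0, y: 1.6, z: 0 } as Vec3 }; // stand-in camera
const forward: Vec3 = { x: 0, y: 0, z: -1 };                 // unit direction of travel
const SPEED = 1.5;                                           // meters per second

let last = performance.now();

function tick(now: number) {
  const dt = (now - last) / 1000; // seconds since the previous frame
  last = now;

  // No acceleration or deceleration ramps: position changes linearly.
  camera.position.x += forward.x * SPEED * dt;
  camera.position.z += forward.z * SPEED * dt;

  requestAnimationFrame(tick);
}

requestAnimationFrame(tick);
```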
Is the person short or tall? Think of the perspective difference between a young child and the world’s tallest person. Positionally tracked systems handle this for you; when positional tracking isn’t available, choose a sensible default eye height in advance.
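A sketch of that fallback, with `cameraHeight` as a hypothetical helper; 1.6m is the average-adult default that A-Frame also ships with.

```ts
// Fall back to a fixed eye height when the platform can't report the
// user's real head position.
const DEFAULT_EYE_HEIGHT = 1.6; // meters

function cameraHeight(trackedHeight?: number): number {
  // With positional tracking, trust the headset; without it, use the default.
  return trackedHeight ?? DEFAULT_EYE_HEIGHT;
}

// cameraHeight()     -> 1.6  (no positional tracking)
// cameraHeight(1.12) -> 1.12 (e.g. a seated or shorter user)
```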
Because there are no precedents for how people should interact with things in VR, there is little room for UI shorthand. This means ensuring that everything provides clear feedback, interactions are explained through action rather than text instruction, and that key concepts are introduced in a timely fashion. I know I said I’m “not interested in games” but Land’s End is the best example I’ve seen of this so far; the game is incredibly easy to learn and there is only one line of text to instruct you on what to do.
We’ve spent the better part of a decade undoing over-the-top skeuomorphism. The first instinct many people have in VR is to make everything behave like its real-world counterpart. We need not recreate all the minutiae of everyday existence — a pickle jar doesn’t need to be as difficult to open in VR as it is in real life. Use cues from the real world where helpful, but take advantage of the fact that the physics and character of a VR environment are flexible.
Keeping the frame rate above 60fps is paramount, and ensuring it stays stable is just as important; without this, you risk motion sickness. The human vestibular system is a fickle beast. This means discussing with your development team what the aesthetic limits of your world are: lower fidelity with higher stability beats higher fidelity with lower stability. PSVR, with a lower-resolution display that refreshes at 120Hz, is a good example of this trade-off at the hardware level.
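One way to act on “lower fidelity, higher stability” is dynamic resolution scaling. A rough sketch, with made-up thresholds; the result would feed whatever resolution knob your engine exposes (e.g. `renderer.setPixelRatio` in three.js).

```ts
// Trade fidelity for stability: drop render resolution when frame times
// blow the 60fps budget, and recover slowly once there is headroom.
const FRAME_BUDGET_MS = 1000 / 60; // ~16.7ms per frame

let renderScale = 1.0; // 1.0 = full resolution

// Call once per frame with the measured frame time in milliseconds.
function adjustQuality(frameMs: number): number {
  if (frameMs > FRAME_BUDGET_MS * 1.2 && renderScale > 0.5) {
    renderScale -= 0.05; // degrade before frames start dropping
  } else if (frameMs < FRAME_BUDGET_MS * 0.8 && renderScale < 1.0) {
    renderScale += 0.01; // recover gently to avoid oscillation
  }
  return renderScale; // feed this into your engine's resolution knob
}
```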
Fitts’s law is in full force in VR. Ensure that users can employ an economy of motion: cluster actions that are used together (e.g. next/prev), make objects magnetic or snap them to a grid, and so on. Know beforehand whether your application is better suited to seated or standing use, or requires full 360° rotation.
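For example, snapping to a grid reduces the precision a placement gesture demands. A small sketch; the `snapToGrid` helper and its 0.1m cell size are illustrative.

```ts
// Snapping to a grid shrinks the precision a gesture demands, which by
// Fitts's law shrinks the time (and arm strain) a selection costs.
type Vec3 = { x: number; y: number; z: number };

function snapToGrid(p: Vec3, cell = 0.1): Vec3 {
  // 0.1m cells here; pick a size that matches the scale of your objects.
  return {
    x: Math.round(p.x / cell) * cell,
    y: Math.round(p.y / cell) * cell,
    z: Math.round(p.z / cell) * cell,
  };
}

// snapToGrid({ x: 0.43, y: 1.58, z: -2.02 }) -> { x: 0.4, y: 1.6, z: -2 }
```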
How many times per second positional and orientation data are sampled. Poor performance here is one of the root causes of motion sickness. All modern VR platforms achieve this baseline. The iPhone 6’s IMU (inertial measurement unit) has a maximum sampling rate of 100Hz. Android phones vary wildly. To make matters worse, in Chrome for Android the sampling rate appears to be throttled, and experiences that require fast motion can quickly lead to motion sickness.
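On the web you can measure the delivered rate directly by counting `deviceorientation` events; a quick sketch for spotting a throttled browser.

```ts
// Count how many orientation samples the browser actually delivers.
// On a throttled browser this can be far below the display's refresh rate.
let samples = 0;

window.addEventListener('deviceorientation', () => {
  samples += 1;
});

setInterval(() => {
  console.log(`orientation samples in the last second: ${samples}`);
  samples = 0;
}, 1000);
```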
How many times per second the display refreshes. All modern VR platforms achieve this baseline. For smartphone VR, this is a spec that can help you decide which devices you won’t support.
How many frames are rendered per second by the software. This is not constant for a piece of software — it depends significantly on the capabilities of the CPU and GPU running it. You want to write your software so that it hits a baseline of 60fps on any target hardware. At the moment, this is a serious limitation of VR in Android web browsers — Chrome is capped at 30fps.
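A quick way to check this on the web is to count `requestAnimationFrame` callbacks; the sketch below logs frames per second once a second.

```ts
// Count rendered frames per second with requestAnimationFrame.
// If this dips below 60 on your target hardware, simplify the scene.
let frames = 0;
let windowStart = performance.now();

function frame(now: number) {
  frames += 1;
  if (now - windowStart >= 1000) {
    console.log(`fps: ${frames}`);
    frames = 0;
    windowStart = now;
  }
  requestAnimationFrame(frame);
}

requestAnimationFrame(frame);
```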
Everything you define in a scene will be measured in meters. This is true for WebVR (and thus A-Frame), Unity and Unreal. Sorry, imperial countries, but metric is the standard for the metaverse.
In print we had DPI (dots per inch), on screens we have PPI (pixels per inch), and in VR we have PPD (pixels per degree). Knowing this per platform will help you ensure your designs are legible.
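The arithmetic is simple: PPD is horizontal pixels per eye divided by horizontal FOV in degrees. A sketch, with illustrative numbers rather than official specs.

```ts
// Pixels per degree: how many pixels span one degree of your vision.
function pixelsPerDegree(hPixelsPerEye: number, hFovDegrees: number): number {
  return hPixelsPerEye / hFovDegrees;
}

// e.g. a 1080px-per-eye panel spread across a ~90 degree FOV:
// pixelsPerDegree(1080, 90) -> 12, so a 12px glyph fills a full degree.
```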
A measure of how much of your vision the VR display occupies, expressed in degrees, horizontally and vertically.
This is the distance between a person’s eyes. Every individual’s is slightly different, and it affects the stereoscopic effect, which is what produces the illusion of depth.
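Under the hood, the renderer offsets the two eye cameras by half the IPD each. A sketch, assuming a `rightDir` unit vector for the head’s right; 0.064m is a commonly used average.

```ts
// Derive left/right eye positions from the head position and the IPD.
type Vec3 = { x: number; y: number; z: number };

function eyePositions(head: Vec3, rightDir: Vec3, ipd = 0.064) {
  const h = ipd / 2; // half the IPD per eye
  const offset = (sign: number): Vec3 => ({
    x: head.x + sign * rightDir.x * h,
    y: head.y + sign * rightDir.y * h,
    z: head.z + sign * rightDir.z * h,
  });
  return { left: offset(-1), right: offset(+1) };
}
```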
To correct for the distortion that the lenses apply, the rendering engine warps the image in the opposite direction, called barrel distortion, so that the picture looks undistorted by the time it reaches your eye through the lenses.
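The standard radial model looks like this; real engines apply it per lens in a shader, and the `k1`/`k2` coefficients below are illustrative rather than taken from any shipping headset.

```ts
// Radial barrel distortion: points are pushed outward more the farther
// they sit from the lens axis, canceling the lens's pincushion warp.
function barrelDistort(x: number, y: number, k1 = 0.22, k2 = 0.24): [number, number] {
  // x, y are normalized coordinates centered on the lens axis.
  const r2 = x * x + y * y;
  const scale = 1 + k1 * r2 + k2 * r2 * r2;
  return [x * scale, y * scale];
}
```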
All HMDs require lenses to bend the light from the screen so that your eyes can focus on it comfortably. Imagine how hard it would be to focus on a screen so close to your face!
The screen in the headset. All major systems currently use OLED panels, hitting at least 400ppi.
If you need to move the user in 3D space (which really means move the camera), don’t use acceleration — linear motion only. The best option is to instantly move them from one position to another (“teleporting” them). Follow these rules: forward > backward, up / down > strafe left / right, fast camera cuts > gentle camera rotations.
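A teleport is nothing more than an instantaneous position change; a minimal sketch, with a stand-in `camera` object.

```ts
// Teleportation: no easing, no tween; the user is simply somewhere
// else on the next rendered frame.
type Vec3 = { x: number; y: number; z: number };

const camera = { position: { x: 0, y: 1.6, z: 0 } as Vec3 }; // stand-in camera

function teleport(to: { x: number; z: number }) {
  // Move across the floor plane but preserve the user's eye height.
  camera.position.x = to.x;
  camera.position.z = to.z;
}
```

In practice, many applications briefly fade or blink during the jump to soften the cut.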
Remember what you do when something is moving quickly towards your face: you duck. Especially for practical applications, this is probably not the instinct you want to trigger in your users.
If you think motion is appropriate, short/slow steps towards and away from the camera are much easier to stomach.
This is tough to overstate, and is the largest departure from what we’re used to in software. People will not be multi-tasking in VR — at least not yet — and they will certainly not be doing anything else in the real world while they are in VR. They are going to be wholly consumed by VR, with their attention fully dedicated to the task at hand. Sound helps them situate themselves and focus on that task. It is also one of the fundamental ways that you can provide feedback to the user.
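On the web, positional audio comes almost for free via the Web Audio API’s `PannerNode`; a minimal sketch placing a test tone in space.

```ts
// Place a sound in 3D space with the Web Audio API so the user can
// locate it by ear. (Browsers require a user gesture before audio plays.)
const ctx = new AudioContext();

const panner = new PannerNode(ctx, {
  panningModel: 'HRTF', // head-related transfer function for convincing 3D
  positionX: 2,         // two meters to the listener's right
  positionY: 0,
  positionZ: -1,        // one meter in front (the listener faces -z by default)
});

// An oscillator stands in for any real source: a sample, a voice, a stream.
const osc = new OscillatorNode(ctx, { frequency: 440 });
osc.connect(panner).connect(ctx.destination);
osc.start();
```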