24 hours in Virtual Reality



I have few plans for 2016, but one of them is clear: to spend at least 24 hours in Virtual Reality, because I’m convinced it will be key to understanding how we interact with computers, and with each other.


Here’s why:

The experience will be incredible.

VR newsletter Hammer & Tusk says the content is already great—even though, in their words, this is the “fart app” era of VR. And yes, there will be porn.

We’re all going to be hooked.

As someone observed recently, only the 0.1% would want to live in Actual Reality; why live in the real world when you can live in one where you are queen? Stories of WoW addiction will seem but a pale shadow compared to Wall-E-like immersion, and the resulting slovenliness.

If you’ve ever seen the way-before-its-time Brainstorm, you’ll remember a scene where someone splices a particularly juicy part of some content so it loops over and over.



(Oh, yeah; it’s got Christopher Walken. You should probably go watch it.)

An empathy machine

It’s not just the allure of AR/VR. It’s that, as Datavized’s Hugh McCrory told me, VR is an empathy machine. It lets us explore at the scale of atoms, or of galaxies. It makes the multiverse manifest. If you don’t think it’s going to be nearly indistinguishable from reality, watch this video of a 4K mod of Star Wars Battlefront, playing at 60 FPS:

There’s more coming, from companies like the not-quite-ready-yet Teslasuit, which simulates heat, cold, and pressure across your body. Once it’s good enough, why try to teleport yourself to another place, when it’s easier to render that place where you are? Why use the default laws of physics when you can tweak them? This is the stuff of gods and nightmares, and it’s ours.

The interfaces are going to be weird

Remember all the trackballs, flight sticks, and Power Gloves of bygone eras? Unless a controller shipped with the system by default, it was stillborn. But VR needs a controller—whether that’s sensing your body and hands, or giving you something to hold. That’s why Oculus and Gear VR have controllers.

Most of our modern interfaces rely on a screen: a barrier between us and the thing. Keyboards, mice, trackballs, and joysticks are all of this kind.

But the real world doesn’t have a screen. You don’t know where to touch to get to the thing; you touch the thing itself. You don’t pinch, you move your head forwards. You don’t scroll down, you duck. When you grab a hammer, the interface isn’t your fingers. The ball of the hammer, or end of the rope, or blunt of the bat, or tip of the whip, become your interface. You don’t interface with the hammer. You and the hammer gang up on the world.

Interfaces let us reverse-engineer the minds of the designers and the anthropology of the users. They’re also some of the hardest things to get right, breaking the fourth wall when they drag us back into the real world. So I want to understand that part particularly well.

Here’s my ask: I need your help.

I want to wear, poke, prod, break, and puke. I’ve already reached out to many of the big players (some were friendly; others shunted me off to their PR teams). Warm introductions matter. If you know someone who knows someone, tell me. I will probably travel anywhere on the planet to understand how this is going to change our species.

Sidenote: Semantically, Microsoft calls this Mixed Reality; if the video passthrough is fast enough, AR is just one instance of VR anyway. Others disagree: most think Oculus, HTC Vive, Samsung Gear VR, and Sony’s headsets are for Virtual Reality, while Microsoft’s HoloLens and the Google-backed Magic Leap are Augmented Reality. Apple’s an unknown quantity (more on this in a later post). Lots of people fight over who belongs in which group.