24h in VR 0:00-0:23: Electric Monk

I’m spending at least 24 hours in virtual and augmented reality this year, so I can better understand how it’s going to change the way we interact with machines, information, and one another.

My first foray into VR environments since I decided to devote serious time to it came from Dylan Fries at Electric Monk media. They’re a cross-media design and development firm in Winnipeg, Manitoba; among other things, they’re behind the film Men With Beards.

Dylan Fries at Electric Monk

Dylan has been building interactive applications for years, and has been exploring building immersive experiences.

One of the things that’s immediately apparent from these experiences is that VR isn’t just a set of goggles. It’s a complex interplay between controller, software, display, and design decisions made by humans. This diagram by Shmuel Csaba Otto Traian (taken from Wikimedia Commons) explains all the components well:

Linux kernel and gaming input-output latency

In Dylan’s case, I was using an Oculus Rift DK2, the developer kit version of the soon-to-launch consumer Rift. While the DK2 can support six degrees of freedom, Dylan had only three enabled, which made the experience somewhat less immersive.

3 or 6 degrees of freedom in VR

The first Oculus only supported three degrees of freedom (which meant you could look around, but not, for example, duck behind something). With only three working, it feels a bit like your head is locked in a vice, which can be a bit unnerving.

The problem, of course, is that six degrees of freedom are harder to track. Whereas a headset can track its own orientation using onboard accelerometers and gyroscopes, it’s much harder to know where in the room the headset actually is. Different vendors solve this in different ways: Oculus uses a camera that watches the goggles and estimates their position while you’re seated; the HTC Vive uses a pair of laser base stations that sweep structured light across the room, which sensors on both the headset and the controllers can see, letting you walk around.

Laser and sensors for Vive

And Google’s Project Tango figures out where it is by looking at features in the room such as a wall or a chair.
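
To make the distinction concrete, here’s a rough Python sketch of my own (none of this is any vendor’s actual API): a 3-DoF headset reports only orientation from its inertial sensors, while a 6-DoF system also folds in a position estimate from some external reference.

    # Illustrative sketch only -- not any vendor's actual API.
    from dataclasses import dataclass

    @dataclass
    class Pose3DoF:
        """Orientation only, reported by the headset's own inertial sensors."""
        yaw: float    # look left/right, in degrees
        pitch: float  # look up/down
        roll: float   # tilt your head

    @dataclass
    class Pose6DoF(Pose3DoF):
        """Adds position, which needs an outside reference: a camera watching
        the headset (Rift), laser base stations (Vive), or recognised features
        in the room itself (Project Tango)."""
        x: float = 0.0  # metres left/right in the room
        y: float = 0.0  # metres up/down -- this is what lets you duck
        z: float = 0.0  # metres forward/back -- this is what lets you lean in

    def fused_pose(imu_orientation, tracked_position):
        """Each frame, a 6-DoF system combines the IMU's orientation with the
        external tracker's position estimate. A 3-DoF system has no second
        ingredient, so your viewpoint never moves -- the head-in-a-vice feel."""
        yaw, pitch, roll = imu_orientation
        x, y, z = tracked_position
        return Pose6DoF(yaw, pitch, roll, x, y, z)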

Anyway, back to Electric Monk. Dylan first showed me an immersive experience he’s working on entitled Phantom of the North: the Great Grey Owl Experience. It’s a nature documentary of sorts, in which, as you gaze around the arctic landscape, you trigger explanations of the flora and fauna around you.

I didn’t feel queasy at first, and looked around constantly. Fixing your gaze is perhaps the most fundamental user interface primitive of virtual reality: when you stare at a designated location in space for a while, a small circle starts to fill in, after which the audio plays. Despite the fairly grainy visuals on the DK2, the experience was believable, helped by the environmental sounds Electric Monk had mixed in.
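
Here’s a rough, framework-agnostic Python sketch of how that gaze-dwell mechanic typically works (my own illustration, not Electric Monk’s code; the two-second dwell time and the helper names in the usage comment are assumptions):

    # Illustrative dwell-to-activate sketch -- not Electric Monk's actual code.
    DWELL_SECONDS = 2.0  # assumed dwell time; the real value is a design choice

    class GazeHotspot:
        def __init__(self, on_activate):
            self.progress = 0.0      # 0.0 to 1.0, drives the filling circle
            self.activated = False
            self.on_activate = on_activate

        def update(self, gazed_at: bool, dt: float):
            """Call once per frame with whether the user's gaze ray currently
            hits this hotspot, and the frame time dt in seconds."""
            if self.activated:
                return
            if gazed_at:
                self.progress = min(1.0, self.progress + dt / DWELL_SECONDS)
                if self.progress >= 1.0:
                    self.activated = True
                    self.on_activate()          # e.g. start the narration clip
            else:
                self.progress = 0.0             # look away and the circle resets

    # Hypothetical usage:
    #   owl = GazeHotspot(lambda: play_audio("great_grey_owl_narration.ogg"))
    #   each frame: owl.update(gaze_ray_hits(owl), delta_time)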

It was really, really hard to pay attention to what Dylan was explaining to me about the experience while I was in it. Keeping track of two narratives is jarring, and needs a lot of concentration. I also found it unnerving to look down—I had no feet! More than anything, this sense of disembodiment was what unsettled me.

Dylan restarted the experience, and as I slid sideways along a pre-ordained path through the wilderness, I felt my first bit of motion sickness. I’m a sailor, and I’m pretty comfortable when moving around, so even this little twinge was unexpected.

After this, Dylan fired up a project he’s been building in Unity for nearly three years. It began as a way for him to experiment with moving around virtual worlds, and he calls it SpaceCoaster. The experience opens with you sitting in a chair, hands on your lap. Dylan explained that upon seeing this, most people immediately put their hands on their lap to match the avatar—underscoring just how much designers need to pay attention to proprioception.

Dylan handed me a game controller similar to an Xbox or PlayStation controller. When I pressed a button, I was immediately launched from the earth into space, where I moved gradually along a path around several spacecraft. Off in the distance I could see the sun. Dylan had used stock models, because the experience was mostly for him to learn about movement.

He explained the controls, and I started moving freely in 3D space, navigating between the spacecraft and asteroids. The movement was smooth, and definitely convincing. I found myself flying around fairly naturally within minutes.

Dylan explained that in an early build of the experience, he had a button that brought the user to a complete stop instantly—but that when he used the button, the nausea was so jarring he couldn’t bear to revisit the app for an entire week! He’d since replaced the stop with an “air brake” to gradually slow movement, which was much easier to take.
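
The fix is easy to sketch (again, my own Python illustration rather than SpaceCoaster’s actual code; the braking constant is made up): rather than zeroing the velocity in a single frame, an air brake bleeds off a fraction of it every frame, so the view slows over a second or so instead of snapping to a halt.

    # Illustrative comparison of the two stop behaviours -- not SpaceCoaster's code.

    def hard_stop(velocity):
        """The original button: velocity goes to zero within a single frame.
        That one-frame deceleration is what made the demo so nauseating."""
        return [0.0, 0.0, 0.0]

    def air_brake(velocity, dt, braking=3.0):
        """The replacement: while the brake is held, bleed off speed gradually.
        braking=3.0 is an assumed constant; it sheds roughly 95% of your speed
        over one second rather than all of it at once."""
        factor = max(0.0, 1.0 - braking * dt)
        return [v * factor for v in velocity]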

Weaving between the ships and getting the hang of the controls took a while, particularly since I couldn’t actually see the controller from inside the experience.

After a short while, we switched to the final environment, Back to Dinosaur Island.

This demo, built on Crytek’s CryEngine (the engine behind Crysis), is fairly well known. The visuals are excellent, from the lush forest to the steaming breath of the snarling T-Rex. But even here, the pressures of VR are clear: texture maps don’t work well.

A common trick in 3D modeling is to reduce the number of faces (polygons) in a model by mapping a detailed image onto a flat surface so that the surface looks three-dimensional. This works well on a conventional display, but with the stereoscopic vision of VR goggles, it’s immediately apparent that the surface is flat.
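
A back-of-the-envelope Python sketch shows why (the numbers are made up, not measured from any headset): stereo disparity, the horizontal offset of a point between the left- and right-eye images, varies with depth. A textured-but-flat surface puts every pixel at the same depth, so every pixel gets the same disparity and your brain reads it as flat; real geometry varies the disparity, which is exactly the cue the trick can’t fake.

    # Back-of-the-envelope numbers only -- not measured from any real headset.

    IPD = 0.064        # interpupillary distance in metres (roughly 64 mm)
    FOCAL_PX = 600.0   # assumed focal length of the virtual camera, in pixels

    def disparity_px(depth_m: float) -> float:
        """Horizontal offset of a point between the left- and right-eye images.
        Differences in this value are what the brain reads as depth."""
        return IPD * FOCAL_PX / depth_m

    # Brick texture painted on a flat wall 3 m away: every pixel sits at the
    # same depth, so every pixel gets the same disparity -- it reads as flat.
    print(disparity_px(3.0), disparity_px(3.0))    # 12.8 12.8

    # Real modelled bricks protruding 5 cm from that wall: nearer points get
    # measurably more disparity, the depth cue the flat trick can't provide.
    print(disparity_px(2.95), disparity_px(3.0))   # ~13.0 vs 12.8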

In all, I spent 23 minutes in the three applications. That 24 hours is gonna take some time!

Lessons learned

  • Three versus six degrees of freedom makes a huge difference in the experience.
  • Not having a body is weird. Having a body that isn’t positioned as yours is weird, too, and the human brain will try to make the two match, even if that means contorting yourself.
  • It’s hard to communicate with the real world when you’re in a virtual one, making feedback harder for testing and users.
  • Sudden, uncontrolled movement can be jarring. Developers need to test their physics engine, and its impact on end users, carefully; otherwise they may not only lose users, but also make it harder to keep working on their own project!
  • When you can’t see the controller, it’s harder to get the kind of feedback loop going for a user that encourages experimentation and rapid learning.
  • Old tricks and shortcuts for 3D modeling don’t always work, and developers probably need to revisit and retool their models to make a film or game work in virtual reality.

After spending time talking to Dylan about his experiences, I got ready for some time with another Winnipeg-based VR developer, Campfire Union. More about that next time.