Monday, 8 September 2014

Lunch at the bottom of the sea.

"If your simulation is trying to deal with an unrestrained, tracked humanoid you are going to lose, each and every time."

When I was very young, I read a story by Willard Price that featured a scene in which the main characters try to eat lunch while sitting at the bottom of the Pacific Ocean.  They learn how difficult even simple tasks are to perform when you're operating in an alien environment:

"Hal and Skink followed his example, and the process was repeated until all the sausages were gone. But there still remained the puzzling problem of how to drink a bottle of Coca-Cola ten fathoms beneath the sea.

When Dr Blake prised off the cap of his bottle a strange thing happened. Since the pressure outside was so much greater than that inside the bottle, sea water immediately entered and compressed the contents. But a little sea water did no harm, and Dr Blake pressed the mouth of the bottle to his lips.

By breathing out into the bottle he displaced the contents which thereupon flowed into his mouth. He drained the bottle. When he took it from his lips the sea water filled it with a sudden thud. Hal and Skink faithfully followed the same procedure."

I am often reminded of this lately when reviewing the current state of VR development.  What a tremendous challenge we've chosen to pursue.  It makes an underwater meal look very easy in comparison.

Thanks to John Carmack, Michael Abrash, Palmer Luckey, Valve, Oculus and a great many others, I feel that at this point we are well on our way to where we need to go on the visual side of the equation.  We've gathered the steel and forged a sword that cuts; now all that remains is to sharpen the blade.

Meanwhile many are turning their attention to attempting to give us our hands and feet in the virtual space.  I'm a little less optimistic on this side of things at the moment.

Here's the simple problem:  If you developed an excellent 1:1, low-latency tracking system for hands, imagine how potentially unsatisfying it would be to interact with a one-foot virtual cube.  Your real hands, unrestrained by the rules of the virtual world, cannot be kept one foot apart while clutching the sides of the box.  This means that your simulated hands will:

  • Pass through the box.  Unpleasant.
  • Stop at the box's surface, appearing frozen while your real hand keeps moving.  Most unpleasant.
  • Be subjected to some kind of trickery: perhaps you could simulate the offending hand passing over or under the box, as a compromise between the virtual-world violation and the real-world movement.  Perhaps borrowing an idea or two from here, but on a much smaller scale.
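To make the second option concrete, here is a minimal sketch (all names are illustrative, not from any real VR SDK) of clamping a virtual hand to the surface of an axis-aligned box while the tracked real hand keeps moving into it:

```python
def clamp_hand_to_box(hand, box_min, box_max):
    """Return the virtual hand position for a tracked real-hand position.

    If the real hand is inside the box, push the virtual hand out to the
    nearest face; otherwise pass the tracked position through unchanged.
    """
    inside = all(lo < c < hi for c, lo, hi in zip(hand, box_min, box_max))
    if not inside:
        return hand
    # Distance from the hand to each of the six faces of the box.
    dists = []
    for axis in range(3):
        dists.append((hand[axis] - box_min[axis], axis, box_min[axis]))
        dists.append((box_max[axis] - hand[axis], axis, box_max[axis]))
    # Snap the hand to the closest face.
    _, axis, face = min(dists)
    out = list(hand)
    out[axis] = face
    return tuple(out)

# The real hand pushes 0.1 units into a unit box: the virtual hand is
# held at the near face, so real and virtual positions now diverge --
# exactly the mismatch described above.
print(clamp_hand_to_box((0.1, 0.5, 0.5), (0.0, 0.0, 0.0), (1.0, 1.0, 1.0)))
# -> (0.0, 0.5, 0.5)
```

The moment that returned position stops matching the real hand, the user feels it, which is the whole problem.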

I don't feel that any of these options holds much promise; none would feel particularly satisfying at any rate.  You also need to resign yourself to the knowledge that end users will purposely try to poke holes in your simulation.  They won't work with you; they will pick your world apart given the chance.

If your simulation is trying to deal with an unrestrained, tracked humanoid you are going to lose, each and every time.

So, what to do?

Let's go back to the example of having lunch under the sea for a moment.  Humans underwater face a number of challenges:
  • They must carry and use breathing apparatus to stay underwater: heavy compressed tanks on their backs and a regulator in their mouths.  This means they cannot talk.
  • Human eyes see poorly when exposed directly to water.  Divers must wear a face mask at all times in order to focus on the world around them; they view the world through glass.
  • The human body is buoyant: divers feel as though they weigh less than they do on land.  They cannot readily stand without a weighted belt, and they cannot readily walk due to the water surrounding them.  They use flippers to help them move at a decent pace from place to place.
and so on...

Each one of these tools represents a compromise that we've made to operate underwater.  We accept that you can't take a stroll underwater, so we adapted: we learned from the fish and adopted fins.  Fins are great; they make a lot of sense underwater and allow us to move in a way that works with, not against, our surroundings.

I think, for the present, a similar tack needs to be considered for VR.  Lean heavily on its strengths (of which there are many...) and choose your battles wisely in terms of which aspects of reality you are trying to simulate.

As the DK2 units have rolled in, we've seen a wave of users equipping their workstations with flight sticks and steering wheels.  This is interesting.  The last time I saw joysticks this popular was about 20 years ago.

There's a good reason for this, though: these devices can be represented 1:1 in the VR world.  You turn the wheel and the wheel in VR moves; you pull back on the throttle and your virtual throttle responds in kind, and so much the better if these virtual controls are attached to your virtual hands.  There's no cheating here: the wheel feels solid in your hands and it moves as it should in VR.  This is inherently very satisfying to the user.

Which of course, very quickly brings me to this:

This is the Powerloader from Aliens, as I expect any visitor to this blog knows, and I think it makes a very good target for a credible VR experience if you absolutely insist on attempting to track limbs.  Use 3-axis motion sensing to track arm and leg movement, but DON'T attempt to simulate the user's limbs directly interacting with the environment.  You need a layer of abstraction between your world and the user's movements.

If they try to move their arm through the floor, they are met with a solid CLANG of the metal arm colliding with the floor.  The user will understand and they won't feel cheated; it will feel very real.  They are restrained by the limits of the device they are simulated as controlling.  You could still feel VERY free as a user of this simulation, but the designer has a "sanity check" when trying to constrain the limits of what the user can do with their limbs.
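The abstraction layer described above can be sketched very simply: the tracked limb becomes only an input signal driving a machine limb, and the machine limb obeys its own joint limits and collisions.  A minimal illustration (the function name, angles, and limits are all made up for this sketch):

```python
FLOOR_LIMIT = -30.0   # degrees: the machine arm physically can't swing lower

def drive_machine_arm(tracked_angle, joint_min=FLOOR_LIMIT, joint_max=75.0):
    """Map a tracked real-arm angle (degrees) to the machine arm's angle.

    Returns (machine_angle, clang), where clang is True when the user's
    motion was stopped by the machine's own limits -- the cue to play the
    collision sound instead of letting the limb pass through the world.
    """
    machine_angle = max(joint_min, min(joint_max, tracked_angle))
    clang = machine_angle != tracked_angle
    return machine_angle, clang

# The user swings their real arm down through the floor; the machine arm
# stops at its limit and the simulation answers with a CLANG.
print(drive_machine_arm(-50.0))   # -> (-30.0, True)
print(drive_machine_arm(20.0))    # -> (20.0, False)
```

The point of the sketch is that the mismatch between real and virtual motion is now attributed to the machine, not to a failure of the world.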

The user can still work out plenty of ways to attempt to violate the position of their arms with what they are seeing in the simulation, but in the user's mind, the reason it does not work is a limit of the machine they are controlling rather than a fault in the reality of the simulated world that they are trying to believe in.  You might want to reread that last sentence, because it really sums up what I'm trying to get across here:

  As a developer, give yourself a break and build some constraints into your world that fit with the narrative of the environment.  One of the last things you want to try to do is simulate unbridled reality.  Aim to simulate a tiny slice and then do it very, very well.

VR users always need a layer of abstraction between them and the environment.  Allow them to manipulate a mechanism, be it a car, tank, or exoskeleton, but don't dare let them try to interact with the world 1:1 with their own limbs without some kind of mechanism between them and the world.

You are no longer developers; you are magicians.  You need to get your audience to suspend their disbelief, and like any good magician, you do this by carefully limiting what they can see and do at all times.

If anyone wants to say hi, I can be reached on Twitter at @ID_R_McGregor, or you can find me on Google+ if you happen to actually have a Google+ account that you, surprisingly, use for Google+ things.


  1. Very insightful and interesting post.

    I firmly believe that since Oculus is targeting a seated experience for CV1, the character's avatar should be seated. This is something that I think most indie developers are missing. It's understandable, I suppose, as none of us have achieved this magical "presence" thing we keep hearing so much about, so most developers don't really know what to target. As such, we end up with demos that use the old paradigm of 2D FPS development: a disembodied camera that floats and strafes around a room. But nothing removes me from an experience faster, or reminds my brain that I'm in a game sooner, than that type of interaction. I think we as developers are going to have to get a lot more creative with the way we represent movement in games.

    When people think "cockpit" or "seated avatar", their minds naturally go to established paradigms and concepts, like cars, airplanes, and spaceships. But why not attempt to resurrect the FPS in VR by having a seated avatar zip through hallways in a hovercraft with guns? In a sci-fi world, a lore explanation for such a concept would be easy enough.

    Suddenly, we have the paradigm we're used to (moving through areas quickly, turning, and shooting), but our avatar, like our real body, is seated. This, I think, would go a LONG way towards achieving a more comfortable and "present" experience, and it's something developers have thus far either ignored or overlooked.

    1. Thank you for reading.

      I agree with you wholeheartedly. I understand the temptation for developers to retrofit existing games to work with the Rift and I don't knock some of the compelling experiences that result, but I feel that there's something far greater that we can build for ourselves here. First however, we need to "begin again" in terms of our approach.

    2. Yes. Every time I see another Kickstarter or other announcement of a VR game that uses traditional FPS paradigms, I groan.

      The VR market is pretty small at the moment. If developers target the few people who won't get sick to their stomachs playing VR FPS games they're not going to make much money.

      "Comfort mode" might get them a slightly broader market, but still... why not just start with something not predisposed to making your users ill?

  2. This comment has been removed by a blog administrator.