[x3d-public] X3D VR

doug sanden highaspirations at hotmail.com
Wed Dec 21 06:36:02 PST 2016


Andreas 
Q. can/could/should io/stereo mishmash be in a separate config file?
Q. can/could/should the io and stereo special events come from the singleton Browser object?
-Doug
...
Great topic. I struggled with Android NDK x3dbrowser handling of gyro etc. for navigation and found a way to hack and intercept the viewpoint matrix stack, and hit similar config issues with Cardboard viewport-distortion configuration - great spaghetti hacking - but it needs to be made more systematic, configurable, and, if possible, to follow some standard pattern.
...
One annoying thing is having to put per-device configuration into the scene file, such as special handling of triggers, matrices, etc.
IDEAS:
- hyperscene one level up from scene file 
-  .x3dcfg - a separate mini-scene with a small set of nodes, inhaled separately by the program

The goal of either of those is to convert the per-device stereo and input mishmash into some very standard things hidden in the Browser as seen by the scene, so scene files can still be rendered non-stereo with no change to the scene file - yet still be authorable.
For example, in conventional X3D, as the user works the mouse to navigate, you don't necessarily get any mouse events coming into the scene - no routing is mandatory to make navigation happen - and the viewpoint 'magically' moves as seen from the scene file. The Browser might let you know where it moved, and allow you to do some moving yourself, but it doesn't require the scene author to intervene, or to know which buttons are pressed.
Mind you, for a few decades things have been fairly stable hardware-wise on the desktop, with a mouse and keyboard, so the formulas for converting mouse and keyboard input to viewpoint navigation - and stereo settings - were baked into the browser.
Now with hardware and stereo permutations fluid and growing in number, it's getting hard for Browser code to hide all that. And some scene authors _do_ want to intercept and play with it.  

IDEA: Browser fields. Instead of new nodes and components for getting io/stereo information into the scene to play with, can/could/should they be fields/events on the X3D Browser object? That would keep them singletons - only one per scene - whereas a separate node type could be instanced multiple times (I think we already have a node or two like this: meant to be singletons but allowed to be instanced more than once). It would also be analogous to the desktop Browser managing and hiding all that info.
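To make the idea concrete, here is a minimal sketch of what singleton Browser fields/events might look like from a scene script. Everything here is invented for illustration - the field names (stereoMode, interpupillaryDistance) and the callback API are assumptions modeled loosely on SAI field observers, not part of any X3D spec:

```javascript
// Hypothetical sketch: io/stereo state lives as fields on the singleton
// Browser object rather than as instanceable scene nodes. Field names and
// the callback API are illustrative assumptions, not standardized X3D.
function makeBrowserIO() {
  const fields = { stereoMode: "NONE", interpupillaryDistance: 0.064 };
  const listeners = {};
  return {
    getField(name) { return fields[name]; },
    setField(name, value) {            // the device layer writes, the scene reads
      fields[name] = value;
      (listeners[name] || []).forEach(cb => cb(value));
    },
    addFieldCallback(name, cb) {       // analogous to an SAI field observer
      (listeners[name] = listeners[name] || []).push(cb);
    },
  };
}

// A scene script could observe stereo changes without any per-device nodes:
const browser = makeBrowserIO();
browser.addFieldCallback("stereoMode", mode => console.log("stereo:", mode));
browser.setField("stereoMode", "SIDE_BY_SIDE");
```

The point of the singleton shape is that a scene authored without any of this still renders fine: a non-stereo browser just never fires the events.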

But try to imagine all the permutations of your basic scene file you'd need for every permutation of device and stereo setting if it's all in the scene file as new nodes.
Hence the urge to abstract a bit and make things more flexible for scene authors.
A question is: where - in the scene file, or in a separate config file for the browser?




________________________________________
From: x3d-public <x3d-public-bounces at web3d.org> on behalf of Andreas Plesch <andreasplesch at gmail.com>
Sent: December 20, 2016 10:50 PM
To: X3D Graphics public mailing list
Subject: [x3d-public] X3D VR

Hi

Working with x3dom and WebVR I am exploring what additions to X3D would be necessary or useful to effectively use current VR hardware such as the Rift, Vive, Gear, or Cardboard.

The proposed RenderedTexture node is currently used in x3dom with special stereo modes to generate left and right views from a single viewpoint in a single GL context.

A Layer for each left and right view could also be used with two coordinated but separate viewpoints. This would be Cobweb's pattern.

Since the HMD and its own runtime know best the internal optical characteristics of the display, the preferred interface, at least in WebVR, is view and projection matrices directly usable by consumers for 3D graphics generation, rather than position, orientation, and fov. This leads to a new requirement for X3D: accepting these raw matrices as input.

Since x3dom uses RenderedTexture for VR, I added additional special stereo modes there which directly receive these matrices from the HMD, to be used for rendering at each frame. This in effect accomplishes a first, head- and body-based level of navigation. In my first tests (on GitHub) it works well and should be robust across devices. This approach does require some standard API to the HMD, such as WebVR.
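The per-frame loop looks roughly like the following sketch against the WebVR 1.1 API (VRFrameData, VRDisplay.getFrameData/submitFrame are real WebVR 1.1 names; the framePose repackaging and the renderEye placeholder are my own assumptions about what a browser's render path would want):

```javascript
// Sketch: pull per-eye matrices from a WebVR 1.1 display each frame, as the
// stereo RenderedTexture modes do. framePose() and renderEye() are
// illustrative placeholders, not x3dom internals.
function framePose(frameData) {
  // Repackage the four raw column-major Float32Array(16) matrices so the
  // renderer never needs position/orientation/fov at all.
  return {
    left:  { view: frameData.leftViewMatrix,  proj: frameData.leftProjectionMatrix },
    right: { view: frameData.rightViewMatrix, proj: frameData.rightProjectionMatrix },
  };
}

if (typeof navigator !== "undefined" && navigator.getVRDisplays) {
  navigator.getVRDisplays().then(([display]) => {
    const frameData = new VRFrameData();
    function onFrame() {
      display.requestAnimationFrame(onFrame);
      display.getFrameData(frameData);       // fills all four matrices
      const pose = framePose(frameData);
      renderEye(pose.left);                  // placeholder render calls
      renderEye(pose.right);
      display.submitFrame();
    }
    display.requestAnimationFrame(onFrame);
  });
}
```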

Another possibility is to add view and projection matrix input fields to viewpoints which can be continually updated. One could convolve the view matrix with the position/orientation fields, or optionally ignore them completely.

One could then use external SAI to keep updating them, or perhaps introduce an environment ProjectionSensor which would relay these matrix events from an HMD runtime, to then be routed to the viewpoints.
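As a sketch of the "convolve" step, assuming column-major 4x4 matrices as WebVR delivers them: the HMD view matrix is combined with the viewpoint's own position before reaching the renderer. The helpers below and the convolveView name are illustrative; ProjectionSensor itself is only a proposed node, so nothing here reflects an existing X3D API:

```javascript
// Column-major 4x4 helpers for combining an HMD view matrix with a
// Viewpoint's position field. All names here are illustrative sketches.
function mat4Multiply(a, b) {            // out = a * b, column-major
  const out = new Float32Array(16);
  for (let c = 0; c < 4; c++)
    for (let r = 0; r < 4; r++)
      for (let k = 0; k < 4; k++)
        out[c * 4 + r] += a[k * 4 + r] * b[c * 4 + k];
  return out;
}

function translation(x, y, z) {          // pure translation matrix
  const m = new Float32Array([1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1]);
  m[12] = x; m[13] = y; m[14] = z;
  return m;
}

// Convolve: shift the world by -position (placing the viewpoint), then apply
// the HMD's head-pose view matrix on top.
function convolveView(hmdView, position) {
  return mat4Multiply(hmdView, translation(-position[0], -position[1], -position[2]));
}
```

Routing the raw matrix events straight through (the "ignore them completely" option) is then just skipping the convolveView step.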

A second level of navigation is accomplished with handheld controllers. Until some standard gestures evolve, it will be necessary to expect custom, per-scene navigation. X3D should have a way to sense the buttons and axis values of such controllers so these are then available to manipulate the view in some way. InstantReality has a general IOSensor to that effect. On the web, the Gamepad API is the standard that would need to be used.
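A per-frame poll of the Gamepad API could feed such a sensor. navigator.getGamepads(), gamepad.axes, and gamepad.buttons[i].pressed are the real Gamepad API; the dead-zone threshold, the axis-to-velocity mapping, and the ControllerSensor-style callback shape are illustrative choices of mine, not part of any spec:

```javascript
// Sketch: poll standard Gamepad API state once per rendered frame and map it
// to a walk velocity. Mapping choices here are illustrative assumptions.
function deadZone(v, t = 0.15) {         // ignore small stick noise near center
  return Math.abs(v) < t ? 0 : v;
}

function axesToVelocity(axes, speed = 2.0) {
  // Typical layout: axes[0]/axes[1] = left stick x/y; scale to a velocity.
  return { x: deadZone(axes[0] || 0) * speed,
           z: deadZone(axes[1] || 0) * speed };
}

function pollControllers(onNavigate) {   // call once per frame
  const pads = (typeof navigator !== "undefined" && navigator.getGamepads)
    ? navigator.getGamepads() : [];
  for (const gp of pads) {
    if (!gp) continue;                   // unplugged slots are null
    onNavigate(axesToVelocity(gp.axes),
               gp.buttons.map(b => b.pressed));  // buttons as booleans
  }
}
```

A ControllerSensor node could expose exactly these axes/buttons values as X3D events, leaving the mapping to the scene author.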

Summary: RenderedTexture with stereo modes, matrix fields for viewpoints, a ProjectionSensor, and a ControllerSensor would all be candidates for X3D VR support.

Thanks for reading, any thoughts welcome,

Andreas



