[x3d-public] X3D VR

doug sanden highaspirations at hotmail.com
Wed Dec 21 07:54:55 PST 2016


> Here is another item for discussion. For VR it is especially necessary
>  to be thoughtful about performance since high frame rates, high
>  resolution and the doubled output are all important. Ideally, one
>  would want to traverse the scene graph only once per frame and
>  generate right and left render output in parallel. To enable this
>  approach, it seems necessary to have special VR/stereo nodes or
>  modes so the browser can optimize in this way. Perhaps LayerSet allows
>  for this kind of parallel rendering of multiple views of the same
>  scene currently?
> 

ParticlePhysics comes to mind.
- you send all your shape data to the shader once, for one particle
- then you go into a loop, sending an xyz position for each particle and telling the shader to draw again

For stereo you could do something similar, except with a loop of 2, applying a viewpoint shift on each pass.
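
Rough sketch of that loop in WebGL-flavored javascript -- translateViewX and the draw list are made up here just to show the shape of the idea, not actual x3dom code:

// traverse the scene graph once to build a draw list, then draw it
// twice, once per eye, with a viewpoint shift and half-width viewport
function renderStereoFrame(gl, canvas, drawList, baseViewMatrix) {
  var halfIPD = 0.032; // half of a ~64 mm interpupillary distance
  var eyes = [
    { shift: -halfIPD, x: 0 },                // left eye, left half
    { shift: +halfIPD, x: canvas.width / 2 }  // right eye, right half
  ];
  for (var i = 0; i < eyes.length; i++) {
    gl.viewport(eyes[i].x, 0, canvas.width / 2, canvas.height);
    // shift the viewpoint instead of re-traversing the scene graph
    var eyeView = translateViewX(baseViewMatrix, eyes[i].shift); // assumed helper
    for (var j = 0; j < drawList.length; j++) {
      drawList[j].draw(gl, eyeView); // same shape data, new view matrix
    }
  }
}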

________________________________________
From: x3d-public <x3d-public-bounces at web3d.org> on behalf of Andreas Plesch <andreasplesch at gmail.com>
Sent: December 21, 2016 7:59 AM
To: Roy Walmsley
Cc: X3D Graphics public mailing list
Subject: Re: [x3d-public] X3D VR

On Wed, Dec 21, 2016 at 6:00 AM, Roy Walmsley <roy.walmsley at ntlworld.com> wrote:
Hi Andreas,

That’s great to see what you have been doing here. Would you like to take your node suggestions one step further and propose the fields that might be specified for each node?

I agree it is often constructive to have more specific suggestions to discuss and iterate on, so I will give this a try. In the meantime, it is also useful to collect more diffuse ideas or existing solutions into a central place like this thread.

What we could do then, in the new year, is to host an open discussion on WebVR and X3D. Contributions from any other readers are welcome, and could lead to a lively discussion on this topic.

Here is another item for discussion. For VR it is especially necessary to be thoughtful about performance since high frame rates, high resolution and the doubled output are all important. Ideally, one would want to traverse the scene graph only once per frame and generate right and left render output in parallel. To enable this approach, it seems necessary to have special VR/stereo nodes or modes so the browser can optimize in this way. Perhaps LayerSet allows for this kind of parallel rendering of multiple views of the same scene currently?

Definitely hoping for a lively discussion,

-Andreas

Thank you for the suggestions,

All the best,

Roy

From: x3d-public [mailto:x3d-public-bounces at web3d.org] On Behalf Of Andreas Plesch
Sent: 21 December 2016 05:51
To: X3D Graphics public mailing list <x3d-public at web3d.org>
Subject: [x3d-public] X3D VR


Hi

Working with x3dom and WebVR, I am exploring what additions to X3D would be necessary or useful to make effective use of current VR hardware such as the Rift, Vive, Gear, or Cardboard.

The proposed RenderedTexture node is currently used in x3dom with special stereo modes to generate left and right views from a single viewpoint in a single GL context.

A Layer for each of the left and right views could also be used, with two coordinated but separate viewpoints. This would be Cobweb's pattern.

Since the HMD and its own runtime know the internal optical characteristics of the display best, the preferred interface, at least in WebVR, is view and projection matrices directly usable by consumers for 3D graphics generation, rather than position, orientation and fov. This leads to a new requirement for X3D to accept these raw matrices as input.
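
For illustration, this is roughly what the consumer side of WebVR 1.1 looks like; vrDisplay comes from navigator.getVRDisplays(), and drawEye is a placeholder, not a real API:

// WebVR hands over ready-made, column-major 4x4 Float32Arrays each
// frame, directly usable as shader uniforms
var frameData = new VRFrameData();

function onVRFrame() {
  vrDisplay.requestAnimationFrame(onVRFrame);
  vrDisplay.getFrameData(frameData); // fills all four matrices
  drawEye(frameData.leftViewMatrix, frameData.leftProjectionMatrix);
  drawEye(frameData.rightViewMatrix, frameData.rightProjectionMatrix);
  vrDisplay.submitFrame(); // hand the rendered frame to the HMD
}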

Since x3dom uses RenderedTexture for VR, I added additional special stereo modes there which receive these matrices directly from the HMD, to be used for rendering each frame. In effect this accomplishes a first, head- and body-based, level of navigation. In my first tests (on github) it works well and should be robust across devices. This approach does require a standard API to the HMD, such as WebVR.

Another possibility is to add view and projection matrix input fields to viewpoints which can be continually updated. One could compose the view matrix with the position/orientation fields, or optionally ignore them completely.
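
A sketch of that composition, with assumed 4x4 matrix helpers (not an existing X3D or x3dom API):

// fold the viewpoint's position/orientation into a navigation matrix
// that places the viewer in the world; its inverse then contributes
// to the view transform delivered by the HMD
function effectiveViewMatrix(hmdViewMatrix, navigationMatrix) {
  return multiply4x4(hmdViewMatrix, invert4x4(navigationMatrix)); // assumed helpers
}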

One could then use the external SAI to keep these fields updated, or perhaps introduce an environment ProjectionSensor which relays the matrix events from the HMD runtime so they can be routed to the viewpoints.
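
In x3dom terms, the external SAI variant could look like the following sketch; the viewMatrix/projectionMatrix fields are the proposed, not yet existing, fields from above:

// relay the HMD matrices to a viewpoint once per frame; the field
// names are hypothetical and only serve to illustrate the routing
var vp = document.getElementById('vp'); // an x3dom viewpoint element
function relayMatrices() {
  vrDisplay.requestAnimationFrame(relayMatrices);
  vrDisplay.getFrameData(frameData);
  vp.setFieldValue('viewMatrix', frameData.leftViewMatrix);             // hypothetical field
  vp.setFieldValue('projectionMatrix', frameData.leftProjectionMatrix); // hypothetical field
}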

A second level of navigation is accomplished with handheld controllers. Until standard gestures evolve, it will be necessary to expect custom, per-scene navigation. X3D should have a way to sense the buttons and axis velocities of such controllers so that these become available to manipulate the view in some way. InstantReality has a general IOSensor to that effect. On the web, the Gamepad API is the standard that would need to be used.
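
With the Gamepad API such sensing could look roughly like this; how axes and buttons map to navigation velocities is an assumption, since no standard gestures exist yet:

// poll all connected controllers once per frame and turn stick axes
// into forward/turn velocities for some scene-specific navigation
function pollControllers(applyVelocity) {
  var pads = navigator.getGamepads();
  for (var i = 0; i < pads.length; i++) {
    var pad = pads[i];
    if (!pad) continue; // unplugged slots are null
    var forward = pad.axes.length > 1 ? -pad.axes[1] : 0; // stick y axis
    var turn    = pad.axes.length > 0 ?  pad.axes[0] : 0; // stick x axis
    var fast    = pad.buttons.length > 0 && pad.buttons[0].pressed;
    applyVelocity(forward * (fast ? 2 : 1), turn);
  }
}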

Summary: RenderedTexture with stereo modes, matrix fields for viewpoints, a ProjectionSensor, and a ControllerSensor would all be candidates for X3D VR support.

Thanks for reading, any thoughts welcome,

Andreas



--
Andreas Plesch
39 Barbara Rd.
Waltham, MA 02453


