[x3d-public] A suggestion for a node supporting VR headsets in X3D
Leonard Daly
Leonard.Daly at realism.com
Sun Jul 9 10:01:06 PDT 2017
OK. I really think this is a very BAD IDEA. I've tried to explain why
below. It's also toned down a lot from my first effort.
Having a separate node that provides duplicate control over certain
scene assets (e.g., cameras) is a mistake. If UserInfo defined a stereo
camera and the Viewpoint used an orthographic camera, you would get
visual disorientation. Stereo vision needs the two view frusta to
converge. There is no convergence with orthographic projection.
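To make the convergence point concrete, here is a minimal sketch of the
standard off-axis (asymmetric-frustum) stereo math. All numbers (60-degree
FOV, 0.1 near plane, 2.0 convergence distance, 0.064 IPD) are hypothetical,
not from the X3D spec. Each eye's perspective frustum is skewed so that both
view windows coincide exactly at the convergence plane; an orthographic
projection has parallel view volumes that never converge, which is exactly
why mixing the two camera models is disorienting.

```python
import math

def stereo_frustum(fov_deg, near, convergence, ipd, eye):
    """Near-plane (left, right) bounds of an off-axis perspective
    frustum for one eye. eye = -1 for the left eye, +1 for the right."""
    half_w = near * math.tan(math.radians(fov_deg) / 2)
    # Skew each eye's frustum toward the midline so both view
    # windows line up at distance `convergence`.
    shift = eye * (ipd / 2) * near / convergence
    return (-half_w - shift, half_w - shift)

left = stereo_frustum(60.0, 0.1, 2.0, 0.064, -1)
right = stereo_frustum(60.0, 0.1, 2.0, 0.064, +1)

# Project each eye's window out to the convergence plane, in world
# coordinates (eye positions are at x = -/+ ipd/2): the two windows
# coincide there, which is the convergence the text describes.
scale = 2.0 / 0.1  # convergence / near
left_window = (-0.032 + left[0] * scale, -0.032 + left[1] * scale)
right_window = (0.032 + right[0] * scale, 0.032 + right[1] * scale)
```

An orthographic camera has no eye-position term at all, so there is no
`shift` to apply and the two images cannot be made to fuse.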
When viewing a scene in a headset, it is generally accepted that certain
features need to remain fixed (e.g., world-up).
There is already a node for the camera. It should handle all aspects of
the camera (perspective, orthographic, stereo, etc.). That node is
called Viewpoint (there is really no need for OrthoViewpoint).
Introducing another, perhaps contradictory, means of control would just
add confusion.
There is a developing W3C standard for VR on the Web, called WebVR. It
does not dictate how you present information to the user, but rather how
code needs to interact with the browser in order to deliver a proper VR
experience. WebVR also standardizes aspects of the various
user-interface devices. A misalignment with WebVR would just cause more
confusion, and more confusion leads to a lower adoption rate.
The spec needs to keep related things together. Viewpoint should define
how the scene is presented to the user; it needs to cover all aspects of
the camera: camera type, FOV, and interpupillary distance.
NavigationInfo covers how the user navigates from the current viewing
position. Audio node(s) would indicate whether the sound is mono,
spatialized, or stereo (plus).
If a browser wishes to build in a set of user-configurable options
(perhaps for disability access or some other reason), then that is fine;
but it would not be part of the spec.
The discussion of multi-user is irrelevant. Each user has their own copy
of the scene (at least for rendering purposes); what one person sees is
independent of what another sees in the same scene at the same time.
Multi-user support is more about passing high-level information from one
viewer to another. Presenting a "bird's-eye" view of a scene in headset
mode is not a good idea. Of course it can be done, but the results will
be bad and very disturbing (as in cognitive dissonance). (Think of a web
page with every word in a blink tag, each blinking at a different rate.
That is just the beginning of the disturbing path.)
Leonard Daly
> Hi,
>
> At a recent X3D working group meeting we proposed some possible new
> nodes to provide support for VR displays within X3D. Here is an
> extract from the minutes:
>
> * VR
> o Stereo viewpoint, which needs two cameras. The X3D
> specification only allows one camera, e.g. Viewpoint, which is
> a bindable node. Contrast here the Viewpoint/OrthoViewpoint
> nodes, which are perspective and orthographic renderings,
> respectively.
> o Input interfaces, possibly from multiple buttons. For example,
> TouchSensor, may need to be expanded to permit more types of
> input.
> o Audio – is it good enough? There is no left/right-ear
> dependence, i.e. stereo audio.
> o Viewpoint switching – alternative methods of animating
> viewpoint changes are required. For example, Viewpoint node
> currently has a /jump/ field, but Fade out/Fade in might be
> needed.
>
> After that meeting Don and I were talking about this, and came up with
> an alternative suggestion. This was for the following node:
>
> * UserInfo
> o Should this be a bindable node, with its own binding stack, so
> that only one was active at a time? Then, what about multiple
> users interacting with the same scene?
> o It could indicate whether the user needed a stereo rendering,
> and have attributes for stereo parameters.
> o It could indicate whether the user required stereo audio, with
> appropriate attributes.
> o It could have attributes relating to FOV – currently in Viewpoint.
>
> This node would not have to be included in a scene. It could be added
> by the implementation (or modified if present) using SAI techniques.
>
> The node could be added to the Navigation component, perhaps at
> support level 2, to fit into the immersive profile.
>
> An example:
>
> * Let’s imagine a garden scene. We would like to be able to render
> it in stereo, from the perspective of two different users. The
> first is human. The second is a bird. The saved scene need not
> have a UserInfo node, and so could be independent of the user. The
> implementation would then insert the appropriate UserInfo node
> into the scene at run time, permitting the scene to be rendered
> with the desired user properties.
>
> What do you think? Good idea? Bad idea?
>
> All comments welcome.
>
> All the best,
>
> Roy
>
>
>
> _______________________________________________
> x3d-public mailing list
> x3d-public at web3d.org
> http://web3d.org/mailman/listinfo/x3d-public_web3d.org
--
*Leonard Daly*
3D Systems & Cloud Consultant
LA ACM SIGGRAPH Chair
President, Daly Realism - /Creating the Future/