[x3d-public] X3D VR and viewing - HMD node or mode?

Andreas Plesch andreasplesch at gmail.com
Tue Jan 17 16:42:34 PST 2017


Continuing the diff of WebVR and X3D ...

Fourth, VRFrameData: https://w3c.github.io/webvr/#interface-vrframedata

readonly attribute DOMHighResTimeStamp
<http://www.w3.org/TR/hr-time/#domhighrestimestamp> timestamp
<https://w3c.github.io/webvr/#dom-vrframedata-timestamp>; X3D events have
timestamps

readonly attribute Float32Array leftProjectionMatrix
<https://w3c.github.io/webvr/#dom-vrframedata-leftprojectionmatrix>;
critical for correct rendering since it is fine-tuned for the HMD; used
internally in the left-vr RenderedTexture mode; could be routed to a
Viewfrustum node; MetadataMatrix4f (a routing sketch follows this list)

readonly attribute Float32Array leftViewMatrix
<https://w3c.github.io/webvr/#dom-vrframedata-leftviewmatrix>; see above;
it turns out that Viewfrustum expects the inverse of this matrix

readonly attribute Float32Array rightProjectionMatrix
<https://w3c.github.io/webvr/#dom-vrframedata-rightprojectionmatrix>; see
above

readonly attribute Float32Array rightViewMatrix
<https://w3c.github.io/webvr/#dom-vrframedata-rightviewmatrix>; see above

readonly attribute VRPose <https://w3c.github.io/webvr/#vrpose> pose
<https://w3c.github.io/webvr/#dom-vrframedata-pose>; see below
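To make the routing concrete, here is a rough, untested sketch of how a
browser (or an external script, via the DOM in x3dom) could pump these
matrices into MFFloat Metadata once per frame; the DEF and name strings are
only placeholders, not proposed spec names:

  // Rough sketch, WebVR 1.1 plus x3dom-style DOM access; untested.
  var frameData = new VRFrameData();

  navigator.getVRDisplays().then(function (displays) {
    if (displays.length === 0) return;
    var vrDisplay = displays[0];

    function onFrame() {
      vrDisplay.requestAnimationFrame(onFrame);
      vrDisplay.getFrameData(frameData);

      // write the 16 column-major floats into a Metadata MFFloat value
      var meta = document.querySelector(
        "MetadataSet[DEF='vrDisplayMetadata'] " +
        "MetadataFloat[name='leftProjectionMatrix']");
      if (meta) {
        meta.setAttribute('value',
          Array.prototype.join.call(frameData.leftProjectionMatrix, ' '));
      }
      // ... likewise for leftViewMatrix, rightProjectionMatrix, rightViewMatrix
    }
    vrDisplay.requestAnimationFrame(onFrame);
  });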

Fifth, VRPose: https://w3c.github.io/webvr/#interface-vrpose

readonly attribute Float32Array? position
<https://w3c.github.io/webvr/#dom-vrpose-position>; consistent with the
view matrices; relative to the resetPose() position; Metadata SFVec3f

readonly attribute Float32Array? linearVelocity
<https://w3c.github.io/webvr/#dom-vrpose-linearvelocity>; the linear velocity
vector; Metadata SFVec3f; can be used by the browser to predict the position
at the expected submission time of the fully rendered frame.

readonly attribute Float32Array? linearAcceleration
<https://w3c.github.io/webvr/#dom-vrpose-linearacceleration>; Metadata
SFVec3f

readonly attribute Float32Array? orientation
<https://w3c.github.io/webvr/#dom-vrpose-orientation>; Metadata SFRotation;
needs conversion from the quaternion (a conversion sketch follows this list)

readonly attribute Float32Array? angularVelocity
<https://w3c.github.io/webvr/#dom-vrpose-angularvelocity>; Metadata SFVec3f

readonly attribute Float32Array? angularAcceleration
<https://w3c.github.io/webvr/#dom-vrpose-angularacceleration>; Metadata
SFVec3f
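Since SFRotation is axis-angle while pose.orientation is a quaternion
(x, y, z, w), the conversion is straightforward; a minimal sketch, assuming a
unit quaternion:

  // Convert a WebVR pose.orientation quaternion [x, y, z, w] to an X3D
  // SFRotation axis-angle [ax, ay, az, angle].
  function quatToSFRotation(q) {
    var x = q[0], y = q[1], z = q[2], w = q[3];
    var angle = 2 * Math.acos(Math.min(Math.max(w, -1), 1));
    var s = Math.sqrt(1 - w * w);
    if (s < 1e-6) {
      return [0, 0, 1, 0];  // near-zero rotation: axis is arbitrary
    }
    return [x / s, y / s, z / s, angle];
  }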

Summary

Left/right projection/view matrices: since there is no MetadataMatrix4f, use
Metadata MFFloat with the 16 column-major values

PosePosition: valid at the timestamp; the actual position at frame-submission
time could be estimated

PoseVelocity/Acceleration: potentially useful for in-scene prediction (a rough
sketch follows)
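As mentioned, the velocities allow estimating the position a short time ahead
of the pose timestamp. A rough sketch, where dt is the estimated interval from
the pose timestamp to frame submission (an interval the browser would have to
supply):

  // Predict the head position dt seconds ahead: p' = p + v*dt + 0.5*a*dt^2.
  function predictPosition(pose, dt) {
    var p = pose.position, v = pose.linearVelocity, a = pose.linearAcceleration;
    if (!p) return null;  // position tracking not available
    var out = [p[0], p[1], p[2]];
    for (var i = 0; i < 3; i++) {
      if (v) out[i] += v[i] * dt;
      if (a) out[i] += 0.5 * a[i] * dt * dt;
    }
    return out;
  }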

Next: VREyeParameters and VRStageParameters


Andreas

On Jan 15, 2017 1:51 PM, "Andreas Plesch" <andreasplesch at gmail.com> wrote:

> Continuing with the diff of functionality between the WebVR API and the X3D
> scene graph ...
>
> Second, VRLayer : https://w3c.github.io/webvr/#interface-vrlayer
>
> readonly attribute VRSource? source; VRSource is the HTML canvas element
> used for display in the HMD; would correspond to the X3D (canvas) element;
>
> readonly attribute sequence<float> leftBounds; viewport for the left eye;
> Metadata MFFloat leftBounds; at the browser's discretion but usually set to
> the left half of the render area (the default)
>
> readonly attribute sequence<float> rightBounds; viewport for the right eye;
> Metadata MFFloat rightBounds; at the browser's discretion but usually set to
> the right half of the render area (the default)
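>
> For illustration, a minimal, untested sketch of how the bounds are handed to
> WebVR, with the default half-splits written out explicitly; 'x3dCanvas' is
> only a placeholder id for whatever canvas the X3D browser renders into, and
> vrDisplay is assumed to come from navigator.getVRDisplays():
>
>   // Present the X3D render canvas on the HMD with explicit viewport bounds
>   // [x, y, width, height] in normalized 0..1 units of the render area.
>   // Note: requestPresent() must be triggered by a user gesture.
>   var canvas = document.querySelector('#x3dCanvas');
>   vrDisplay.requestPresent([{
>     source: canvas,
>     leftBounds:  [0.0, 0.0, 0.5, 1.0],  // left half (the default)
>     rightBounds: [0.5, 0.0, 0.5, 1.0]   // right half (the default)
>   }]).then(function () {
>     // presenting; drive rendering with vrDisplay.requestAnimationFrame
>   });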
>
> Third, VRDisplayCapabilities:
> https://w3c.github.io/webvr/#interface-vrdisplaycapabilities
>
>   readonly attribute boolean hasPosition; position tracking; MetadataBoolean
>   readonly attribute boolean hasOrientation; orientation tracking;
> MetadataBoolean
>   readonly attribute boolean hasExternalDisplay; false for Cardboard;
> MetadataBoolean
>   readonly attribute boolean canPresent; probably not useful to the scene
>   readonly attribute unsigned long maxLayers; currently restricted to one
>
> Summary: additional Metadata for VRDisplay (a sketch of populating these
> follows the list below)
>
> VRSourceLeftBounds
> VRSourceRightBounds
> hasPosition
> hasOrientation
> hasExternalDisplay
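>
> A quick, untested sketch of how a browser could expose these as children of a
> VRDisplay MetadataSet, x3dom-style via the DOM; all names here are
> placeholders, not proposed spec names:
>
>   // Copy VRDisplayCapabilities booleans into MetadataBoolean children of a
>   // MetadataSet element that already exists in the scene.
>   function capabilitiesToMetadata(vrDisplay, metadataSetElement) {
>     var caps = vrDisplay.capabilities;
>     ['hasPosition', 'hasOrientation', 'hasExternalDisplay'].forEach(
>       function (name) {
>         var meta = document.createElement('MetadataBoolean');
>         meta.setAttribute('name', name);
>         meta.setAttribute('value', caps[name] ? 'true' : 'false');
>         metadataSetElement.appendChild(meta);
>       });
>   }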
>
> Next: VRFrameData and VRPose
>
> Andreas
> On Sat, Jan 14, 2017 at 2:07 PM, Andreas Plesch <andreasplesch at gmail.com>
> wrote:
>
>> Sounds like a plan as it will be critical to interface well with the
>> WebVR API as an important emerging standard for consumer VR hardware.
>>
>> First, a link to the WebVR standard:
>>
>> https://w3c.github.io/webvr/
>> latest: https://rawgit.com/w3c/webvr/master/index.html (development)
>>
>> Going through every parameter in the WebVR standard will be a lengthy
>> process, so we had better get started.
>>
>> Note that right in the introduction the focus is on interfaces to VR
>> hardware.
>>
>> X3D, though device independent, makes a fair number of assumptions about
>> the hardware capabilities used to render and interact with the scene graph.
>> For example, pointing devices, keyboards, and OpenGL/Inventor-based
>> rendering are implied to be available.
>>
>> > Regarding configuration information, here is another way of considering
>> our many existing capabilities together with new opportunities.
>>
>> >
>>
>> > a. Perform a diff of functionality between WebVR API and X3D scene
>> graph.
>>
>> > -   What is in WebVR needed for HMDs that isn't supported in X3D?
>> > -  probably also helpful to list everything found in common already.
>>
>> First: https://w3c.github.io/webvr/#interface-vrdisplay
>>
>> General points: X3D does not have a concept of a display, i.e. the exact
>> method of displaying is left to the X3D browser. The scene has no way to
>> infer display parameters other than the browser options a browser may
>> provide. In addition to display parameters, VRDisplay also includes
>> navigation parameters, the pose. X3D has a concept of navigation, so there
>> are intersections.
>>
>> readonly attribute boolean isConnected: could be used by a browser
>> implementation to indicate to the user that an HMD is available. If X3D has
>> guidelines on browser UI, it should include a list of available display
>> devices. Probably best left to the discretion of the browser.
>>
>> readonly attribute boolean isPresenting: the browser should use this
>> attribute to indicate that a device is currently in use. On a web page this
>> could be a message on the monitor, replacing the scene rendering. It should
>> also be available to the scene, to allow dynamic adjustment of scene
>> content/viewpoint use when an HMD is in use. MetadataBoolean
>> vrDisplayIsPresenting.
>>
>> readonly attribute VRDisplayCapabilities capabilities: detail for each
>> property later. Most capabilities are potentially useful for a scene, i.e.
>> should be in Metadata. If a capability is not available, the browser should
>> not attempt to substitute for the missing capability. Should be shown in a
>> browser UI.
>>
>> readonly attribute VRStageParameters? stageParameters: detail for each
>> property later. Concerns both viewing and interaction. The browser needs the
>> sittingToStandingTransform; the scene could use the stage size to add a
>> visual stage representation, so it belongs in Metadata.
>>
>> VREyeParameters getEyeParameters(VREye whichEye): renderWidth/Height for
>> browser rendering and browser UI; fov and offset are deprecated. Resolution
>> is useful for the scene as Metadata, and more generally for any kind of
>> display.
>>
>> boolean getFrameData(VRFrameData frameData): VRFrameData holds the
>> projection/view matrices and the pose. The viewpoint discussion applies.
>> The browser uses these data to navigate by updating the viewpoint on top of
>> the other, navigation-mode-dependent navigation. Also useful for the scene
>> but expensive, since Metadata would need to be updated every frame. Detail
>> later, but X3D needs to use the provided data.
>>
>> [NewObject] VRPose getPose(): deprecated in favour of FrameData
>>
>> void resetPose(): designate a key press to call resetPose() while the HMD
>> isPresenting, similar to the key presses that cycle viewpoints? An input
>> field MetadataBoolean vrDisplayResetPose to trigger the reset?
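>>
>> For example, something along these lines (untested; the key choice and the
>> vrDisplay variable are just assumptions about the surrounding code):
>>
>>   window.addEventListener('keydown', function (event) {
>>     // reset the HMD pose on 'r' while presenting, analogous to the
>>     // viewpoint-cycling key bindings
>>     if (event.key === 'r' && vrDisplay && vrDisplay.isPresenting) {
>>       vrDisplay.resetPose();  // current pose becomes the new origin
>>     }
>>   });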
>>
>> attribute double depthNear; used to determine the provided projection
>> matrix; should be set by the browser to the viewpoint's near clipping
>> distance, so no new field is required
>>
>> attribute double depthFar; same
>>
>> unsigned long requestAnimationFrame(FrameRequestCallback callback);
>> browser implementation detail
>>
>> void cancelAnimationFrame(unsigned long handle); browser implementation
>> detail
>>
>> Promise<void> requestPresent(sequence<VRLayerInit> layers); a layer has the
>> source to be displayed; currently only one layer is allowed; no new nodes.
>>
>> Promise<void> exitPresent(); used by the browser to stop displaying on the
>> HMD; the scene can use isPresenting above to react.
>>
>> sequence<VRLayer> getLayers();
>>
>> void submitFrame(); browser uses this to indicate that a frame is ready
>> to be displayed in the HMD. WebVR will then display it.
>>
>>
>> Summary: a VRDisplay node could have these Metadata:
>>
>> isPresenting
>>
>> capabilities.hasFeatureX
>>
>> stageParametersSizes
>>
>> left/rightRenderWidth/Height
>>
>> resetPose
>>
>>
>> Next VRLayer and VRDisplay capabilities.
>>
>> -Andreas
>>
>> >
>> > b. Characterize and define those parameters, divide and conquer:
>> > - what information is specific to hardware devices for viewing,
>> > - what information is specific to hardware devices for interaction,
>> > - what information is specific to user anatomy: spacing between eyes,
>> etc.
>> > - what information might be considered scene specific, i.e. varying by
>> scene usage?
>> > - what information might be considered a user preference?
>> >
>> > c. Preparing for initial experimentation and evaluation, what are use
>> cases for common use?
>> > - the MAR Reference Model (update briefing in Seoul next week) has done
>> a lot on that.
>> >
>> > d. Good assumption: X3D viewing navigation and interaction models are
>> device neutral already.  Rephrase: authors can create scenes for users that
>> can be used immediately and effectively over a wide range of devices
>> already, no further customization needed.
>> >
>> > e. Looking over all the WebVR/HMD diffs with existing X3D:
>> > -  likely this is mostly information that can be expressed as
>> parameters.
>> > - just like on this discussion thread, people will want to try
>> different approaches with nodes/fields/scripts
>> > - the information set of parameters could be captured in a Prototype,
>> perhaps embedding a Script
>> > - the information set of parameters could be captured in a MetadataSet
>> collection
>> > - this enables different players to implement in their own way using
>> shared X3D content
>> >
>> > Hope this helps explain a potential path for further cooperative
>> exploration.  There are lots of alternatives.
>> >
>> > I will be requesting permission to share MAR Reference Model more
>> widely in our consortium/community, it will likely complement WebVR nicely.
>> >
>> > We can discuss for a bit on today's X3D working group teleconference,
>> if you or anyone wants.  Probably should be a monthly focus topic, dialog
>> often clarifies email ideas nicely.
>> >
>> > v/r Don
>> >
>> >
>> > On 1/10/2017 7:42 PM, Andreas Plesch wrote:
>>
>> >>
>>
>> >> On Jan 8, 2017 5:45 PM, "Don Brutzman" <brutzman at nps.edu> wrote:
>>
>> >>>
>>
>> >>>
>> >>> Another option: can we define HMD parameters separately from existing
>> Viewpoint and NavigationInfo nodes, so that existing scenes (with many many
>> viewpoints already defined) might be viewable without modification.
>> >>
>> >>
>> >> Ok, while so far the node/field proposals were made from a practical
>> standpoint taking into account existing or feasible implementations, let's
>> turn it around and see what should ideally be available from a scene
>> authoring perspective.
>> >>
>> >> Let's imagine there is a browser and it has built-in options to
>> display an existing scene in a selected HMD taking advantage of stereo
>> immersion and head tracking/body tracking.
>> >>
>> >> Let's also assume the scene is designed for both desktop and HMD
>> experience. This is probably harder than it sounds but scenes designed for
>> WALK mode with a single viewpoint or careful transitions between
>> viewpoints, on a human scale, should be a good start.
>> >>
>> >> The browser can figure out optimal rendering resolution, and use the
>> HMD provided projection and view matrixes for navigation relative to the
>> bound viewpoint.
>> >>
>> >> All this does not seem to need scene specific information (?). This is
>> why we had previously discussed a browser configuration file or Engine node
>> above/next to the scene level.
>> >>
>> >> Now, it turns out that currently the easiest way to add such
>> functionality to the x3dom browser is to preprocess the scene (via
>> 'injection') to use additional nodes (the RenderedTexture node). When
>> getting out of HMD mode, these additional nodes can be removed again.
>> >>
>> >> For other browsers (such as cobweb) it may be advantageous to
>> preprocess the scene in a different way, requiring other, specialized nodes
>> (Viewfrustum).
>> >>
>> >> This is just one way to add functionality to a x3d browser and
>> currently the most available one. It is also modular as  nodes can be used
>> for other purposes.
>> >>
>>
>> >>> Perhaps an HMD node, rather than HMD mode?  That is, a separate node
>> that complements NavigationInfo?
>>
>> >>
>> >>
>>
>> >>> The VRML and X3D specifications have done an excellent job in being
>> device neutral (as well as quite supportive of most devices) when defining
>> user viewing and interaction already.   It will be good to keep building on
>> that device independence, which incidentally lets you create large shared
>> environments where users have multiple diverse devices interacting.
>>
>> >>>
>> >>> WebVR might provide an excellent first pass for the parameters that
>> might be needed for HMDs.
>> >>
>> >>
>> >> WebVR inspired parameters supplied by/provided to a HMD node could be:
>> >>
>> >> isPresenting: SFBool, currently in use, scene may want to adapt to
>> immersion experience in some way
>> >>
>> >> Stage: some HMDs have a certain, limited area in which a user can be
>> tracked and be expected to stay inside. This could be useful information
>> for a scene. Includes size, location.
>> >>
>> >> Enabled: a scene may want to strictly disable display on a HMD
>> >>
>> >> DeviceID: a scene may want to know a name for the device to reflect
>> >>
>> >> Capabilities: resolution: a scene may want to adapt (hi res textures)
>> >>
>> >> resetPose: make current location/orientation the reference.
>> >>
>>
>> >>>
>>
>> >>> Presumably most of fields are metadata for devices to use, which is
>> in keeping with scenes that can be utilized by many devices.
>> >>
>> >>
>> >> Hmm, as evidenced above, I tend to think about the reverse, eg. fields
>> which can be utilized by scenes.  So I am probably missing something.
>> >>
>>
>> >>> So if we can agree on parameters then a simple MetadataSet node (or
>> prototype) for HMD might be sufficient to start performing some
>> experimentation in developmental versions of players.
>>
>> >>>
>> >>> Wondering if such metadata is oriented towards several categories at
>> once:
>> >>
>> >>
>> >> (a) device specific,
>> >> all above
>> >>
>> >> (b) scene specific,
>> >>
>> >> I cannot think of anything, eg. imagine how a scene could modify the
>> behaviour of a HMD. At this point HMDs only have a single mode. In the
>> future, a HMD may switch between VR, AR, MR and a scene request an expected
>> mode ?
>> >>
>> >> If we start to also consider controllers, this might be the place
>> where a scene configures usage of controller buttons and axes.
>> >>
>> >> and (c) user specific (which can also be saved as preferences).
>> >>
>> >> lastPose: pose before exiting HMD mode
>> >> resumePose: return to last pose upon entering
>> >> ...
>> >>
>> >> I am still unsure if I grasped the idea behind using a MetadataSet
>> node as a conduit between scene and HMD.
>> >>
>> >> Andreas
>> >>
>> >> This might help re-use and portability.
>>
>> >>>
>>
>> >>>
>> >>>
>> >>> On 1/8/2017 12:40 PM, Andreas Plesch wrote:
>>
>> >>>>
>>
>> >>>>
>> >>>> Yves,
>> >>>>
>> >>>> thanks for your input. While seeing a Viewfrustum node as a
>> generalization is convincing, there is a practical difference in how it
>> would be used. Rather than specifying an initial viewpoint from which user
>> navigation starts/is relative to, the matrices would be used directly for
>> navigation. Not sure what the implications of this difference are.
>> >>>>
>> >>>> Another option is to define another navigation mode, 'HMD', which
>> would take HMD output (the matrices) and apply it to the current viewpoint.
>> If the viewpoint is a stereo viewpoint, left and right matrices would be
>> supplied. Additional navigation from controllers would still need to be
>> possible. What device 'HMD' refers to could be defined by a new
>> NavigationInfo field 'hmd' MFString = "'AUTO' 'VIVE' ..." in case of
>> multiple connected devices.
>> >>>>
>> >>>> Andreas
>> >>>>
>> >>>> On Jan 8, 2017 11:38 AM, "Yves Piguet" <yves.piguet at gmail.com> wrote:
>>
>> >>>>>
>>
>> >>>>>
>> >>>>>
>> >>>>> I'd also prefer a new node. Another reason is the existence of
>> OrthoViewpoint: a viewpoint where you specify an arbitrary projection
>> matrix is a generalization of both Viewpoint and OrthoViewpoint, so it
>> should logically be a sibling which also implements the X3DViewpointNode
>> abstract node. Now an extension for stereoscopic views, where you still
>> have a single viewer position which defines viewer-related sensor events,
>> LOD, Billboard etc. and a single node traversal for rendering with two
>> projection matrices, seems fine.
>> >>>>>
>> >>>>> Yves
>> >>>>>
>> >>>>>
>> >>>>> > On 8 Jan 2017, at 15:29, Andreas Plesch <andreasplesch at gmail.com> wrote:
>> >>>>> >
>> >>>>> > > Date: Sat, 7 Jan 2017 21:04:42 +0100
>> >>>>> > > From: Holger Seelig <holger.seelig at yahoo.de>
>> >>>>> > > To: x3d-public at web3d.org
>> >>>>> > > Subject: Re: [x3d-public] X3D VR
>> >>>>> > >
>> >>>>> > >
>> >>>>> > > I think, instead of defining a new field on the existing
>> >>>>> > > viewpoints, it should be considered that for that purpose a new
>> >>>>> > > viewpoint node could be appropriate. A node called something
>> >>>>> > > like 'MatrixViewpoint'. This node could have two fields, matrix
>> >>>>> > > and projectionMatrix. The matrix field would store the
>> >>>>> > > transformation matrix with orientation and position values. The
>> >>>>> > > projectionMatrix field would store the projection matrix. This
>> >>>>> > > viewpoint would be an all-purpose viewpoint. The main idea is
>> >>>>> > > that if one is able to define a 'viewMatrix' field as Andreas
>> >>>>> > > proposes, one is able to handle a node like MatrixViewpoint with
>> >>>>> > > its matrix fields. Defining a new node has the advantage of
>> >>>>> > > leaving the existing viewpoints untouched and of handling this
>> >>>>> > > behavior in a new node.
>> >>>>> > >
>> >>>>> > > Holger
>> >>>>> >
>> >>>>> > It turns out that x3dom and InstantReality have such a node,
>> called Viewfrustum:
>> >>>>> >
>> >>>>> > https://doc.x3dom.org/author/Navigation/Viewfrustum.html
>> >>>>> >
>> >>>>> > So this is a pretty strong argument for a new node instead of new
>> fields.
>> >>>>> >
>> >>>>> > New fields have the advantage that they should make it easier to
>> adapt existing scenes. I think default values could be such that they
>> guarantee full backward compatibility.
>> >>>>> >
>> >>>>> > I will try to use Viewfrustum with the x3dom classroom example to
>> gain some experience.
>> >>>>> >
>> >>>>> > Andreas
>> >>>>> >
>> >>>>> > >
>> >>>>> > > On 07.01.2017 at 18:07, Andreas Plesch wrote:
>> >>>>> > > > Hi,
>> >>>>> > > >
>> >>>>> > > > back from the holiday season, here are proposals for new
>> fields for
>> >>>>> > > > Viewpoint.
>> >>>>> > > >
>> >>>>> > > > I do not have an x3dom implementation but the idea is to add
>> two
>> >>>>> > > > Matrix4d fields:
>> >>>>> > > >
>> >>>>> > > > projectionMatrix :
>> >>>>> > > >
>> >>>>> > > > default: NULL
>> >>>>> > > >
>> >>>>> > > > purpose: allow control of the view projection from 3d to 2d
>> by such a
>> >>>>> > > > matrix from sources such as HMD drivers. The projection matrix
>> controls fov
>> >>>>> > > >
>> >>>>> > > > The projectionMatrix should be used as is by the browser if
>> not NULL. If
>> >>>>> > > > NULL, a projection matrix shall be constructed from viewpoint
>> fields.
>> >>>>> > > >
>> >>>>> > > >
>> >>>>> > > > viewMatrix:
>> >>>>> > > >
>> >>>>> > > > default: unit matrix
>> >>>>> > > >
>> >>>>> > > > purpose: allow control of the viewer orientation and location
>> by such a
>> >>>>> > > > matrix from sources such as HMD drivers.
>> >>>>> > > >
>> >>>>> > > > The viewMatrix V shall be combined with the matrix M derived
>> from the
>> >>>>> > > > location/ orientation fields by M * V for rendering use.
>> >>>>> > > >
>> >>>>> > > > This means that if V is used in a scene exclusively the
>> location fields
>> >>>>> > > > need to be reset to 0 0 0 from their defaults by the scene.
>> >>>>> > > >
>> >>>>> > > > Typical use would be as sinks of routes from (new)
>> ProjectionSensor or
>> >>>>> > > > VRDevice node fields.
>> >>>>> > > >
>> >>>>> > > > A VRDevice delivers both matrices for each eye. A scene can
>> then set up
>> >>>>> > > > a viewpoint for each eye, and use Layers to show them side by
>> side.
>> >>>>> > > >
>> >>>>> > > > In terms of efficiency, this would seem to require repeated
>> traversals
>> >>>>> > > > of the scene unless both Layers can be recognized to show the
>> same scene
>> >>>>> > > > content.
>> >>>>> > > >
>> >>>>> > > > A more convenient and integrated alternative would be to
>> introduce a
>> >>>>> > > > stereo SFBool field and have left and right variants of the
>> matrix
>> >>>>> > > > fields ( and perhaps also left and right variants of the
>> location,
>> >>>>> > > > orientation, fov, zfar, znear fields).
>> >>>>> > > >
>> >>>>> > > > stereo = false would ignore all left and right fields.
>> >>>>> > > > stereo = true would use the left/right fields and render them
>> side by side.
>> >>>>> > > >
>> >>>>> > > > Since in this case there is a guarantee that views for each
>> eye use the
>> >>>>> > > > same scene content, only one traversal should be needed to
>> render both eyes.
>> >>>>> > > >
>> >>>>> > > > The idea would be that switching from mono to stereo and back only
>> requires
>> >>>>> > > > toggling the stereo field. Navigation may have a similar
>> switch to and
>> >>>>> > > > from mouse/keyboard modes ?
>> >>>>> > > >
>> >>>>> > > > Implementation in x3dom of viewMatrix and projectionMatrix
>> may be rather
>> >>>>> > > > straightforward. I do not see large obstacles (famous last
>> words ...).
>> >>>>> > > > Implementation in cobweb of viewMatrix and projectionMatrix
>> may be also
>> >>>>> > > > straightforward but I am less sure.
>> >>>>> > > >
>> >>>>> > > > Without layers in x3dom, one could put two scenes side by
>> side on the
>> >>>>> > > > web page, use external js to feed the matrix fields, and then
>> also use
>> >>>>> > > > external js to grab the frames from both scene canvases,
>> combine them
>> >>>>> > > > into a single frame on another canvas, and use that finally
>> as a source
>> >>>>> > > > for the HMD display. (This sounds slow).
>> >>>>> > > > In cobweb, it should not be hard to set up a scene with a
>> layer for each
>> >>>>> > > > eye.
>> >>>>> > > >
>> >>>>> > > > Implementation of stereo, left and rightView/ProjectionMatrix
>> in x3dom
>> >>>>> > > > is harder since it needs changes to the rendering code which
>> is a bit
>> >>>>> > > > unstructured. The first approach would be for stereo=true to
>> go through
>> >>>>> > > > the rendering twice with the rendering surfaces appropriately
>> set somehow.
>> >>>>> > > >
>> >>>>> > > > -Andreas
>> >>>>> > > >
>> >>>>> > > > On Dec 21, 2016 6:00 AM, "Roy Walmsley"
>> >>>>> > > > <roy.walmsley at ntlworld.com> wrote:
>> >>>>> > > >
>> >>>>> > > >     Hi Andreas,
>> >>>>> > > >
>> >>>>> > > >     That's great to see what you have been doing here. Would
>> >>>>> > > >     you like to take your node suggestions one step further
>> >>>>> > > >     and propose the fields that might be specified for each
>> >>>>> > > >     node?
>> >>>>> > > >
>> >>>>> > > >     What we could do then, in the new year, is to host an open
>> >>>>> > > >     discussion on WebVR and X3D. Contributions from any other
>> >>>>> > > >     readers are welcome, and could lead to a lively discussion
>> >>>>> > > >     on this topic.
>> >>>>> > > >
>> >>>>> > > >     Thank you for the suggestions,
>> >>>>> > > >
>> >>>>> > > >     All the best,
>> >>>>> > > >
>> >>>>> > > >     Roy
>> >>>>> > > >
>> >>>>> > > >     *From:* x3d-public [mailto:x3d-public-bounces at web3d.org]
>> >>>>> > > >     *On Behalf Of* Andreas Plesch
>> >>>>> > > >     *Sent:* 21 December 2016 05:51
>> >>>>> > > >     *To:* X3D Graphics public mailing list <x3d-public at web3d.org>
>> >>>>> > > >     *Subject:* [x3d-public] X3D VR
>> >>>>> > > >
>> >>>>> > > >     Hi
>> >>>>> > > >
>> >>>>> > > >     Working with x3dom and WebVR I am exploring what
>> additions to X3D
>> >>>>> > > >     would be necessary or useful to effectively use current
>> VR hardware
>> >>>>> > > >     such as the Rift, Vive, Gear, or Cardboard.
>> >>>>> > > >
>> >>>>> > > >     The proposed RenderedTexture node is currently used in
>> x3dom with
>> >>>>> > > >     special stereo modes to generate left and right views
>> from a single
>> viewpoint in a single GL context.
>> >>>>> > > >
>> >>>>> > > >     A Layer for each left and right view could also be used
>> with two
>> >>>>> > > >     coordinated but separate viewpoints. This would be
>> Cobweb's pattern.
>> >>>>> > > >
>> >>>>> > > >     Since the HMD and its own runtime know best the internal
>> optical
>> >>>>> > > >     characteristics of the display, the preferred interface
>> at least in
>> >>>>> > > >     WebVR are view and projection matrixes directly usable by
>> consumers
>> >>>>> > > >     for 3d graphics generation, and not position, orientation
>> and fov.
>> >>>>> > > >     This leads to a new requirement for X3D to accept these
>> raw matrices
>> >>>>> > > >     as input.
>> >>>>> > > >
>> >>>>> > > >     Since x3dom uses RenderedTexture for VR, I added there
>> additional
>> >>>>> > > >     special stereo modes which directly receive these
>> matrixes from the
>> >>>>> > > >     HMD to be used for rendering at each frame. This in effect
>> >>>>> > > >     accomplishes a first, head and body based, level of
>> navigation. In
>> >>>>> > > >     my first tests (on github) it works well and should be
>> robust across
>> >>>>> > > >     devices. This approach does require some standard API to
>> the HMD
>> >>>>> > > >     such as WebVR.
>> >>>>> > > >
>> >>>>> > > >     Another possibility is to add view and projection matrix
>> fields
>> >>>>> > > >     input fields to viewpoints which can be continually
>> updated. One
>> >>>>> > > >     could convolve the view matrix with position/orientation
>> fields, or
>> optionally completely ignore them.
>> >>>>> > > >
>> >>>>> > > >     One then could use external SAI to keep updating or
>> perhaps
>> >>>>> > > >     introduce an environment ProjectionSensor. It would relay
>> these
>> >>>>> > > >     matrix events from an HMD runtime to then be routed to the
>> >>>>> > > >     viewpoints.
>> >>>>> > > >
>> >>>>> > > >     A second level of navigation is accomplished with handheld
>> >>>>> > > >     controllers. Until some standard gestures evolve, it will
>> be
>> >>>>> > > >     necessary to expect custom per scene navigation. X3d
>> should have a
>> >>>>> > > >     way to sense buttons and axes velocities of such
>> controllers so
>> >>>>> > > >     these are then available to manipulate the view in some
>> way.
>> >>>>> > > >     InstantReality has a general IOSensor to that effect. On
>> the web the
>> GamePad API is a standard which would need to be used.
>> >>>>> > > >
>> >>>>> > > >     Summary: RenderedTexture with stereo modes, matrix fields
>> for
>> >>>>> > > >     viewpoints, a ProjectionSensor, and a ControllerSensor
>> would all be
>> candidates for x3d VR support.
>> >>>>> > > >
>> >>>>> > > >     Thanks for reading, any thoughts welcome,
>> >>>>> > > >
>> >>>>> > > >     Andreas
>> >>>>> > > >
>> >>>>> > > >
>> >>>>> > > >
>> >>>>> > > >
>> >>>>> > > > _______________________________________________
>> >>>>> > > > x3d-public mailing list
>> >>>>> > > > x3d-public at web3d.org
>> >>>>> > > > http://web3d.org/mailman/listinfo/x3d-public_web3d.org
>> >>>>> > > >
>> >>>>> > >
>> >>>>> > >
>> >>>>> > > --
>> >>>>> > > Holger Seelig
>> >>>>> > > Mediengestalter Digital – Digital Media Designer
>> >>>>> > >
>> >>>>> > > Scheffelstraße 31a
>> >>>>> > > 04277 Leipzig
>> >>>>> > > Germany
>> >>>>> > >
>> >>>>> > > Cellular: +49 1577 147 26 11
>> >>>>> > > E-Mail:   holger.seelig at create3000.de
>> >>>>> > > Web:      http://titania.create3000.de
>> >>>>> > >
>> >>>>> > > Future to the fantasy ? ?
>> >>>>> > >
>> >>>>> > >
>> >>>>> > >
>> >>>>> >
>> >>>>> > _______________________________________________
>> >>>>> > x3d-public mailing list
>> >>>>> > x3d-public at web3d.org
>> >>>>> > http://web3d.org/mailman/listinfo/x3d-public_web3d.org
>> >>>>>
>> >>>>
>> >>>>
>> >>>>
>> >>>> _______________________________________________
>> >>>> x3d-public mailing list
>> >>>> x3d-public at web3d.org
>> >>>> http://web3d.org/mailman/listinfo/x3d-public_web3d.org
>> >>>>
>> >>>
>> >>>
>> >>> all the best, Don
>> >>> --
>> >>> Don Brutzman  Naval Postgraduate School, Code USW/Br
>> brutzman at nps.edu
>> >>> Watkins 270,  MOVES Institute, Monterey CA 93943-5000 USA
>> +1.831.656.2149
>> >>> X3D graphics, virtual worlds, navy robotics http://faculty.nps.edu/brutzman
>> >>
>> >>
>> >
>> >
>> > all the best, Don
>> > --
>> > Don Brutzman  Naval Postgraduate School, Code USW/Br
>> brutzman at nps.edu
>> > Watkins 270,  MOVES Institute, Monterey CA 93943-5000 USA
>> +1.831.656.2149
>> > X3D graphics, virtual worlds, navy robotics
>> http://faculty.nps.edu/brutzman
>>
>>
>> --
>> Andreas Plesch
>> 39 Barbara Rd.
>> Waltham, MA 02453
>>
>
>
>
> --
> Andreas Plesch
> 39 Barbara Rd.
> Waltham, MA 02453
>

