[x3d-public] Layering/Layout

doug sanden highaspirations at hotmail.com
Sun Mar 13 06:55:53 PDT 2016


Thanks Joe - good analysis.

> OK, that would put the Hud in Layer 0. Then would you need to bind the
> Hud instance to an instance of a ProximitySensor or Layer, or
> execution context?
> 
> Somehow this should be easy to do even if different Hud interfaces are
> in different Layers to be displayed at different situations.

What I learned about layers: each layer can have its own bound viewpoint and 
navigationInfo, and can receive navigation events from the same device 
(e.g. HMD and button wand), interpreting them the same way or differently depending on its navigationInfo.
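A minimal sketch of per-layer bindables, using the Layering component's LayerSet/Layer nodes from the X3D spec (the DEF names and field values here are illustrative, not from either message):

```xml
<X3D version='3.3' profile='Full'>
  <Scene>
    <LayerSet activeLayer='1' order='1 2'>
      <Layer DEF='MainLayer'>
        <!-- this layer binds its own viewpoint and navigation mode -->
        <Viewpoint DEF='MainView' position='0 1.6 10'/>
        <NavigationInfo type='"WALK" "ANY"'/>
        <!-- ... main scenery ... -->
      </Layer>
      <Layer DEF='HudLayer'>
        <!-- the HUD layer can ignore navigation so its content stays put -->
        <Viewpoint DEF='HudView' position='0 0 5'/>
        <NavigationInfo type='"NONE"'/>
        <!-- ... menu/dashboard geometry ... -->
      </Layer>
    </LayerSet>
  </Scene>
</X3D>
```

Here activeLayer selects which layer receives the navigation events, and order controls rendering order; both devices could feed the same layer while each layer's NavigationInfo decides how the events are interpreted.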

I don't know what Cardboard users do to signal button presses (Q: what/how do they do that?).
 But if they had a separate button wand, they could press a button 
to bring up the HUD and freeze it -as you say, relative to the viewer pose 
at that moment- then use the HMD's IMU (inertial measurement unit) to point the 
viewpoint at the menu option they want before pressing another wand button.
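The ProximitySensor-to-Transform routing that keeps a HUD in the field of view (the pattern Joe describes below) can be sketched as follows; the DEF names are illustrative, while position_changed and orientation_changed are the standard ProximitySensor output fields:

```xml
<Scene>
  <!-- a sensor region large enough that the viewer is always in range -->
  <ProximitySensor DEF='PS' size='1000 1000 1000'/>
  <!-- the HUD parent Transform follows the viewer pose -->
  <Transform DEF='HudXform'>
    <!-- a second transform displaces the menu into the field of view -->
    <Transform translation='0 0 -5'>
      <Shape>
        <Text string='"menu"'/>
      </Shape>
    </Transform>
  </Transform>
  <ROUTE fromNode='PS' fromField='position_changed'
         toNode='HudXform' toField='translation'/>
  <ROUTE fromNode='PS' fromField='orientation_changed'
         toNode='HudXform' toField='rotation'/>
</Scene>
```

To freeze the HUD at the pose held when a wand button is pressed, the browser (or a script) would stop applying these routes until the menu is dismissed.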

-Doug


> 
> 
> ________________________________________
> From: Joe D Williams <joedwil at earthlink.net>
> Sent: March 12, 2016 2:21 PM
> To: doug sanden; x3d-public at web3d.org
> Subject: Re: [x3d-public] Layering/Layout
> 
> > How to accommodate that if Layout/Hud is separated from viewpoint
> > (in different Layers/executionContexts)?
> 
> Viewpoint should not matter. What matters is the Hud position and
> rotation relative to the viewer position and orientation. Information
> for the viewer is given by a ProximitySensor. Usually the Hud is part
> of a transform that is animated by routing outputs of a
> ProximitySensor to the Hud parent Transform. ProximitySensor outputs
> represent the rotation and position of the viewer relative to the
> ProximitySensor center and range. The Hud is then displaced from the
> viewer position by another transform to maintain the position of
> interfaces in the field of view as the viewer navigates in various
> ways through ProximitySensor space, including changing the Viewpoint.
> 
> So, it seems like this is not dependent upon the Viewpoint selected
> but instead the position of the viewer in viewer space, which is the
> same as the controlling ProximitySensor's coordinate space. Any and all
> active ProximitySensors anywhere send events when the viewer is in
> range. I don't know, it could get complicated but I would say to put
> Hud and ProximitySensor in the same Layer or execution context so that
> no events need to be passed, thinking that the viewer is in the same
> space as the controlling ProximitySensor, or any other X3D
> X3DEnvironmentalSensorNode.
> 
> Nancy, in the Humanoid Animation examples at web3d.org, shows a Hud.
> 
> > A fuzzy idea is to keep Layout separate from the scene, for HUD
> > (heads-up display, i.e. menu/dashboard):
> > <X3D>
> > <Hud url='"xxx"'/>
> 
> OK, that would put the Hud in Layer 0. Then would you need to bind the
> Hud instance to an instance of a ProximitySensor or Layer, or
> execution context?
> 
> Somehow this should be easy to do even if different Hud interfaces are
> in different Layers to be displayed at different situations.
> 
> Thanks,
> Joe
> 
> 
> ----- Original Message -----
> From: "doug sanden" <highaspirations at hotmail.com>
> To: <x3d-public at web3d.org>
> Sent: Friday, March 11, 2016 6:58 AM
> Subject: [x3d-public] Layering/Layout
> 
> 
> >
> > I thought Layering/Layout could have been up one level.
> > -Doug
> >
> > more..
> > I implement Layering/Layout. I thought it was straining too hard to
> > be integrated into regular scenery. Except still needing
> > IMPORT/EXPORT. (for freewrl I found it didn't need a separate
> > executionContext/Inline for each layer, so Inline+IMPORT/EXPORT are
> > optional)
> >
> > A fuzzy idea is to keep Layout separate from the scene, for HUD
> > (heads-up display, i.e. menu/dashboard):
> > <X3D>
> > <Hud url='"xxx"'/>
> > <Scene>
> > </Scene>
> > </X3D>
> > This would free those Layout/Hud nodes from needing to be generally
> > scenegraph usable/friendly/integrated.
> >
> > Or for Layering:
> > <X3D>
> > <Scene DEF='Scene1'>
> > </Scene>
> > <Scene DEF='Scene2'>
> > </Scene>
> > </X3D>
> > with layering in order of listing, and both scene and hud is-a
> > layer: Scene:Layer, Hud:Layer
> >
> > But how would/could/should routes work - how would HUD
> > choices/settings be applied to the scene? Via IMPORT/EXPORT as the
> > Layering component specifies, or via an X3D-level ROUTE which could
> > route between scenes/execution contexts/layers like/instead of
> > IMPORT/EXPORT:
> > <X3D>
> > <Hud DEF='hud1' url='"xxx"'/>
> > <Scene DEF='scene1'>
> > </Scene>
> > <ROUTE fromLayer='Hud' fromNode='menu1' fromField='menuChoice1'
> > toLayer='scene1' toNode='....'/>
> > </X3D>
> >
> > more..
> > One reason to want the layout/hud to still be subject to 3D
> > transformation at some level: for HMD (head mounted display) use,
> > menus need to be pinned in 3D space (or pan in 2D) as the user moves
> > their head so the center + of the view lands over a menu item. How to
> > accommodate that if Layout/Hud is separated from the viewpoint (in
> > different Layers/executionContexts)?
> >
> > more..
> > reminds me of the Geo nodes which -like KML- only make sense at the
> > root level.
> > _______________________________________________
> > x3d-public mailing list
> > x3d-public at web3d.org
> > http://web3d.org/mailman/listinfo/x3d-public_web3d.org
> 
> 


