[x3d-public] V4.0 Open discussion/workshop on X3D HTML integration
andreasplesch at gmail.com
Fri Jun 3 11:40:11 PDT 2016
Hi Roy, group,
I would like to join the 2h open discussion on Wednesday but will be in an
all-day meeting, actually on the West Coast.
These are excellent questions, and trying to answer them will help.
Here are some thoughts.
The discussion on introducing an id field seemed to point towards the need
for fuller integration, in the sense that it is difficult to isolate
features. It may be necessary to define an X3D DOM similar to the SVG DOM,
with the corresponding interfaces. SVG is very successful on the web, but it
took a long time to arrive there.
x3dom has a dual-graph approach: there is the X3D graph and, in parallel,
the page DOM graph; the two are kept in sync but are both fully populated.
Johannes would know better how to explain the concept.
It looks like FHG decided that x3dom is now considered community (only?)
supported. This probably means it will be out of sync as newer web browsers
arrive, or webgl is updated.
I explored A-Frame a bit more. It will be popular for VR. It is still in
flux and evolves rapidly. The developers (Mozilla) focus on its basic
architecture (a non-hierarchical, composable component system) and on the
form of shareable components. So it is quite different, fun for
developers, and for basic scenes easy for consumers. Since most mobile VR
content at this point is basic (mostly video spheres and panos), it is a
good solution for many.
(As a test I also implemented IndexedFaceSet as an A-Frame component, and it
was pretty easy, after learning some Three.js. So it would be possible to
have X3D geometry nodes on top of A-Frame. Protos, events and routes are
another matter, but they may not be impossible either.)
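The core step in such a geometry component is converting X3D's face list into the flat triangle list that a triangle-based engine like Three.js expects. A minimal sketch of that conversion (the function name is my own; this is not the actual component code):

```javascript
// X3D's IndexedFaceSet lists each face as a run of vertex indices
// terminated by -1. Triangle engines want flat index triples, so each
// n-gon is fan-triangulated from its first vertex.
function coordIndexToTriangles(coordIndex) {
  const tris = [];
  let face = [];
  const emit = () => {
    for (let k = 1; k + 1 < face.length; k++) {
      tris.push(face[0], face[k], face[k + 1]);
    }
    face = [];
  };
  for (const i of coordIndex) {
    if (i === -1) emit();       // -1 closes the current face
    else face.push(i);
  }
  emit();                       // tolerate a missing trailing -1
  return tris;
}

// A quad 0 1 2 3 becomes two triangles:
// coordIndexToTriangles([0, 1, 2, 3, -1]) → [0, 1, 2, 0, 2, 3]
```

Fan triangulation only works for convex faces; a robust component would need proper polygon triangulation for concave ones.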
There is still space for x3d as a more permanent, and optionally
sophisticated 3d content format on the web.
Event system: My limited understanding is that on a web page, the browser
emits events when certain things happen. Custom events can also be emitted
by JS code (via DOM functions) for any purpose. (All?) events have a time
stamp and can have data attached. Events can then be listened to. There is
no restriction on listening, e.g. all existing events are available to any
listener. A listener then invokes a handler which does something related to
the event. JS code can consume, cancel, or relay events as needed (via DOM
functions). It is not unusual for many events to be managed on a web page.
Events can be used to guarantee a sequence of processing.
So how does the X3D event system relate? There is a cascade, and
directivity. How long does an event live? One frame? Until it has fully
cascaded through the scene graph?
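To make the contrast concrete, here is an illustrative toy model of one reading of the X3D event model, where a field change propagates along ROUTEs until the cascade is exhausted. This is my own sketch, not x3dom or cobweb internals:

```javascript
// Toy X3D-style event cascade: nodes hold fields and outgoing routes.
function makeNode(name) {
  return { name, fields: {}, routes: [] };
}
function route(from, fromField, to, toField) {
  from.routes.push({ fromField, to, toField });
}
// Setting a field logs the event and forwards it along matching routes;
// the cascade ends when no routes match (a real browser would also need
// per-cascade loop detection and a shared timestamp).
function setField(node, field, value, log = []) {
  node.fields[field] = value;
  log.push(`${node.name}.${field}`);
  for (const r of node.routes) {
    if (r.fromField === field) setField(r.to, r.toField, value, log);
  }
  return log;
}

const sensor = makeNode('TimeSensor');
const interp = makeNode('PositionInterpolator');
const xform = makeNode('Transform');
route(sensor, 'fraction_changed', interp, 'set_fraction');
route(interp, 'set_fraction', xform, 'set_translation');

// One field change, one cascade through the graph:
const cascade = setField(sensor, 'fraction_changed', 0.5);
// cascade → ['TimeSensor.fraction_changed',
//            'PositionInterpolator.set_fraction',
//            'Transform.set_translation']
```

Unlike DOM events, which any listener anywhere can observe, this cascade only reaches fields explicitly connected by ROUTEs.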
Since x3dom and cobweb are currently the only options, from a practical
standpoint a question to ask may be this: what is needed to make x3dom and
cobweb easy to use and interact with on a web page? Typically, the web
page would provide a UI and the connections to databases or other sources of
data, and the X3D scene would be responsible for rendering and interacting
with the 3D content. For VR, the UI would need to be in the scene, but
connections and data sources would still be handled by the web page.
Cobweb in effect allows use of the defined SAI functions. Is it possible to
define a wrapper around these functions to allow a DOM-like API
(createElement, element.setAttribute .. element = null)? It may be, since
they are similar anyway, and it would go a long way. But it still would not
be sufficient to let other JS libraries such as D3.js or React control and
modify a scene, since they would expect X3D nodes to be real DOM elements.
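A hypothetical sketch of such a facade. The underlying method names here (createNode, updateField, addChild) are stand-ins I made up, not the actual SAI or cobweb API:

```javascript
// Thin DOM-like facade over SAI-style calls on some browser object `sai`.
function makeDomFacade(sai) {
  function wrap(node) {
    return {
      _node: node,
      // Map element.setAttribute onto a field update.
      setAttribute(name, value) { sai.updateField(node, name, value); },
      // Map appendChild onto scene-graph insertion.
      appendChild(child) { sai.addChild(node, child._node); return child; },
    };
  }
  return {
    createElement(tagName) { return wrap(sai.createNode(tagName)); },
  };
}
```

As noted above, the wrapped objects still would not be real DOM elements, so libraries like D3.js that walk and patch the actual DOM would not work against them.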
VR: A current issue is control devices. It would probably be useful to go
over the spec. and see where there is an implicit assumption that mouse or
keyboard input is available. VR HMDs have different controls (head position
and orientation (pose), one button), and handheld controllers (gamepads,
special sticks with their own positions/orientations) or the tracked hands
themselves are becoming more popular. In VR, you do want to see and use
your hands in some form.
Perhaps it makes sense to have <RightHand/> and <LeftHand/> nodes
paralleling <Viewpoint/>, with position/orientation fields which can be
routed to Transforms to manipulate objects?
How a browser would feed the <Hand> nodes would be up to the browser.
InstantReality has a generic IOSensor.
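As a hypothetical illustration of the idea (neither <RightHand> nor its fields exist in any X3D spec; this is just what the proposal might look like in markup):

```xml
<Scene>
  <Viewpoint position='0 1.6 0'/>
  <!-- Hypothetical node: the browser feeds pose from whatever device it has -->
  <RightHand DEF='RH'/>
  <Transform DEF='Tool'>
    <Shape> ... </Shape>
  </Transform>
  <ROUTE fromNode='RH' fromField='position_changed'
         toNode='Tool' toField='set_translation'/>
  <ROUTE fromNode='RH' fromField='orientation_changed'
         toNode='Tool' toField='set_rotation'/>
</Scene>
```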
On Wed, Jun 1, 2016 at 4:53 PM, Roy Walmsley <roy.walmsley at ntlworld.com> wrote:
> *X3D HTML integration*
> *X3D V4.0 Open discussion/workshop*
> *June 8th 2016 at 1500-1700 UTC (0800-1000 PDT, 1500-1700 GMT, 1700-1900
> A discussion/workshop on version 4.0 of X3D which aims to consider the
> following questions, and suggest potential solutions.
> · What level of X3D integration into HTML5 do we want?
> o Do we want to be fully integrated like SVG?
> · Do we want/need a DOM spec? If so:
> o Which DOM version should it be based on?
> o Do we want to fully support all DOM/HTML features?
> · Do we want to maximize the backwards compatibility of V4.0 with
> V3.3? Or break away completely?
> o Do we want to retain SAI?
> · What features do we want? For example,
> o How is animation to be handled? The X3D way of TimeSensor and ROUTEs,
> o How is user interaction to be handled? The X3D way of Sensors, or the
> HTML way with event handlers?
> o Do we need any different nodes? One example might be a mesh node?
> o Do we want Scripts and Prototypes in HTML5?
> o How do we want to handle styling?
> · What profile(s) do we need for HTML?
> The discussion/workshop will be held on the Web3D teleconference line. It
> is open to anyone interested in X3D. Please e-mail
> roy.walmsley at ntlworld.com or brutzman at nps.edu for teleconference details.
> If you can’t join in the discussion, but would still like to contribute to
> the debate, your comments would be welcomed on the X3D public mailing list
> at x3d-public at web3d.org.
> Roy Walmsley
> X3D WG Co-chair
39 Barbara Rd.
Waltham, MA 02453