[x3d-public] V4.0 Open discussion/workshop on X3D HTML integration

Don Brutzman brutzman at nps.edu
Wed Jun 8 20:15:44 PDT 2016


Thank you Philipp.  Hope you are well, am happy to hear that your progress continues.  Apologies for delayed read/response, am traveling and had some mailer difficulty.

I certainly agree that strings are inefficient.  Meanwhile our current efforts for X3D 4 design improvements are mostly listing and evaluating potential solutions, letting HTML/DOM be what it is and seeing how to best align.  If multiple solutions can work compatibly, that is certainly an interesting possibility.

I hope that you and your team are at Web3D Conference and SIGGRAPH - are you presenting?  Beyond that, it would be good if we might find a time & place in the schedule for group discussion.


On 6/5/2016 12:07 PM, Philipp Slusallek wrote:
> Hi Don,
>
> Am 05.06.2016 um 20:32 schrieb Don Brutzman:
>> External to an X3D scene graph, meaning via a browser using the Scene
>> Access Interface (SAI), mechanisms are similarly well defined.  Since
>> the DOM is string based, and since any X3D event value can be expressed
>> as a string, it seems like we have a straight connect-the-dots approach
>> awaiting us.  Forgive me for using a four-letter word, but if interested
>> individuals might actually _read_ the HTML5/DOM and X3D specifications,
>> then the answers to most implementation & alignment questions are likely
>> spelled out for us.
>
> Please do not go the route (sic!) of a string-based interface for
> implementing X3D routes. Yes, the DOM has a generic string-based
> interface, which is really important in general. But not for efficiently
> handling big 3D data. Any DOM node can additionally provide a "binary"
> JS API as well, ideally using Typed Arrays in JS.
>
> Converting to strings and back will cause huge overhead and will rule
> out any GPU-based computation and acceleration. The latter is a must in
> today's environments, especially on mobiles. You do not want to create
> this overhead for large arrays of vertices or such and have to parse all
> the numbers again and again. It can also cause numerical inaccuracies in
> the conversion that may lead to inconsistencies in the binary
> representation, which can cause gaps in supposedly closed geometry.
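>
> For illustration (an untested sketch, not from any spec; the element
> and variable names are placeholders), compare updating a coordinate
> array through string attributes with updating a typed array that can
> be handed to WebGL directly:
>
>   // String path: every update re-serializes and re-parses the whole array.
>   const coord = document.querySelector('Coordinate');      // placeholder element
>   const point = coord.getAttribute('point').split(/[\s,]+/).map(Number);
>   point[0] += 0.1;                                          // move one vertex
>   coord.setAttribute('point', point.join(' '));             // full string round trip
>
>   // Typed-array path: the same buffer goes to the GPU unchanged.
>   // (gl and positionBuffer come from an existing WebGL setup.)
>   const positions = new Float32Array(point);
>   positions[0] += 0.1;
>   gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
>   gl.bufferSubData(gl.ARRAY_BUFFER, 0, positions);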
>
> BTW, this is exactly why we created Xflow: to be able to efficiently
> specify generic typed data arrays (available as GPU buffers in the
> engine as early as possible), flexibly compose individual buffers into
> sets of buffers (<data> elements that define all the input data
> for efficient draw calls), and also to process the data as necessary
> along the way (e.g. flexible animation, image processing, procedural
> shading, transitions, etc.).
>
> Xflow is actually much more powerful than routes and it fits much better
> with HTML5 -- in my opinion at least. Funded by Intel, we are currently
> extending Xflow to automatically make use of e.g. SIMD instructions (via
> SIMD.js) and other JS acceleration techniques. We are also looking at
> WebAssembly here for better performance even when not going to the GPU.
>
>
> BTW, if anyone wants to know more, you can find all our related papers,
> going back to when we started XML3D a few years ago, on our XML3D publication page:
> 	http://xml3d.org/papers/
>
>
> Best,
>
> 	Philipp
>
>>
>> It will be great to get our wiki organized for clarity and go-forward
>> action on these important X3D version 4 design issues.
>>
>> Thanks for letting us know about your other explanations - very
>> interesting!  8)
>>
>> Looking forward to continuing, sustainable evolution and progress together.
>>
>> v/r Don
>>
>>
>>
>> On 6/3/2016 11:40 AM, Andreas Plesch wrote:
>>> Hi Roy, group,
>>>
>>> I would like to join the 2h open discussion on Wednesday but will be
>>> in an all day meeting, actually on the West Coast.
>>>
>>> These are excellent questions and trying to answer them will help
>>> getting closer.
>>>
>>> Here are some thoughts.
>>>
>>> The discussion on introducing an id field seemed to point towards the
>>> need to have fuller integration in the sense that it is difficult to
>>> isolate features. It may be necessary to define an X3D DOM similar to
>>> the SVG DOM, with the corresponding interfaces. SVG is very successful
>>> on the web, but it took a long time to arrive there.
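>>>
>>> For comparison, the SVG DOM pairs its generic string attributes with
>>> typed interfaces, e.g. (the X3D part of the comment is only a guess at
>>> what an analogous interface could offer):
>>>
>>>   const rect = document.createElementNS('http://www.w3.org/2000/svg', 'rect');
>>>   rect.setAttribute('width', '100');       // generic string attribute
>>>   console.log(rect.width.baseVal.value);   // typed SVGAnimatedLength view -> 100
>>>   // An X3D DOM could similarly expose typed field accessors, e.g. a
>>>   // hypothetical transformElement.translation returning an SFVec3f-like object.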
>>>
>>> x3dom has a dual-graph approach. There is the x3d graph and, in
>>> parallel, the page DOM graph, which are kept in sync but are both fully
>>> populated. Johannes would know better how to explain the concept.
>>>
>>> It looks like FHG decided that x3dom is now considered community
>>> (only?) supported. This probably means it will fall out of sync as newer
>>> web browsers arrive or WebGL is updated.
>>>
>>> I explored A-Frame a bit more. It will be popular for VR. It is still
>>> in flux and evolves rapidly. The developers (Mozilla) focus on its
>>> basic architecture (which is non-hierarchical, a composable component
>>> system) and expect users to use JavaScript to develop more advanced
>>> functionality (in the form of shareable components). So it is quite
>>> different, fun for developers, and for basic scenes easy for
>>> consumers. Since most mobile VR content at this point is basic (mostly
>>> video spheres and panos), it is a good solution for many.
>>>
>>> (As a test I also implemented IndexedFaceSet as an A-Frame component,
>>> and it was pretty easy - after learning some Three.js. So it would be
>>> possible to have x3d geometry nodes on top of A-Frame. Protos, events
>>> and routes are another matter, but also may not be impossible.)
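>>>
>>> Roughly along these lines (a simplified, untested sketch; the component
>>> name and field handling are just for illustration, and it assumes the
>>> A-Frame script, which bundles Three.js, is loaded):
>>>
>>>   AFRAME.registerComponent('indexed-face-set', {
>>>     schema: {
>>>       point:      {type: 'array'},   // flat x y z coordinate list
>>>       coordIndex: {type: 'array'}    // triangle indices (no -1 separators here)
>>>     },
>>>     init: function () {
>>>       const geometry = new THREE.BufferGeometry();
>>>       const positions = new Float32Array(this.data.point.map(Number));
>>>       geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3));
>>>       geometry.setIndex(this.data.coordIndex.map(Number));
>>>       geometry.computeVertexNormals();
>>>       this.el.setObject3D('mesh',
>>>         new THREE.Mesh(geometry, new THREE.MeshStandardMaterial()));
>>>     }
>>>   });
>>>   // <a-entity indexed-face-set="point: 0,0,0, 1,0,0, 0,1,0; coordIndex: 0,1,2"></a-entity>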
>>>
>>> There is still space for x3d as a more permanent, and optionally
>>> sophisticated 3d content format on the web.
>>>
>>> Event system: My limited understanding is that on a web page, the
>>> browser emits events when certain things happen. Custom events can
>>> also be emitted by js code (via DOM functions) for any purpose. (All?)
>>> events have a time stamp and can have data attached. Then, events
>>> can be listened to. There is no restriction on listening, e.g. all
>>> existing events are available to any listener. A listener then invokes
>>> a handler which does something related to the event. js code can
>>> consume, cancel, or relay events as needed (via DOM functions). It is
>>> not unusual for many events to be managed on a web page. Events can be
>>> used to guarantee that there is a sequence of processing.
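>>>
>>> In DOM terms that looks roughly like this (the element id and event
>>> name are made up):
>>>
>>>   const scene = document.querySelector('#scene');
>>>   scene.addEventListener('translationChanged', function (event) {
>>>     console.log(event.timeStamp, event.detail.value);  // time stamp plus attached data
>>>   });
>>>   scene.dispatchEvent(new CustomEvent('translationChanged', {
>>>     detail: {value: [1, 0, 0]},
>>>     bubbles: true               // ancestors may listen as well
>>>   }));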
>>>
>>> So how does the x3d event system relate? There is a cascade, and
>>> directivity. How long does an event live? One frame? Until it has fully
>>> cascaded through the scene graph?
>>>
>>> Since x3dom and cobweb are currently the only options, from a
>>> practical standpoint a question to ask may be this: what is needed to
>>> make x3dom and cobweb easy to use and interact with on a web page?
>>> Typically, the web page would provide a UI, the connection to
>>> databases or other sources of data, and the x3d scene would be
>>> responsible for rendering and interacting with the 3d content. For VR,
>>> the UI would need to be in the scene, but connections and data sources
>>> would still be handled by the web page.
>>>
>>> Cobweb in effect allows use of the defined SAI functions. Is it
>>> possible to define a wrapper around these functions to allow a DOM-like
>>> API (createElement, element.setAttribute .. element = null)? It
>>> may be, since they are similar anyway, and it would go a long way. But
>>> it still would not be sufficient to let other js libraries such as
>>> D3.js or React control and modify a scene, since they would expect x3d
>>> nodes to be real DOM elements.
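>>>
>>> A sketch of the shape such a wrapper might take (the SAI-style calls
>>> are only illustrative, not Cobweb's actual API):
>>>
>>>   function wrapNode(saiNode) {
>>>     return {
>>>       setAttribute: function (name, value) {
>>>         saiNode.getField(name).setValue(value);          // forward to SAI field access
>>>       },
>>>       getAttribute: function (name) {
>>>         return String(saiNode.getField(name).getValue());
>>>       }
>>>     };
>>>   }
>>>   // const transform = wrapNode(scene.createNode('Transform'));
>>>   // transform.setAttribute('translation', '1 0 0');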
>>>
>>> VR: A current issue is control devices. It would probably be useful to
>>> go over the spec and see where there is an implicit assumption that
>>> mouse or keyboard input is available. VR HMDs have different controls
>>> (head position and orientation (pose), one button), and hand-held
>>> controllers (gamepads, special sticks with their own
>>> position/orientation) or the tracked hands themselves are becoming more
>>> popular. In VR, you do want to use your hands in some way.
>>>
>>> Perhaps it makes sense to have <Right/LeftHand/> nodes paralleling
>>> <Viewpoint/> with position/orientation fields which can be routed to
>>> transforms to manipulate objects?
>>> How a browser would feed the <Hand> nodes would be up to the browser.
>>> InstantReality has a generic IOSensor.
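>>>
>>> For example, a browser could poll the (WebVR-era) Gamepad pose
>>> extension each frame and copy the pose into such a node (untested
>>> sketch; <RightHand> and its fields are hypothetical):
>>>
>>>   function updateHand() {
>>>     const pad = navigator.getGamepads()[0];
>>>     const hand = document.querySelector('RightHand');   // hypothetical X3D element
>>>     if (pad && pad.pose && pad.pose.position && hand) {
>>>       hand.setAttribute('position', pad.pose.position.join(' '));
>>>       // the pose orientation is a quaternion; an SFRotation field would need conversion
>>>       hand.setAttribute('orientation', pad.pose.orientation.join(' '));
>>>     }
>>>     window.requestAnimationFrame(updateHand);
>>>   }
>>>   window.requestAnimationFrame(updateHand);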
>>>
>>> Andreas
>>>
>>>
>>> On Wed, Jun 1, 2016 at 4:53 PM, Roy Walmsley
>>> <roy.walmsley at ntlworld.com> wrote:
>>>
>>>     X3D HTML integration
>>>
>>>     X3D V4.0 Open discussion/workshop
>>>
>>>     June 8th 2016 at 1500-1700 UTC (0800-1000 PDT, 1500-1700 GMT,
>>> 1700-1900 CET)
>>>
>>>     A discussion/workshop on version 4.0 of X3D which aims to consider
>>> the following questions, and suggest potential solutions.
>>>
>>>     * What level of X3D integration into HTML5 do we want?
>>>       - Do we want to be fully integrated like SVG?
>>>     * Do we want/need a DOM spec? If so:
>>>       - Which DOM version should it be based on?
>>>       - Do we want to fully support all DOM/HTML features?
>>>     * Do we want to maximize the backwards compatibility of V4.0 with
>>> V3.3? Or break away completely?
>>>       - Do we want to retain SAI?
>>>     * What features do we want? For example,
>>>       - How is animation to be handled? The X3D way of TimeSensor and
>>> ROUTEs, or an HTML way, such as CSS3 animations, or else JavaScript?
>>>       - How is user interaction to be handled? The X3D way of Sensors,
>>> or the HTML way with event handlers?
>>>       - Do we need any different nodes? One example might be a mesh node?
>>>       - Do we want Scripts and Prototypes in HTML5?
>>>       - How do we want to handle styling?
>>>     * What profile(s) do we need for HTML?
>>>
>>>     The discussion/workshop will be held on the Web3D teleconference
>>> line. It is open to anyone interested in X3D. Please e-mail
>>> roy.walmsley at ntlworld.com or brutzman at nps.edu for teleconference
>>> details.
>>>
>>>     If you can’t join in the discussion, but would still like to
>>> contribute to the debate, your comments would be welcomed on the X3D
>>> public mailing list at x3d-public at web3d.org.
>>>
>>>     Roy Walmsley
>>>
>>>     X3D WG Co-chair
>>>
>>> --
>>> Andreas Plesch
>>> 39 Barbara Rd.
>>> Waltham, MA 02453
>>> _______________________________________________
>>> x3d-public mailing list
>>> x3d-public at web3d.org
>>> http://web3d.org/mailman/listinfo/x3d-public_web3d.org
>>
>> all the best, Don
>


all the best, Don
-- 
Don Brutzman  Naval Postgraduate School, Code USW/Br       brutzman at nps.edu
Watkins 270,  MOVES Institute, Monterey CA 93943-5000 USA   +1.831.656.2149
X3D graphics, virtual worlds, navy robotics http://faculty.nps.edu/brutzman


