[X3D-Public] Fwd: Re: [X3D] X3D HTML5 meeting discussions: Declarative 3D interest group at W3C

Philipp Slusallek slusallek at cs.uni-saarland.de
Wed Jan 5 08:16:11 PST 2011


Hi,

On 05.01.2011 14:10, Johannes Behr wrote:
>> In XML3D we have avoided the need for a graph in the first place. Every
>> object has a clear location in the tree, but you may refer to an object
>> somewhere else in the tree through a URL fragment ("#id"). Currently, we
>> only allow references to geometry (leaves in the tree, not to internal
>> nodes with entire subtrees) for simplicity's sake. This means you may
>> have to replicate these internal nodes, but you still share the nodes
>> that hold most of the data. However, for good reasons we may eventually
>> want to allow references to entire subtrees.
> 
> Which makes your system a DAG :)))

It is a hybrid thing: it has some of the nice properties of a DAG without
being one :-). For example, it does not behave like one, since it uses
weak links that, in addition, can only point to leaf nodes (meshes).

In particular, a referenced object (which is uniquely identifiable) may
be deleted at any time and will NOT remain available elsewhere at its
other parents, as it would in a true DAG. Those parents are left with
dangling references, which are silently ignored.

This means that any XML3D object is uniquely identifiable via its path
from the root (a property of a tree). However, nodes may still reuse
geometry (a DAG property, and important for saving memory on the big
resource eaters in a scene). This is a key and really important
difference (as we have discussed before).
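
To make the reference semantics concrete, here is a rough sketch of how a
renderer could resolve such a weak link at traversal time (this is only an
illustration, not our actual implementation; the "src" attribute name is a
placeholder):

    // TypeScript sketch: resolve a "#id" reference lazily at traversal
    // time. If the target element has been removed from the document,
    // getElementById() returns null and the dangling reference is
    // silently ignored -- the referencing node simply renders nothing.
    function resolveMeshSource(node: Element): Element | null {
      const src = node.getAttribute("src");          // e.g. src="#myMesh"
      if (!src || !src.startsWith("#")) return null;
      return document.getElementById(src.slice(1));
    }

Deleting the referenced mesh element thus does not keep it alive through
its other "parents"; they just stop drawing it until the reference is
repointed.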

The real question is what happens if we generalize this and allow (weak)
references to internal nodes as well. It is still not a full DAG because
of the weak links, but it would allow nested instantiation and all the
other good stuff. The issue then is how inheritance should be defined in
this case. In the easiest case there would be no inheritance across the
weak link except for the transformation (placement in the scene). This
should be pretty simple to handle in rasterization and ray tracing as
well as physics engines and such. The issue is whether this is still
powerful enough for some of the more demanding applications. I think so,
but I cannot really back this up quantitatively yet.
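
As an illustration of what "transformation only" across a weak link could
look like, here is a small sketch (the element and attribute names such as
"ref" and "transform" are placeholders, and localMatrix() is a hypothetical
helper, not actual XML3D syntax):

    // TypeScript sketch: tree traversal where only the accumulated
    // transformation crosses a weak link; any other inherited state
    // (here just a material) is reset on the other side.
    function localMatrix(el: Element): DOMMatrix {
      // Hypothetical helper: per-element transform, identity if absent.
      return new DOMMatrix(el.getAttribute("transform") ?? undefined);
    }

    function traverse(el: Element, world: DOMMatrix,
                      material: string | null): void {
      const m = world.multiply(localMatrix(el));
      const ref = el.getAttribute("ref");
      if (ref && ref.startsWith("#")) {
        const target = document.getElementById(ref.slice(1));
        if (target) traverse(target, m, null);  // only placement is inherited
        return;                                 // dangling reference: ignored
      }
      for (const child of Array.from(el.children)) traverse(child, m, material);
    }

Whether this minimal rule is enough for the demanding cases is exactly the
open question above.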

My feeling is that we should not go much further than this (if that far
at all), as it significantly complicates the system, and the majority of
users will be confused by most other possible inheritance rules and tend
not to use them anyway. On the other hand, we may punish the power users
and complicate some exporters from applications that do support a more
general scheme.

This is a good direction for some concrete evaluations on real scenes
and usage scenarios -- no "but I want the most general case anyway"
statements please. We have to weigh the cost of this very carefully.


	Philipp


>>> All that is fine; I just didn't get the emphasis on modelling behaviors
>>> of interesting UI objects.
>>
>>>> -- What do you mean by automation?
>>>
>>> Well, like having built-in sensors, timers, and interpolators (behavior
>>> details generally), a reliable event system, easy authoring because you
>>> can just use the browser and a text editor or a full-fledged authoring
>>> system, and being able to lift great gobs of existing content into my
>>> little HTML5 show.
>>
>> Now the key part of XFlow is that we also have data elements that can
>> refer to a script node and implement any transformation on the input
>> data they refer to (other data elements) and provide as "output" new
>> vertex arrays. We currently do the processing on the CPU and only have
>> predefined "scripts" (using a URI notation), but this is designed to
>> work with our new shading language system such that you can write your
>> own scripts to modify the data. Eventually, we will even be able to
>> optimize the scripts across multiple of these scriptable nodes and push
>> some of these nodes also into the geometry shader pipeline of OpenGL and
>> such. But this is still work in progress and not in the distributed
>> versions yet.
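
As a concrete illustration of such a data transformation (only a sketch in
script form, not the actual XFlow syntax or API; the names are made up),
think of an operator that consumes named vertex arrays and produces new
ones:

    // TypeScript sketch: a "script" node as a pure function from named
    // input arrays to named output arrays -- here a simple linear morph
    // between two position sets.
    type DataTable = { [name: string]: Float32Array };

    function morphPositions(input: DataTable, weight: number): DataTable {
      const a = input["position"];
      const b = input["positionTarget"];
      const out = new Float32Array(a.length);
      for (let i = 0; i < a.length; i++) {
        out[i] = a[i] + weight * (b[i] - a[i]);
      }
      return { position: out };  // downstream elements see ordinary vertex data
    }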
> 
> So if you allow free references to data in your system to build
> a separate event or data-flow graph (on top of the scene graph), it's
> very much like the route system of X3D.
> 
> So what is the advantage again?
> 
>>
>>>> There is a decent API that deals with
>>>> the DOM (including AJAX to download them and many other nice Web
>>>> technologies).
>>>
>>> Great, I have seen some of that and it works and can work everywhere. In
>>> fact WebGL is pushing the limits of XHR and JSON. So, of course we have
>>> access to all that, given the encoding makes sense.
>>>
>>>> We have extended this to better support access in native
>>>> 3D data types (like WebGL). Javascript is also a pretty decent
>>>> programming language that gets a lot of attention in terms of
>>>> performance speedup and such. All of this we get for free when working
>>>> in the same ecosystem.
>>>
>>> Right, looking at stuff in pure WebGL here shows lots of agreement
>>> across all the browsers, so this first pass at doc3D only needs to do
>>> the basic features. That makes part of the opportunity easier and gives
>>> us the basic features to build a live DOM that understands the kind of
>>> data expected by the underlying process and is able to make it live.
>>>
>>>>
>>>> The DOM just happens to be used by many times more people worldwide on
>>>> a daily basis than all programmers who have ever dealt with a 3D scene
>>>> graph combined. So even if it has flaws, at least it works for a heck
>>>> of a lot of people and is very well tested :-).
>>>
>>> Interesting. But I think it sounds like you have not examined the X3D
>>> SAI interfaces. Really, it is just like the DOM if you want it to be
>>> that way. To the user it can look like a DOM, except that a child can
>>> have many parents and that there is also a hierarchy of contexts.
>>
>> The API is different enough that Web people have to learn it anew. This
>> argument actually cuts even better the other way: why not replace the SAI
>> with the Web DOM API that millions of people use on a daily basis if the
>> two are so similar anyway?
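
Just to illustrate what that looks like in practice (the element and
attribute names below are placeholders, not a defined XML3D vocabulary),
scene manipulation then happens through DOM calls that Web developers
already know:

    // TypeScript sketch: manipulating a declarative 3D scene with the
    // standard DOM API only.
    const mesh = document.getElementById("chair");
    if (mesh) {
      mesh.setAttribute("src", "#chairHighRes");           // repoint a reference
      mesh.addEventListener("click", () => mesh.remove()); // standard DOM events
    }
    const group = document.createElement("group");         // placeholder element
    document.querySelector("xml3d")?.appendChild(group);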
>>
>>>> -- In case you are referring to Routes, I am not sure I agree with you.
>>>> From my point of view, routes help solve the simple cases but easily
>>>> get in your way if you want to do more complex stuff. But if you
>>>> really need them, they can be easily simulated using other techniques.
>>>> And maybe there are convincing arguments that we should indeed add them.
>>>
>>> That is fine, the X3D SAI also gives event notification and attachments
>>> and so on, if you need them. With a little imagination, it becomes
>>> important to be able to get information around the scene. Routes just
>>> give a convenient way to send strongly typed data from node to node, and
>>> are easily readable/settable from the outside.
>>
>> I definitely see the things one can do with the synchronized event
>> system and data propagation model of X3D. We just do not have this
>> available in HTML. So we could try to add it (but we would have to do
>> this across all of (2D-)HTML, which would be a real challenge). Or we
>> can see if we can live without it or create a different model. We are
>> currently doing without it in XML3D to see how far we can push this (but
>> we have the data references and transformations of XFlow). But I can
>> definitely see us ending up needing something like this eventually.
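
For now, what we can lean on is the machinery HTML already provides, e.g.
DOM events and mutation observation, roughly along these lines (only a
sketch; the element id is a placeholder):

    // TypeScript sketch: reacting to attribute changes with the DOM's own
    // MutationObserver instead of a route/field-propagation system.
    const lamp = document.getElementById("lamp");
    if (lamp) {
      const observer = new MutationObserver((records) => {
        for (const r of records) {
          console.log("attribute changed:", r.attributeName);
        }
      });
      observer.observe(lamp, { attributes: true });
    }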
> 
> I totally agree. The HTML-Event system and the X3D sensor concepts
> are hard to combine in a single system design without breaking one of
> these concepts.
> 
> This is the reason why we didn't include the high-level sensors in the
> HTML-Profile of X3DOM. But time will tell. 
> 
> best regards
> johannes 
> 



