[x3d-public] V4.0 Open discussion/workshop on X3D HTML integration

Joe D Williams joedwil at earthlink.net
Mon Jun 13 09:08:26 PDT 2016


> As you know, there are other ways
of doing skinning and they are better and more efficient in some
situations,

Well, please describe the alternative ways of skinning; maybe some 
other techniques are suitable for the standards track. From what you 
are showing I can't even figure out what way of skinning you are 
using. Where is the mesh, and where are the connections from the 
skeleton to the mesh? I can't see the skin binding to the skeleton or 
even how the skeleton parts are moved to deform the skin. Please show 
me.

Regardless of what you say about other ways to do skinning, the way 
X3D does it is the most common best practice for long-lived, reusable 
models. Just pick an authoring system and show me how it does skinning 
differently, please.

What is holding me back from appreciating your animation stuff is the 
structure of the skeleton that I can't find, the mesh I can't find, 
and the bindings of skeleton to skin that I can't find. One important 
point about the standard is that the documentation is obvious. In your 
example it is not apparent how things are connected and how things are 
animated.

> flowgraph is right in front of your eyes -- as nice
> and compact as it gets.

Yes, all I am asking for is a little help in finding the data that is 
the basis for the skeleton-driven animations. This is not a case where 
the author gets to simplify to the point where the trail of what it is 
and what is going on is hidden. The point of X3D is to expose details.

Again, please show me the skeleton and the mesh definition and the 
skeleton animation that causes the mesh deformation.

>> Anyway, I can't find the desired interfaces, like how is
>> the skeleton composed, how is the skin bound, how do I control the
>> animations,

Hoping you can show me. With the example I showed, it is very easy to 
understand what is happening; I need more guidance to understand your 
example.

Thanks and Best,
Joe


----- Original Message ----- 
From: "Philipp Slusallek" <philipp.slusallek at dfki.de>
To: "Joe D Williams" <joedwil at earthlink.net>; "doug sanden" 
<highaspirations at hotmail.com>; "'X3D Graphics public mailing list'" 
<x3d-public at web3d.org>
Sent: Sunday, June 12, 2016 9:53 PM
Subject: Re: [x3d-public]] V4.0 Opendiscussion/workshopon X3DHTML 
integration


> Hi Joe, all,
>
> Am 13.06.2016 um 00:59 schrieb Joe D Williams:
>>> https://xml3d.github.io/xml3d-examples/examples/xflowSkin/xflow-skin.html
>>> for
>>> simple skinned and animated characters
>>
>> I don't see it. There are things jumping around, but from code 
>> think not
>> skeletons with skin but just geometries dragged from frame to 
>> frame.
>> Maybe the code is in the protos? Looks like it could be generated 
>> by
>> something that used skeletal animation but just exported geometry 
>> for
>> some keyframes. Anyway, I can't find the desired interfaces, like 
>> how is
>> the skeleton composed, how is the skin bound, how do I control the
>> animations, do my personal animations stand a chance of working in 
>> those
>> rigs? All the questions I consider basic are not there or very far 
>> down
>> in the reading. So show me the code for a skeleton, please,
>
> Joe, I love your friendly style :-(. Just dig a tiny bit deeper,
> please: indeed the code is behind the protos.xml link
> (https://github.com/xml3d/xml3d-examples/blob/master/examples/xflowSkin/protos.xml).
> Just open it and the <dataflow> element is right there.
> It defines its own interface of what data it takes and applies a
> sequence of operations to the data. Note that it is a functional
> representation (and not a generic script) and thus side-effect free
> and can nicely be analyzed. The <dataflow> element is an extension to
> what is in the paper and is documented in our spec (see below).
>
> The operators used are pretty self-explanatory: The data flow computes
> the inverse transformation matrices of each bone transformation given
> separately as translations and rotations (I guess that is how they
> were provided in the game). It then applies the forward kinematics to
> get the final world transforms for each bone. It does so for the bind
> pose as well as for the actual transformations (again trans and rot)
> interpolated from the input data and multiplies them. Finally, skinning
> is applied for position and normal. Adding another skinning operator
> (e.g. for tangents) would add exactly one more line.
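>
> To give the flavor without opening the file: the flowgraph is roughly
> of this shape (a hand-written sketch, so take the operator names and
> argument lists as approximate -- the exact ones are in protos.xml and
> in the spec):
>
>   <dataflow id="blendSkin">
>     <!-- interpolate the animated pose from the keyframe tracks -->
>     <data compute="pose_pos = xflow.lerpSeq(translation, key)"></data>
>     <data compute="pose_rot = xflow.slerpSeq(rotation, key)"></data>
>     <!-- world transforms: inverted bind pose and animated pose -->
>     <data compute="bindInv = xflow.createTransformInv(bind_translation, bind_rotation)"></data>
>     <data compute="bindInvWorld = xflow.forwardKinematicsInv(boneParent, bindInv)"></data>
>     <data compute="poseLocal = xflow.createTransform(pose_pos, pose_rot)"></data>
>     <data compute="poseWorld = xflow.forwardKinematics(boneParent, poseLocal)"></data>
>     <data compute="boneXform = xflow.mul(poseWorld, bindInvWorld)"></data>
>     <!-- skin position and normal; a tangent line would go right here -->
>     <data compute="position = xflow.skinPosition(pos, boneIdx, boneWeight, boneXform)"></data>
>     <data compute="normal = xflow.skinDirection(norm, boneIdx, boneWeight, boneXform)"></data>
>   </dataflow>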
>
> It seems that this could be optimized by taking the first two lines
> out, as they only need to happen once and not for every frame. But
> this is an optimization that happens within Xflow automatically: if a
> part of the flowgraph defined by the dependencies between the
> operators is not changed, its data is not recomputed. It uses the
> push-pull model to determine what has changed, as described in the
> paper.
>
> Is this the final best way of doing skeleton animations? No, it does
> not even try to be that. It simply is the set of operations that are
> required for this specific operation, given the set of input data as
> it came from the game. This is actually a good thing, as you can do
> skeleton animation and skinning even if your data does not comply
> with the Hanim spec, simply by changing this flowgraph a bit.
>
> Note that the compute element is NOT a script but a convenient way to
> build a flowgraph without adding too many of the individual data nodes
> as defined in the paper.
>
>
> BTW, all this is nicely documented in our current spec at
> (http://xml3d.org/xml3d/specification/latest/#dataflow-graph-xflow).
> Note that this spec is already in W3C style, and the example used in
> describing Xflow happens to be the one from this demo.
>
> And I am sure you can complain about the spec, if you want. It's not
> perfect. Go ahead.
>
>
> And while I am at it, here is how it is applied in the scene
> (https://github.com/xml3d/xml3d-examples/blob/master/examples/xflowSkin/xflow-skin.html/).
> For each character (see comments with their names), it first creates a
> data flow that computes the entire skinned mesh using a specific ID
> (e.g. heavySkinned). The following nodes then reuse this complete
> skinned mesh by referencing it
>   <data src="#heavySkinned"></data>
> select a subset of indices from this mesh by simply adding a new data
> element that provides the indices for that specific part of the body,
> and apply a special material to it. This is repeated for each part of
> the body.
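>
> In markup the reuse pattern is roughly this (ids and names here are
> only illustrative, not copied from the file):
>
>   <data id="heavySkinned" compute="dataflow['protos.xml#blendSkin']">
>     <!-- the character's positions, normals, bone indices/weights, keys -->
>   </data>
>
>   <group shader="#armorMaterial">
>     <mesh type="triangles">
>       <data src="#heavySkinned"></data>
>       <int name="index">0 1 2 2 3 0</int> <!-- indices of one body part -->
>     </mesh>
>   </group>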
>
> Finally, at the end of the file it has a list of elements with the key
> values (time) of each character (e.g. keyHeavy) and keeps updating
> them based on the current time modulo the maximum animation time of
> each character.
>
> BTW, you can put such a specification for an entire character into an
> <asset> element and reference this in its entirety without having to
> look at the details. You can see how this looks in the Recursive Asset
> example
> (https://github.com/xml3d/xml3d-examples/blob/master/examples/recursiveAsset/recursive.html/).
> It simply places several of the Sniper characters relative to each
> other in a hierarchical way. In that way we also drive the animation
> in a more flexible way and at different speeds.
>
>
> With respect to the term "generic data": please, please, please
> finally read the paper and we can talk again.
>
> Here is my definition of generic: Mapping Hanim to your structure
> using a slightly different flowgraph than the one explained above is
> almost trivial and very compact and runtime-efficient without a single
> specialized node. On the other hand, using Hanim or any other part of
> X3D to implement a different skinning mode is hard to almost
> impossible and would require custom code to be efficient. I guess lots
> of routes and scripts could eventually do it too, but this would
> likely be very inefficient and not be amenable to being mapped to the
> GPU automatically.
>
>
> Finally, regarding your praise of Hanim: Yes, it's been very useful,
> indeed. I am not criticising this at all. But you have to distinguish
> between defining a common reference model (which is very useful) and
> offering the only option to do things. As you know, there are other
> ways of doing skinning, and they are better and more efficient in some
> situations, depending on what your input is and what your output
> should be. Sometimes you want to do things that do not follow your
> "standard". All I am saying is that this should be possible and
> efficient, especially with very powerful programmable GPUs underneath
> you. Why not expose their functionality in a clean and elegant way, to
> be used in a scene graph?
>
> Why must something define one way of handling things as the only true
> way of doing it? Why not offer a way to use any of them, putting this
> in a nice <dataflow> element, and just use it with the original input
> data, compute the output data needed for what I want to do in a
> shader, and map most of the processing onto the GPU automatically?
> This is pretty hard to do in X3D, which defines the "one true way" as
> a special node and expects everyone to follow this. Now, of course,
> you are free to do this. We are simply pointing out that there are
> other ways that seem much more powerful and flexible.
>
> Take it or leave it. But please, before making this decision, try to
> look at it and understand it first.
>
>
> Best,
>
> Philipp
>
> P.S.: I will not have the time to engage this much forever. I am not
> pushing this onto you. I am simply pointing out that there are things
> outside of X3D that offer options that seem to be useful for a new
> version. Take it or leave it, but check it out first.
>
>
>
>> From the spec, it is important that the skeleton be well defined in
>> terms of names, locations, and interfaces. To me, the great thing
>> about the x3d representation is clarity about the naming and location
>> of features, and even an initial pose so that animations can be
>> easily transported between characters.
>>
>>> Hanim has
>>> selected one specific way of describing and handling animation and
>>> skinning, which requires a node-specific implementation.
>>
>> Right, hanim documented the best practices for handling skeleton,
>> animation, and skinning. I mean, for years x3d has done it the same
>> way because these are the parameters for the way that everybody does
>> it.
>>
>> So, it started long ago with the idea that researchers needed a
>> standardized skeleton that would serve for producing a computable
>> replacement for a mechanical armature in humanoid simulations. With
>> medical and robotic folks also involved, they decided to pick a
>> realistic representation that was widely accepted. The hanim and X3D
>> standards use as the example a 'standard' humanoid with 'typical'
>> dimensions in a realistic humanoid hierarchy. This was easy for VRML
>> and X3D with a Humanoid container holding skeleton and skin and some
>> other stuff.
>>
>> The skeleton is a realistic hierarchy of Joints, Segments, and Sites.
>> Defining the default initial pose was not easy, but finally the
>> choice was probably an artifact of the method used to obtain the
>> greatest share of samples to define typical joint and surface feature
>> locations. Anyway, some of the names have changed (segment instead of
>> bone) and some under-the-covers stuff has been exposed, but basically
>> x3d hanim is industry-standard best practice for complete
>> documentation of a realtime animated character.
>>
>> Later this has evolved for skeleton structures to serve as the 
>> basic
>> model for motion capture data and as the corresponding structure 
>> for the
>> mocap playback model.
>>
>> I mean, this hanim has been the world standard for transportability
>> of basic structures and basic functionality. Wouldn't you expect to
>> get something that represents the core factors for what almost all
>> realtime character animation tools would give you when you start with
>> any default (fantastic that there are now so many) humanoid or biped
>> or something of that category? Of course, and that is true. See X3D
>> HAnim LOA2. Some names are changed, but that is the generic skeleton.
>>
>> The names may be changed or some hidden interfaces exposed, but if
>> you look closely you will see that x3d hanim does indeed represent
>> the complete documentation required to build and animate the
>> character. That can be important when you wish to carry your work
>> from one commercial or open product to another. I mean, you used to
>> have to beg for bindings and animation curves. At some authoring
>> levels sometimes you can't even see that stuff.
>>
>> Whatever the authoring system's internal data forms, if it rigs skin,
>> then there may or may not be a human-readable and keystroke-editable
>> listing of the skin vertex bindings and weights. X3D just says that
>> this very basic stuff has to be in the file in a logical place and a
>> reasonable form. Any authoring system worthy of your trust should be
>> able to give you that list just in case you wanted to work with
>> another tool and use your old rigging. Why is it so carefully defined
>> in x3d hanim? Because that is the best way to preserve that type of
>> data, since basically everybody has to do it that way down at the
>> metal, to move the points to positions that depend upon what time
>> appears in the next frame.
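>>
>> In the X3D XML encoding that listing sits right on the joint, in the
>> skinCoordIndex and skinCoordWeight fields, something like this (the
>> numbers are made up just to show the form of the data):
>>
>>   <HAnimJoint DEF='hanim_l_knee' name='l_knee' center='0.10 0.49 0.03'
>>       skinCoordIndex='211 212 213 214'
>>       skinCoordWeight='1.0 0.8 0.5 0.2'>
>>     <!-- child Segment and Joints continue the hierarchy -->
>>   </HAnimJoint>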
>>
>> That's just the way it is. The basic data, in close-to-executable
>> form, readable and editable, is what x3d hanim requires. Since it is
>> so basic, data should be able to be exchanged between most any set of
>> authoring tools, and it is. Of course there might be some new
>> technique(s) because things advance, but those techniques either
>> remain proprietary or have not yet made it into the public open
>> source community, and so would not appear in X3D.
>>
>> No outright challenge here, but look at what you get when you start
>> with the default humanoid in any authoring system. Some might hook up
>> the joints slightly differently with other names or use some other
>> space than 'standard' hanim humanoid space, but the basic goal is
>> realism. Hey, I think you get the best results when everything is
>> drawn in 'standard' human space, dimensioned for your preferences.
>> Like the hanim end-effector surface features are there because
>> experimenters wanted to be able to define an actual location in human
>> space relative to the skeleton. That was where the virtual doctor
>> could touch the virtual surgical tool. Anyway, by the time the
>> standard was set, it was pretty much decided that real machines would
>> use quats to animate, but X3D stayed with axis-angle as the minimum
>> requirement for transporting realtime animations (realtime always
>> needs interpolators and all inbetweens, so sorry euler angles).
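>>
>> So driving one joint is nothing more exotic than a TimeSensor, an
>> axis-angle OrientationInterpolator, and two ROUTEs (XML encoding; the
>> key values are invented, and I assume the knee joint was DEF'd as
>> hanim_r_knee):
>>
>>   <TimeSensor DEF='Clock' cycleInterval='2' loop='true'/>
>>   <OrientationInterpolator DEF='KneeRot' key='0 0.5 1'
>>       keyValue='1 0 0 0  1 0 0 1.2  1 0 0 0'/>
>>   <ROUTE fromNode='Clock' fromField='fraction_changed'
>>          toNode='KneeRot' toField='set_fraction'/>
>>   <ROUTE fromNode='KneeRot' fromField='value_changed'
>>          toNode='hanim_r_knee' toField='set_rotation'/>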
>>
>> A Transform extended to a Joint adds some technical features, and the
>> hierarchical structure of Joint, connecting Segment, and Site(s) for
>> surface and internal features is all standard vrml/x3d. Using the
>> names Humanoid, Joint, Segment, and Site as names for the major
>> functionalities of the basic humanoid, with geometries bound to
>> segments, is accomplished by extending Transform using simple
>> prototypes. To do the skin you need some pretty standard script to
>> move the skin points as the skeleton is animated.
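>>
>> Stripped to the bones, the structure is just nested nodes. In the X3D
>> XML encoding with the HAnim component nodes it reads roughly like
>> this (joint centers here are only approximate):
>>
>>   <HAnimHumanoid name='Joe'>
>>     <HAnimJoint containerField='skeleton' DEF='hanim_HumanoidRoot'
>>         name='HumanoidRoot' center='0 0.82 0.03'>
>>       <HAnimJoint DEF='hanim_r_hip' name='r_hip' center='-0.10 0.91 0'>
>>         <HAnimSegment name='r_thigh'>
>>           <!-- segment geometry and Sites go here -->
>>         </HAnimSegment>
>>         <HAnimJoint DEF='hanim_r_knee' name='r_knee' center='-0.10 0.49 0'/>
>>       </HAnimJoint>
>>     </HAnimJoint>
>>     <!-- the skin Shape(s) and the shared skinCoord Coordinate go here -->
>>   </HAnimHumanoid>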
>>
>> I mean that all the information about mesh and binding and what is
>> supposed to happen when it is supposed to happen is very nicely
>> composed. Of course x3d hanim is always interested in new names and
>> locations and styles or techniques that are missing from anywhere in
>> x3d, but basically it is all there. This reflects the data that is
>> actually used to create the character and realtime animations, in
>> human-readable form. And it matches up with detailed visualization
>> technologies like medical and CAD and physics, and is completely
>> COLLADA friendly.
>>
>> As I said, the example I am interested in exploring is relatively
>> simple and, from what I have seen (with conversion from axis-angle to
>> unit quaternions), can probably be represented losslessly in glTF.
>>
>>> But this generic data design also allows for creating these
>>> abstractions that would be much harder (if not impossible) to do 
>>> with
>>> the specialized approach that X3D is based on.
>>
>> X3D is a generic data design because it defines generic forms of data
>> needed to make and animate a common character. The data is indeed
>> generic, and no character animation system that can produce animated
>> characters is missing any of this data. Absent proprietary
>> technology, they all use a skeleton, they all have geometry bound to
>> connecting things, and they all use the same skin bindings.
>>
>> What is specialized? The names and hierarchy? Well, the names are
>> probably specialized, but the hierarchy and bindings are not. If I
>> read the above right, then _if_ the generic data design has a hard
>> time with the x3d approach of containing the data, then the generic
>> data design has big problems. I don't think that is what you said,
>> but what part of the x3d data design is harder? Overall, hanim is a
>> very generic data design using very generic 3D hierarchies. Hanim is
>> not at all an unusual or non-generic scenegraph structure or data
>> structure, so I don't understand the problem.
>>
>> Besides, please look at some browsers that do a great job with x3d
>> hanim. There were several more before they went missing.
>>
>> http://www.hypermultimedia.com/x3d/hanim/JoeH-AnimKick1a.txt
>>
>> is the text version of the example I am most interested in, in
>> classic encoding because I think it is easier to read. Don't use word
>> wrap.
>>
>> In reality, I don't care how the data is stored for runtime
>> execution; I care about the readability of the documentation created
>> at authortime. Sure, X3D HAnim may take a while to learn to read
>> because the structure is complicated, but the time is not wasted,
>> because these are types of data common to almost all efforts of
>> humanoid animation.
>>
>> One piece of automation also used in character animation is precise,
>> time-driven animation of parts of a geometry, like when a piece of
>> skin needs to move independently of any joint rotations. In HAnim
>> this is done by the Displacer. You tell it which points to move and
>> how much to move them, then send it a weight. This is an important
>> little tool. Again, the data and technique just represent a common
>> way to do it. How would your project define such an operation?
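>>
>> For reference, in the X3D XML encoding a Displacer is just this (the
>> indices and displacement vectors are invented to show the shape of
>> the data; a ScalarInterpolator routed into set_weight drives it):
>>
>>   <HAnimSegment name='skull'>
>>     <HAnimDisplacer DEF='BrowRaiser' name='brow_raiser'
>>         coordIndex='310 311 312'
>>         displacements='0 0.004 0  0 0.006 0  0 0.004 0'
>>         weight='0'/>
>>   </HAnimSegment>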
>>
>> Thanks and Best,
>> Joe
>>
>>
>> ----- Original Message ----- From: "Philipp Slusallek"
>> <philipp.slusallek at dfki.de>
>> To: "Joe D Williams" <joedwil at earthlink.net>; "doug sanden"
>> <highaspirations at hotmail.com>; "'X3D Graphics public mailing list'"
>> <x3d-public at web3d.org>
>> Sent: Sunday, June 12, 2016 12:21 AM
>> Subject: Re: [x3d-public]] V4.0 Opendiscussion/workshopon X3DHTML
>> integration
>>
>>
>>> Hi Joe,
>>>
>>> I believe it may even be illuminating to just read a paper to 
>>> understand
>>> the principles of other technologies and consider them for your 
>>> own
>>> design. Also, some more openness to other available technology 
>>> besides
>>> X3D would actually help the discussion here.
>>>
>>> But we actually do have an implementation as well, which is used 
>>> in many
>>> of our projects: See for example
>>> https://xml3d.github.io/xml3d-examples/examples/xflowSkin/xflow-skin.html
>>> for
>>> simple skinned and animated characters that are handled using 
>>> Xflow to
>>> describe the required processing on the triangle meshes. These are
>>> animated characters exported to XML3D/Xflow directly from a 
>>> well-known
>>> game.
>>>
>>> This is just one of many ways of how Xflow can be used. Really, 
>>> the main
>>> point of Xflow is the ability to describe very different 
>>> processing
>>> operations on various data sets in a scene in a declarative way. 
>>> There
>>> are also examples for image processing (e.g.
>>> https://xml3d.github.io/xml3d-examples/examples/xflowIP/histogramm.html),
>>> simple
>>> Augmented Reality
>>> (https://xml3d.github.io/xml3d-examples/examples/xflowAR/ar_flying_teapot.html),
>>>
>>> and others using the exact same basic technique. Our ongoing work 
>>> will
>>> make this even simpler and support different HW mappings better.
>>>
>>> This is made possible by the generic data model in XML3D that I have
>>> alluded to several times in my email. It is already useful as a nice
>>> abstraction of GPU buffers but also allows for supporting
>>> programmable shading. But this generic data design also allows for
>>> creating these abstractions that would be much harder (if not
>>> impossible) to do with the specialized approach that X3D is based on.
>>> However, it does work the other way round: You can map the
>>> specialized nodes of X3D to the more general and generic
>>> functionality of XML3D/Xflow.
>>>
>>> I think this highlights the difference between our approaches: Hanim
>>> has selected one specific way of describing and handling animation
>>> and skinning, which requires a node-specific implementation. On the
>>> other hand, we provide a small core engine for any such processing
>>> and expose it in a compact and declarative way. The engine can then
>>> analyze the resulting flow-graph, optimize it, and map it to the
>>> available HW, independent of what the specific computation ends up
>>> representing. On top of this, one can then use WebComponents to map
>>> any specific representation (such as Hanim) to this generic
>>> representation.
>>>
>>> We also did a careful analysis and comparison to X3D/Hanim in our
>>> papers (see below for the links). There are several issues that we
>>> identify: the need to duplicate the geometry to apply different
>>> animations to the same model, or the fact that Hanim cannot handle
>>> tangent vectors as part of the model, which may be required if a
>>> model has anisotropic materials that need the transformed tangent
>>> vectors as vertex attributes for the shader. It is very
>>> straightforward to add such processing to an Xflow graph. There are
>>> more arguments in the paper.
>>>
>>> We also argue in the paper that Xflow is expressive enough to 
>>> handle
>>> Hanim. Doing a full WebComponent implementation for Hanim is left 
>>> as an
>>> exercise for the reader :-). While certainly useful, we do not see 
>>> this
>>> as the main target of our research work, sorry. But it should not 
>>> be a
>>> difficult exercise.
>>>
>>> BTW, the relevant papers are here:
>>> -- 
>>> https://graphics.cg.uni-saarland.de/fileadmin/cguds/papers/2012/klein_web3d2012/xflow.pdf
>>>
>>> -- 
>>> https://graphics.cg.uni-saarland.de/fileadmin/cguds/papers/2013/klein_web3d2013/xflow-ar.pdf
>>>
>>>
>>> There is also an IEEE CG&A extended version of the first paper here:
>>> -- https://www.computer.org/csdl/mags/cg/2013/05/mcg2013050038.pdf
>>>
>>>
>>> Best,
>>>
>>> Philipp
>>>
>>> Am 12.06.2016 um 05:52 schrieb Joe D Williams:
>>>> Hi Philipp,
>>>>
>>>> I would study some of your work, but please help me establish this
>>>> confidence by showing me what you can do with some relatively
>>>> complex X3D. This is skeleton animation of joints and segments as
>>>> used everywhere (no matter which interfaces are actually exposed by
>>>> the authoring system) and a deformable mesh skin bound to the
>>>> skeleton, with each skin vertex bound to one or more Joint nodes.
>>>>
>>>> http://www.web3d.org/documents/specifications/19774/V1.0/HAnim/ObjectInterfaces.html
>>>>
>>>>
>>>>
>>>> Skin animation is achieved by animating the joints in the skeleton's
>>>> joint hierarchy and then weighting each skin vertex displacement
>>>> according to the bound joint(s) rotation (as used everywhere, no
>>>> matter which interfaces are actually exposed by the authoring
>>>> system).
>>>>
>>>> some basics are here:
>>>>
>>>> https://en.wikipedia.org/wiki/Skeletal_animation
>>>>
>>>> is pretty much what X3D does for either/both segment geometry (none
>>>> on this model) or skin, like this one, and represents complete
>>>> documentation of the model rigging and animations. Relative to the
>>>> rest of the world of character authoring and animation, X3D covers a
>>>> lot of ground. The only 'problem' I know X3D has is that we do not
>>>> use quaternions for joint animation, which is now more or less the
>>>> industry glTF standard, instead of the axis-angle used here. Well,
>>>> also see that while the interpolators are linear, the keytimes may
>>>> not always be constant intervals.
>>>>
>>>> A couple of X3D browsers will do this fine and BSContact free is 
>>>> my
>>>> reference.
>>>>
>>>> This is a 'standard' LOA3 skeleton with skin vertices mostly 
>>>> taken from
>>>> 'standard' surface feature points. Both skeleton and skin are 
>>>> drawn in
>>>> approximately human scale, using the spec example dimensions as a 
>>>> basis.
>>>> I use an IndexedFaceSet for the skin mesh and depend upon the
>>>> 'standard' X3D browser feature of IFS to generate a default texture
>>>> map so the texture stays bound to the skin as it moves.
>>>>
>>>> Anyway, I hope you can take a look at this because implementation 
>>>> of
>>>> this basic character animation stuff is really not that easy and 
>>>> in the
>>>> past we have seen X3D browser development stall at implementation 
>>>> of
>>>> skeleton based skin animation. Note the hanim displacer node also 
>>>> does
>>>> mesh deformation.
>>>>
>>>> Example is here:
>>>>
>>>> http://www.hypermultimedia.com/x3d/hanim/JoeH-AnimKick1a.x3dv
>>>>
>>>> and attached.
>>>>
>>>> I can get it in .x3d but this version has better documentation of 
>>>> the
>>>> skin-joint bindings.
>>>>
>>>> Thanks and Best,
>>>> Joe
>>>>
>>>>
>>>>
>>>>
>>>> ----- Original Message ----- From: "Philipp Slusallek"
>>>> <philipp.slusallek at dfki.de>
>>>> To: "Joe D Williams" <joedwil at earthlink.net>; "doug sanden"
>>>> <highaspirations at hotmail.com>; "'X3D Graphics public mailing 
>>>> list'"
>>>> <x3d-public at web3d.org>
>>>> Sent: Saturday, June 11, 2016 3:17 AM
>>>> Subject: Re: [x3d-public] [x3d] V4.0 Opendiscussion/workshopon 
>>>> X3DHTML
>>>> integration
>>>>
>>>>
>>>>> Hi Joe,
>>>>>
>>>>> Thanks for the good discussion.
>>>>>
>>>>> But may I humbly suggest that you read our Xflow papers. We have
>>>>> looked at this problem very carefully and tried different options,
>>>>> with Xflow as the result of this. Xflow describes a generic data
>>>>> modeling and processing framework as a direct extension to HTML. It
>>>>> is even independent of XML3D conceptually. I would even call it the
>>>>> most important part of our system.
>>>>>
>>>>> Its data representation is very close to GPU buffers (by design),
>>>>> and we have shown that it can be mapped efficiently to very
>>>>> different acceleration APIs (including plain JS, asm.js, ParallelJS,
>>>>> vertex shaders, and others). The reason is that it is a pure
>>>>> functional design that is hard to do with X3D Routes for various
>>>>> reasons (discussed in the papers).
>>>>>
>>>>> Morphing, skinning, and image processing were actually the first
>>>>> examples that we showed how to do with the system. Hanim can be 
>>>>> easily
>>>>> mapped to Xflow (e.g. by a WebComponent), from where it can take
>>>>> advantage of the generic HW acceleration without any further 
>>>>> coding.
>>>>> All
>>>>> that is left on the JS side is a bit of bookkeeping, attribute 
>>>>> updates,
>>>>> and the WebGL calls.
>>>>>
>>>>>
>>>>> And with regard to the need for native implementations as raised by
>>>>> you earlier: On a plain PC we could do something like 40-50 (would
>>>>> have to check the exact number) fairly detailed animated
>>>>> characters, each with their own morphing and skinning, in a single
>>>>> scene in pure JS, even WITHOUT ANY ACCELERATION AT ALL, including
>>>>> rendering and all other stuff. Yes, faster and more efficient is
>>>>> always better, but (i) we should not do any premature optimizations
>>>>> unless we can show that it would actually make a big difference,
>>>>> and (ii) this will not be easy, as you should not underestimate the
>>>>> performance of JS with a really good JIT compiler and well-formed
>>>>> code.
>>>>>
>>>>> Unless we have SHOWN that there is a real problem, that JS 
>>>>> CANNOT be
>>>>> pushed further AND there is sufficient significant interest by a 
>>>>> large
>>>>> user base, the browser vendors will not even talk to us about a 
>>>>> native
>>>>> implementation. And maintaining a fork is really, really hard --  
>>>>> trust
>>>>> me that is where we started :-(.
>>>>>
>>>>> And even more importantly, if we should ever get there we had
>>>>> better have an implementation core that is as small as possible.
>>>>> Many node types, each with its own implementation, is not the right
>>>>> design for that (IMHO). Something like Xflow that many nodes and
>>>>> routes could be mapped to seems a much more useful and maintainable
>>>>> option.
>>>>>
>>>>>
>>>>> Right now we are extending shade.js in a project with Intel to 
>>>>> also
>>>>> handle the Xflow processing algorithms to be more general, which 
>>>>> should
>>>>> allow us to have a single code that targets all possible 
>>>>> acceleration
>>>>> targets. Right now you still need separate implementations for 
>>>>> each
>>>>> target.
>>>>>
>>>>>
>>>>> Best,
>>>>>
>>>>> Philipp
>>>>>
>>>>> Am 10.06.2016 um 19:26 schrieb Joe D Williams:
>>>>>>> e6 html integration > route/event/timer
>>>>>>
>>>>>> These are details solved declaratively in .x3d using the
>>>>>> abstractions of node event ins and outs, timesensors, routes,
>>>>>> interpolators, shaders, and Script directOutput...
>>>>>>
>>>>>> in the <x3d> ... </x3d> environment, everything that is not
>>>>>> 'built-in' is created programmatically using 'built-in' event
>>>>>> emitters, event listeners, event processors, time devices,
>>>>>> scripts, etc.
>>>>>>
>>>>>> So the big difference in event systems might be that in .html the
>>>>>> time answers what time it was in the world when you last checked
>>>>>> the time, while in .x3d it is the time to use in creation of the
>>>>>> next frame. So this declarative vs. programmatic just sets a low
>>>>>> limit on how much animation automation ought to be included. Both
>>>>>> .x3d and <x3d> ... </x3d> should preserve the basic event graph
>>>>>> declarations.
>>>>>>
>>>>>> This brings up where to stash these organizable lists of routes
>>>>>> and interpolators. The user code of .html is not really designed
>>>>>> for these detailed constructions, and its basic premise is that
>>>>>> the document should contain content, not masses of markup. So, are
>>>>>> timers and interpolators and routes as used in .x3d content or
>>>>>> markup? If they are markup, then it is clear they should be in
>>>>>> style. Besides, in my trusty text editor this gives me an easily
>>>>>> read, independent event graph to play with.
>>>>>>
>>>>>> Next, if I need to step outside the 'built-in' convenience
>>>>>> abstractions, or simply to communicate with other players in the
>>>>>> DOM, which happens to be the current embodiment of my <x3d> ...
>>>>>> </x3d>, then I need DOM event stuff and probably a DOM script to
>>>>>> deal with DOM events set on x3d syntax.
>>>>>>
>>>>>> So, to me this is the first step: Decide how much of the 
>>>>>> automation is
>>>>>> actually included within <x3d> ... </x3d>?
>>>>>>
>>>>>> Maybe one example is x3d hanim, where we define real skin vertices
>>>>>> bound to real joints to achieve realistic deformable skin. In
>>>>>> HAnim the first level of animation complexity is a realistic
>>>>>> skeleton of joints with simple binding of shapes to segments in a
>>>>>> hierarchy, where joint center rotations can produce realistic
>>>>>> movements of the skeleton. As a joint center rotates, its children
>>>>>> segments and joints move as expected for the skeleton dynamics.
>>>>>> For seamless animations across segment shapes, the technique is to
>>>>>> bind each skin vertex to one or more joint objects, then move the
>>>>>> skin some weighted displacement as the joint(s) center(s) rotate.
>>>>>>
>>>>>> To document this completely in human-readable and editable form,
>>>>>> as is the goal of .x3d HAnim, is very tedious, but that is exactly
>>>>>> how it is actually finally computed in the wide world of rigging,
>>>>>> and it is computationally intensive. Thus, it makes sense for
>>>>>> <x3d> ... </x3d> to support shapes bound to segments that are
>>>>>> children of joints but not
>>>>>> demand full support for deformable skin. Hopefully the 
>>>>>> javascript
>>>>>> programmers that are now building the basic foundations to 
>>>>>> support x3d
>>>>>> using webgl features will prove me wrong, but without very high
>>>>>> performance support for reasonable density deformable skin, 
>>>>>> this does
>>>>>> not need to be supported in the (2.) html environment. Of 
>>>>>> course
>>>>>> standalone and embeddable players can do this because they will 
>>>>>> have
>>>>>> access to the high performance code and acceleration that may 
>>>>>> not be
>>>>>> available in .html with webgl.
>>>>>>
>>>>>> Thanks for thinking about this stuff.
>>>>>>
>>>>>> Joe
>>>>>>
>>>>>> http://www.hypermultimedia.com/x3d/hanim/hanimLOA3A8320130611Allanimtests.x3dv
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> http://www.hypermultimedia.com/x3d/hanim/hanimLOA3A8320130611Allanimtests.txt
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> http://www.hypermultimedia.com/x3d/hanim/JoeH-AnimKick1a.x3dv
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> ----- Original Message ----- From: "doug sanden"
>>>>>> <highaspirations at hotmail.com>
>>>>>> To: "'X3D Graphics public mailing list'" <x3d-public at web3d.org>
>>>>>> Sent: Friday, June 10, 2016 7:03 AM
>>>>>> Subject: Re: [x3d-public] [x3d] V4.0 Opendiscussion/workshopon 
>>>>>> X3DHTML
>>>>>> integration
>>>>>>
>>>>>>
>>>>>> 3-step 'Creative Strategy'
>>>>>> http://cup.columbia.edu/book/creative-strategy/9780231160520
>>>>>> https://sites.google.com/site/airdrieinnovationinstitute/creative-strategy
>>>>>>
>>>>>>
>>>>>> 1. break it down (into problem elements)
>>>>>> 2. search (other domains for element solutions)
>>>>>> 3. recombine (element solutions into total solution)
>>>>>>
>>>>>> e - problem element
>>>>>> d - domain offering solution(s) to problem elements
>>>>>> e-d matrix
>>>>>> ______d1________d2______d3__________d4
>>>>>> e1
>>>>>> e2
>>>>>> e3
>>>>>> e4
>>>>>>
>>>>>> Applied to what I think is the overall problem: 'which v4
>>>>>> technologies/specifications' or 'gaining consensus on v4 before
>>>>>> siggraph'.
>>>>>> I don't know if that's the only problem or _the_ problem, so 
>>>>>> this will
>>>>>> be more of an exercise to see if Creative Strategy works in the 
>>>>>> real
>>>>>> world, by using what I can piece together from what you're saying
>>>>>> as an example.
>>>>>> Then I'll leave it to you guys to go through the 3 steps for 
>>>>>> whatever
>>>>>> the true problems are.
>>>>>> Problem: v4 specification finalization
>>>>>> Step1 break it down:
>>>>>> e1 continuity/stability in changing/shifting and multiplying 
>>>>>> target
>>>>>> technologies
>>>>>> e2 html integration > protos
>>>>>> e3 html integration > proto scripts
>>>>>> e4 html integration > inline vs Dom
>>>>>> e5 html integration > node/component simplification
>>>>>> e6 html integration > route/event/timer
>>>>>> e7 html integration > feature simplification ie SAI
>>>>>> e8 siggraph promotion opportunity, among/against competing 3D
>>>>>> formats /
>>>>>> tools
>>>>>>
>>>>>> Step 2 search other domains
>>>>>> d1 compiler domain > take a high-level cross platform language 
>>>>>> and
>>>>>> compile it for target CPU ARM, x86, x64
>>>>>> d2 wrangling: opengl extension wrangler domain > add extensions 
>>>>>> to 15
>>>>>> year old opengl32.dll to make it modern opengl
>>>>>> d3 polyfill: web browser technologies > polyfill - program 
>>>>>> against an
>>>>>> assumed modern browser, and use polyfill.js to discover current
>>>>>> browser
>>>>>> capabilities and fill in any gaps by emulating
>>>>>> d4 unrolling: mangled-name copies pasted into same scope - 
>>>>>> don't know
>>>>>> what domain its from, but what John is doing when 
>>>>>> proto-expanding, its
>>>>>> like what freewrl did for 10 years for protos
>>>>>> d5 adware / iframe / webcomponents > separate scopes
>>>>>> -
>>>>>> https://blogs.windows.com/msedgedev/2015/07/14/bringing-componentization-to-the-web-an-overview-of-web-components/
>>>>>>
>>>>>>
>>>>>>
>>>>>> -
>>>>>> http://www.benfarrell.com/2015/10/26/es6-web-components-part-1-a-man-without-a-framework/
>>>>>>
>>>>>>
>>>>>>
>>>>>> - React, dojo, polymer, angular, es6, webcomponents.js 
>>>>>> polyfill,
>>>>>> shadow dom, import, same-origin iframe
>>>>>>
>>>>>> d6 server > when a client wants something, and says what its
>>>>>> capabilities are, then serve them what they are capable of 
>>>>>> displaying
>>>>>> d7 viral videos
>>>>>>
>>>>>> (its hard to do a table in turtle graphics, so I'll do e/d 
>>>>>> lists)
>>>>>> e1 / d1 compiler: have one high level format which is 
>>>>>> technology
>>>>>> agnostic, with LTS long term stablility, and compile/translate 
>>>>>> to all
>>>>>> other formats which are more technology dependent. Need to 
>>>>>> show/prove
>>>>>> the high level can be transformed/ is transformable to all 
>>>>>> desired
>>>>>> targets like html Dom variants, html Inline variants, and 
>>>>>> desktop
>>>>>> variants
>>>>>> e4 / d1 including compiling to inline or dom variants
>>>>>> e1 / d6 server-time transformation or selection: gets client
>>>>>> capabilities in request, and either
>>>>>> - a) transforms a generic format to target capabilities variant 
>>>>>> or
>>>>>> - b) selects from among prepared variants to match target
>>>>>> capabilities,
>>>>>> e5 / d1 compiler: can compile static geometry from high level
>>>>>> nurbs/extrusions to indexedfaceset depending on target 
>>>>>> capabilities,
>>>>>> need to have a STATIC keyword in case extrusion is animated?
>>>>>> e6 / d1 compiler transforms routes, timers, events to target 
>>>>>> platform
>>>>>> equivalents
>>>>>>
>>>>>> e5 / d2 extension wrangling > depending on capabilities of 
>>>>>> target,
>>>>>> during transform stage, substitute Protos for high level nodes, 
>>>>>> when
>>>>>> target browser can't support the component/level directly
>>>>>> e5 / d3 polyfill > when a target doesn't support some feature,
>>>>>> polyfill
>>>>>> so it runs enough to support a stable format
>>>>>>
>>>>>> e8 / d7 create viral video of web3d consortium
>>>>>> deciding/trying-to-decide
>>>>>> something. Maybe creative strategy step 3: decide among matrix
>>>>>> elements
>>>>>> at a session at siggraph with audience watching or 
>>>>>> participating in
>>>>>> special "help us decide" siggraph session.
>>>>>>
>>>>>> e2 / d5 webcomponents and proto scripts: create scripts with/in
>>>>>> different webcomponent scope;
>>>>>> e3 / d5 webcomponents make Scene and ProtoInstance both in a
>>>>>> webcomponent, with hierarchy of webcomponents for nested
>>>>>> protoInstances.
>>>>>> e2+e3 / d4 unrolling + protos > unroll protos and scripts a)
>>>>>> upstream/on
>>>>>> server or transformer b) in client on demand
>>>>>>
>>>>>> e7 / d6 server simplifies features ie SAI or not based on 
>>>>>> client
>>>>>> capabilities
>>>>>> e7 / d1 compiler compiles out features not supported by target 
>>>>>> client
>>>>>>
>>>>>> ____d1___d2___d3___d4___d5___d6___d7
>>>>>> e1 __ * _______________________ *
>>>>>> e2 _________________ *___*
>>>>>> e3 _________________ *___*
>>>>>> e4 _*
>>>>>> e5 _*_____*____*
>>>>>> e6 _*
>>>>>> e7 _*_________________________*
>>>>>> e8 ________________________________*
>>>>>>
>>>>>> Or something like that,
>>>>>> But would Step 3 creatively recombine element solutions into 
>>>>>> total
>>>>>> solution still result in deadlock? Or can that deadlock be one 
>>>>>> of the
>>>>>> problem elements, and domain solutions applied? For example 
>>>>>> does the
>>>>>> compiler/transformer workflow idea automatically solve current
>>>>>> deadlock,
>>>>>> or does deadlock need more specific attention ie breakdown into
>>>>>> elements
>>>>>> of deadlock, searching domains for solutions to deadlock 
>>>>>> elements etc.
>>>>>>
>>>>>> HTH
>>>>>> -Doug
>>>>>>
>>>>>>
>>>>>>
>>>>>> _______________________________________________
>>>>>> x3d-public mailing list
>>>>>> x3d-public at web3d.org
>>>>>> http://web3d.org/mailman/listinfo/x3d-public_web3d.org
>>>>>
>>>
>>
>
> -- 
>
> -------------------------------------------------------------------------
> Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH
> Trippstadter Strasse 122, D-67663 Kaiserslautern
>
> Geschäftsführung:
>  Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender)
>  Dr. Walter Olthoff
> Vorsitzender des Aufsichtsrats:
>  Prof. Dr. h.c. Hans A. Aukes
>
> Sitz der Gesellschaft: Kaiserslautern (HRB 2313)
> VAT/USt-Id.Nr.: DE 148 646 973, Steuernummer:  19/673/0060/3
> ---------------------------------------------------------------------------
> 



