[x3d-public] V4.0 Open discussion/workshop on X3D HTML integration

Philipp Slusallek philipp.slusallek at dfki.de
Thu Jun 16 22:07:18 PDT 2016


[Resent, as the original email was apparently not relayed through the list]

Hi Joe,

I believe it can be illuminating even just to read a paper to understand
the principles of other technologies and consider them for your own
design. Also, some more openness to other available technology besides
X3D would actually help the discussion here.

But we actually do have an implementation as well, which is used in many
of our projects: See for example
https://xml3d.github.io/xml3d-examples/examples/xflowSkin/xflow-skin.html for
simple skinned and animated characters that are handled using Xflow to
describe the required processing on the triangle meshes. These are
animated characters exported to XML3D/Xflow directly from a well-known game.

This is just one of many ways in which Xflow can be used. Really, the
main point of Xflow is its ability to describe very different processing
operations on various data sets in a scene in a declarative way. There
are also examples for image processing (e.g.
https://xml3d.github.io/xml3d-examples/examples/xflowIP/histogramm.html), simple
Augmented Reality
(https://xml3d.github.io/xml3d-examples/examples/xflowAR/ar_flying_teapot.html),
and others using the exact same basic technique. Our ongoing work will
make this even simpler and support different HW mappings better.

This is made possible by the generic data model in XML3D that I have
alluded to several times in my email. It is already useful as a nice
abstraction of GPU buffers, but it also allows for supporting programmable
shading. This generic data design additionally allows for creating
abstractions that would be much harder (if not impossible) to build on
the specialized approach that X3D is based on. However, it does work the
other way round: you can map the specialized nodes of X3D to the more
general and generic functionality of XML3D/Xflow.

I think this highlights the difference between our approaches: HAnim has
selected one specific way of describing and handling animation and
skinning, which requires a node-specific implementation. On the other
hand, we provide a small core engine for any such processing and expose
it in a compact and declarative way. The engine can then analyze the
resulting flow-graph, optimize it, and map it to the available HW,
independent of what the specific computation ends up representing. On
top of this, one can then use WebComponents to map any specific
representation (such as HAnim) to this generic representation.
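To make the "small core engine plus declarative description" idea concrete, here is a minimal sketch in plain JavaScript. All names here are hypothetical and this is not the Xflow API: the point is only that operators are registered once in a tiny core, while the processing graph itself is plain data that the engine evaluates in dependency order.

```javascript
// Minimal declarative flow-graph sketch (hypothetical, not the Xflow API).
// The operators form a small core; the graph itself is plain data.
const operators = {
  add: (a, b) => a.map((v, i) => v + b[i]),
  scale: (a, s) => a.map(v => v * s),
};

// Evaluate a graph: each node names an operator and its inputs,
// which are either source buffers or the ids of other nodes.
function evaluate(graph, sources) {
  const cache = { ...sources };
  function resolve(id) {
    if (id in cache) return cache[id];
    const node = graph[id];
    const args = node.inputs.map(resolve);          // pull dependencies first
    return (cache[id] = operators[node.op](...args));
  }
  return resolve(graph.output);
}

// A tiny "displace then scale" processing graph over a vertex buffer.
const graph = {
  displaced: { op: "add", inputs: ["positions", "offsets"] },
  result: { op: "scale", inputs: ["displaced", "factor"] },
  output: "result",
};
const out = evaluate(graph, {
  positions: [1, 2, 3],
  offsets: [10, 10, 10],
  factor: 2,
});
// out is [22, 24, 26]
```

Because the graph is data rather than code, an engine like this can inspect it as a whole before running it, which is what makes optimization and retargeting to different backends possible.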

We also did a careful analysis and comparison to X3D/HAnim in our papers
(see below for the links). There are several issues that we identify: the
need to duplicate the geometry in order to apply different animations to
the same model, or the fact that HAnim cannot handle tangent vectors as
part of the model, which may be required if a model has anisotropic
materials that need the transformed tangent vectors as vertex attributes
for the shader. It is very straightforward to add such processing to an
Xflow graph. There are more arguments in the paper.
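As an illustration of the tangent-vector point, here is a hedged sketch of linear blend skinning in plain JavaScript that runs tangents through exactly the same weighted transform as positions. Rotation-only joints keep it short, and all names are made up for this example:

```javascript
// Linear blend skinning sketch: positions AND tangents are just
// per-vertex attributes run through the same weighted joint transforms.
// (Illustrative only; names are hypothetical.)

// Apply a 3x3 rotation matrix (row-major) to a vec3.
function mulMat3Vec3(m, v) {
  return [
    m[0] * v[0] + m[1] * v[1] + m[2] * v[2],
    m[3] * v[0] + m[4] * v[1] + m[5] * v[2],
    m[6] * v[0] + m[7] * v[1] + m[8] * v[2],
  ];
}

// For each vertex: out = sum_j weight_j * (R_j * attr).
// Directions like tangents ignore translation anyway, which is why
// rotation-only joints suffice for the illustration.
function skinAttribute(attr, joints, weights, rotations) {
  return attr.map((v, i) => {
    let out = [0, 0, 0];
    for (let k = 0; k < joints[i].length; k++) {
      const r = mulMat3Vec3(rotations[joints[i][k]], v);
      out = out.map((c, a) => c + weights[i][k] * r[a]);
    }
    return out;
  });
}

// One vertex bound 50/50 to two joints: identity and 90 degrees about Z.
const rotations = [
  [1, 0, 0, 0, 1, 0, 0, 0, 1],   // identity
  [0, -1, 0, 1, 0, 0, 0, 0, 1],  // rotate 90 degrees about Z
];
const positions = [[1, 0, 0]];
const tangents = [[0, 1, 0]];
const joints = [[0, 1]];
const weights = [[0.5, 0.5]];

const skinnedPos = skinAttribute(positions, joints, weights, rotations);
const skinnedTan = skinAttribute(tangents, joints, weights, rotations);
// skinnedPos[0] is [0.5, 0.5, 0]; skinnedTan[0] is [-0.5, 0.5, 0]
```

Adding tangents (or any other per-vertex attribute) costs one more call to the same operator, which is the generality being argued for above.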

We also argue in the paper that Xflow is expressive enough to handle
HAnim. Doing a full WebComponent implementation for HAnim is left as an
exercise for the reader :-). While certainly useful, we do not see this
as the main target of our research work, sorry. But it should not be a
difficult exercise.
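The lowering step such a WebComponent would perform can be sketched in a few lines: take an HAnim-style nested description of joints with per-vertex bindings and flatten it into the generic per-vertex buffer form (joint indices plus weights) that a data-flow engine or the GPU consumes. The input shape and names below are invented for illustration, not actual HAnim or Xflow syntax:

```javascript
// Sketch of lowering an HAnim-style description into flat, per-vertex
// buffers. The input shape is invented for illustration; real HAnim
// joints carry the same information (skinCoordIndex / skinCoordWeight).
function lowerToBuffers(hanimLike, vertexCount) {
  // One (jointIndex, weight) list per vertex.
  const jointIndices = Array.from({ length: vertexCount }, () => []);
  const jointWeights = Array.from({ length: vertexCount }, () => []);
  hanimLike.joints.forEach((joint, j) => {
    joint.skinCoordIndex.forEach((v, k) => {
      jointIndices[v].push(j);
      jointWeights[v].push(joint.skinCoordWeight[k]);
    });
  });
  return { jointIndices, jointWeights };
}

// Two joints influencing three vertices (made-up data).
const model = {
  joints: [
    { skinCoordIndex: [0, 1], skinCoordWeight: [1.0, 0.5] },
    { skinCoordIndex: [1, 2], skinCoordWeight: [0.5, 1.0] },
  ],
};
const buffers = lowerToBuffers(model, 3);
// buffers.jointIndices is [[0], [0, 1], [1]]
// buffers.jointWeights is [[1], [0.5, 0.5], [1]]
```

Once the bindings are in this per-vertex form, the specialized representation has been fully absorbed into generic buffers and the rest of the pipeline needs no HAnim-specific code.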

BTW, the relevant papers are here:
--
https://graphics.cg.uni-saarland.de/fileadmin/cguds/papers/2012/klein_web3d2012/xflow.pdf
--
https://graphics.cg.uni-saarland.de/fileadmin/cguds/papers/2013/klein_web3d2013/xflow-ar.pdf

There is also an IEEE CG&A extended version of the first paper here:
-- https://www.computer.org/csdl/mags/cg/2013/05/mcg2013050038.pdf


Best,

	Philipp

Am 12.06.2016 um 05:52 schrieb Joe D Williams:
> Hi Philipp,
> 
> I would study some of your work, but please help me establish this
> confidence by showing me what you can do with some relatively complex
> X3D. This is skeleton animation of joints and segments as used
> everywhere (no matter which interfaces are actually exposed by the
> authoring system) and a deformable mesh skin bound to the skeleton and
> each skin vertex bound to one or more joint(s) nodes.
> 
> http://www.web3d.org/documents/specifications/19774/V1.0/HAnim/ObjectInterfaces.html
> 
> 
> Skin animation is achieved by animating the joints in the skeleton's
> joint hierarchy, then weighting each skin vertex displacement according
> to the bound joint(s) rotation (as used everywhere no matter which
> interfaces are actually exposed by the authoring system).
> 
> some basics are here:
> 
> https://en.wikipedia.org/wiki/Skeletal_animation
> 
> This is pretty much what X3D does for either/both segment geometry (none
> on this model) or skin, like this one, and it represents complete
> documentation of the model rigging and animations. Relative to the rest
> of the world of character authoring and animation, X3D covers a lot of
> ground. The only 'problem' I know X3D has is that we do not use
> quaternions for joint animation, which is now more or less the industry
> standard (e.g. in glTF), instead of the axis-angle form used here. Well,
> also see that while the interpolators are linear, the key times may not
> always be at constant intervals.
> 
> A couple of X3D browsers will do this fine, and the free BS Contact is
> my reference.
> 
> This is a 'standard' LOA3 skeleton with skin vertices mostly taken from
> 'standard' surface feature points. Both skeleton and skin are drawn in
> approximately human scale, using the spec example dimensions as a basis.
> I use an IndexedFaceSet for the skin mesh and depend upon the 'standard'
> X3D browser feature of IFS to generate a default texture map so the
> texture stays bound to the skin as it moves.
> 
> Anyway, I hope you can take a look at this because implementation of
> this basic character animation stuff is really not that easy and in the
> past we have seen X3D browser development stall at implementation of
> skeleton-based skin animation. Note the HAnim Displacer node also does
> mesh deformation.
> 
> Example is here:
> 
> http://www.hypermultimedia.com/x3d/hanim/JoeH-AnimKick1a.x3dv
> 
> and attached.
> 
> I can get it in .x3d but this version has better documentation of the
> skin-joint bindings.
> 
> Thanks and Best,
> Joe
> 
> 
> 
> 
> ----- Original Message ----- From: "Philipp Slusallek"
> <philipp.slusallek at dfki.de>
> To: "Joe D Williams" <joedwil at earthlink.net>; "doug sanden"
> <highaspirations at hotmail.com>; "'X3D Graphics public mailing list'"
> <x3d-public at web3d.org>
> Sent: Saturday, June 11, 2016 3:17 AM
> Subject: Re: [x3d-public] [x3d] V4.0 Opendiscussion/workshopon X3DHTML
> integration
> 
> 
>> Hi Joe,
>>
>> Thanks for the good discussion.
>>
>> But may I humbly suggest that you read our Xflow papers. We have looked
>> at this problem very carefully, and Xflow is the result of trying
>> different options. Xflow describes a generic data modeling and
>> processing framework as a direct extension to HTML. It is even
>> independent of XML3D conceptually. I would call it the most
>> important part of our system.
>>
>> Its data representation is very close to GPU buffers (by design), and we
>> have shown that it can be mapped efficiently to very different
>> acceleration APIs (including plain JS, asm.js, ParallelJS, vertex
>> shaders, and others). The reason is that it is a pure functional design,
>> which is hard to achieve with X3D Routes for various reasons (discussed
>> in the papers).
>>
>> Morphing, skinning, and image processing were actually the first
>> examples that we showed with the system. HAnim can be easily
>> mapped to Xflow (e.g. by a WebComponent), from where it can take
>> advantage of the generic HW acceleration without any further coding. All
>> that is left on the JS side is a bit of bookkeeping, attribute updates,
>> and the WebGL calls.
>>
>>
>> And with regard to the need for native implementations as raised by you
>> earlier: on a plain PC we could do something like 40-50 (I would have to
>> check the exact number) fairly detailed animated characters, each with
>> their own morphing and skinning, in a single scene in pure JS, even
>> WITHOUT ANY ACCELERATION AT ALL, including rendering and all other
>> stuff. Yes, faster and more efficient is always better, but (i) we
>> should not do any premature optimization unless we can show that it
>> would actually make a big difference, and (ii) this will not be easy, as
>> you should not underestimate the performance of JS with a really good
>> JIT compiler and well-formed code.
>>
>> Unless we have SHOWN that there is a real problem, that JS CANNOT be
>> pushed further, AND that there is significant interest from a large
>> user base, the browser vendors will not even talk to us about a native
>> implementation. And maintaining a fork is really, really hard -- trust
>> me, that is where we started :-(.
>>
>> And even more importantly, if we should ever get there, we had better
>> have an implementation core that is as small as possible. Many node
>> types, each with its own implementation, is not the right design for
>> that (IMHO). Something like Xflow, which many nodes and routes could be
>> mapped to, seems a much more useful and maintainable option.
>>
>>
>> Right now we are extending shade.js in a project with Intel to also
>> handle the Xflow processing algorithms more generally, which should
>> allow us to have a single code base that targets all possible
>> acceleration targets. Right now you still need separate implementations
>> for each target.
>>
>>
>> Best,
>>
>> Philipp
>>
>> Am 10.06.2016 um 19:26 schrieb Joe D Williams:
>>>> e6 html integration > route/event/timer
>>>
>>> These are details solved declaratively in .x3d, using the abstractions
>>> of node event ins and outs, timesensors, routes, interpolators, shaders,
>>> and Script directOuts...
>>>
>>> In the <x3d> ... </x3d> environment, everything that is not 'built-in' is
>>> created programmatically using 'built-in' event emitters, event
>>> listeners, event processors, time devices, scripts, etc.
>>>
>>> So the big difference in event systems might be that in .html the time
>>> answers what time it was in the world when you last checked the time,
>>> while in .x3d it is the time to use in creation of the next frame. So
>>> this declarative vs programmatic split just sets a lower limit on how
>>> much animation automation ought to be included. Both .x3d and <x3d> ...
>>> </x3d> should preserve the basic event graph declarations.
>>>
>>> This brings up where to stash these organizable lists of routes and
>>> interpolators.
>>> The user code of .html is not really designed for these detailed
>>> constructions, and its basic premise is that the document should contain
>>> content, not masses of markup. So, are timers and interpolators and
>>> routes as used in .x3d content or markup? If they are markup, then it is
>>> clear they should be in style. Besides, in my trusty text editor this
>>> gives me an easily read, independent event graph to play with.
>>>
>>> Next, if I need to step outside the 'built-in' convenience abstractions,
>>> or simply to communicate with other players in the DOM, which happens to
>>> be the current embodiment of my <x3d> ... </x3d>, then I need DOM event
>>> stuff and probably a DOM script to deal with DOM events set on x3d
>>> syntax.
>>>
>>> So, to me this is the first step: Decide how much of the automation is
>>> actually included within <x3d> ... </x3d>?
>>>
>>> Maybe one example is X3D HAnim, where we define real skin vertices bound
>>> to real joints to achieve realistic deformable skin. In HAnim the first
>>> level of animation complexity is a realistic skeleton of joints with
>>> simple binding of shapes to segments in a hierarchy, where joint center
>>> rotations can produce realistic movements of the skeleton. As a joint
>>> center rotates, its child segments and joints move as expected
>>> for the skeleton dynamics. For seamless animations across segment
>>> shapes, the technique is to bind each skin vertex to one or more
>>> joint objects, then move the skin by some weighted displacement as the
>>> joint center(s) rotate.
>>>
>>> To document this completely in human-readable and editable form, as is
>>> the goal of .x3d HAnim, is very tedious, but that is exactly how it is
>>> actually computed in the wide world of rigging, and it is
>>> computationally intensive. Thus, it makes sense for <x3d> ... </x3d> to
>>> support shapes bound to segments that are children of joints, but not
>>> to demand full support for deformable skin. Hopefully the javascript
>>> programmers that are now building the basic foundations to support x3d
>>> using webgl features will prove me wrong, but without very high
>>> performance support for reasonable-density deformable skin, this does
>>> not need to be supported in the html environment. Of course,
>>> standalone and embeddable players can do this because they will have
>>> access to the high performance code and acceleration that may not be
>>> available in .html with webgl.
>>>
>>> Thanks for thinking about this stuff.
>>>
>>> Joe
>>>
>>> http://www.hypermultimedia.com/x3d/hanim/hanimLOA3A8320130611Allanimtests.x3dv
>>>
>>>
>>>
>>> http://www.hypermultimedia.com/x3d/hanim/hanimLOA3A8320130611Allanimtests.txt
>>>
>>>
>>>
>>> http://www.hypermultimedia.com/x3d/hanim/JoeH-AnimKick1a.x3dv
>>>
>>>
>>>
>>>
>>>
>>> ----- Original Message ----- From: "doug sanden"
>>> <highaspirations at hotmail.com>
>>> To: "'X3D Graphics public mailing list'" <x3d-public at web3d.org>
>>> Sent: Friday, June 10, 2016 7:03 AM
>>> Subject: Re: [x3d-public] [x3d] V4.0 Open discussion/workshop on X3D
>>> HTML integration
>>>
>>>
>>> 3-step 'Creative Strategy'
>>> http://cup.columbia.edu/book/creative-strategy/9780231160520
>>> https://sites.google.com/site/airdrieinnovationinstitute/creative-strategy
>>>
>>> 1. break it down (into problem elements)
>>> 2. search (other domains for element solutions)
>>> 3. recombine (element solutions into total solution)
>>>
>>> e - problem element
>>> d - domain offering solution(s) to problem elements
>>> e-d matrix
>>> ______d1________d2______d3__________d4
>>> e1
>>> e2
>>> e3
>>> e4
>>>
>>> Applied to what I think is the overall problem: 'which v4
>>> technologies/specifications' or 'gaining consensus on v4 before
>>> siggraph'.
>>> I don't know if that's the only problem or _the_ problem, so this will
>>> be more of an exercise to see if Creative Strategy works in the real
>>> world, by using what I can piece together from what you're saying as an
>>> example.
>>> Then I'll leave it to you guys to go through the 3 steps for whatever
>>> the true problems are.
>>> Problem: v4 specification finalization
>>> Step1 break it down:
>>> e1 continuity/stability in changing/shifting and multiplying target
>>> technologies
>>> e2 html integration > protos
>>> e3 html integration > proto scripts
>>> e4 html integration > inline vs Dom
>>> e5 html integration > node/component simplification
>>> e6 html integration > route/event/timer
>>> e7 html integration > feature simplification ie SAI
>>> e8 siggraph promotion opportunity, among/against competing 3D formats /
>>> tools
>>>
>>> Step 2 search other domains
>>> d1 compiler domain > take a high-level cross platform language and
>>> compile it for target CPU ARM, x86, x64
>>> d2 wrangling: opengl extension wrangler domain > add extensions to a 15
>>> year old opengl32.dll to make it modern opengl
>>> d3 polyfill: web browser technologies > polyfill - program against an
>>> assumed modern browser, and use polyfill.js to discover current browser
>>> capabilities and fill in any gaps by emulating
>>> d4 unrolling: mangled-name copies pasted into the same scope - don't know
>>> what domain it's from, but it's what John is doing when proto-expanding;
>>> it's like what freewrl did for 10 years for protos
>>> d5 adware / iframe / webcomponents > separate scopes
>>> -
>>> https://blogs.windows.com/msedgedev/2015/07/14/bringing-componentization-to-the-web-an-overview-of-web-components/
>>>
>>>
>>> -
>>> http://www.benfarrell.com/2015/10/26/es6-web-components-part-1-a-man-without-a-framework/
>>>
>>>
>>> - React, dojo, polymer, angular, es6, webcomponents.js polyfill, shadow
>>> dom, import, same-origin iframe
>>>
>>> d6 server > when a client wants something, and says what its
>>> capabilities are, then serve them what they are capable of displaying
>>> d7 viral videos
>>>
>>> (it's hard to do a table in turtle graphics, so I'll do e/d lists)
>>> e1 / d1 compiler: have one high level format which is technology
>>> agnostic, with LTS long term stability, and compile/translate to all
>>> other formats which are more technology dependent. Need to show/prove
>>> the high level can be transformed / is transformable to all desired
>>> targets like html Dom variants, html Inline variants, and desktop
>>> variants
>>> e4 / d1 including compiling to inline or dom variants
>>> e1 / d6 server-time transformation or selection: gets client
>>> capabilities in request, and either
>>> - a) transforms a generic format to target capabilities variant or
>>> - b) selects from among prepared variants to match target capabilities,
>>> e5 / d1 compiler: can compile static geometry from high level
>>> nurbs/extrusions to indexedfaceset depending on target capabilities,
>>> need to have a STATIC keyword in case extrusion is animated?
>>> e6 / d1 compiler transforms routes, timers, events to target platform
>>> equivalents
>>>
>>> e5 / d2 extension wrangling > depending on capabilities of target,
>>> during transform stage, substitute Protos for high level nodes, when
>>> target browser can't support the component/level directly
>>> e5 / d3 polyfill > when a target doesn't support some feature, polyfill
>>> so it runs enough to support a stable format
>>>
>>> e8 / d7 create viral video of web3d consortium deciding/trying-to-decide
>>> something. Maybe creative strategy step 3: decide among matrix elements
>>> at a session at siggraph with audience watching or participating in
>>> special "help us decide" siggraph session.
>>>
>>> e2 / d5 webcomponents and proto scripts: create scripts with/in
>>> different webcomponent scope;
>>> e3 / d5 webcomponents make Scene and ProtoInstance both in a
>>> webcomponent, with hierarchy of webcomponents for nested protoInstances.
>>> e2+e3 / d4 unrolling + protos > unroll protos and scripts a) upstream/on
>>> server or transformer b) in client on demand
>>>
>>> e7 / d6 server simplifies features ie SAI or not based on client
>>> capabilities
>>> e7 / d1 compiler compiles out features not supported by target client
>>>
>>> ____d1___d2___d3___d4___d5___d6___d7
>>> e1 __ * _______________________ *
>>> e2 _________________ *___*
>>> e3 _________________ *___*
>>> e4 _*
>>> e5 _*_____*____*
>>> e6 _*
>>> e7 _*_________________________*
>>> e8 ________________________________*
>>>
>>> Or something like that,
>>> But would Step 3 (creatively recombine element solutions into a total
>>> solution) still result in deadlock? Or can that deadlock be one of the
>>> problem elements, with domain solutions applied? For example, does the
>>> compiler/transformer workflow idea automatically solve the current
>>> deadlock, or does deadlock need more specific attention, ie breakdown
>>> into elements of deadlock, searching domains for solutions to deadlock
>>> elements, etc.?
>>>
>>> HTH
>>> -Doug
>>>
>>>
>>>
>>> _______________________________________________
>>> x3d-public mailing list
>>> x3d-public at web3d.org
>>> http://web3d.org/mailman/listinfo/x3d-public_web3d.org
>>>
>>
>> -- 
>>
>> -------------------------------------------------------------------------
>> Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH
>> Trippstadter Strasse 122, D-67663 Kaiserslautern
>>
>> Geschäftsführung:
>>  Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender)
>>  Dr. Walter Olthoff
>> Vorsitzender des Aufsichtsrats:
>>  Prof. Dr. h.c. Hans A. Aukes
>>
>> Sitz der Gesellschaft: Kaiserslautern (HRB 2313)
>> VAT/USt-Id.Nr.: DE 148 646 973, Steuernummer:  19/673/0060/3
>> ---------------------------------------------------------------------------
>>
>>

-- 

-------------------------------------------------------------------------
Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH
Trippstadter Strasse 122, D-67663 Kaiserslautern

Geschäftsführung:
  Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender)
  Dr. Walter Olthoff
Vorsitzender des Aufsichtsrats:
  Prof. Dr. h.c. Hans A. Aukes

Sitz der Gesellschaft: Kaiserslautern (HRB 2313)
VAT/USt-Id.Nr.: DE 148 646 973, Steuernummer:  19/673/0060/3
---------------------------------------------------------------------------

