[x3d-public] [x3d] V4.0 Open discussion/workshop on X3D HTML integration

Philipp Slusallek philipp.slusallek at dfki.de
Thu Jun 16 22:06:36 PDT 2016


[Resent, as the original email was apparently not relayed through the list]

Hi Joe,

Thanks for the good discussion.

But may I humbly suggest that you read our Xflow papers. We have looked
at this problem very carefully and tried different options, with Xflow
as the result. Xflow describes a generic data modeling and processing
framework as a direct extension to HTML. It is even conceptually
independent of XML3D. I would call it the most important part of our
system.

Its data representation is very close to GPU buffers (by design), and we
have shown that it can be mapped efficiently to very different
acceleration APIs (including plain JS, asm.js, ParallelJS, vertex
shaders, and others). The reason is that it is a purely functional
design, something that is hard to achieve with X3D Routes for various
reasons (discussed in the papers).

Morphing, skinning, and image processing were actually the first
examples we showed with the system. H-Anim can easily be mapped to Xflow
(e.g. by a WebComponent), from where it can take advantage of the
generic HW acceleration without any further coding. All that is left on
the JS side is a bit of bookkeeping, attribute updates, and the WebGL
calls.
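
For illustration, this is roughly what such a declarative dataflow looks
like in Xflow markup (a sketch only; element and operator names are
approximated from the examples in the Xflow papers, with made-up data):

    <data id="morphedPositions"
          compute="position = xflow.morph(position, positionAdd, weight)">
        <float3 name="position">0 0 0  1 0 0  0 1 0</float3>
        <float3 name="positionAdd">0 0 1  0 0 1  0 0 1</float3>
        <float name="weight">0.5</float>
    </data>

Because the operator is declared as a pure function of its named inputs,
the runtime is free to evaluate it in plain JS, asm.js, or a vertex
shader without the author having to change the markup.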


And with regard to the need for native implementations that you raised
earlier: on a plain PC we could handle something like 40-50 (I would
have to check the exact number) fairly detailed animated characters,
each with its own morphing and skinning, in a single scene in pure JS,
even WITHOUT ANY ACCELERATION AT ALL, including rendering and everything
else. Yes, faster and more efficient is always better, but (i) we should
not do premature optimization unless we can show that it would actually
make a big difference, and (ii) this will not be easy, because you
should not underestimate the performance of JS with a really good JIT
compiler and well-formed code.

Unless we have SHOWN that there is a real problem, that JS CANNOT be
pushed further, AND that there is sufficiently significant interest from
a large user base, the browser vendors will not even talk to us about a
native implementation. And maintaining a fork is really, really hard --
trust me, that is where we started :-(.

And even more importantly, if we should ever get there, we had better
have an implementation core that is as small as possible. Many node
types, each with its own implementation, is not the right design for
that (IMHO). Something like Xflow, that many nodes and routes could be
mapped to, seems a much more useful and maintainable option.


Right now we are extending shade.js in a project with Intel so that it
also handles the Xflow processing algorithms in a more general way,
which should allow us to have a single code base that can be compiled
for all possible acceleration targets. Today you still need a separate
implementation for each target.


Best,

	Philipp

On 10.06.2016 at 19:26, Joe D Williams wrote:
>> e6 html integration > route/event/timer
> 
> These are details solved declaratively in .x3d using the abstractions
> of node eventIns and eventOuts, TimeSensors, routes, interpolators,
> shaders, and Script directOutput...
> 
> In the <x3d> ... </x3d> environment, everything that is not 'built-in' is
> created programmatically using 'built-in' event emitters, event
> listeners, event processors, time devices, scripts, etc.
> 
> So the big difference in event systems might be that in .html the time
> answers what time it was in the world when you last checked, while in
> .x3d it is the time to use in creating the next frame. So this
> declarative vs. programmatic distinction just sets a lower limit on how
> much animation automation ought to be included. Both .x3d and <x3d> ...
> </x3d> should preserve the basic event graph declarations.
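> 
> To make that distinction concrete in plain DOM terms (a minimal sketch,
> not tied to any particular binding): Date.now() only tells you when you
> happened to look at the clock, while the timestamp handed to a
> requestAnimationFrame callback is the time to use for the frame about to
> be produced, which is the role a TimeSensor fills declaratively.
> 
>     const cycleInterval = 5000;  // ms, playing the part of TimeSensor.cycleInterval
>     function step(frameTime) {   // frameTime: time for the frame being produced
>       const fraction = (frameTime % cycleInterval) / cycleInterval;  // ~ fraction_changed
>       // ...interpolate with 'fraction' and update the animated attribute here...
>       requestAnimationFrame(step);
>     }
>     requestAnimationFrame(step);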
> 
> This brings up where to stash these organizable lists of routes and
> interpolators.
> The user code of .html is not really designed for these detailed
> constructions, and its basic premise is that the document should contain
> content, not masses of markup. So, are timers and interpolators and
> routes as used in .x3d content or markup? If they are markup, then it is
> clear they should live in style. Besides, in my trusty text editor this
> gives me an easily read, independent event graph to play with.
> 
> Next, if I need to step outside the 'built-in' convenience abstractions,
> or simply to communicate with other players in the DOM, which happens to
> be the current embodiment of my <x3d> ... </x3d>, then I need DOM event
> machinery and probably a DOM script to deal with DOM events set on x3d syntax.
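> 
> For example (a sketch only; the element and attribute names here are
> hypothetical, not taken from any particular <x3d> binding):
> 
>     // React to a DOM event on an element inside <x3d> ... </x3d>,
>     // then push the result back into the scene as an attribute update.
>     document.querySelector('x3d transform#door')
>       .addEventListener('click', (ev) => {
>         ev.target.setAttribute('rotation', '0 1 0 1.57');
>       });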
> 
> So, to me this is the first step: Decide how much of the automation is
> actually included within <x3d> ... </x3d>?
> 
> Maybe one example is X3D HAnim, where we define real skin vertices bound
> to real joints to achieve realistic deformable skin. In HAnim the first
> level of animation complexity is a realistic skeleton of joints, with
> simple binding of shapes to segments in a hierarchy, where joint center
> rotations can produce realistic movements of the skeleton. As a joint
> center rotates, its child segments and joints move as expected for the
> skeleton dynamics. For seamless animation across segment shapes, the
> technique is to bind each skin vertex to one or more joints, then move
> the skin by some weighted displacement as the joint centers rotate.
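> 
> For concreteness, a minimal JS sketch of that weighted-displacement step
> (plain linear blend skinning; the data layout here is illustrative, not
> the actual HAnim field names):
> 
>     // joints[j]: 4x4 world matrix of joint j (column-major, inverse bind pose folded in)
>     // bindPos[v]: skin vertex v in the bind pose as [x, y, z]
>     // weights[v]: array of { joint, w } influences for vertex v
>     function skinVertices(bindPos, weights, joints, out) {
>       for (let v = 0; v < bindPos.length; v++) {
>         const p = bindPos[v];
>         let x = 0, y = 0, z = 0;
>         for (const { joint, w } of weights[v]) {
>           const m = joints[joint];
>           x += w * (m[0] * p[0] + m[4] * p[1] + m[8]  * p[2] + m[12]);
>           y += w * (m[1] * p[0] + m[5] * p[1] + m[9]  * p[2] + m[13]);
>           z += w * (m[2] * p[0] + m[6] * p[1] + m[10] * p[2] + m[14]);
>         }
>         out[3 * v] = x; out[3 * v + 1] = y; out[3 * v + 2] = z;
>       }
>     }
> 
> Running that loop every frame over a dense mesh is exactly the part that
> is computationally intensive, and also the part that maps naturally onto
> a vertex shader.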
> 
> To document this completely in human-readable and editable form, as is
> the goal of .x3d HAnim, is very tedious, but that is exactly how it is
> actually computed in the wide world of rigging, and it is
> computationally intensive. Thus, it makes sense for <x3d> ... </x3d> to
> support shapes bound to segments that are children of joints, but not to
> demand full support for deformable skin. Hopefully the javascript
> programmers who are now building the basic foundations to support x3d
> using webgl features will prove me wrong, but without very high
> performance support for reasonable-density deformable skin, this does
> not need to be supported in the (2.) html environment. Of course
> standalone and embeddable players can do this because they will have
> access to the high performance code and acceleration that may not be
> available in .html with webgl.
> 
> Thanks for thinking about this stuff.
> 
> Joe
> 
> http://www.hypermultimedia.com/x3d/hanim/hanimLOA3A8320130611Allanimtests.x3dv
> 
> 
> http://www.hypermultimedia.com/x3d/hanim/hanimLOA3A8320130611Allanimtests.txt
> 
> 
> http://www.hypermultimedia.com/x3d/hanim/JoeH-AnimKick1a.x3dv
> 
> 
> 
> 
> 
> ----- Original Message ----- From: "doug sanden"
> <highaspirations at hotmail.com>
> To: "'X3D Graphics public mailing list'" <x3d-public at web3d.org>
> Sent: Friday, June 10, 2016 7:03 AM
> Subject: Re: [x3d-public] [x3d] V4.0 Open discussion/workshop on X3D HTML
> integration
> 
> 
> 3-step 'Creative Strategy'
> http://cup.columbia.edu/book/creative-strategy/9780231160520
> https://sites.google.com/site/airdrieinnovationinstitute/creative-strategy
> 1. break it down (into problem elements)
> 2. search (other domains for element solutions)
> 3. recombine (element solutions into total solution)
> 
> e - problem element
> d - domain offering solution(s) to problem elements
> e-d matrix
> ______d1________d2______d3__________d4
> e1
> e2
> e3
> e4
> 
> Applied to what I think is the overall problem: 'which v4
> technologies/specifications' or 'gaining consensus on v4 before siggraph'.
> I don't know if that's the only problem or _the_ problem, so this will
> be more of an exercise to see if Creative Strategy works in the real
> world, by using what I can piece together from what you're saying as an
> example.
> Then I'll leave it to you guys to go through the 3 steps for whatever
> the true problems are.
> Problem: v4 specification finalization
> Step1 break it down:
> e1 continuity/stability in changing/shifting and multiplying target
> technologies
> e2 html integration > protos
> e3 html integration > proto scripts
> e4 html integration > inline vs Dom
> e5 html integration > node/component simplification
> e6 html integration > route/event/timer
> e7 html integration > feature simplification ie SAI
> e8 siggraph promotion opportunity, among/against competing 3D formats /
> tools
> 
> Step 2 search other domains
> d1 compiler domain > take a high-level cross platform language and
> compile it for target CPU ARM, x86, x64
> d2 wrangling: opengl extension wrangler domain > add extensions to a
> 15-year-old opengl32.dll to make it modern opengl
> d3 polyfill: web browser technologies > polyfill - program against an
> assumed modern browser, and use polyfill.js to discover current browser
> capabilities and fill in any gaps by emulating (a minimal sketch of the
> pattern follows after this list)
> d4 unrolling: mangled-name copies pasted into the same scope - I don't
> know what domain it's from, but it's what John is doing when
> proto-expanding; it's like what freewrl did for 10 years for protos
> d5 adware / iframe / webcomponents > separate scopes
> -
> https://blogs.windows.com/msedgedev/2015/07/14/bringing-componentization-to-the-web-an-overview-of-web-components/
> 
> -
> http://www.benfarrell.com/2015/10/26/es6-web-components-part-1-a-man-without-a-framework/
> 
> - React, dojo, polymer, angular, es6, webcomponents.js polyfill, shadow
> dom, import, same-origin iframe
> 
> d6 server > when a client wants something, and says what its
> capabilities are, then serve them what they are capable of displaying
> d7 viral videos
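> 
> As a reminder of how small the d3 polyfill pattern can be (generic JS,
> nothing x3d-specific):
> 
>     // Detect a capability; emulate it only if the browser lacks it.
>     if (!Array.prototype.includes) {
>       Array.prototype.includes = function (item) {
>         return this.indexOf(item) !== -1;
>       };
>     }
> 
> The same detect-then-emulate shape would apply to filling gaps in an
> <x3d> feature set.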
> 
> (it's hard to do a table in turtle graphics, so I'll do e/d lists)
> e1 / d1 compiler: have one high level format which is technology
> agnostic, with LTS long term stability, and compile/translate to all
> other formats which are more technology dependent. Need to show/prove
> the high level can be transformed/ is transformable to all desired
> targets like html Dom variants, html Inline variants, and desktop variants
> e4 / d1 including compiling to inline or dom variants
> e1 / d6 server-time transformation or selection: gets client
> capabilities in request, and either
> - a) transforms a generic format to target capabilities variant or
> - b) selects from among prepared variants to match target capabilities,
> e5 / d1 compiler: can compile static geometry from high level
> nurbs/extrusions to indexedfaceset depending on target capabilities,
> need to have a STATIC keyword in case extrusion is animated?
> e6 / d1 compiler transforms routes, timers, events to target platform
> equivalents
> 
> e5 / d2 extension wrangling > depending on capabilities of target,
> during transform stage, substitute Protos for high level nodes, when
> target browser can't support the component/level directly
> e5 / d3 polyfill > when a target doesn't support some feature, polyfill
> so it runs enough to support a stable format
> 
> e8 / d7 create viral video of web3d consortium deciding/trying-to-decide
> something. Maybe creative strategy step 3: decide among matrix elements
> at a session at siggraph with the audience watching or participating in
> a special "help us decide" siggraph session.
> 
> e2 / d5 webcomponents and proto scripts: create scripts with/in
> different webcomponent scope;
> e3 / d5 webcomponents make Scene and ProtoInstance both in a
> webcomponent, with hierarchy of webcomponents for nested protoInstances.
> e2+e3 / d4 unrolling + protos > unroll protos and scripts a) upstream/on
> server or transformer b) in client on demand
> 
> e7 / d6 server simplifies features ie SAI or not based on client
> capabilities
> e7 / d1 compiler compiles out features not supported by target client
> 
> ____d1___d2___d3___d4___d5___d6___d7
> e1__*________________________*
> e2_________________*____*
> e3_________________*____*
> e4__*
> e5__*____*____*
> e6__*
> e7__*________________________*
> e8________________________________*
> 
> Or something like that.
> But would Step 3, creatively recombining element solutions into a total
> solution, still result in deadlock? Or can that deadlock be one of the
> problem elements, with domain solutions applied to it? For example, does
> the compiler/transformer workflow idea automatically resolve the current
> deadlock, or does deadlock need more specific attention, i.e. breakdown
> into elements of deadlock, searching domains for solutions to those
> elements, etc.?
> 
> HTH
> -Doug
> 
> 
> 
> _______________________________________________
> x3d-public mailing list
> x3d-public at web3d.org
> http://web3d.org/mailman/listinfo/x3d-public_web3d.org
> 

-- 

-------------------------------------------------------------------------
Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH
Trippstadter Strasse 122, D-67663 Kaiserslautern

Management Board:
  Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Chairman)
  Dr. Walter Olthoff
Chairman of the Supervisory Board:
  Prof. Dr. h.c. Hans A. Aukes

Registered office: Kaiserslautern (HRB 2313)
VAT ID No.: DE 148 646 973, tax number: 19/673/0060/3
---------------------------------------------------------------------------

