[x3d-public] [x3d] V4.0 Open discussion/workshop on X3D HTML integration
doug sanden
highaspirations at hotmail.com
Fri Jun 10 11:53:07 PDT 2016
e9 some things are too compute-intensive for certain target capabilities, i.e. HAnim in Joe's web browser
d1 compiling
d4 unrolling
e9/d1,d4
https://www.mixamo.com/motions pre-animated characters
- you would declare at the high level with full animation, but for restricted targets you run an intermediate process that plays the animations and takes snapshots, regenerating them as simple interpolations (time, IndexedFaceSet).
high-level universal LTS format => series of processes to generate target variants => Joe's web browser
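As a rough sketch of that intermediate baking step (names and types here are only illustrative, not any particular player's API), the idea is to sample the fully animated mesh at fixed time steps and emit the key/keyValue arrays of a CoordinateInterpolator plus one shared index list:

// Hedged sketch of baking a rich (e.g. HAnim-skinned) animation into
// simple interpolator data for restricted targets. evaluateMesh is a
// hypothetical callback supplied by the full-featured runtime.
type Vec3 = [number, number, number];

interface BakedAnimation {
  key: number[];        // normalized times in 0..1
  keyValue: Vec3[][];   // one coordinate set per key
  coordIndex: number[]; // fixed topology shared by every frame
}

function bakeToInterpolator(
  evaluateMesh: (t: number) => { points: Vec3[]; coordIndex: number[] },
  cycleInterval: number,
  samples: number // assumed >= 2
): BakedAnimation {
  const key: number[] = [];
  const keyValue: Vec3[][] = [];
  let coordIndex: number[] = [];
  for (let i = 0; i < samples; i++) {
    const fraction = i / (samples - 1);
    const frame = evaluateMesh(fraction * cycleInterval);
    key.push(fraction);
    keyValue.push(frame.points);
    coordIndex = frame.coordIndex; // topology assumed constant over time
  }
  return { key, keyValue, coordIndex };
}

The output maps directly onto a TimeSensor plus CoordinateInterpolator routed to a Coordinate node, which even a minimal target can play back.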
________________________________________
From: Joe D Williams <joedwil at earthlink.net>
Sent: June 10, 2016 11:26 AM
To: doug sanden; 'X3D Graphics public mailing list'
Subject: Re: [x3d-public] [x3d] V4.0 Open discussion/workshop on X3D HTML integration
> e6 html integration > route/event/timer
These are details solved declaratively in .x3d using the
abstractions of node event ins and outs, TimeSensors, ROUTEs,
interpolators, shaders, and Script directOutput...
In the <x3d> ... </x3d> environment, everything that is not 'built-in'
is created programmatically using 'built-in' event emitters, event
listeners, event processors, time devices, scripts, etc.
So the big difference in event systems might be that in .html the time
answers what time it was in the world when you last checked the time,
while in .x3d it is the time to use in creating the next frame. So
this declarative vs. programmatic distinction just sets a lower limit
on how much animation automation ought to be included. Both .x3d and
<x3d> ... </x3d> should preserve the basic event graph declarations.
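(As a rough browser-side illustration of that timing difference, using only plain DOM APIs: Date.now() answers what time it was when you last checked, while the requestAnimationFrame timestamp is the time the browser intends to use for composing the next frame, which is the closer analogue to the TimeSensor model.)

// Polling style: wall-clock time at the moment of the check.
function pollStyle(): void {
  setInterval(() => {
    console.log('polled time', Date.now());
  }, 16);
}

// Frame style: the timestamp handed to the callback is the time for
// the upcoming frame, shared by everything that runs in that frame.
function frameStyle(): void {
  const step = (frameTime: DOMHighResTimeStamp) => {
    console.log('next-frame time', frameTime);
    requestAnimationFrame(step);
  };
  requestAnimationFrame(step);
}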
This brings up where to stash these organizable lists of routes and
interpolators.
The user code of .html is not really designed for these detailed
constructions, and its basic premise is that the document should
contain content, not masses of markup. So, are timers and
interpolators and routes as used in .x3d content or markup? If they
are markup, then it is clear they should live in style. Besides, in my
trusty text editor this gives me an easily read, independent event graph
to play with.
Next, if I need to step outside the 'built-in' convenience
abstractions, or simply to communicate with other players in the DOM,
which happens to be the current embodiment of my <x3d> ... </x3d>,
then I need DOM event stuff and probably a DOM script to deal with
DOM events set on x3d syntax.
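A minimal sketch of that DOM-script side, assuming a DOM-integrated <x3d> scene whose nodes are ordinary elements (the element ids and the attribute-mutation approach are only illustrative):

// Listen for a DOM event on x3d syntax and push the result back into
// the declarative markup by mutating an attribute. Ids are hypothetical.
const shape = document.getElementById('myShape');
const material = document.getElementById('myMaterial');

shape?.addEventListener('click', () => {
  // Talk to the scene the way any other DOM player would.
  material?.setAttribute('diffuseColor', '1 0 0');
});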
So, to me this is the first step: Decide how much of the automation is
actually included within <x3d> ... </x3d>?
Maybe one example is X3D HAnim, where we define real skin vertices
bound to real joints to achieve realistic deformable skin. In HAnim
the first level of animation complexity is a realistic skeleton of
joints with simple binding of shapes to segments in a hierarchy, where
joint center rotations can produce realistic movements of the skeleton.
As a joint center rotates, its child segments and joints move
as expected for the skeleton dynamics. For seamless animation across
segment shapes, the technique is to bind each skin vertex to one
or more joint objects, then move the skin by some weighted displacement
as the joint center(s) rotate.
To document this completely in human-readable and editable form, as is
the goal of .x3d HAnim, is very tedious, but that is exactly how it is
finally computed in the wider world of rigging, and it is
computationally intensive. Thus, it makes sense for <x3d> ... </x3d>
to support shapes bound to segments that are children of joints but
not to demand full support for deformable skin. Hopefully the JavaScript
programmers who are now building the basic foundations to support x3d
using WebGL features will prove me wrong, but without very high-performance
support for reasonable-density deformable skin, this does
not need to be supported in the (2.) html environment. Of course
standalone and embeddable players can do this because they will have
access to the high-performance code and acceleration that may not be
available in .html with WebGL.
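For reference, the per-vertex weighted-displacement computation described above amounts to something like the following sketch (essentially linear blend skinning; the matrix helper and the binding layout are simplified placeholders, not HAnim player code):

type Vec3 = [number, number, number];
type Mat4 = number[]; // 16 numbers, column-major; placeholder representation

// Hypothetical helper: apply a 4x4 transform to a point.
function transformPoint(m: Mat4, p: Vec3): Vec3 {
  const [x, y, z] = p;
  return [
    m[0] * x + m[4] * y + m[8] * z + m[12],
    m[1] * x + m[5] * y + m[9] * z + m[13],
    m[2] * x + m[6] * y + m[10] * z + m[14],
  ];
}

interface Binding { joint: number; weight: number; }

// restPoints: skin coordinates in the bind pose.
// jointMatrices: current-pose transform times inverse-bind, per joint.
// bindings[v]: the joints influencing vertex v and their weights
// (roughly what HAnim expresses with skinCoordIndex/skinCoordWeight).
function deformSkin(
  restPoints: Vec3[],
  jointMatrices: Mat4[],
  bindings: Binding[][]
): Vec3[] {
  return restPoints.map((p, v) => {
    const out: Vec3 = [0, 0, 0];
    for (const { joint, weight } of bindings[v]) {
      const q = transformPoint(jointMatrices[joint], p);
      out[0] += weight * q[0];
      out[1] += weight * q[1];
      out[2] += weight * q[2];
    }
    return out; // weights are assumed to sum to 1 per vertex
  });
}

Doing this per vertex, per frame, for a reasonable-density mesh is exactly the cost that argues for either GPU acceleration or leaving full deformable skin out of the minimal html profile.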
Thanks for thinking about this stuff.
Joe
http://www.hypermultimedia.com/x3d/hanim/hanimLOA3A8320130611Allanimtests.x3dv
http://www.hypermultimedia.com/x3d/hanim/hanimLOA3A8320130611Allanimtests.txt
http://www.hypermultimedia.com/x3d/hanim/JoeH-AnimKick1a.x3dv
----- Original Message -----
From: "doug sanden" <highaspirations at hotmail.com>
To: "'X3D Graphics public mailing list'" <x3d-public at web3d.org>
Sent: Friday, June 10, 2016 7:03 AM
Subject: Re: [x3d-public] [x3d] V4.0 Open discussion/workshop on X3D HTML
integration
3-step 'Creative Strategy'
http://cup.columbia.edu/book/creative-strategy/9780231160520
https://sites.google.com/site/airdrieinnovationinstitute/creative-strategy
1. break it down (into problem elements)
2. search (other domains for element solutions)
3. recombine (element solutions into total solution)
e - problem element
d - domain offering solution(s) to problem elements
e-d matrix
______d1________d2______d3__________d4
e1
e2
e3
e4
Applied to what I think is the overall problem: 'which v4
technologies/specifications' or 'gaining consensus on v4 before
siggraph'.
I don't know if that's the only problem or _the_ problem, so this will
be more of an exercise to see if Creative Strategy works in the real
world, by using what I can piece together from what you're saying as
an example.
Then I'll leave it to you guys to go through the 3 steps for whatever
the true problems are.
Problem: v4 specification finalization
Step 1: break it down
e1 continuity/stability in changing/shifting and multiplying target
technologies
e2 html integration > protos
e3 html integration > proto scripts
e4 html integration > inline vs Dom
e5 html integration > node/component simplification
e6 html integration > route/event/timer
e7 html integration > feature simplification ie SAI
e8 siggraph promotion opportunity, among/against competing 3D formats
/ tools
Step 2 search other domains
d1 compiler domain > take a high-level cross-platform language and
compile it for target CPUs: ARM, x86, x64
d2 wrangling: OpenGL extension wrangler domain > add extensions to a
15-year-old opengl32.dll to make it modern OpenGL
d3 polyfill: web browser technologies > polyfill - program against an
assumed modern browser, and use polyfill.js to discover current
browser capabilities and fill in any gaps by emulating (see the sketch
after this list)
d4 unrolling: mangled-name copies pasted into the same scope - don't know
what domain it's from, but it is what John is doing when proto-expanding;
it's like what freewrl did for 10 years for protos
d5 adware / iframe / webcomponents > separate scopes
-
https://blogs.windows.com/msedgedev/2015/07/14/bringing-componentization-to-the-web-an-overview-of-web-components/
-
http://www.benfarrell.com/2015/10/26/es6-web-components-part-1-a-man-without-a-framework/
- React, dojo, polymer, angular, es6, webcomponents.js polyfill,
shadow DOM, import, same-origin iframe
d6 server > when a client requests something and says what its
capabilities are, serve it what it is capable of displaying
d7 viral videos
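A small sketch of the d3 polyfill idea, assuming a WebGL-backed <x3d> runtime (the capability names and the skinning fallback are only illustrative):

// Detect capabilities up front and emulate or downgrade where needed,
// so the same declarative content still loads on an older browser.
interface Capabilities {
  webgl2: boolean;
  floatTextures: boolean;
}

function detectCapabilities(canvas: HTMLCanvasElement): Capabilities {
  const gl2 = canvas.getContext('webgl2');
  const gl: WebGL2RenderingContext | WebGLRenderingContext | null =
    gl2 ?? canvas.getContext('webgl');
  return {
    webgl2: gl2 !== null,
    floatTextures:
      gl2 !== null ||
      (gl !== null && gl.getExtension('OES_texture_float') !== null),
  };
}

// Pick a rendering path per feature instead of assuming a modern browser.
function pickSkinningPath(caps: Capabilities): 'gpu-skinning' | 'cpu-skinning' {
  return caps.floatTextures ? 'gpu-skinning' : 'cpu-skinning';
}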
(it's hard to do a table in turtle graphics, so I'll do e/d lists)
e1 / d1 compiler: have one high-level format which is technology
agnostic, with LTS long-term stability, and compile/translate it to all
other formats which are more technology-dependent. Need to show/prove
that the high-level format can be transformed to all desired
targets like html DOM variants, html Inline variants, and desktop
variants
e4 / d1 including compiling to inline or dom variants
e1 / d6 server-time transformation or selection: the server gets client
capabilities in the request, and either
- a) transforms a generic format to a target-capabilities variant, or
- b) selects from among prepared variants to match the target
capabilities (see the small server sketch after this list)
e5 / d1 compiler: can compile static geometry from high-level
NURBS/extrusions to IndexedFaceSet depending on target capabilities;
need a STATIC keyword in case the extrusion is animated?
e6 / d1 compiler transforms routes, timers, events to target platform
equivalents
e5 / d2 extension wrangling > depending on capabilities of the target,
during the transform stage, substitute Protos for high-level nodes when
the target browser can't support the component/level directly
e5 / d3 polyfill > when a target doesn't support some feature,
polyfill so it runs enough to support a stable format
e8 / d7 create a viral video of the web3d consortium
deciding/trying to decide something. Maybe creative strategy step 3:
decide among matrix elements at a session at siggraph with the audience
watching or participating in a special "help us decide" siggraph
session.
e2 / d5 webcomponents and proto scripts: create scripts with/in
different webcomponent scope;
e3 / d5 webcomponents: make Scene and ProtoInstance each a
webcomponent, with a hierarchy of webcomponents for nested
ProtoInstances.
e2+e3 / d4 unrolling + protos > unroll protos and scripts a)
upstream/on server or transformer b) in client on demand
e7 / d6 server simplifies features, i.e. SAI or not, based on client
capabilities
e7 / d1 compiler compiles out features not supported by target client
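A small sketch of the e1 / d6 server idea, assuming a Node.js front end and a hypothetical X-X3D-Profile request header (the file names and the header are illustrative only):

// Serve the variant that matches the client's declared capabilities,
// falling back to the most compatible one.
import * as http from 'http';
import * as fs from 'fs';
import * as path from 'path';

const VARIANTS: Record<string, string> = {
  full: 'scene-full.x3d',          // deformable skin, NURBS, etc.
  interactive: 'scene-simple.x3d', // baked interpolators, IndexedFaceSet only
};

const server = http.createServer((req, res) => {
  const profile = String(req.headers['x-x3d-profile'] ?? 'interactive');
  const file = VARIANTS[profile] ?? VARIANTS['interactive'];
  fs.readFile(path.join(__dirname, file), (err, data) => {
    if (err) {
      res.writeHead(404);
      res.end('variant not found');
      return;
    }
    res.writeHead(200, { 'Content-Type': 'model/x3d+xml' });
    res.end(data);
  });
});

server.listen(8080);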
______d1___d2___d3___d4___d5___d6___d7
e1____*________________________*
e2___________________*____*
e3___________________*____*
e4____*
e5____*____*____*
e6____*
e7____*________________________*
e8__________________________________*
Or something like that.
But would Step 3 (creatively recombine element solutions into a total
solution) still result in deadlock? Or can that deadlock itself be one
of the problem elements, with domain solutions applied? For example,
does the compiler/transformer workflow idea automatically resolve the
current deadlock, or does the deadlock need more specific attention,
i.e. breaking it down into elements of deadlock, searching domains for
solutions to those elements, etc.?
HTH
-Doug
_______________________________________________
x3d-public mailing list
x3d-public at web3d.org
http://web3d.org/mailman/listinfo/x3d-public_web3d.org