[x3d-public] V4.0 Open discussion/workshop on X3D HTML integration

Joe D Williams joedwil at earthlink.net
Mon Jun 13 09:10:37 PDT 2016


>> ... this generic data design also allows  ... the specialized
>> approach that X3D is based on

That is what I hope to hear anyway. X3D does use a generically
specialized approach because it is aimed at a specific application. So
here are some categories of data required to create the humanoid and
animations.

skeleton hierarchy
joint center locations
segment lengths
before animation pose
before animation joint rotations
segment geometries
skin geometry
skin vertex to joint bindings

So, the x3d hanim data design is a pure 3D database, easy to build from
a spreadsheet. For the skeleton, there is a base joint, then segments
connect to other joints in a parent-child system so the thing moves
as expected when rotations are applied to joints. The only reason I can
think of why they called the bones 'segments' instead of 'bones' is
because this type of chained animation using
anchor joint-segment-joint-segment-(etc.)-end effector structures is
very common across many fields besides humanoid simulation. Also this
3D structure gives some handles to attach physics elements.
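
A minimal sketch of that anchor joint-segment chain in X3D classic
encoding may help; most of the hierarchy is elided and the center
values are only illustrative (joint names follow the usual HAnim
convention):

#X3D V3.3 utf8
PROFILE Full

DEF Joe HAnimHumanoid {
  name "Joe"
  skeleton [
    DEF hanim_HumanoidRoot HAnimJoint {
      name "HumanoidRoot"
      center 0 0.824 0.028        # base joint, about crotch level
      children [
        DEF hanim_sacroiliac HAnimJoint {
          name "sacroiliac"
          center 0 0.915 0.002
          children [
            DEF hanim_l_hip HAnimJoint {
              name "l_hip"
              center 0.096 0.912 0    # joint centers set relative to 0 0 0
              children [
                DEF hanim_l_thigh HAnimSegment { name "l_thigh" }
                DEF hanim_l_knee HAnimJoint {
                  name "l_knee"
                  center 0.104 0.487 0.031
                  # ... l_calf Segment, l_ankle Joint, on down to the end effector
                }
              ]
            }
            # ... r_hip, vl5, and the rest of the skeleton
          ]
        }
      ]
    }
  ]
}

Rotating hanim_l_hip moves everything defined below it in the chain,
which is the chained behavior described above.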

In x3d, the segment geometries are children of the segment, so when the
segment moves because its joint is rotated, the geometry moves.
So, in the x3d data model the animated geometry is defined in the
children user code of the segment it represents. This should be a fairly
standard way of including geometry for the character.
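
For example, a thigh segment carrying its own geometry might look like
this fragment (the mesh content is just a placeholder):

DEF hanim_l_thigh HAnimSegment {
  name "l_thigh"
  children [
    Shape {
      appearance Appearance { material Material { } }
      geometry IndexedFaceSet {
        coord Coordinate { point [ ] }   # thigh mesh, modeled in place in figure space
        coordIndex [ ]
      }
    }
  ]
}

The next joint (l_knee) sits beside this Segment inside l_hip's
children, so the same hip rotation carries the thigh geometry with it.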

In x3d the segment lengths are not given directly; instead each length
is the computed distance between the parent and child joint centers (or
between a parent joint and an end effector). For example, the l_thigh
length is just the distance from the l_hip center to the l_knee center.

In x3d the before-animation skeleton pose is sort of a relaxed
attention pose, with all joints at the default rotation, facing +z,
with +y up, and with 0 0 0 at the standing surface between the feet.
This is the animation binding pose, which may be different from the
skin binding pose.

The base joint for the skeleton is about crotch level, maybe the center
of gravity when standing. Then the x y z of the joint centers are also
set relative to 0 0 0. The hanim example typical skeleton dimensions are
from a large sample and include a collection of surface features, such
as an xyz location for the top of the head. So the complete important
dimensions of the skeleton are stored as an attribute of each Joint
node. There may be other ways to do this, but this seems convenient and
is basically what all authoring tools do.
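
Surface feature points such as the top of the head are carried as Site
nodes parented to the appropriate Segment; a small sketch (the
translation value is only illustrative, the name follows the usual
hanim_ convention):

DEF hanim_skull_tip HAnimSite {
  name "skull_tip"
  translation 0 1.75 0    # xyz of the top of the head, in the same 0 0 0 space
}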

If the animation includes a deformable mesh, then each vertex of the
skin geometry needs to be hooked up with one or more Joint nodes that
are responsible for controlling the displacement of that skin vertex as
the skeleton joints are rotated. The skin geometry is defined as a child
of the humanoid, so it all moves when the base joint moves, and each
individual vertex of the skin moves according to the rotation of its
associated joint(s). So the skin is defined as a single mesh that may
be composed from individual geometries.
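
At the Humanoid level that containment looks roughly like this fragment
(one skin mesh sharing a single Coordinate node, everything else
elided):

DEF Joe HAnimHumanoid {
  name "Joe"
  skeleton [ USE hanim_HumanoidRoot ]    # the joint hierarchy sketched earlier
  skin [
    Shape {
      appearance Appearance { material Material { } }
      geometry IndexedFaceSet {
        coord DEF SkinCoord Coordinate { point [ ] }   # every skin vertex, in figure space
        coordIndex [ ]
      }
    }
  ]
  skinCoord USE SkinCoord    # same Coordinate node, so the joints can displace its points
}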

The binding of each vertex to one or more joints is given in each
Joint node by listing the number of the vertex in its order of
appearance in the skin geometry definition, along with a weight that
allows computation of the radial displacement to move the vertex as
the associated joint(s) are rotated. So, the individual Joint nodes
hold the skin binding and weight data. I guess there could be other
ways of coding this, like a freestanding list of each vertex and its
controlling joints, but that path was not chosen.
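
So a Joint carrying its share of the bindings looks like this fragment
(the indices and weights are made up for illustration):

DEF hanim_l_knee HAnimJoint {
  name "l_knee"
  center 0.104 0.487 0.031
  skinCoordIndex  [ 211 212 213 214 ]    # positions of these vertices in the skin Coordinate list
  skinCoordWeight [ 1.0 0.5 0.5 0.25 ]   # how strongly this joint's rotation displaces each one
  # a vertex like 212 can also appear in l_hip's skinCoordIndex with the remaining weight
}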

Seem complex? Well, how else can it be done? You must know the complete
details about the skeleton and skin, or else you have no luck, or you
turn it over to a black box. If you use this sort of technology, then
this is what it takes to document the techniques.

In the x3d standard, the animation is given by listing the controls,
such as touch and time sensors. For each joint to be animated, an
orientation interpolator or script, along with a list of routes that
describe the flow of time to each interpolator and the result from
each interpolator to the Joint it controls, completes the animation
event system.
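
Sketched in classic encoding, one animated joint's wiring is just this
(the key times and rotations are illustrative):

DEF Touched TouchSensor { }
DEF Timer   TimeSensor  { cycleInterval 2 }
DEF KneeRot OrientationInterpolator {
  key      [ 0 0.5 1 ]                          # key times, as fractions of the cycle
  keyValue [ 1 0 0 0, 1 0 0 1.2, 1 0 0 0 ]      # axis-angle rotation at each key time
}

ROUTE Touched.touchTime      TO Timer.set_startTime
ROUTE Timer.fraction_changed TO KneeRot.set_fraction
ROUTE KneeRot.value_changed  TO hanim_l_knee.set_rotation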

So that is my version of how the hanim character 'rigging' and
animation data is presented as user code. Especially in hanim there
really aren't many abstractions. It is very concrete because the model
wants to be as complete and realistic as possible.

again > Hanim has
> selected one specific way of describing and handling animation and
> skinning, which requires a node-specific implementation.

No, you just have to recognize the node and know what to do with the
data.
Animation is usually keyframe interpolation, meaning a set of data
that includes a list of times and a list of what the data should be at
each of those times. The idea is that if you need to create the scene
at some point between the key times, then a linear interpolation
between adjacent key data can be computed. This is the way everybody
does realtime. There can be some differences that simplify things if
you just want to make a video. But again, if you need interpolators,
there are not many easier or more straightforward declarative ways to
define user code for a keyframe interpolator.

As for skinning, maybe you would rather have a separate container that
just lists each vertex and the elements that control it. What is a
better way to document this connection than the way it is done in x3d
hanim?

And yes, the skeleton has at its core a node-specific implementation
that contains the hierarchy of joints. It is important to define the
hierarchy in the user code so it can be verified against a 'standard'
hierarchy, which is part of figuring out whether the character can do
'standard' animations, for example.

So it is just a matter of names of functionality and the data for that
functionality. And the not-small matter of putting that functionality
in a structure and syntax so that the data can be easily read,
analyzed, and maybe extracted in a 'standard' form and used according
to the processing styles of the supporting 3D application. HAnim made a
giant step forward when, back in the day, they decided to put the
complete skeleton model and all the data to run the thing directly and
unambiguously in the user code. This made the model concrete and
verifiable as well as personal. Before that, all they could do was
submit lists of elements and data to the big hanim simulator in the
sky and it would send back results.

Some built-in convenience nodes are certainly needed at some point. Is
there a simple way to define keyframe interpolation, or do I start with
a script template that depends upon DOM events?

Thanks and Best,
Joe


----- Original Message ----- 
From: "Joe D Williams" <joedwil at earthlink.net>
To: "doug sanden" <highaspirations at hotmail.com>; "'X3D Graphics public
mailing list'" <x3d-public at web3d.org>; "Philipp Slusallek"
<philipp.slusallek at dfki.de>
Sent: Sunday, June 12, 2016 3:59 PM
Subject: Re: [x3d-public] ] V4.0 Opendiscussion/workshopon
X3DHTMLintegration


Hi, Philipp,

> https://xml3d.github.io/xml3d-examples/examples/xflowSkin/xflow-skin.html
> for
> simple skinned and animated characters

I don't see it. There are things jumping around, but from the code I
think these are not skeletons with skin, just geometries dragged from
frame to frame. Maybe the code is in the protos? It looks like it could
be generated by something that used skeletal animation but just
exported geometry for some keyframes. Anyway, I can't find the desired
interfaces, like how the skeleton is composed, how the skin is bound,
how I control the animations, and whether my personal animations stand
a chance of working in those rigs. All the questions I consider basic
are not there, or are very far down in the reading. So show me the code
for a skeleton, please.

From the spec, it is important that the skeleton be well defined in
terms of names, locations, and interfaces. To me, the great thing
about the x3d representation is the clarity about the naming and
location of features, and even an initial pose, so that animations can
be easily transported between characters.

> Hanim has
> selected one specific way of describing and handling animation and
> skinning, which requires a node-specific implementation.

Right, hanim documented the best practices for handling the skeleton,
animation, and skinning. I mean, for years x3d has done it the same way
because these are the parameters for the way that everybody does it.

So, it started long ago with the idea that researchers needed a
standardized skeleton that would serve for producing a computable
replacement for a mechanical armature in humanoid simulations. With
medical and robotics folks also involved, they decided to pick a
realistic representation that was widely accepted. The hanim and X3D
standards use as the example a 'standard' humanoid with 'typical'
dimensions in a realistic humanoid hierarchy. This was easy for VRML
and X3D, with a Humanoid container holding the skeleton and skin and
some other stuff.

The skeleton is a realistic hierarchy of Joints, Segments, and Sites.
Defining the default initial pose was not easy but, finally, the choice
was probably an artifact of the method used to obtain the greatest
share of samples to define typical joint and surface feature
locations. Anyway, some of the names have changed (segment instead of
bone) and some under-the-covers stuff has been exposed, but basically
x3d hanim is industry-standard best practice for complete documentation
of a realtime animated character.

Later this has evolved for skeleton structures to serve as the basic
model for motion capture data and as the corresponding structure for
the mocap playback model.

I mean, this hanim has been the world standard for transportability of
basic structures and basic functionality. Wouldn't you expect to get
something that represents the core factors for what most all realtime
character animation tools would give you when you start with any
default (fantastic that there are now so many) humanoid or biped or
something of that category? Of course, and that is true. See X3D HAnim
LOA2. Some names are changed, but that is the generic skeleton.

The names may be changed or some hidden interfaces exposed, but if you
look closely you will see that x3d hanim does indeed represent the
complete documentation required to build and animate the character.
That can be important when you wish to carry your work from one
commercial or open product to another. I mean, you used to have to beg
for bindings and animation curves. At some authoring levels sometimes
you can't even see that stuff.

Whatever the authoring system's internal data forms, if it rigs skin,
then there may or may not be a human-readable and keystroke-editable
listing of the skin vertex bindings and weights. X3D just says that
this very basic stuff has to be in the file in a logical place and a
reasonable form. Any authoring system worthy of your trust should be
able to give you that list, just in case you wanted to work with
another tool and use your old rigging. Why is it so carefully defined
in x3d hanim? Because that is the best way to preserve that type of
data, since basically everybody has to do it that way down at the
metal, to move the points to positions that depend upon what time
appears in the next frame.

That's just the way it is. The basic data, in close to executable form,
readable and editable, is what x3d hanim requires. Since it is so
basic, the data should be able to be exchanged between most any set of
authoring tools, and it is. Of course there might be some new
technique(s) because these things advance, but those techniques either
remain proprietary or have not yet made it into the public open source
community, so they would not appear in X3D.

No outright challenge here, but look at what you get when you start
with the default humanoid in any authoring system. Some might hook up
the joints slightly differently, with other names, or use some other
space than the 'standard' hanim humanoid space, but the basic goal is
realism. Hey, I think you get the best results when everything is drawn
in 'standard' human space, dimensioned for your preferences. Like the
hanim end-effector surface features are there because experimenters
wanted to be able to define an actual location in human space relative
to the skeleton. That was where the virtual doctor could touch the
virtual surgical tool. Anyway, by the time the standard was set, it
was pretty much decided that real machines would use quats to animate,
but X3D stayed with axis-angle as the minimum requirement for
transporting realtime animations (realtime always needs interpolators
and all the inbetweens, so sorry, euler angles).

A Transform extended to a Joint adds some technical features, and the
hierarchical structure of Joint, connecting Segment, and Site(s) for
surface and internal features is all standard vrml/x3d. Using the
names Humanoid, Joint, Segment, and Site for the major
functionalities of the basic humanoid, with geometries bound to
segments, is accomplished by extending Transform using simple
prototypes. To do the skin needs some pretty standard script code to
move the skin points as the skeleton is animated.
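
In VRML97 terms that prototype is nothing exotic; a stripped-down
sketch of the idea, with the field list trimmed to the minimum (X3D
classic encoding spells the field access types differently, but the
shape is the same):

PROTO Joint [
  exposedField SFString   name     ""
  exposedField SFVec3f    center   0 0 0
  exposedField SFRotation rotation 0 0 1 0
  exposedField MFNode     children [ ]
] {
  Transform {
    center   IS center
    rotation IS rotation
    children IS children
  }
}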

I mean that all the information about the mesh and the binding, and
what is supposed to happen when it is supposed to happen, is very
nicely composed. Of course x3d hanim is always interested in new names
and locations and styles or techniques that are missing from anywhere
in x3d, but basically it is all there. This reflects the data that is
actually used to create the character and its realtime animations, in
human-readable form. And it matches up with detailed visualization
technologies like medical and cad and physics, and is completely
collada friendly.

As I said, the example I am interested in exploring is relatively
simple and from what I have seen (with conversion from axis-angle to
unit quats) can probably be represented losslessly in glTF.

> But this generic data design also allows for creating these
> abstractions that would be much harder (if not impossible) to do
> with
> the specialized approach that X3D is based on.

X3D is a generic data design because it defines generic forms of the
data needed to make and animate a common character. The data is indeed
generic, and no character animation system that can produce animated
characters is missing any of this data. Absent proprietary technology,
they all use a skeleton, they all have geometry bound to connecting
things, and they all use the same skin bindings.

What is specialized? The names and hierarchy? Well, the names are
probably specialized, but the hierarchy and bindings are not. If I read
the above right, then _if_ the generic data design has a hard time with
the x3d approach of containing the data, then the generic data design
has big problems. I don't think that is what you said, but what part
of the x3d data design is harder? Overall, hanim is a very generic
data design using very generic 3D hierarchies. Hanim is not at all an
unusual or non-generic scenegraph structure or data structure, so I
don't understand the problem.

Besides, please look at some browsers that do a great job with x3d
hanim. There were several more before they went missing.

http://www.hypermultimedia.com/x3d/hanim/JoeH-AnimKick1a.txt

is the text version of the example I am most interested in, in classic
encoding because I think it is easier to read. Don't use word wrap.

In reality, I don't care how the data is stored for runtime execution;
I care about the readability of the documentation created at
authortime. Sure, X3D HAnim may take a while to learn to read because
the structure is complicated, but the time is not wasted, because these
are the types of data common to most all efforts at humanoid animation.

One piece of automation also used in character animation is precise,
time-driven animation of parts of a geometry, like when a piece of
skin needs to move independently of any joint rotations. In HAnim this
is done by the Displacer. You tell it which points to move and how much
to move them, then send it a weight. This is an important little tool.
Again, the data and technique just represent a common way to do it.
How would your project define such an operation?
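
For reference, a Displacer fragment looks something like this (the
name, indices, and offsets are made up); it sits in the displacers
field of the Segment (or Joint) that owns those vertices, and routing a
float to its weight drives the deformation:

DEF Smile HAnimDisplacer {
  name "smile_action"
  coordIndex    [ 40 41 42 43 ]                     # which vertices to move
  displacements [ 0 0.004 0.002, 0 0.006 0.002,
                  0 0.006 0.002, 0 0.004 0.002 ]    # direction and maximum offset per vertex
  weight 0                                          # 0 = at rest; route values here to animate
}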

Thanks and Best,
Joe


----- Original Message ----- 
From: "Philipp Slusallek" <philipp.slusallek at dfki.de>
To: "Joe D Williams" <joedwil at earthlink.net>; "doug sanden"
<highaspirations at hotmail.com>; "'X3D Graphics public mailing list'"
<x3d-public at web3d.org>
Sent: Sunday, June 12, 2016 12:21 AM
Subject: Re: [x3d-public]] V4.0 Opendiscussion/workshopon X3DHTML
integration


> Hi Joe,
>
> I believe it may even be illuminating to just read a paper to
> understand
> the principles of other technologies and consider them for your own
> design. Also, some more openness to other available technology
> besides
> X3D would actually help the discussion here.
>
> But we actually do have an implementation as well, which is used in
> many
> of our projects: See for example
> https://xml3d.github.io/xml3d-examples/examples/xflowSkin/xflow-skin.html
> for
> simple skinned and animated characters that are handled using Xflow
> to
> describe the required processing on the triangle meshes. These are
> animated characters exported to XML3D/Xflow directly from a
> well-known game.
>
> This is just one of many ways of how Xflow can be used. Really, the
> main
> point of Xflow is the ability to describe very different processing
> operations on various data sets in a scene in a declarative way.
> There
> are also examples for image processing (e.g.
> https://xml3d.github.io/xml3d-examples/examples/xflowIP/histogramm.html),
> simple
> Augmented Reality
> (https://xml3d.github.io/xml3d-examples/examples/xflowAR/ar_flying_teapot.html),
> and others using the exact same basic technique. Our ongoing work
> will
> make this even simpler and support different HW mappings better.
>
> This is made possible by the generic data model in XML3D that I have
> alluded to several times in my email. It is already useful as nice
> abstraction of GPU buffers but also allows for supporting
> programmable
> shading. But this generic data design also allows for creating these
> abstractions that would be much harder (if not impossible) to do
> with
> the specialized approach that X3D is based on. However, it does work
> the
> other way round: You can map the specialized nodes of X3D to the
> more
> general and generic functionality of XML3D/Xflow.
>
> I think this highlights the difference between our approaches: Hanim
> has
> selected one specific way of describing and handling animation and
> skinning, which requires a node-specific implementation. On the
> other
> hand, we provide a small core engine for any such processing and
> expose
> it in a compact and declarative way. The engine can then analyze the
> resulting flow-graph, optimize it, and map it to the available HW
> independent of what the specific computations end up representing. On
> top of this, one can then use WebComponents to map
> any
> specific representation (such as Hanim) to this generic
> representation.
>
> We also did a careful analysis and comparison to X3D/Hanim in our
> papers
> (see below for the links). There are several issues that we
> identify:
> Need to duplicate the geometry to apply different animations to the
> same
> model, or the fact that Hanim cannot handle tangent vectors as part
> of
> the model, which may be required if a model has anisotropic
> materials
> that need the transformed tangent vectors as vertex attributes for
> the
> shader. It is very straight forward to add such processing to an
> Xflow
> graph. There are more arguments in the paper.
>
> We also argue in the paper that Xflow is expressive enough to handle
> Hanim. Doing a full WebComponent implementation for Hanim is left as
> an
> exercise for the reader :-). While certainly useful, we do not see
> this
> as the main target of our research work, sorry. But it should not be
> a
> difficult exercise.
>
> BTW, the relevant papers are here:
> --
> https://graphics.cg.uni-saarland.de/fileadmin/cguds/papers/2012/klein_web3d2012/xflow.pdf
> --
> https://graphics.cg.uni-saarland.de/fileadmin/cguds/papers/2013/klein_web3d2013/xflow-ar.pdf
>
> There is also a IEEE CG&A extended version of the first paper here:
> -- https://www.computer.org/csdl/mags/cg/2013/05/mcg2013050038.pdf
>
>
> Best,
>
> Philipp
>
> Am 12.06.2016 um 05:52 schrieb Joe D Williams:
>> Hi Philipp,
>>
>> I would study some of your work, but please help me establish this
>> confidence by showing me what you can do with some relatively
>> complex
>> X3D. This is skeleton animation of joints and segments as used
>> everywhere (no matter which interfaces are actually exposed by the
>> authoring system) and a deformable mesh skin bound to the skeleton
>> and
>> each skin vertex bound to one or more joint(s) nodes.
>>
>> http://www.web3d.org/documents/specifications/19774/V1.0/HAnim/ObjectInterfaces.html
>>
>>
>> Skin animation is achieved by animating the joints in the
>> skeleton's
>> joint hierarchy then weighting each skin vertex displacement
>> according
>> to the bound joint(s) rotation (as used everywhere no matter which
>> interfaces are actually exposed by the authoring system).
>>
>> some basics are here:
>>
>> https://en.wikipedia.org/wiki/Skeletal_animation
>>
>> is pretty much what X3D does either/both segment geometry (none on
>> this
>> model) or skin, like this one, and represents complete
>> documentation of
>> the model rigging and animations. Relative to the rest of the world
>> of
>> character authoring and animation X3D covers a lot of ground. The
>> only
>> 'problem' I know X3D has is that we do not use quaternions for joint
>> animation, which is now more or less industry glTF standard instead
>> of
>> axis-angle used here. Well, also see that while the interpolators
>> are
>> linear, the keytimes may not always be constant intervals.
>>
>> A couple of X3D browsers will do this fine and BSContact free is my
>> reference.
>>
>> This is a 'standard' LOA3 skeleton with skin vertices mostly taken
>> from
>> 'standard' surface feature points. Both skeleton and skin are drawn
>> in
>> approximately human scale, using the spec example dimensions as a
>> basis.
>> I use an IndexedFaceSet for the skin mesh and depend upon the
>> 'standard'
>> X3D browser feature of IFS to generate a default texture map so the
>> texture stays bound to the skin as it moves.
>>
>> Anyway, I hope you can take a look at this because implementation
>> of
>> this basic character animation stuff is really not that easy and in
>> the
>> past we have seen X3D browser development stall at implementation
>> of
>> skeleton based skin animation. Note the hanim displacer node also
>> does
>> mesh deformation.
>>
>> Example is here:
>>
>> http://www.hypermultimedia.com/x3d/hanim/JoeH-AnimKick1a.x3dv
>>
>> and attached.
>>
>> I can get it in .x3d but this version has better documentation of
>> the
>> skin-joint bindings.
>>
>> Thanks and Best,
>> Joe
>>
>>
>>
>>
>> ----- Original Message ----- From: "Philipp Slusallek"
>> <philipp.slusallek at dfki.de>
>> To: "Joe D Williams" <joedwil at earthlink.net>; "doug sanden"
>> <highaspirations at hotmail.com>; "'X3D Graphics public mailing list'"
>> <x3d-public at web3d.org>
>> Sent: Saturday, June 11, 2016 3:17 AM
>> Subject: Re: [x3d-public] [x3d] V4.0 Opendiscussion/workshopon
>> X3DHTML
>> integration
>>
>>
>>> Hi Joe,
>>>
>>> Thanks for the good discussion.
>>>
>>> But may I humbly suggest that you read our Xflow papers. We have
>>> looked
>>> at this problem very carefully and tried different options with
>>> Xflow as
>>> the result of this. Xflow describes a generic data modeling and
>>> processing framework as a direct extension to HTML. It is even
>>> independent of XML3D conceptually. I would even call it the most
>>> important parts of our system.
>>>
>>> Its data representation is very close to GPU buffers (by design)
>>> and we
>>> have shown that it can be mapped efficiently to very different
>>> acceleration API (including plain JS, asm.js, ParallelJS, vertex
>>> shaders, and others).The reason is that it is a pure functional
>>> design
>>> that is hard to do with X3D Routes for various reasons (discussed
>>> in the
>>> papers).
>>>
>>> Morphing, skinning, and image processing were actually the first
>>> examples that we showed how to do with the system. Hanim can be
>>> easily
>>> mapped to Xflow (e.g. by a WebComponent), from where it can take
>>> advantage of the generic HW acceleration without any further
>>> coding. All
>>> that is left on the JS side is a bit of bookkeeping, attribute
>>> updates,
>>> and the WebGL calls.
>>>
>>>
>>> And with regard to the need of native implementations as raised by
>>> you
>>> earlier: On a plain PC we could do something like 40-50 (would
>>> have to
>>> check the exact number) fairly detailed animated characters, each
>>> with
>>> their own morphing and skinning in a single scene in pure JS, even
>>> WITHOUT ANY ACCELERATION AT ALL, including rendering and all other
>>> stuff. Yes, faster and more efficient is always better, but (i) we
>>> should not do any premature optimizations unless we can show that
>>> it
>>> would actually make a big difference and (ii) this will not be
>>> easy as
>>> you should not underestimate the performance of JS with really
>>> good JIT
>>> compiler and well-formed code.
>>>
>>> Unless we have SHOWN that there is a real problem, that JS CANNOT
>>> be
>>> pushed further AND there is sufficient significant interest by a
>>> large
>>> user base, the browser vendors will not even talk to us about a
>>> native
>>> implementation. And maintaining a fork is really, really hard --
>>> trust
>>> me that is where we started :-(.
>>>
>>> And even more importantly, when we should ever get there we should
>>> better have an implementation core that is as small as possible.
>>> Many
>>> node types each with its own implementation is not the right
>>> design for
>>> that (IMHO). Something like Xflow that many nodes and routes could
>>> be
>>> mapped to seems, a much more useful and maintainable option.
>>>
>>>
>>> Right now we are extending shade.js in a project with Intel to
>>> also
>>> handle the Xflow processing algorithms to be more general, which
>>> should
>>> allow us to have a single code that targets all possible
>>> acceleration
>>> targets. Right now you still need separate implementations for
>>> each
>>> target.
>>>
>>>
>>> Best,
>>>
>>> Philipp
>>>
>>> Am 10.06.2016 um 19:26 schrieb Joe D Williams:
>>>>> e6 html integration > route/event/timer
>>>>
>>>> These are details solved declaratively using .x3d using the
>>>> abstractions
>>>> of node event in and outs, timesensors, routes, interpolators,
>>>> shaders,
>>>> and Script directOuts...
>>>>
>>>> in the <x3d> ... </x3d> environment, everything hat is not
>>>> 'built-in' is
>>>> created programatically using 'built-in' event emitters, event
>>>> listeners, event processors, time devices, scripts, etc.
>>>>
>>>> So the big difference in event systems might be that in .html the
>>>> time
>>>> answers what time was it in the world when you last checked the
>>>> time,
>>>> while in .x3d it is the time to use in creation of the next
>>>> frame. So
>>>> this declarative vs programatic just sets a low limit on how much
>>>> animation automation ought to be included. Both .x3d and <x3d>
>>>> ,,,
>>>> </x3d> should preserve the basic event graph declarations.
>>>>
>>>> This brings up where to stash these organizable lists of routes
>>>> and
>>>> interpolators.
>>>> The user code of .html is not really designed for these detailed
>>>> constructions and its basic premise is that the document should
>>>> contain
>>>> content, not masses of markup. So, are timers and interpolators
>>>> and
>>>> routes as used in .x3d content or markup? If they are markup,
>>>> then it is
>>>> clear they should be in style. Besides, in my trusty text editor
>>>> this
>>>> gives me a easily read independent event graph to play with.
>>>>
>>>> Next, if I need to step outside the 'built-in' convenience
>>>> abstractions,
>>>> or simply to communicate with other players in the DOM which
>>>> happens to
>>>> be the current embeddiment of my <x3d> ,,, </x3d> then I need DOM
>>>> event
>>>> stuffs and probably a DOM script to deal with DOM events set on
>>>> x3d
>>>> syntax.
>>>>
>>>> So, to me this is the first step: Decide how much of the
>>>> automation is
>>>> actually included within <x3d> ... </x3d>?
>>>>
>>>> Maybe one example is x3d hanim where we define real skin vertices
>>>> bound
>>>> to real joints to achieve realistic deformable skin. In HAnim the
>>>> first
>>>> level of animation complexity is a realistic skeleton of joints
>>>> with
>>>> simple binding of shapes to segments in a hierarchy where joint
>>>> center
>>>> rotations can produce realistic movements of the skeleton. As a
>>>> joint
>>>> center rotates then its children segments and joints move as
>>>> expected
>>>> for the skeleton dynamics. For seamless animations across segment
>>>> shapes, then the technique is to bind each skin vertex to one or
>>>> more
>>>> joint objects, then move the skin some weighted displacement as
>>>> the
>>>> joint(s) center(s) rotates.
>>>>
>>>> To document this completely in human-readable and editable form,
>>>> as is
>>>> the goal of .x3d HAnim, is very tedious, but that is exactly how
>>>> it is
>>>> actually finally computed in the wide world of rigging, and it is
>>>> computationally intensive. Thus, it makes sense for <x3d> ...
>>>> </x3d> to
>>>> support shapes bound to segments that are children of joints but
>>>> not
>>>> demand full support for deformable skin. Hopefully the javascript
>>>> programmers that are now building the basic foundations to
>>>> support x3d
>>>> using webgl features will prove me wrong, but without very high
>>>> performance support for reasonable density deformable skin, this
>>>> does
>>>> not need to be supported in the (2.) html environment. Of course
>>>> standalone and embeddable players can do this because they will
>>>> have
>>>> access to the high performance code and acceleration that may not
>>>> be
>>>> available in .html with webgl.
>>>>
>>>> Thanks for thinking about this stuff.
>>>>
>>>> Joe
>>>>
>>>> http://www.hypermultimedia.com/x3d/hanim/hanimLOA3A8320130611Allanimtests.x3dv
>>>>
>>>>
>>>>
>>>> http://www.hypermultimedia.com/x3d/hanim/hanimLOA3A8320130611Allanimtests.txt
>>>>
>>>>
>>>>
>>>> http://www.hypermultimedia.com/x3d/hanim/JoeH-AnimKick1a.x3dv
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> ----- Original Message ----- From: "doug sanden"
>>>> <highaspirations at hotmail.com>
>>>> To: "'X3D Graphics public mailing list'" <x3d-public at web3d.org>
>>>> Sent: Friday, June 10, 2016 7:03 AM
>>>> Subject: Re: [x3d-public] [x3d] V4.0 Opendiscussion/workshopon
>>>> X3DHTML
>>>> integration
>>>>
>>>>
>>>> 3-step 'Creative Strategy'
>>>> http://cup.columbia.edu/book/creative-strategy/9780231160520
>>>> https://sites.google.com/site/airdrieinnovationinstitute/creative-strategy
>>>>
>>>> 1. break it down (into problem elements)
>>>> 2. search (other domains for element solutions)
>>>> 3. recombine (element solutions into total solution)
>>>>
>>>> e - problem element
>>>> d - domain offering solution(s) to problem elements
>>>> e-d matrix
>>>> ______d1________d2______d3__________d4
>>>> e1
>>>> e2
>>>> e3
>>>> e4
>>>>
>>>> Applied to what I think is the overall problem: 'which v4
>>>> technologies/specifications' or 'gaining consensus on v4 before
>>>> siggraph'.
>>>> I don't know if that's the only problem or _the_ problem, so this
>>>> will
>>>> be more of an exercise to see if Creative Strategy works in the
>>>> real
>>>> world, by using what I can piece together from what you're
>>>> saying as an
>>>> example.
>>>> Then I'll leave it to you guys to go through the 3 steps for
>>>> whatever
>>>> the true problems are.
>>>> Problem: v4 specification finalization
>>>> Step1 break it down:
>>>> e1 continuity/stability in changing/shifting and multiplying
>>>> target
>>>> technologies
>>>> e2 html integration > protos
>>>> e3 html integration > proto scripts
>>>> e4 html integration > inline vs Dom
>>>> e5 html integration > node/component simplification
>>>> e6 html integration > route/event/timer
>>>> e7 html integration > feature simplification ie SAI
>>>> e8 siggraph promotion opportunity, among/against competing 3D
>>>> formats /
>>>> tools
>>>>
>>>> Step 2 search other domains
>>>> d1 compiler domain > take a high-level cross platform language
>>>> and
>>>> compile it for target CPU ARM, x86, x64
>>>> d2 wrangling: opengl extension wrangler domain > add extensions
>>>> to 15
>>>> year old opengl32.dll to make it modern opengl
>>>> d3 polyfill: web browser technologies > polyfill - program
>>>> against an
>>>> assumed modern browser, and use polyfill.js to discover current
>>>> browser
>>>> capabilities and fill in any gaps by emulating
>>>> d4 unrolling: mangled-name copies pasted into same scope - don't
>>>> know
>>>> what domain its from, but what John is doing when
>>>> proto-expanding, its
>>>> like what freewrl did for 10 years for protos
>>>> d5 adware / iframe / webcomponents > separate scopes
>>>> -
>>>> https://blogs.windows.com/msedgedev/2015/07/14/bringing-componentization-to-the-web-an-overview-of-web-components/
>>>>
>>>>
>>>> -
>>>> http://www.benfarrell.com/2015/10/26/es6-web-components-part-1-a-man-without-a-framework/
>>>>
>>>>
>>>> - React, dojo, polymer, angular, es6, webcomponents.js polyfill,
>>>> shadoow
>>>> dom,import, same-origin iframe
>>>>
>>>> d6 server > when a client wants something, and says what its
>>>> capabilities are, then serve them what they are capable of
>>>> displaying
>>>> d7 viral videos
>>>>
>>>> (its hard to do a table in turtle graphics, so I'll do e/d lists)
>>>> e1 / d1 compiler: have one high level format which is technology
>>>> agnostic, with LTS long term stability, and compile/translate to
>>>> all
>>>> other formats which are more technology dependent. Need to
>>>> show/prove
>>>> the high level can be transformed/ is transformable to all
>>>> desired
>>>> targets like html Dom variants, html Inline variants, and desktop
>>>> variants
>>>> e4 / d1 including compiling to inline or dom variants
>>>> e1 / d6 server-time transformation or selection: gets client
>>>> capabilities in request, and either
>>>> - a) transforms a generic format to target capabilities variant
>>>> or
>>>> - b) selects from among prepared variants to match target
>>>> capabilities,
>>>> e5 / d1 compiler: can compile static geometry from high level
>>>> nurbs/extrusions to indexedfaceset depending on target
>>>> capabilities,
>>>> need to have a STATIC keyword in case extrusion is animated?
>>>> e6 / d1 compiler transforms routes, timers, events to target
>>>> platform
>>>> equivalents
>>>>
>>>> e5 / d2 extension wrangling > depending on capabilities of
>>>> target,
>>>> during transform stage, substitute Protos for high level nodes,
>>>> when
>>>> target browser can't support the component/level directly
>>>> e5 / d3 polyfill > when a target doesn't support some feature,
>>>> polyfill
>>>> so it runs enough to support a stable format
>>>>
>>>> e8 / d7 create viral video of web3d consortium
>>>> deciding/trying-to-decide
>>>> something. Maybe creative strategy step 3: decide among matrix
>>>> elements
>>>> at a session at siggraph with audience watching or participating
>>>> in
>>>> special "help us decide" siggraph session.
>>>>
>>>> e2 / d5 webcomponents and proto scripts: create scripts with/in
>>>> different webcomponent scope;
>>>> e3 / d5 webcomponents make Scene and ProtoInstance both in a
>>>> webcomponent, with hierarchy of webcomponents for nested
>>>> protoInstances.
>>>> e2+e3 / d4 unrolling + protos > unroll protos and scripts a)
>>>> upstream/on
>>>> server or transformer b) in client on demand
>>>>
>>>> e7 / d6 server simplifies features ie SAI or not based on client
>>>> capabilities
>>>> e7 / d1 compiler compiles out features not supported by target
>>>> client
>>>>
>>>> ____d1___d2___d3___d4___d5___d6___d7
>>>> e1 __ * _______________________ *
>>>> e2 _________________ *___*
>>>> e3 _________________ *___*
>>>> e4 _*
>>>> e5 _*_____*____*
>>>> e6 _*
>>>> e7 _*_________________________*
>>>> e8 ________________________________*
>>>>
>>>> Or something like that,
>>>> But would Step 3 creatively recombine element solutions into
>>>> total
>>>> solution still result in deadlock? Or can that deadlock be one of
>>>> the
>>>> problem elements, and domain solutions applied? For example does
>>>> the
>>>> compiler/transformer workflow idea automatically solve current
>>>> deadlock,
>>>> or does deadlock need more specific attention ie breakdown into
>>>> elements
>>>> of deadlock, searching domains for solutions to deadlock elements
>>>> etc.
>>>>
>>>> HTH
>>>> -Doug
>>>>
>>>>
>>>>
>>>> _______________________________________________
>>>> x3d-public mailing list
>>>> x3d-public at web3d.org
>>>> http://web3d.org/mailman/listinfo/x3d-public_web3d.org
>>>>
>>>
>>> -- 
>>>
>>> -------------------------------------------------------------------------
>>> Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH
>>> Trippstadter Strasse 122, D-67663 Kaiserslautern
>>>
>>> Geschäftsführung:
>>>  Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender)
>>>  Dr. Walter Olthoff
>>> Vorsitzender des Aufsichtsrats:
>>>  Prof. Dr. h.c. Hans A. Aukes
>>>
>>> Sitz der Gesellschaft: Kaiserslautern (HRB 2313)
>>> VAT/USt-Id.Nr.: DE 148 646 973, Steuernummer:  19/673/0060/3
>>> ---------------------------------------------------------------------------
>>>
>>>
>
> -- 
>
> -------------------------------------------------------------------------
> Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH
> Trippstadter Strasse 122, D-67663 Kaiserslautern
>
> Geschäftsführung:
>  Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender)
>  Dr. Walter Olthoff
> Vorsitzender des Aufsichtsrats:
>  Prof. Dr. h.c. Hans A. Aukes
>
> Sitz der Gesellschaft: Kaiserslautern (HRB 2313)
> VAT/USt-Id.Nr.: DE 148 646 973, Steuernummer:  19/673/0060/3
> ---------------------------------------------------------------------------
>


_______________________________________________
x3d-public mailing list
x3d-public at web3d.org
http://web3d.org/mailman/listinfo/x3d-public_web3d.org



