[x3d-public] X3D meeting minutes 1 February 2018: strategy for mapping glTF 2.0 to X3Dv4; name tokenization, event-stream mixing

Michalis Kamburelis michalis.kambi at gmail.com
Sun Feb 3 12:05:09 PST 2019


Brutzman, Donald (Don) (CIV) <brutzman at nps.edu> wrote:
> b. Michalis your animation cross-fading is really interesting.  Perhaps it can be conceptualized declaratively as type-aware Mixer nodes in the X3D Event Utilities component?  We can currently animate any data type; such functionality would be a super addition for authors if it can fit into event chains.  Perhaps designing a new node prototype is a good way to proceed?

The Mixer node as proposed by Andreas looks good, i.e. it's something
simple to describe in the specification and easy to implement.

That said, my approach to cross-fading (without any new nodes,
instead with a new internal mechanism in the X3D player to propagate
"events that are applied only partially") has some advantages over
adding "Mixer" nodes. Basically, if you want to achieve cross-fading
between any 2 animations in an X3D file, you will need to add a *lot*
of Mixer nodes, and you will need code to determine the proper
locations to insert these Mixer nodes automatically.

To explain this better:

The correspondence between animations and TimeSensors is trivial: in
X3D, one animation *is* one TimeSensor. The correspondence between
animations and interpolators is more complicated: a TimeSensor may be
routed to multiple interpolators.
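
For illustration, here is a rough sketch of one animation in X3D,
with one TimeSensor driving two interpolators that drive two
Transform nodes (names like AnimA and Hips are placeholders I made
up):

  <TimeSensor DEF='AnimA' cycleInterval='2'/>
  <PositionInterpolator DEF='AnimA_HipsPos' key='0 1' keyValue='0 0 0  0 1 0'/>
  <OrientationInterpolator DEF='AnimA_ArmRot' key='0 1' keyValue='0 0 1 0  0 0 1 1.57'/>
  <Transform DEF='Hips'/>
  <Transform DEF='Arm'/>
  <ROUTE fromNode='AnimA' fromField='fraction_changed' toNode='AnimA_HipsPos' toField='set_fraction'/>
  <ROUTE fromNode='AnimA' fromField='fraction_changed' toNode='AnimA_ArmRot' toField='set_fraction'/>
  <ROUTE fromNode='AnimA_HipsPos' fromField='value_changed' toNode='Hips' toField='set_translation'/>
  <ROUTE fromNode='AnimA_ArmRot' fromField='value_changed' toNode='Arm' toField='set_rotation'/>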

To complicate matters further, in theory the same interpolator node
may even be reused in X3D between different animations (i.e. it may
receive input from multiple TimeSensors). That said, my Spine JSON ->
X3D conversion doesn't use this sharing, and a glTF -> X3D conversion
probably will not either.
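
Such sharing would mean routing two TimeSensors into the same
interpolator, something like this (again, names invented):

  <ROUTE fromNode='AnimA' fromField='fraction_changed' toNode='SharedInterp' toField='set_fraction'/>
  <ROUTE fromNode='AnimB' fromField='fraction_changed' toNode='SharedInterp' toField='set_fraction'/>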

To add the Mixer nodes to cross-fade from animation A to B, one needs
to trace all the outgoing routes of TimeSensors A and B, and place an
appropriate Mixer node to cross-fade 2 interpolators (when both
animations affect the same bone), to fade in (when some bone is
affected only by the new animation), or to fade out (when some bone
is affected only by the old animation). See the sketch after the
example below.

Example:

- Animation A is 1 TimeSensor that sends values to 10
Position/OrientationInterpolator nodes, which in turn send values to
10 Transform nodes.

- Animation B is a different TimeSensor, with 10 different
Position/OrientationInterpolator nodes. They control a set of 10
Transform nodes, some of which are also controlled by A and some of
which are not (and some Transform nodes controlled by A are not
controlled by B).

So you need to add between 10 and 20 Mixer nodes to cross-fade from A to B.
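
For one bone affected by both animations, the rewiring could look
like the sketch below. Keep in mind the Mixer node (and field names
like input1 or mixFactor) is not in the X3D specification; this is
only my guess at how such a proposal could be wired up:

  <!-- Hypothetical node with invented field names: -->
  <PositionMixer DEF='MixHipsPos' mixFactor='0'/>
  <ROUTE fromNode='AnimA_HipsPos' fromField='value_changed' toNode='MixHipsPos' toField='set_input1'/>
  <ROUTE fromNode='AnimB_HipsPos' fromField='value_changed' toNode='MixHipsPos' toField='set_input2'/>
  <ROUTE fromNode='MixHipsPos' fromField='value_changed' toNode='Hips' toField='set_translation'/>
  <!-- The original direct ROUTE from AnimA_HipsPos to Hips is removed,
       and mixFactor itself would be animated from 0 to 1 during the
       cross-fade, e.g. by a ScalarInterpolator. -->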

Now if you have 10 animations in a model (a modest number for an
average game character in our game "The Unholy Society" :) ), then you
need quite a lot of Mixer nodes, one set for every possible animation
A -> animation B combination. Namely, 10 * 9 ordered (A, B) pairs,
each needing 10-20 Mixer nodes (say 15 on average), gives roughly
10 * 9 * 15 = 1350. That's a lot of new nodes! :)

This is assuming that all nodes and routes are added at the loading
stage, without changing the X3D graph later. You could alternatively
add the necessary nodes+routes "on demand", so starting a cross-fade
would add 10-20 new nodes, as necessary. Or you could create a pool
of Mixer nodes without connecting routes, and attach their routes
dynamically. Adding or removing routes at runtime has almost zero
cost in CGE, and possibly in other X3D players too, so this may be
OK.
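
As a sketch of the "pool of Mixer nodes" idea, a Script node could
attach the routes when a cross-fade starts. I use the VRML97-style
Browser.addRoute call below; the exact SAI call for adding routes may
differ between X3D browsers, and PositionMixer remains a hypothetical
node:

  <Script DEF='FadeStarter'>
    <field name='startFade' accessType='inputOnly' type='SFTime'/>
    <field name='mixer' accessType='initializeOnly' type='SFNode'>
      <PositionMixer USE='MixHipsPos'/>
    </field>
    <field name='bone' accessType='initializeOnly' type='SFNode'>
      <Transform USE='Hips'/>
    </field>
    <![CDATA[ecmascript:
      // Called when a cross-fade is requested: connect the pooled
      // Mixer output to the bone it should drive.
      function startFade(value, timestamp)
      {
        Browser.addRoute(mixer, 'value_changed', bone, 'set_translation');
      }
    ]]>
  </Script>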

None of this is saying "we cannot do that". I agree that a Mixer node
would be a worthy addition to the X3D specification. But I'm thinking
out loud, pointing out practical *possible* problems with using a
"Mixer" node for cross-fading on a large scale, and explaining why my
approach (without new nodes, but with an internal possibility to
"propagate an event with partial effect") may still make sense for my
use case :)

Regards,
Michalis


