[x3d-public] X3D meeting minutes 1 February 2019: strategy for mapping glTF 2.0 to X3Dv4; name tokenization, event-stream mixing

Andreas Plesch andreasplesch at gmail.com
Sun Feb 3 11:13:55 PST 2019


On Sun, Feb 3, 2019 at 12:04 AM Brutzman, Donald (Don) (CIV)
<brutzman at nps.edu> wrote:
>
> Likewise thanks for progressive analysis, really interesting.  Reactions:
>
> a. Naming convention for normalizing/demunging ID nametokens is straightforward enough to define, with extra credit if round-tripping is achieved without obscurantism.  Key question: can more than one animation element (i.e. TimeSensor equivalent) exist in a glTF model?

Yes, a glTF model can have, and most often has, multiple animations.
When transcoded, each could have its own TimeSensor. One thing to
recognize is that glTF is not concerned with how the animations run
relative to each other, or relative to other world content if the
glTF is used in a larger world. It just provides storage for the key
frames, i.e. the keys and key values. So multiple animations could be
alternatives targeting the same node ('walk', 'run'), or they could be
meant to run simultaneously; there is no way to tell from the glTF
alone. It is up to the application using the glTF to decide how to
drive the animations.

In addition, each glTF animation has 'channels'. A channel has a
target (currently only nodes, e.g. Transforms; the toNode equivalent)
and a 'path' on that target (the toField equivalent), along with input
(keys) and output (key values). All channels of an animation play
simultaneously.
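To make the structure concrete, here is a rough TypeScript-flavored
sketch of the JSON layout (node and accessor indices are made up, and
only the parts discussed here are shown):

  // Channels reference samplers; the samplers' input/output accessors
  // hold the keys and key values.
  interface GltfAnimation {
    name?: string;                                     // metadata only
    channels: { sampler: number; target: { node: number; path: string } }[];
    samplers: { input: number; output: number; interpolation: string }[];
  }

  const animations: GltfAnimation[] = [
    {
      name: "walk",
      channels: [{ sampler: 0, target: { node: 3, path: "rotation" } }], // toNode / toField
      samplers: [{ input: 10, output: 11, interpolation: "LINEAR" }]     // keys / key values
    },
    {
      name: "run",                                     // alternative for the same node,
      channels: [{ sampler: 0, target: { node: 3, path: "rotation" } }], // or meant to run together:
      samplers: [{ input: 12, output: 13, interpolation: "LINEAR" }]     // the file does not say
    }
  ];

  // The authoritative reference is the array index; driving them is up to the app.
  animations.forEach((a, i) => console.log(i, a.name ?? "(unnamed)"));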

The glTF2 loader in x3dom currently uses a dedicated TimeSensor for
each included animation. It looks for the longest duration across all
channels and creates a TimeSensor with that duration (its
cycleInterval), to make sure the animation can run fully. The
interpolator keys are then rescaled to this longest duration: glTF
keys are given in seconds and are converted to fractions. Currently,
x3dom starts all animations immediately and loops them.
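A minimal sketch of that normalization step (not x3dom's actual code;
the function name is made up):

  // Convert per-channel glTF keys in seconds to X3D fraction keys, using
  // the longest channel duration as the TimeSensor cycleInterval.
  function toTimeSensorKeys(channelKeysSeconds: number[][]) {
    const cycleInterval = Math.max(
      ...channelKeysSeconds.map(keys => keys[keys.length - 1]));
    const fractionKeys = channelKeysSeconds.map(
      keys => keys.map(t => t / cycleInterval));
    return { cycleInterval, fractionKeys };
  }

  // Two channels, 1.5 s and 2 s long: cycleInterval becomes 2, and the
  // shorter channel's last key lands at fraction 0.75.
  console.log(toTimeSensorKeys([[0, 0.5, 1.5], [0, 1, 2]]));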

That reminds me of another edge case for TimeSensor naming. glTF may
use the same name for multiple animations, and then perhaps use
metadata ('extras') to describe the intent of each animation more
fully. For example, an exporter may fall back to a default name for
animations ('name: animation') when the author is in a hurry, while
offering a generic way to export metadata (custom properties in
Blender) as extras.
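One way to keep the resulting TimeSensor DEF names distinct, sketched
here with a made-up helper on top of whatever character sanitizing is
chosen, is to always fold in the animation's index in the glTF
animations array:

  // Hedged sketch: derive a unique DEF name per glTF animation.
  function timeSensorDefName(name: string | undefined, index: number): string {
    const base = name && name.length > 0 ? name : "animation";
    return `${base}_${index}`;   // e.g. "walk_0", "animation_1", "animation_2"
  }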

-Andreas

> On 2/2/2019 1:22 PM, Andreas Plesch wrote:
> > On Sat, Feb 2, 2019 at 2:26 PM Michalis Kamburelis
> > <michalis.kambi at gmail.com> wrote:
> >>
> >>   Andreas Plesch <andreasplesch at gmail.com> wrote:
> >>> A few edge cases are:
> >>>
> >>> https://github.com/KhronosGroup/glTF/blob/master/specification/2.0/README.md#reference-animation
> >>> says that a name is not required. What name should be used then?
> >>
> >> Hm, that's a curious case. Can such an unnamed animation be played by
> >> other glTF APIs/viewers?
> >
> > Yes, since the name in glTF is considered metadata. The authoritative
> > reference to something is its index in an array. All animations in
> > glTF are contained in an animations array. So an animation without a
> > name may be listed as 'first glTF animation' but it is up to the
> > application how to label it.
> >
> >>
> >> Consistently, we could convert unnamed glTF animation to an unnamed
> >> TimeSensor. It would be equally uncomfortable (to access) then, both
> >> in X3D and in glTF :)
> >
> > In glTF, it would be accessed as gltf.animations[index]. Actually, named
> > animations would also be accessed by their index.
> >
> >>>
> >>> The name can contain any character including spaces which are not
> >>> allowed in DEF names. How to sanitize ?
> >>
> >> We use a "brutal" solution in CGE right now that replaces everything
> >> except ['a'..'z', 'A'..'Z', '0'..'9'] with an underscore,
> >> https://github.com/castle-engine/castle-engine/blob/master/src/x3d/x3dloadinternalutils.pas#L60.
> >>
> >> It's "brutal" in the sense that it knowingly replaces much more than
> >> necessary. I don't have a strong opinion whether this is the right
> >> solution, I could change it to be consistent with other X3D players.
> >
> > There is probably a function somewhere which converts a string to an
> > NMTOKEN, the XML data type of DEF names:
> >
> > NMTOKEN is an XML term for Name Token. NMTOKEN is a special kind of
> > CDATA string that must match naming requirements for legal characters,
> > with no whitespace characters allowed. Additionally, from XML
> > specification: disallowed initial characters for Names include numeric
> > digits, diacritics (letter with accent or marking), the "." period
> > character (sometimes called full stop) and the "-" hyphen character.
> >
> > The brutal solution still allows for initial numeric digits, so it may
> > have to become really violent ;)
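> > A hedged sketch of a sanitizer along those lines (the leading
> > underscore is just one arbitrary way to fix up a bad initial
> > character):
> >
> >   function toDefName(name: string): string {
> >     // Replace everything outside a conservative character set with underscores.
> >     let token = name.replace(/[^A-Za-z0-9_.-]/g, "_");
> >     // Names must not start with a digit, "." or "-"; prefix an underscore then.
> >     if (token.length === 0 || /^[0-9.-]/.test(token)) {
> >       token = "_" + token;
> >     }
> >     return token;
> >   }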
> >
> >>>
> >>> I suppose for blending animations, e.g. mixing interpolator output by
> >>> weight, there are additional processing steps. Do you use an actual
> >>> script or just internal computation?
> >>
> >> I'm using an internal mechanism. It allows sending an event and saying
> >> that "the effect of this event should be applied only partially".
> >>
> >> You can watch the demo movie on
> >> https://castle-engine.io/wp/2018/03/21/animation-blending/ , it shows
> >> how it works for both animating Transform.translation/rotation
> >> (skeletal animation from Spine) and Coordinate.point (animation of
> >> mesh coordinates).
> >>
> >> The reasons for doing it without any new X3D nodes:
> >>
> >> - I wanted animation blending to work with any existing X3D animation.
> >> So I didn't want to require X3D authors to use a new node, or to
> >> organize existing TimeSensor/interpolators/routes in any new way.
> >>
> >> - I wanted it to also allow "fade out" of the animation (when the
> >> animation is applied with less and less impact, but no new animation
> >> takes its place).
> >>
> >> - I wanted it to work for any combination of animations. E.g.
> >> Animation1 affects bones A, B, C, and Animation2 affects bones B, C,
> >> D. ("Bone" here can mean just a "Transform" node for Spine skeletal
> >> animation.) So some bones are controlled by both animations (B, C),
> >> but some not. In all cases, fade out of the old Animation1 and fade-in
> >> of Animation2 should work as expected.
> >
> > There may be a way to accomplish goals 2 and 3 with a Mixer node. Fade
> > out could be mixing an interpolator with a no-op interpolator (all
> > zeroes, or all ones, in the keyValue). Arbitrary combinations may just
> > require multiple mixers, though potentially many of them to cover all
> > combinations. It may not be unreasonable to ask the scene author to use
> > a new node, since using it may turn out to be quite similar to defining
> > the blending in a play-animation call: instead of saying "blend
> > animations A and B with this weight", one says "generate a new
> > interpolator output by mixing the outputs of interpolators A and B with
> > this weight".
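> > As a rough sketch of the idea, the core of such a hypothetical Mixer
> > node would be little more than a per-frame weighted lerp of two
> > incoming value_changed streams (shown here for SFVec3f-like values):
> >
> >   type Vec3 = [number, number, number];
> >
> >   // weight = 0 keeps only a, weight = 1 only b; animating the weight
> >   // from 0 to 1 over time is the cross-fade.
> >   function mixValueChanged(a: Vec3, b: Vec3, weight: number): Vec3 {
> >     return [
> >       a[0] * (1 - weight) + b[0] * weight,
> >       a[1] * (1 - weight) + b[1] * weight,
> >       a[2] * (1 - weight) + b[2] * weight
> >     ];
> >   }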
> >
> >>
> >> <details>
> >>
> >> (Be warned, this is one of the more complicated algorithms in CGE :)
> >> It's not a lot of code lines, but understanding and writing this was
> >> hard.)
> >
> > Thanks for writing this up. It will take me a while to digest :)
> >
> >>
> >> I track when we should do cross-fading between animations (it has to
> >> be initialized by calling TCastleScene.PlayAnimation), and when we
> >> are doing cross-fading, then
> >>
> >
> > Ok, one says cross-fade between A and B within two seconds.
> >
> > This translates to animating the weight/fraction, I believe.
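> > Roughly, as a sketch of that reading (made-up names, not CGE's actual
> > API): the new animation's weight grows linearly over the fade duration
> > and the old one gets the complement.
> >
> >   function crossFadeWeights(elapsedSeconds: number, fadeDuration = 2.0) {
> >     const w = Math.min(Math.max(elapsedSeconds / fadeDuration, 0), 1);
> >     return { oldAnimation: 1 - w, newAnimation: w };
> >   }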
> >
> >> 1. First the TimeSensor of the old animation sends events in a "fake"
> >> way (it sends events regardless of whether it is active), with a
> >> "partial" factor falling from 1.0 to 0.0 as the scene time passes. The
> >> procedure to "send TimeSensor events in a fake way",
> >> TTimeSensorNode.FakeTime, is documented at
> >> https://github.com/castle-engine/castle-engine/blob/master/src/x3d/x3dnodes_standard_time.inc#L343
> >> .
> >>
> >>      The "partial" factor is passed on as one event produces another,
> >> e.g. a TimeSensor sends fraction_changed, some interpolator receives it
> >> and then sends value_changed, and the "partial factor" information is
> >> still carried over. Various complicated setups are covered by this
> >> approach; in all cases the "event cascade" carries the information
> >> that "it should be applied only partially".
> >
> >
> >>
> >>      The resulting "partial" values (e.g. Transform.translation, or
> >> Coordinate.point) are stored at the field, with the current accumulated
> >> "weight". But they are not immediately applied to the field's actual
> >> value; they are only stored in an additional record "attached" to the
> >> field. Some notes about it are in the TPartialReceived record in
> >> https://github.com/castle-engine/castle-engine/blob/master/src/x3d/castlefields_x3dfield.inc#L87
> >> .
> >>
> >>      This entire procedure is done by the "PartialSendBegin" method
> >> in https://github.com/castle-engine/castle-engine/blob/master/src/x3d/castlescenecore.pas
> >> .
> >>
> >> 2. Then the TimeSensor of the new animation sends events in a regular
> >> way (without FakeTime), but also with "partial factor" (growing from
> >> 0.0 to 1.0).
> >>
> >> 3. At the end, for all fields that received some "partial" values in
> >> this frame, we calculate the final field value. This is done by the
> >> PartialSendEnd method in
> >> https://github.com/castle-engine/castle-engine/blob/master/src/x3d/castlescenecore.pas
> >> which calls TX3DField.InternalAffectedPartial. We do a lerp between
> >> the "PartialValue" (the sum of all values received in this frame,
> >> weighted by their "partial" factor) and a "SettledValue" (the last
> >> known value when animation blending did not occur).
> >
> > It sounds like the SettledValue is the old value and the PartialValue is
> > the current value, becoming the new value. The SettledValue is probably
> > weighted by 1 minus the sum of the partial factors, or something similar.
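> > In formula form, as I read the description (a sketch, not CGE's actual
> > code; shown per scalar component for simplicity):
> >
> >   // partials: all values a field received this frame, with their weights.
> >   function blendField(settled: number,
> >                       partials: { value: number; weight: number }[]): number {
> >     const totalWeight = partials.reduce((sum, p) => sum + p.weight, 0);
> >     const partialValue = partials.reduce((sum, p) => sum + p.value * p.weight, 0);
> >     // If the weights add up to 1 (e.g. 0.7 + 0.3 while cross-fading), the
> >     // settled value drops out; during a pure fade-out it fills the gap.
> >     return settled * (1 - totalWeight) + partialValue;
> >   }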
> >
> >>
> >>      This way the mechanism works e.g. when cross-fading two animations
> >> (old animation has partial = 0.7, new one has 0.3, their final weight
> >> is 1.0 and then the "SettledValue" doesn't matter) or when old
> >> animation fades away (so the "partial" value has decreasing weight,
> >> and is mixed with unchanging "SettledValue").
> >
> > Ok, there is something else to understand.
> >
> >>
> >> </details>
> >
> > Thanks for the detailed description. It would probably be insightful
> > to try to do this with a script, to better understand how blending
> > animations could be better supported by X3D. Such blending would be
> > important for HAnim, and the HAnim examples would be good targets for
> > such an attempt. Currently, they just stop one animation and start
> > another when a button is clicked.
> >
>
>
> all the best, Don
> --
> Don Brutzman  Naval Postgraduate School, Code USW/Br       brutzman at nps.edu
> Watkins 270,  MOVES Institute, Monterey CA 93943-5000 USA   +1.831.656.2149
> X3D graphics, virtual worlds, navy robotics http://faculty.nps.edu/brutzman



-- 
Andreas Plesch
Waltham, MA 02453


