[x3d-public] X3D meeting minutes 1 February 2019: strategy for mapping glTF 2.0 to X3Dv4; name tokenization, event-stream mixing

Brutzman, Donald (Don) (CIV) brutzman at nps.edu
Sat Feb 2 21:04:34 PST 2019

Likewise, thanks for the progressive analysis; really interesting.  Reactions:

a. Naming convention for normalizing/demunging ID nametokens is straightforward enough to define, with extra credit if round-tripping is achieved without obscurantism.  Key question: can more than one animation element (i.e. TimeSensor equivalent) exist in a glTF model?

b. Michalis, your animation cross-fading is really interesting.  Perhaps it can be conceptualized declaratively as type-aware Mixer nodes in the X3D Event Utilities component?  We can currently animate any data type; such functionality would be a super addition for authors if it can fit into event chains.  Perhaps designing a new node prototype is a good way to proceed?

	X3D Event Utilities component

	X3D Example Archives: X3D for Web Authors, Chapter 09 Event Utilities Scripting

	X3D Event Utility Nodes: Field Event Diagrams

On 2/2/2019 1:22 PM, Andreas Plesch wrote:
> On Sat, Feb 2, 2019 at 2:26 PM Michalis Kamburelis
> <michalis.kambi at gmail.com> wrote:
>>   Andreas Plesch <andreasplesch at gmail.com> wrote:
>>> A few edge cases are:
>>> https://github.com/KhronosGroup/glTF/blob/master/specification/2.0/README.md#reference-animation
>>> says that a name is not required. What name should be used then ?
>> Hm, that's a curious case. Can such unnamed animation be played by
>> other glTF APIs/viewers?
> Yes, since the name in glTF is considered metadata. The authoritative
> reference to something is its index in an array. All animations in
> glTF are contained in an animations array. So an animation without a
> name may be listed as 'first glTF animation' but it is up to the
> application how to label it.
>> Consistently, we could convert unnamed glTF animation to an unnamed
>> TimeSensor. It would be equally uncomfortable (to access) then, both
>> in X3D and in glTF :)
> In glTF, it would be accessed as gltf.animations[index]. Even named
> animations would be accessed by their index.
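As a concrete illustration of the index-based access described above, here is a minimal sketch (the JSON fragment and the helper name `animation_label` are my own illustration, not part of any glTF API):

```python
import json

# Minimal glTF-style document: the animations array is authoritative,
# and "name" is optional metadata (the second animation is unnamed).
gltf = json.loads("""
{
  "animations": [
    {"name": "Walk", "channels": [], "samplers": []},
    {"channels": [], "samplers": []}
  ]
}
""")

def animation_label(gltf_doc, index):
    """Refer to an animation by index; fall back to a generated label
    for unnamed animations, as the application sees fit."""
    animation = gltf_doc["animations"][index]
    return animation.get("name") or "animation_%d" % index

labels = [animation_label(gltf, i) for i in range(len(gltf["animations"]))]
print(labels)  # ['Walk', 'animation_1']
```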
>>> The name can contain any character including spaces which are not
>>> allowed in DEF names. How to sanitize ?
>> We use a "brutal" solution in CGE right now that replaces everything
>> except ['a'..'z', 'A'..'Z', '0'..'9'] with an underscore,
>> https://github.com/castle-engine/castle-engine/blob/master/src/x3d/x3dloadinternalutils.pas#L60.
>> It's "brutal" in the sense that it knowingly replaces much more than
>> necessary. I don't have a strong opinion on whether this is the right
>> solution; I could change it to be consistent with other X3D players.
> There is probably a function somewhere which converts a string to an
> NMTOKEN, the XML data type for DEF:
> NMTOKEN is an XML term for Name Token. NMTOKEN is a special kind of
> CDATA string that must match naming requirements for legal characters,
> with no whitespace characters allowed. Additionally, from XML
> specification: disallowed initial characters for Names include numeric
> digits, diacritics (letter with accent or marking), the "." period
> character (sometimes called full stop) and the "-" hyphen character.
> The brutal solution still allows for initial numeric digits, so it may
> have to become really violent ;)
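Combining the "brutal" replacement with a guard for disallowed initial characters could look like the following sketch (the underscore-prefix choice for bad leading characters is my own, not from any specification):

```python
import re

def sanitize_def_name(name):
    """Replace every character outside [A-Za-z0-9] with '_' (the 'brutal'
    approach).  After that replacement, only a leading digit (or an empty
    result) can still violate the XML Name initial-character rule, since
    '.' and '-' have already been turned into underscores; prefix '_' in
    those cases."""
    token = re.sub(r"[^A-Za-z0-9]", "_", name)
    if token == "" or token[0].isdigit():
        token = "_" + token
    return token

print(sanitize_def_name("2 walk cycles (final)"))  # _2_walk_cycles__final_
print(sanitize_def_name("Walk"))                   # Walk
```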
>>> I suppose for blending animations, eg. mixing interpolator output by
>>> weight, there are additional processing steps. Do you use an actual
>>> script or just internal computing ?
>> I'm using an internal mechanism. It allows sending an event and saying
>> that "the effect of this event should be applied only partially".
>> You can watch the demo movie on
>> https://castle-engine.io/wp/2018/03/21/animation-blending/ , it shows
>> how it works for both animating Transform.translation/rotation
>> (skeletal animation from Spine) and Coordinate.point (animation of
>> mesh coordinates).
>> The reasons for doing it without any new X3D nodes:
>> - I wanted animation blending to work with any existing X3D animation.
>> So I didn't want to require X3D authors to use a new node, or to
>> organize existing TimeSensor/interpolators/routes in any new way.
>> - I wanted it to also allow "fade out" of the animation (when the
>> animation is applied with less and less impact, but no new animation
>> takes its place).
>> - I wanted it to work for any combination of animations. E.g.
>> Animation1 affects bones A, B, C, and Animation2 affects bones B, C,
>> D. ("Bone" here can mean just a "Transform" node for Spine skeletal
>> animation.) So some bones are controlled by both animations (B, C),
>> but some not. In all cases, fade out of the old Animation1 and fade-in
>> of Animation2 should work as expected.
> There may be a way to accomplish goals 2 and 3 with a Mixer node. Fade
> out could be mixing an interpolator with a no-op interpolator (zeroes,
> or ones in the keyValue). Combinations may just require multiple
> mixers but potentially many to cover all combinations.  It may not be
> unreasonable to ask scene author to use a new node since it may turn
> out to be quite similar to defining the blending in the play animation
> call. Instead of saying blend animations A and B with this weight, one
> says generate a new interpolator output by mixing interpolator A and B
> with this weight.
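The proposed Mixer could be sketched procedurally as follows (the node name `Vec3Mixer` and its fields are hypothetical illustrations, not existing X3D nodes): two interpolators route their value_changed outputs into the mixer, and a single weight controls the blend.

```python
def lerp(weight, a, b):
    """Linear interpolation between two same-typed tuple values."""
    return tuple((1.0 - weight) * x + weight * y for x, y in zip(a, b))

class Vec3Mixer:
    """Hypothetical 'Mixer' node sketch: receives two value_changed
    streams (set_value1 / set_value2) and outputs their weighted blend."""
    def __init__(self, weight=0.5):
        self.weight = weight          # 0.0 -> only value1, 1.0 -> only value2
        self.value1 = (0.0, 0.0, 0.0)
        self.value2 = (0.0, 0.0, 0.0)

    def set_value1(self, v): self.value1 = v
    def set_value2(self, v): self.value2 = v

    def value_changed(self):
        return lerp(self.weight, self.value1, self.value2)

mixer = Vec3Mixer(weight=0.25)
mixer.set_value1((0.0, 0.0, 0.0))   # e.g. output of interpolator A
mixer.set_value2((4.0, 0.0, 8.0))   # e.g. output of interpolator B
print(mixer.value_changed())        # (1.0, 0.0, 2.0)
```

Fade-out, as suggested above, would then be mixing against a no-op value; covering many animation combinations might indeed require many such mixers.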
>> <details>
>> (Be warned, this is one of the more complicated algorithms in CGE :)
>> It's not a lot of code lines, but understanding and writing this was
>> hard.)
> Thanks for writing this up. It will take me a while to digest :)
>> I track when we should do cross-fading between animations (it has to
>> be initialized by calling TCastleScene.PlayAnimation), and when we
>> are doing cross-fading, then
> Ok, one says cross-fade between A and B within two seconds.
> This translates to animating the weight/fraction, I believe.
>> 1. First the TimeSensor of the old animation sends events in a "fake"
>> way (it sends events regardless of whether it is active), with "partial"
>> factor falling down from 1.0 to 0.0 as the scene time passes. The
>> procedure to "send TimeSensor events in a fake way",
>> TTimeSensorNode.FakeTime , is documented on
>> https://github.com/castle-engine/castle-engine/blob/master/src/x3d/x3dnodes_standard_time.inc#L343
>> .
>>      The "partial" factor is passed on as one event produces another,
>> e.g. TimeSensor sends fraction_changed, some interpolator receives it
>> and then sends value_changed, and the "partial factor" information is
>> still carried over. Various complicated setups are covered by this
>> approach, in all cases the "event cascade" carries the information
>> that "it should be applied only partially".
>>      The resulting "partial" values (e.g. Transform.translation, or
>> Coordinate.point) are stored at the field, with current accumulated
>> "weight". But they are not immediately applied to the field actual
>> value, they are only stored in an additional record "attached" to the
>> field. Some notes about it are at TPartialReceived record in
>> https://github.com/castle-engine/castle-engine/blob/master/src/x3d/castlefields_x3dfield.inc#L87
>> .
>>      This entire procedure is done by "PartialSendBegin" method which
>> is in https://github.com/castle-engine/castle-engine/blob/master/src/x3d/castlescenecore.pas
>> .
>> 2. Then the TimeSensor of the new animation sends events in a regular
>> way (without FakeTime), but also with "partial factor" (growing from
>> 0.0 to 1.0).
>> 3. At the end, for all fields that received some "partial" values in
>> this frame, we calculate the final field value. This is done by
>> PartialSendEnd method in
>> https://github.com/castle-engine/castle-engine/blob/master/src/x3d/castlescenecore.pas
>> that sends TX3DField.InternalAffectedPartial . We do a lerp between
>> "PartialValue" (the sum of all values received in this frame, weighted
>> by their "partial" factor) and a "SettledValue" (last known value when
>> animation blending did not occur).
> It sounds like the SettledValue is the old value and PartialValue is
> the current value, becoming the new value. The SettledValue is
> probably weighted by 1 - (sum of the partial factors), or so.
>>      This way the mechanism works e.g. when cross-fading two animations
>> (old animation has partial = 0.7, new one has 0.3, their final weight
>> is 1.0 and then the "SettledValue" doesn't matter) or when old
>> animation fades away (so the "partial" value has decreasing weight,
>> and is mixed with unchanging "SettledValue").
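A rough numeric sketch of the accumulation and final lerp described above, as I understand it (the names follow the description; the exact CGE formula may differ, and the weights here are chosen for exact float arithmetic):

```python
def blend_frame(settled_value, partial_events):
    """partial_events: list of (weight, value) received this frame.
    Accumulate the weighted values (the 'PartialValue'), then fill any
    remaining weight from the last settled value ('SettledValue')."""
    total_weight = sum(w for w, _ in partial_events)
    partial_value = sum(w * v for w, v in partial_events)
    return partial_value + max(0.0, 1.0 - total_weight) * settled_value

# Cross-fade: old animation at 0.75, new at 0.25 -> weights sum to 1.0,
# so the settled value does not matter.
print(blend_frame(5.0, [(0.75, 10.0), (0.25, 20.0)]))  # 12.5

# Fade-out only: old animation at weight 0.5 -> the remaining 0.5
# comes from the unchanging settled value.
print(blend_frame(5.0, [(0.5, 10.0)]))  # 7.5
```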
> Ok, there is something else to understand.
>> </details>
> Thanks for the detailed description. It would probably be insightful
> to try doing this with a Script, to better understand how blending
> animations could be better supported by X3D. Such blending would be
> important for HAnim. HAnim examples would be good targets for such an
> attempt. Currently, they just stop one animation and start another one
> when a button is clicked.

all the best, Don
Don Brutzman  Naval Postgraduate School, Code USW/Br       brutzman at nps.edu
Watkins 270,  MOVES Institute, Monterey CA 93943-5000 USA   +1.831.656.2149
X3D graphics, virtual worlds, navy robotics http://faculty.nps.edu/brutzman
