[x3d-public] Fw: Re: Blend Shapes and x3d Animation tools

Joe D Williams joedwil at earthlink.net
Sun Oct 27 16:26:04 PDT 2024


More to mention some ideas about the Displacer. Again, I am thinking that the X3D CoordinateInterpolator deals with the entire shape, all of its points, while the Displacer can move one or more points in a defined direction under control of a scalar 0 to 1 input.
   
The CoordinateInterpolator provides a complete set of coordinates, a keyValue, for each key time.

The Displacer provides a 3D vector, in the displacements field, for each point listed in the coordIndex field, with the movement controlled by the 0 to 1 scalar input, the weight.
   
So, when is the CoordinateInterpolator preferred over Displacer?

Well, first, with the Displacer we get a linear interpolation of the movement as the weight changes. This means that if we do not want linear motion from a linear weight input, we must shape the weight input to match the motion curve we want.
   
The same goes for the CoordinateInterpolator: if only two keyValue sets are used, one for time 0 and one for time 1, then with a standard time input we get a linear interpolation. Alternatively, the 0 to 1 fraction_changed output can be reshaped before it reaches the node to achieve a custom deformation curve, just as with the Displacer weight.
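A minimal sketch of that weight shaping in X3D classic encoding, with hypothetical DEF names and assuming a Displacer DEF'd as EyelidPress: the ScalarInterpolator reshapes the linear clock fraction into an ease-in, ease-out weight curve before it reaches the Displacer.

DEF Clock TimeSensor { cycleInterval 2 loop TRUE }
DEF WeightShaper ScalarInterpolator {
  key      [ 0 0.25 0.5 0.75 1 ]
  keyValue [ 0 0.1  0.5 0.9  1 ]   # slow start, fast middle, slow finish
}
ROUTE Clock.fraction_changed TO WeightShaper.set_fraction
ROUTE WeightShaper.value_changed TO EyelidPress.set_weight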
   
However, what if the vertex movement is not purely linear, but follows a real 3D curve in time and space? With the X3D Humanoid we have some choices.

First, consider the example of a humanoid eyelid where the center of rotation and the desired arc are well understood. Here a simple choice is to bind the eyelid vertices to an eyelid joint, then rotate the joint as required to produce the desired movement.
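A sketch of that in X3D classic encoding, with hypothetical DEF names, assuming the eyelid vertices are bound to an HAnim Joint DEF'd as LidJoint:

DEF LidClock TimeSensor { cycleInterval 0.3 }
DEF LidClose OrientationInterpolator {
  key      [ 0 1 ]
  keyValue [ 1 0 0 0, 1 0 0 -0.8 ]   # open to closed, about the joint's local X axis
}
ROUTE LidClock.fraction_changed TO LidClose.set_fraction
ROUTE LidClose.value_changed TO LidJoint.set_rotation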
   
Alternatively, a CoordinateInterpolator can be used, containing several keyValue sets to produce the motion, probably with at least one intermediate set of points, as in the sketch below.
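A sketch of that, one vertex only and with hypothetical values, assuming the eyelid mesh uses a Coordinate node DEF'd as LidCoord and the LidClock TimeSensor from the sketch above; the intermediate key shape makes the vertex path curve in Y and Z instead of moving on a straight line:

DEF LidMorph CoordinateInterpolator {
  key      [ 0 0.5 1 ]
  keyValue [ 0 0.02  0.01,    # open
             0 0.012 0.018,   # intermediate, swung forward in Z
             0 0.0   0.02 ]   # closed
}
ROUTE LidClock.fraction_changed TO LidMorph.set_fraction
ROUTE LidMorph.value_changed TO LidCoord.set_point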

Next, it seems unlikely that the Displacer as now specified could handle this eyelid motion, which, given the eyelid's standard orientation and path, probably follows a curved path in Y and Z. Since the Displacer can only deliver a varying magnitude along a fixed direction vector, a motion curve that carries a point through more than one direction may not be possible.

One added point I learned is that the Blender Relative mode accepts weight inputs greater than 1 and less than zero, and the tool will extrapolate along the given vector. This might seem very useful when we want to reuse a Displacer that is similar enough to another one: we could just change the input scale, say from 0 to 1 to 0.05 to 1.25, without having to edit the actual set of vectors. This sounds easy and convenient but probably has some downsides in a close simulation. I think we could recommend a different sort of hack for this.
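A small worked example of that extrapolation, using a hypothetical displacement vector: given a displacement of 0 -0.01 0, a weight of 1.25 would move the point 1.25 * -0.01 = -0.0125 unit along that vector, and a weight of -0.5 would move it +0.005 unit, backwards along the same vector.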

The Displacer is offered as a substitute for coordinate interpolators, and it really can help to simplify the user code in many situations, especially at realtime runtime.

Does this set of features offer the opportunity to think about how to update the Displacer?
For example:
(1) allow multiple keys and vector sets so that more complex point motions can be represented (sketched below)?
(2) extend the weight input range to less than 0 and greater than 1, and let the runtime extrapolate?
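One possible field layout for item (1), purely a sketch of the idea and not an agreed design, following the interface style used for the Displacer later in this thread:

interface Displacer {
  coordIndex    [ list of indices ]
  key           [ list of 0 to 1 weight breakpoints ]
  displacements [ one list of 3D vectors per key, interpolated as the weight crosses each breakpoint ]
  name          "required name"
  weight        0
}

Item (2) would then only need the runtime to keep scaling along the first or last vector set when the weight falls outside the key range.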
   
Thanks and Best, 
Joe

   
-----Original Message-----
From: Joe D Williams <joedwil at earthlink.net>
Sent: Oct 21, 2024 5:07 PM
To: <x3d-public at web3d.org>, <hanim at web3d.org>
Subject: Fw: Re: Blend Shapes


-----Forwarded Message-----
From: Joe D Williams
Sent: Oct 21, 2024 5:04 PM
Subject: Re: Blend Shapes

Just as an update to this, in Blender we see BlendShapes in two styles, Absolute and Relative.

BlendShapes Absolute presents the process of a CoordinateInterpolator that deforms or morphs the complete mesh: the author creates a base mesh, then additional mesh shapes at key times, and the complete shape moves from the base shape through the various key shapes, with intermediate values produced by several available styles of interpolation.

BlendShapes Relative allows picking out all, or a subset of, mesh points to be moved relative to the current mesh point locations according to a scalar input.

So, the only difference between Blender and X3D is the style of naming the values.

For BlendShape Absolute, it seems like the internal data for the coordinate points, the X3D keyValue field, and the timing, or weight, the X3D key field, could be easily exported. The big problem is that Blender allows several interpolation maths other than linear. Is it time to introduce other interpolation maths to X3D?

For BlendShape Relative, the Blender data is basically the same as for the X3D Displacer, consisting of the list of mesh indices to be operated on, a corresponding list of 3D vectors to be drawn relative to the current mesh points, and a weight input. The biggest difference in the Blender implementation is that the weight is allowed to be less than zero and greater than 1, and the Blender tool extrapolates to find the actual value. This weight overrange feature could be very useful when adapting one setup to another similar setup.

Thanks and Best,
Joe


-----Original Message-----
From: Joe D Williams
Sent: Oct 3, 2024 11:03 AM
To: Myeong Won Lee, Brutzman Donald (Don) (CIV)
Cc: WILLIAM GLASCOE, Joe Williams
Subject: Blend Shapes

As we hear more of high-level authoring systems used in entertainment, such as USD, we encounter the term Blend Shapes, or blendshapes, etc.
Under the covers this is an alternative to skeleton-driven deformable mesh animation, where the points are moved according to a weighted function that takes into consideration the motion of associated Joint nodes producing direct action on a vertex.

Dependence on this style for detailed animation of individual points or groups of points results in networks of skeleton hierarchy rigs in order to produce shape deformations like lip movements or other muscle simulations. Overall, these structures are interesting and can solve some interesting mesh animation problems, but it becomes difficult to produce standardizable compositions.
What is needed is a better tool that can operate with, or independently of, the skeletal animations.

One obvious solution is to add CoordinateInterpolator(s) to move points of the mesh. Here the basic problem is that you must deal with the entire target mesh for every key, and it becomes a special case to combine the effects of multiple interpolators on the same mesh.
I think the two basic ways to think about it are working with a complete mesh and/or with just some part(s) of it, a sparse set, an area of points of the same target mesh.

In reality, you want these skeleton-driven and these additional independent deformations to be summed so that we get the complete animation.
For instance, we would say that for each frame, the movement produced by the added displacements is applied after all skeleton-driven point animation is complete.
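As a sketch of that order of operations, using names of my own here: if the skinning pass leaves a vertex at position P_skinned, and a Displacer lists displacement vector D for that vertex with current weight w, the final position for the frame would be

   P_final = P_skinned + w * D

with multiple Displacers on the same vertex each contributing their own w * D term to the sum.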

So, faced with this problem of simulating geometry and parts of geometry morphing from shape to shape under the control of some set of Joint nodes, while also producing surface movements more or less independently, and while innovating a design that gets rid of cumbersome CoordinateInterpolator operations, HAnim documented an animation form that many agreed was a best-practice, easily standardizable technique.

interface Displacer {
  coordIndex    [ list of indices ]
  description   "Some Functionality"
  displacements [ list of 3D vectors ]
  name          "required name"
  weight        0
}

The coordIndex field contains a list of the coordinate index values of all target mesh points to be animated by this Displacer. The index is zero-based, following the order in which the point appears in the user code. So if only the 57th, 58th, and 59th points were to be animated, the coordIndex field would be:
coordIndex [ 56 57 58 ]

The displacements field contains a list of 3D vectors that represent the maximum movement from the default, or, magically enough, from whatever the current location of that point is. Now, to master this you must understand that a point has its own local coordinate system. Like the default coordinate system of a Joint, a point in its default state, before animation, has the same orientation as its parent Joint, rotation 0 0 1 0. This means that if, say, those three points are the tip of a finger and the objective is to move those points directly outward by 0.01 unit, then the vectors would be:
displacements [ 0 -0.01 0, 0 -0.01 0, 0 -0.01 0 ]

Now the weight field receives a 0 to 1 floating point value that linearly scales the output vector. If zero, then no added movement; if 0.5, then one-half; and if 1, then maximum. So if 1 is received, the point will be moved -0.01 unit along the point's Y axis in the geometry coordinate system.
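Putting those fields together, a minimal sketch of the fingertip example as user code, with a hypothetical DEF name (the node is HAnimDisplacer in the X3D encodings):

DEF TipPress HAnimDisplacer {
  name "r_index_tip_press"
  coordIndex [ 56 57 58 ]
  displacements [ 0 -0.01 0, 0 -0.01 0, 0 -0.01 0 ]
  weight 0
}

Routing a 0 to 1 scalar to TipPress.set_weight then drives the movement.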

One way to compute the displacements value, if you happen to have a CoordinateInterpolator you want to get rid of, is to find the coordinates of the point at minimum, key input 0, and at maximum, key input 1, then find the 3D vector that connects those two locations. If you are finding these values by examination, a major consideration is that you want to set these up with the default point coordinate system in mind.
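A small worked example with hypothetical coordinates: if the point sits at 0 0.02 0.01 in the keyValue set for key 0 and at 0 0.012 0.01 in the set for key 1, then the displacement vector is the difference of the two, 0 -0.008 0.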

So, finally, why did HAnim call it a Displacer? Well, there were a couple of names offered, and the HAnim folks just had to pick something different from what anyone else was actually showing. The idea of a Displacer is very clear: you are telling it to displace a point from its current location in some direction relative to its default coordinate system.

In summary, it seems to me that the terms I have heard, like deformer, morpher, blended shapes, and others, can all be represented by the functionality of the HAnim Displacer. I hope that is good news.

Thanks and Fun,
Joe
