[Korea-chapter] [X3D] H-Anim WG meeting, March 6th, 5:00pm PST (7th, 10:00am KST)

Joe D Williams joedwil at earthlink.net
Thu Mar 7 06:23:27 PST 2013


responses below

----- Original Message ----- 
From: "Myeong Won Lee" <mwlee at suwon.ac.kr>
To: <x3d at web3d.org>; <h-anim at web3d.org>; <korea-chapter at web3d.org>
Sent: Tuesday, March 05, 2013 11:40 PM
Subject: [X3D] H-Anim WG meeting, March 6th, 5:00pm PST (7th, 10:00am KST)


>
> The H-Anim WG meeting will be held with agenda items as follows:
>
> 1. Review of Joe's comments
>
> Thank you very much for sending many valuable messages and 
> information.
> Below is the list of selected topics starting with the most current:
>
> --------------------------------
> On February 14, Joe mentioned:
>
>> http://www.euclideanspace.com/maths/geometry/rotations/euler/index.htm
>
>>Of current importance, this shows conversion between those two most
>>important forms and the Euler angle representation. The Euler angle is
>>a proposed candidate h-anim joint animation input form due to its
>>current use in consumer motion interaction simulations. However, if it
>>is the WG's intent to actually add Euler-angle to axis-angle
>>conversion internal to the X3D browser, I suggest that it is much more
>>reliable, much more efficient, and totally sufficient for an X3D tool
>>to expect any external motion devices to interact with our h-anim
>>character using native forms of rotation and location.
>
> For animation generators or browsers, Euler angles may be used
> efficiently.
> However, since not all current motion capture devices provide Euler
> parameter values, I am thinking that the humanoid file format
> shouldn't require them.
> If the file format included Euler angles, all browsers would have to
> be able to interpret them.

I am seeing Euler angles used for recording video, where every frame
is a keyframe with no interpolation.
I think a prototype Joint node with a script that converts the Euler
angles to the native axis-angle form to drive the Joint rotation input
would be appropriate as an example, but I do not believe we should
make the use of Euler angle values a requirement for the realtime
browser.
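
As an illustration of that prototype approach, here is a minimal
sketch in Python (a Script node would actually use ECMAScript; the
rotation order and function name here are my assumptions, not anything
from the standard) of the Euler-angle to axis-angle conversion that
would drive the Joint rotation input:

import math

def euler_to_axis_angle(rx, ry, rz):
    """Convert Euler angles (radians, intrinsic X-Y-Z order) to the
    X3D SFRotation form: unit axis (x, y, z) plus angle in radians."""
    # Build a quaternion from the three elemental rotations.
    cx, sx = math.cos(rx / 2), math.sin(rx / 2)
    cy, sy = math.cos(ry / 2), math.sin(ry / 2)
    cz, sz = math.cos(rz / 2), math.sin(rz / 2)
    qw = cx * cy * cz - sx * sy * sz
    qx = sx * cy * cz + cx * sy * sz
    qy = cx * sy * cz - sx * cy * sz
    qz = cx * cy * sz + sx * sy * cz
    # Convert the quaternion to axis-angle.
    angle = 2 * math.acos(max(-1.0, min(1.0, qw)))
    s = math.sqrt(max(0.0, 1.0 - qw * qw))
    if s < 1e-9:                 # near-zero rotation: any axis will do
        return (1.0, 0.0, 0.0, 0.0)
    return (qx / s, qy / s, qz / s, angle)

# 90 degrees about Y alone -> axis (0, 1, 0), angle pi/2
print(euler_to_axis_angle(0.0, math.pi / 2, 0.0))

Different devices deliver different rotation orders (Z-X-Y is also
common in motion capture), so in practice the order would have to be a
parameter of the conversion.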

>
> ---------------------------------
>
> On February 6, Joe mentioned:
>
>>What is the difference between 'motion capture' and 'scanner' data?
>>I associate 'motion capture' with data where specific surface features
>>are tracked and recorded in order to recreate motions in the context
>>of a skeleton. I associate 'scanner' data with external surface
>>feature detection in order to recreate an external surface or 'skin'
>>without the need to generate animations based in the context of a
>>skeleton.
>>If a 'scanner' detects motion by tracking features for the purpose of
>>animating the skeleton hierarchy, then I would call it 'motion
>>capture' instead.
>>If the application is just seeking and tracking various external
>>surface feature points to create a model skin without connection with
>>a skeleton, I would call it a scanner; thus no h-anim motion.
>
> Motion capture devices usually generate and store location and
> rotation parameters in BVH files, for example, while 3D scanner
> devices usually generate and store geometric polygon data in PLY
> files, for example.
> Motion capture files usually include motion parameter values
> representing the motion of joints.
> 3D scanner data usually includes coordinate values for the entire
> polygon data set of a scanned object.
> In my thinking, skin feature points are directly relevant to a 3D
> scanner data set rather than to motion capture data.
> In other words, we obtain skin feature points from 3D scanner data
> and then generate skin motion using motion capture data, if
> necessary.

This is a complicated process. However, I think the same skin feature
points can be used both for static scanning and for motion capture.
Several motion capture systems now use sensors in the joints of the
motion capture apparatus instead of visual tracking, so that is a
slightly different problem.
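
To make the distinction concrete, here is a minimal sketch in Python
of the two kinds of data (all names and numbers are hypothetical, not
taken from any particular BVH or PLY file), showing how one named
feature point could serve both the static scan and the capture
session:

# 3D scanner output, PLY-style: one static xyz position per vertex.
scan_vertices = [
    (0.021, 1.548, 0.010),    # vertex 0
    (-0.052, 1.470, -0.032),  # vertex 1
    # ... typically thousands more
]

# Motion capture output, BVH-style: rotation channels per joint, per frame.
mocap_frames = [
    {"l_shoulder": (0.0, 12.5, 3.1), "l_elbow": (0.0, 0.0, 45.0)},  # frame 0
    {"l_shoulder": (0.5, 13.0, 3.0), "l_elbow": (0.0, 0.0, 47.5)},  # frame 1
    # ... one set of Euler-angle triples (degrees) per frame
]

# The same named feature point can serve both data sets: as an index
# into the scan's vertex list, and as the label of a tracked marker.
feature_point_to_scan_vertex = {"r_neck_base": 1}
marker_trajectory = {"r_neck_base": [(-0.052, 1.470, -0.032),
                                     (-0.050, 1.472, -0.031)]}
print(scan_vertices[feature_point_to_scan_vertex["r_neck_base"]])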

>
> ---------------------------------
>
> On February 2, Joe mentioned:
>
>>Displacer nodes can be used to animate any geometry anywhere on or in
>>the h-anim character without any limits. An X3D author can create and
>>document any sort of animation conceivable, call it anything the
>>author desires, and show it anywhere for free. In order for these
>>animations to be transportable without complex processing and some
>>guesses, the 'skin' must also be transportable. Transportable skin
>>means that the same vertex, or set of vertices, serves the same
>>'standard' feature point in a similar location.
>
> According to Joe's definition, a humanoid model consists of a
> transportable skeleton and transportable skin.
> I am not yet sure of this definition. I don't fully understand why
> skin should be transportable.
> For a human, skeleton and skin are tightly coupled. The skin or
> skeleton is usually not used for other humanoids, as it is
> specific to a given humanoid.

First, let's use the word 'transportable' only with regard to the
animation routines.
An animation is transportable between skeletons when the joint
structure is the same or reasonably similar; that is, both skeletons
have the same joint structure. It is not necessary that skeleton1 be
the same "size" as skeleton2.
If both you and I are making a representation of ourselves, we would
use the same Joints, but at slightly different locations because we
are different sizes. In general, size alone would not prevent us from
using the same animation routines for both skeletons.

If we include a deformable skin, then each vertex of the skin is
connected to one or more Joint nodes so that the skin can be animated
by joint rotations. In this case, an animation of one character is
transportable when the skins of both characters are hooked up the same
way.
In the simplest case, if both skins have the same number of vertices
and corresponding vertices are hooked up to the same Joints, then the
animation will be transportable between the two skeletons.

Note that the skins would not need to be the same size, just that the
skin vertices are hooked up the same way.
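
As a minimal sketch of this hookup (in Python for illustration; all
joint names, weights, and coordinates here are hypothetical, while
H-Anim itself expresses the hookup with each Joint's skinCoordIndex
and skinCoordWeight fields):

import math

# joint name -> current rotation about Z in radians, standing in for
# the full per-joint transform a real H-Anim browser would accumulate
joint_angles = {"l_shoulder": 0.0, "l_elbow": math.radians(45.0)}

# per-vertex hookup: rest position plus (joint, weight) influences
skin = [
    {"rest": (0.30, 1.20, 0.0), "influences": [("l_elbow", 1.0)]},
    {"rest": (0.32, 1.18, 0.0),
     "influences": [("l_shoulder", 0.5), ("l_elbow", 0.5)]},
]

def rotate_z(point, angle):
    x, y, z = point
    c, s = math.cos(angle), math.sin(angle)
    return (c * x - s * y, s * x + c * y, z)

def deform(vertex):
    # Blend each influencing joint's transform of the rest position.
    x = y = z = 0.0
    for joint, weight in vertex["influences"]:
        px, py, pz = rotate_z(vertex["rest"], joint_angles[joint])
        x += weight * px
        y += weight * py
        z += weight * pz
    return (x, y, z)

for v in skin:
    print(deform(v))

Two characters whose corresponding vertices carry the same influence
lists can share the same animation routine, whatever their sizes.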

> In my thinking, it is enough to model skin together with the
> skeleton of a person.
> In addition, I'm concerned that skin designed separately from the
> skeleton could introduce a retargeting problem, separate from the
> case of motion, because skin must be adjusted to the skeleton and
> segments.

Great thought. Please note that the h-anim example user code in the
standard uses 'typical' non-normative xyz dimensions obtained from a
big research project which sampled a large number of human dimensions.
This produces a skeleton constructed in 'human' space with 'human'
dimensions. If I want to customize it for myself, then I just change
the locations of joints and lengths of segments according to my actual
measurements. As long as my custom skeleton has the same structure,
meaning the same number of joints and segments, I can use an animation
for any similar skeleton and get similar results.

When I add a continuous-mesh deformable skin it gets more complex,
because I must connect each skin vertex to the appropriate one or two
joints that control its motion. I would prefer to draw the skin in the
same 'human' space as the skeleton. In the h-anim standard we defined
several feature points and gave each a 'typical' xyz value with
dimensions in the same skeleton space. To me, these feature point
markers seem like great vertex points for the skin. I just change the
xyz locations to my measurements. I end up with a custom skeleton and
a custom skin. Somebody else could use this skin by simply changing
the xyz feature point locations to match their personal dimensions,
just like resizing the skeleton.

Now, given that the skeletons are structurally similar and the same
skin vertices are hooked up to the same joints, the animation routines
for one model can be used for another model.
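
Here is a minimal sketch of that resizing idea (Python for
illustration; the feature point names echo the standard but every
number here is hypothetical). The named points keep their names and
hookups; only their xyz values change to personal measurements:

# 'typical' feature point positions in the reference skeleton space
typical = {
    "skull_tip":   (0.0, 1.75, 0.0),
    "r_neck_base": (-0.05, 1.47, 0.0),
}

TYPICAL_HEIGHT = 1.75   # overall height of the reference data, meters
my_height = 1.62        # my measured height

# Simplest possible customization: one uniform scale by overall height.
# A real fit would adjust per segment from individual measurements.
scale = my_height / TYPICAL_HEIGHT
custom = {name: (x * scale, y * scale, z * scale)
          for name, (x, y, z) in typical.items()}

print(custom["skull_tip"])   # same named point, new personal location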

More later
Best Regards,
Joe

>
> ---------------------------------
>
> On January 28, Joe mentioned:
>
>>Just to pin down what I am discussing as the "skin drawn in skeleton
>>space", please see this part of the listing from:
>>http://www.hypermultimedia.com/x3d/hanim/JoeH-AnimKick1a.txt
>
> Re: 389 skin feature points.
> I am wondering how to map humanoid mesh data to the points.
> I'm not sure yet that such a large set of skin feature points is
> necessary.
> I will study this more, but my considerations are as follows:
>
> - Skin feature points may be needed for tracking the motion of skin.
> If there is no skin motion, feature points would not be necessary.
> Skin usually moves with joints or muscles, while the skin's material
> does not change with the motion.
> Skin aging is different, but it is better seen as requiring a change
> of materials rather than a change of feature points.
>
> - Rather, muscle feature points may be necessary because muscle can
> generate motion.
> However, the problem is that muscle is hidden under skin, and so it
> is difficult to represent muscle motion using skin.
> Muscle should be represented independently from skin.
>
> - With camera or scanner technology, it is easy to obtain realistic 
> skin using texture mapping.
> In this case, I am not sure if skin feature points are necessary.
>
> - Despite these concerns, if the 389 feature points are used, how
> will such a large number of points be mapped onto the large data set
> of a human model?
> A human model may be generated from a 3D scanner or from large mesh
> data sets by a geometric modeller.
> In the case of a scanner, we would have to define the 389 feature
> points to generate the H-Anim data, and then how would we generate
> animation using the points?
>
> - Another problem is how to know how an assigned feature point for
> one human maps to the corresponding feature point for another human.
> The tip of the skull is ok, but I am not sure, for example, how we
> can find the vertex corresponding to feature point No. 74,
> r_neck_base, among a number of neck vertices (see the sketch after
> this list). Otherwise, do you mean that a character designer must
> provide that number from among the neck vertices? Then, do you mean
> that the designer must provide all 389 numbers from among the whole
> body's skin vertices?
>
> - Some feature points are used for representing humanoid animation 
> as moving points
> around joints and muscles. Therefore, feature points covering the 
> entire human skin may not be necessary.
>
> - If we track such a large number of skin feature points using 
> motion capture devices,
> it seems that their exact position and rotation parameters cannot 
> always be obtained.
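
One possible answer to the r_neck_base mapping question above, as a
minimal sketch in Python (the xyz values are illustrative, not taken
from the standard): place the 'typical' feature point position into
the scanned mesh by picking the nearest vertex, then record that
index. A production tool would refine this with landmark detection,
but nearest-vertex gives a first guess a designer can correct by hand.

def nearest_vertex(point, vertices):
    """Return the index of the mesh vertex closest to 'point'."""
    px, py, pz = point
    best, best_d2 = -1, float("inf")
    for i, (x, y, z) in enumerate(vertices):
        d2 = (x - px) ** 2 + (y - py) ** 2 + (z - pz) ** 2
        if d2 < best_d2:
            best, best_d2 = i, d2
    return best

# 'typical' location of feature point No. 74, r_neck_base (illustrative)
r_neck_base = (-0.052, 1.470, -0.032)
scan = [(-0.050, 1.468, -0.030), (-0.080, 1.500, -0.020), (0.0, 1.75, 0.0)]
print(nearest_vertex(r_neck_base, scan))   # -> 0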
>
> ------------------------------------
>
> 2. H-Anim WG Wiki update
>
> http://web3d.org/x3d/wiki/index.php/H-Anim
>
>
>
> 3. Review of the NWIP documents
>
> Please send any comments about the documents.
> They will be amended based on your comments.
>
>
> 4. Scheduling next meeting
>
> (1st Wed) April 3rd at 5:00pm PDT (4th, 9:00am KST)
>
>
>
>
>      -- 
>      Myeong Won Lee
>      Dept. of Information Media, College of IT, U. of Suwon
>      Hwaseong-si, Gyeonggi-do, 445-743 Korea
>      Tel) +82-31-220-2313, +82-10-8904-4634
>      E-mail) mwlee at suwon.ac.kr
>
>

