[Korea-chapter] [h-anim] H-Anim WG meeting, March 6th, 5:00pm PST (7th, 10:00am KST)

Joe D Williams joedwil at earthlink.net
Wed Mar 6 01:25:33 PST 2013

Thank You,

----- Original Message ----- 
From: "Myeong Won Lee" <mwlee at suwon.ac.kr>
To: <x3d at web3d.org>; <h-anim at web3d.org>; <korea-chapter at web3d.org>
Sent: Tuesday, March 05, 2013 11:40 PM
Subject: [h-anim] H-Anim WG meeting, March 6th, 5:00pm PST (7th, 10:00am KST)

> The H-Anim WG meeting will be held with agenda items as follows:
> 1. Review of Joe's comments
> Thank you very much for sending many valuable messages and 
> information.
> Below is the list of selected topics starting with the most current:
> --------------------------------
> On February 14, Joe mentioned:
>> http://www.euclideanspace.com/maths/geometry/rotations/euler/index.htm
>> Of current importance, this shows conversion between those two most
>> important forms and the Euler angle representation. Euler angles are
>> a proposed candidate H-Anim joint animation input form due to their
>> current use in consumer motion interaction simulations. However, if
>> it is the WG intent to actually add Euler angle -> axis-angle
>> conversion internal to the X3D browser, I suggest that it is much
>> more reliable, much more efficient, and entirely sufficient for an
>> X3D tool to expect any external motion devices to interact with our
>> H-Anim character using native forms of rotation and location.
> For animation generators or browsers, Euler angles may be used
> efficiently. However, since not all current motion capture devices
> provide Euler parameter values, I think the humanoid file format
> should not include them. If the file format included Euler angles,
> all browsers would have to be able to interpret them.
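> For concreteness, a minimal sketch of the Euler angle -> axis-angle
> conversion under discussion, going through a quaternion as on the
> euclideanspace.com page above. The intrinsic X-Y-Z rotation order is
> an assumption here, since devices differ on this convention:
>
>     import math
>
>     def euler_to_axis_angle(rx, ry, rz):
>         """Intrinsic X-Y-Z Euler angles (radians) to X3D-style
>         axis-angle form ((ax, ay, az), angle)."""
>         cx, sx = math.cos(rx / 2), math.sin(rx / 2)
>         cy, sy = math.cos(ry / 2), math.sin(ry / 2)
>         cz, sz = math.cos(rz / 2), math.sin(rz / 2)
>         # Quaternion for Rx * Ry * Rz (assumed rotation order).
>         w = cx * cy * cz - sx * sy * sz
>         x = sx * cy * cz + cx * sy * sz
>         y = cx * sy * cz - sx * cy * sz
>         z = cx * cy * sz + sx * sy * cz
>         # Quaternion -> axis-angle, guarding the zero-rotation case.
>         angle = 2 * math.acos(max(-1.0, min(1.0, w)))
>         s = math.sqrt(max(0.0, 1.0 - w * w))
>         if s < 1e-9:
>             return (1.0, 0.0, 0.0), 0.0  # no rotation; axis arbitrary
>         return (x / s, y / s, z / s), angle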
> ---------------------------------
> On February 6, Joe mentioned:
>> [What is the difference between 'motion capture' and 'scanner' data?
>> I associate 'motion capture' with data where some specific surface
>> features are tracked and recorded in order to recreate motions in
>> the context of a skeleton. I associate 'scanner' data with external
>> surface feature detection in order to recreate an external surface
>> or 'skin' without the need to generate animations in the context of
>> a skeleton. If a 'scanner' detects motion by tracking features for
>> the purpose of animating the skeleton hierarchy, then I would call
>> it 'motion capture' instead.
>> If the application is just seeking and tracking various external
>> surface feature points to create a model skin without connection to
>> a skeleton, I would call it a scanner; thus no H-Anim motion.
> Motion capture devices usually generate and store location and
> rotation parameters, for example into BVH files, while 3D scanner
> devices usually generate and store geometric polygon data, for
> example into PLY files.
> Motion capture files usually include motion parameter values
> representing the motion of joints, while 3D scanner data usually
> includes coordinate values for the entire polygon data set of a
> scanned object.
> In my thinking, skin feature points are directly relevant to a 3D
> scanner data set rather than to motion capture data.
> In other words, we obtain skin feature points from 3D scanner data
> and then, if necessary, generate skin motion using motion capture
> data.
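> To make the contrast concrete, a minimal sketch of what the two
> kinds of files carry: per-frame joint channel values in a BVH MOTION
> section versus per-vertex coordinates in an ASCII PLY file. This
> assumes strictly formatted files; real parsers need more care:
>
>     def read_bvh_frames(path):
>         """Per-frame channel values (joint rotations/translations)
>         from the MOTION section of a BVH file."""
>         with open(path) as f:
>             lines = f.read().splitlines()
>         start = lines.index("MOTION") + 3  # skip Frames:/Frame Time:
>         return [[float(v) for v in ln.split()]
>                 for ln in lines[start:] if ln.strip()]
>
>     def read_ply_vertices(path):
>         """(x, y, z) tuples from an ASCII PLY file."""
>         with open(path) as f:
>             lines = f.read().splitlines()
>         count = next(int(ln.split()[-1]) for ln in lines
>                      if ln.startswith("element vertex"))
>         start = lines.index("end_header") + 1
>         return [tuple(float(v) for v in ln.split()[:3])
>                 for ln in lines[start:start + count]]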
> ---------------------------------
> On February 2, Joe mentioned:
>> Displacer nodes can be used to animate any geometry anywhere on or
>> in the H-Anim character without any limits. An X3D author can create
>> and document any sort of animation conceivable, call it anything the
>> author desires, and show it anywhere for free. In order for these
>> animations to be transportable without complex processing and
>> guesswork, the 'skin' must also be transportable. Transportable skin
>> means that the same vertex, or set of vertices, serves as the same
>> 'standard' feature point in a similar location.
> According to Joe's definition, a humanoid model consists of a
> transportable skeleton and transportable skin.
> I am not yet sure of this definition; I don't fully understand why
> skin should be transportable.
> For a human, skeleton and skin are tightly coupled. A given skin or
> skeleton is usually not reused for other humanoids, as it is
> specific to one humanoid.
> In my thinking, it is enough to model the skin together with the
> skeleton of a person.
> In addition, I'm concerned that skin designed separately from the
> skeleton could introduce a retargeting problem, distinct from motion
> retargeting, because the skin must be adjusted to the skeleton and
> its segments.
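> As background for the Displacer point above, a minimal sketch of the
> run-time effect of an H-Anim Displacer: each vertex listed in
> coordIndex is offset by its displacement vector scaled by the
> current weight. The function name and data layout are illustrative:
>
>     def apply_displacer(coords, coord_index, displacements, weight):
>         """Return skin coordinates with the Displacer applied:
>         coords[coord_index[i]] += weight * displacements[i]."""
>         out = [list(p) for p in coords]  # copy base coordinates
>         for idx, (dx, dy, dz) in zip(coord_index, displacements):
>             out[idx][0] += weight * dx
>             out[idx][1] += weight * dy
>             out[idx][2] += weight * dz
>         return out
>
>     # Animating means routing a new weight each frame, e.g. 0.0 -> 1.0:
>     skin = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
>     moved = apply_displacer(skin, [1, 2],
>                             [(0.0, 0.1, 0.0), (0.0, 0.0, 0.1)], 0.5)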
> ---------------------------------
> On January 28, Joe mentioned:
>> Just to pin down what I am discussing as the "skin drawn in
>> skeleton space", please see this part of the listing from:
> Re: 389 skin feature points.
> I am wondering how to map humanoid mesh data to the points.
> I’m not sure yet that such a large set of skin feature points is 
> necessary.
> I will study this more, but my considerations are as follows:
> - Skin feature points may be needed for tracking the motion of skin.
> If there is no skin motion, feature points would not be necessary.
> Skin usually moves with joints or muscles, while the skin's material 
> does not change with the motion.
> Skin aging is a different case: it is better seen as requiring a
> change of materials rather than a change of feature points.
> - Rather, muscle feature points may be necessary because muscles can
> generate motion.
> However, the problem is that muscle is hidden under the skin, so it
> is difficult to represent muscle motion using skin.
> Muscle should be represented independently of skin.
> - With camera or scanner technology, it is easy to obtain realistic 
> skin using texture mapping.
> In this case, I am not sure if skin feature points are necessary.
> - Despite these concerns, if the 389 feature points are used, how
> will such a large number of points be mapped onto the large data set
> of a human model?
> A human model may be generated from a 3D scanner or from large mesh
> data sets by a geometric modeller.
> In the case of a scanner, we would have to define the 389 feature
> points in order to generate the H-Anim data, and then how would we
> generate animation using those points?
> - Another problem is how to know how a feature point assigned for
> one human maps to the corresponding feature point for another human.
> The tip of the skull is clear enough, but I am not sure, for
> example, how we can find the vertex corresponding to feature point
> No. 74, r_neck_base, among the many neck vertices (one possible
> mapping approach is sketched after this list).
> Or do you mean that a character designer must pick out that vertex
> number among the neck vertices, and hence must provide all 389 such
> indices among the whole-body skin vertices?
> - Some feature points are used for representing humanoid animation 
> as moving points
> around joints and muscles. Therefore, feature points covering the 
> entire human skin may not be necessary.
> - If we track such a large number of skin feature points using
> motion capture devices, it seems that exact position and rotation
> parameters cannot always be obtained for all of them.
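> On the r_neck_base question above, one possible approach (a sketch
> with hypothetical landmark names and coordinates, not an agreed WG
> method) is to place each of the 389 feature points once on a
> reference model, align and scale the scanned mesh to that model, and
> then take the nearest scanned vertex to each reference landmark:
>
>     import math
>
>     def nearest_vertex(landmark, vertices):
>         """Index of the mesh vertex closest to a landmark position."""
>         return min(range(len(vertices)),
>                    key=lambda i: math.dist(landmark, vertices[i]))
>
>     # Hypothetical reference positions for two feature points, in
>     # the same aligned, normalized space as the scanned mesh.
>     reference = {"skull_tip": (0.0, 1.75, 0.0),
>                  "r_neck_base": (-0.06, 1.43, 0.0)}
>
>     def map_feature_points(landmarks, scanned_vertices):
>         """Map each named landmark to a vertex index of the mesh."""
>         return {name: nearest_vertex(pos, scanned_vertices)
>                 for name, pos in landmarks.items()}
>
> This only works if the meshes are consistently aligned and scaled;
> otherwise some form of surface registration would be needed first.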
> ------------------------------------
> 2. H-Anim WG Wiki update
> http://web3d.org/x3d/wiki/index.php/H-Anim
> 3. Review of the NWIP documents
> Please send any comments about the documents.
> They will be amended based on your comments.
> 4. Scheduling next meeting
> (1st Wed) April 3rd at 5:00pm PDT (4th, 9:00am KST)
>      -- 
>      Myeong Won Lee
>      Dept. of Information Media, College of IT, U. of Suwon
>      Hwaseong-si, Gyeonggi-do, 445-743 Korea
>      Tel) +82-31-220-2313, +82-10-8904-4634
>      E-mail) mwlee at suwon.ac.kr


