The H-Anim WG meeting will be held with agenda items as follows:

1. Review of Joe's comments
Thank you very much for sending so many valuable messages and so much information.
Below is the list of selected topics, starting with the most recent:
--------------------------------
On February 14, Joe mentioned:
> http://www.euclideanspace.com/maths/geometry/rotations/euler/index.htm
> Of current importance, this shows conversion between those two most
> important forms and the Euler Angle representation. Euler angle is a
> proposed candidate h-anim joint animation input form due to its
> current use in consumer motion interaction simulations. However, if it
> is the wg intent to actually add Eulerangle -> axisangle conversion
> internal to the X3D browser, I suggest that it is much more reliable
> and much more efficient and totally sufficient for an X3D tool to
> expect any external motion devices to interact with our h-anim
> character using native forms of rotation and location.
For animation generators or browsers, Euler angles can be used efficiently.
However, since not all current motion capture devices provide Euler parameter values,
I think the humanoid file format should not include them.
If the file format included Euler angles, every browser would have to be able to interpret them.
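To make the conversion being discussed concrete, here is a minimal sketch in Python of one possible Eulerangle -> axisangle conversion (going through a quaternion as an intermediate form). It assumes angles in radians and one particular rotation order; the order, function name, and example values are my own assumptions, not anything agreed by the WG:

    import math

    def euler_to_axis_angle(rx, ry, rz):
        """Convert Euler angles (radians) to an X3D-style axis-angle
        rotation (ax, ay, az, angle). Assumes R = Rz * Ry * Rx; other
        conventions give different results."""
        cx, sx = math.cos(rx / 2), math.sin(rx / 2)
        cy, sy = math.cos(ry / 2), math.sin(ry / 2)
        cz, sz = math.cos(rz / 2), math.sin(rz / 2)

        # Quaternion for the assumed rotation order
        qw = cx * cy * cz + sx * sy * sz
        qx = sx * cy * cz - cx * sy * sz
        qy = cx * sy * cz + sx * cy * sz
        qz = cx * cy * sz - sx * sy * cz

        angle = 2 * math.acos(max(-1.0, min(1.0, qw)))
        s = math.sqrt(max(0.0, 1.0 - qw * qw))
        if s < 1e-9:                       # no rotation: any axis will do
            return (0.0, 0.0, 1.0, 0.0)
        return (qx / s, qy / s, qz / s, angle)

    # Example: 90 degrees about the X axis
    print(euler_to_axis_angle(math.pi / 2, 0.0, 0.0))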
---------------------------------
On February 6, Joe mentioned:
> [What is the difference between 'motion capture' and 'scanner' data?
> I associate 'motion capture' data where some specific surface features
> are tracked and recorded in order to recreate motions in the context
> of a skeleton. I associate 'scanner' data with external surface
> feature detection in order to recreate an external surface or 'skin'
> without the need to generate animations based in the context of a
> skeleton.
> If a 'scanner' detects motion by tracking features for the purpose of
> animating the skeleton hierarchy, then I would call it 'motion
> capture' instead.
> If the application is just seeking and tracking various external
> surface feature points to create a model skin without connection with
> a skeleton, I would call it a scanner; thus no h-anim motion.
Motion capture devices usually generate and store location and rotation parameters, for example in BVH files,
while 3D scanner devices usually generate and store geometric polygon data, for example in PLY files.
Motion capture files usually include motion parameter values representing the motion of joints,
while 3D scanner data usually includes coordinate values for the entire polygon data set of a scanned object.
In my view, skin feature points are directly relevant to a 3D scanner data set rather than to motion capture data.
In other words, we obtain skin feature points from 3D scanner data and then, if necessary, generate skin motion using motion capture data.
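Just to illustrate the distinction, the sketch below (Python, with invented field names and values) contrasts the kind of data a BVH-style motion capture file carries (a joint hierarchy plus per-frame channel values) with the kind a PLY-style scan file carries (static vertices and polygons). It is only an illustration, not a parser for either format:

    # Motion capture (BVH-like): per-joint channels plus per-frame values
    bvh_like = {
        "hierarchy": {
            "Hips": {"channels": ["Xposition", "Yposition", "Zposition",
                                  "Zrotation", "Xrotation", "Yrotation"],
                     "children": ["Spine", "LeftUpLeg", "RightUpLeg"]},
        },
        "frames": [[0.0, 93.2, 0.0, 1.5, -0.3, 0.8]],   # one frame of motion
    }

    # 3D scan (PLY-like): static geometry, no joints and no motion
    ply_like = {
        "vertices": [(0.01, 1.62, 0.05), (0.02, 1.61, 0.04)],  # x, y, z
        "faces": [(0, 1, 2)],                                  # polygon indices
    }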
---------------------------------
On February 2, Joe mentioned:
> Displacer nodes can be used to animate any geometry anywhere on or in
> the h-anim character without any limits. An X3D author can create and
> document any sort of animation conceivable and call it anything the
> author desires and show it anywhere for free. In order for these
> animations to be transportable without complex processing and some
> guesses, the 'skin' must also be transportable. Transportable skin
> means that the same vertex, or set of vertices, serve the same
> 'standard' feature point in a similar location.
According to Joe's definition, a humanoid model consists of a transportable skeleton and transportable skin.
I am not yet sure about this definition; I do not fully understand why skin should be transportable.
For a human, skeleton and skin are tightly coupled. A given skin or skeleton is usually not reused for other humanoids,
since it is specific to a particular humanoid.
In my view, it is enough to model the skin together with the skeleton of a person.
In addition, I am concerned that skin designed separately from the skeleton could introduce a retargeting problem,
distinct from the motion retargeting case, because the skin must be adjusted to the skeleton and segments.
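As background for the Displacer discussion, the following sketch (Python, with invented vertex data) shows the per-vertex computation a Displacer implies as I understand it: each indexed skin vertex is offset by its displacement vector scaled by the weight. It is only an illustration of the idea, not a browser implementation:

    def apply_displacer(coords, coord_index, displacements, weight):
        """Offset the indexed vertices by weight * displacement
        (a sketch of the Displacer idea, not an X3D browser)."""
        out = [list(p) for p in coords]            # copy the base skin coords
        for idx, (dx, dy, dz) in zip(coord_index, displacements):
            out[idx][0] += weight * dx
            out[idx][1] += weight * dy
            out[idx][2] += weight * dz
        return out

    # Invented data: three skin vertices, two pushed outward by a morph
    base = [(0.0, 1.6, 0.10), (0.03, 1.58, 0.11), (-0.03, 1.58, 0.11)]
    print(apply_displacer(base, coord_index=[1, 2],
                          displacements=[(0.0, 0.005, 0.01), (0.0, 0.005, 0.01)],
                          weight=0.5))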
---------------------------------
On January 28, Joe mentioned:
> Just to pin down what I am discussing as the "skin drawn in skeleton
> space" please see this part of listing from:
> http://www.hypermultimedia.com/x3d/hanim/JoeH-AnimKick1a.txt
Re: the 389 skin feature points.
I am wondering how to map humanoid mesh data to these points.
I am not yet sure that such a large set of skin feature points is necessary.
I will study this further, but my considerations are as follows (a rough matching sketch follows the list):
- Skin feature points may be needed for tracking the motion of skin.
If there is no skin motion, feature points would not be necessary.
Skin usually moves with joints or muscles, while the skin's material does not change with the motion.
Skin aging is a different case: it is better modeled as a change of materials rather than a change of feature points.
- Rather, muscle feature points may be necessary because muscles can generate motion.
However, the problem is that muscle is hidden under the skin, so it is difficult to represent muscle motion using skin.
Muscle should be represented independently of skin.
- With camera or scanner technology, it is easy to obtain realistic skin using texture mapping.
In this case, I am not sure if skin feature points are necessary.
- Despite these concerns, if the 389 feature points are used, how will such a large number of points
be mapped onto the large data set of a human model?
A human model may be generated from a 3D scanner, or from a large mesh data set by a geometric modeller.
In the case of a scanner, we would have to define the 389 feature points in order to generate the H-Anim data, and then
how would we generate animation using those points?
- Another problem is how to know how a feature point assigned for one human maps to the corresponding feature point
for another human. The tip of the skull is straightforward,
but I am not sure, for example, how we can find the vertex corresponding to feature point No. 74, r_neck_base, among the many neck vertices.
Or do you mean that a character designer must identify that vertex number among the many neck vertices?
And, in that case, that the designer must provide all 389 vertex numbers from among the whole-body skin vertices?
- Some feature points are used for representing humanoid animation as points moving
around joints and muscles. Therefore, feature points covering the entire human skin may not be necessary.
- If we track such a large number of skin feature points using motion capture devices,
it seems that their exact position and rotation parameters cannot always be obtained.
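Regarding the mapping question above, one possible (and very rough) approach would be to annotate approximate landmark positions and then pick the nearest scan vertex for each of the 389 feature points. The sketch below (Python) uses an invented landmark position and invented vertices; it is only meant to make the question concrete, not to propose a WG method:

    def nearest_vertex(landmark, vertices):
        """Return the index of the mesh vertex closest to a landmark position.
        A brute-force sketch; a real tool would likely use a spatial index."""
        def dist2(a, b):
            return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
        return min(range(len(vertices)), key=lambda i: dist2(vertices[i], landmark))

    # Invented data: an approximate r_neck_base position and a few scan vertices
    landmarks = {"r_neck_base": (0.06, 1.43, 0.02)}      # assumed position, metres
    scan_vertices = [(0.05, 1.44, 0.01), (0.07, 1.40, 0.03), (0.00, 1.70, 0.02)]

    mapping = {name: nearest_vertex(pos, scan_vertices)
               for name, pos in landmarks.items()}
    print(mapping)   # e.g. {'r_neck_base': 0}

Even with such a scheme, the open questions remain whether the annotated positions can be obtained reliably and whether the result is consistent across differently shaped humans.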
------------------------------------
2. H-Anim WG Wiki update

   http://web3d.org/x3d/wiki/index.php/H-Anim
3. Review of the NWIP documents
Please send any comments about the documents.
They will be amended based on your comments.
4. Scheduling next meeting

   (1st Wed) April 3rd at 5:00pm PDT (4th, 9:00am KST)
--
Myeong Won Lee
Dept. of Information Media, College of IT, U. of Suwon
Hwaseong-si, Gyeonggi-do, 445-743 Korea
Tel) +82-31-220-2313, +82-10-8904-4634
E-mail) mwlee@suwon.ac.kr