[X3D-Public] remote motion invocation (RMI)

Joe D Williams joedwil at earthlink.net
Sat Mar 5 21:39:21 PST 2011


Hi Philipp,

> Hi Joe,
>
>>> Different tasks sometimes need different abstractions to describe 
>>> and
>>> encode them well.

>> Yes, and while there are many ways, ISO has picked one.
>
> This is a really funny way to respond to my statement that we need
> several different options: You agree only to immediately say that a
> single one is all that is needed ("best practice"). This is what I 
> mean
> with defensive. Why not think about the limitations (there always 
> are
> some, even in h-anim :-) of what we have and explore if and how we 
> may
> advance the state-of-the-art to not get stuck in a local minimum.

OK, there are some limitations in h-anim.

First, what to do when the skeleton space and the skin space do not
coincide at bind time; that is, when the initial skeleton pose is not
the same as the initial skin pose. Some work on this has been done,
and there is one working implementation (Vivaty) that has facilities
to do what I call normalizing the skin to the skeleton. The candidate
h-anim update is available to members.
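
For reference, here is a minimal sketch of the normal case, where the
two spaces coincide (node and field names are per the X3D H-Anim
component; the DEF names, indices, and coordinate values are just
made up for illustration). The Joint centers and the skinCoord points
are authored in one shared space, so the skeleton and skin bind poses
match:

   <HAnimHumanoid name='Char'>
     <!-- skeleton: joint centers authored in the same space as the skin -->
     <HAnimJoint containerField='skeleton' DEF='hanim_l_shoulder'
                 name='l_shoulder' center='0.2 1.44 -0.04'
                 skinCoordIndex='0 1 2'
                 skinCoordWeight='1.0 0.8 0.5'/>
     <!-- skin vertices; skinCoordIndex above picks points from this list -->
     <Coordinate containerField='skinCoord' DEF='SkinCoords'
                 point='0.21 1.40 -0.04 0.22 1.30 -0.04 0.23 1.20 -0.04'/>
     <Shape containerField='skin'>
       <IndexedFaceSet coordIndex='0 1 2 -1'>
         <Coordinate USE='SkinCoords'/>
       </IndexedFaceSet>
     </Shape>
   </HAnimHumanoid>

The normalization problem is what to do when the skin points were
authored against a different initial pose than the joint centers.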

Next there is the issue of what happens when multiple animation
routines want to send data to the same joint at the same or nearly
the same time stamp, and it is the author's wish for these to be
smoothly combined. For instance, we are walking and want to wave to
the crowd. Instead of creating a new routine, the individual gestures
are somehow combined and the result is a smooth gesture without
unexpected interruption of the walk motion.
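
One way I could picture doing the combining today with existing nodes
(a rough sketch only; WalkInterp, WaveInterp, and the fixed 50/50
slerp are invented for illustration, and a real combiner would weight
the gestures per joint):

   <Script DEF='Blend'>
     <field name='walkRot' type='SFRotation' accessType='inputOnly'/>
     <field name='waveRot' type='SFRotation' accessType='inputOnly'/>
     <field name='blended' type='SFRotation' accessType='outputOnly'/>
     <![CDATA[ecmascript:
       var walk = new SFRotation(0, 0, 1, 0);
       var wave = new SFRotation(0, 0, 1, 0);
       // each incoming event updates its stream, then the mix is re-sent
       function walkRot(value) { walk = value; mix(); }
       function waveRot(value) { wave = value; mix(); }
       function mix() { blended = walk.slerp(wave, 0.5); }
     ]]>
   </Script>
   <ROUTE fromNode='WalkInterp' fromField='value_changed'
          toNode='Blend' toField='walkRot'/>
   <ROUTE fromNode='WaveInterp' fromField='value_changed'
          toNode='Blend' toField='waveRot'/>
   <ROUTE fromNode='Blend' fromField='blended'
          toNode='hanim_l_shoulder' toField='set_rotation'/>

That gives a smooth result for one joint; what we do not have is a
standard way to say how the weighting should behave across a whole
skeleton.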

Third, we need a 'standard' continuous mesh skin that connects up to
the 'standard' joints and is painted using a 'standard' texture. The
standard does a fair job of providing the basics for a 'standard'
skeleton and typical standard feature sites, but we are missing an
example of the 'standard' skeleton hooked up to a 'standard' skin.

Finally, we need to get the Displacer nodes working, or specified
well enough to be implemented. This is of great importance when we
want to get serious about showing details using body language.
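
The intended usage is simple enough (sketch only; the name, indices,
and displacement values here are invented): a Displacer lives on a
Segment, offsets selected skin vertices along its displacement
vectors, and its weight can be driven like any other field:

   <HAnimSegment name='skull'>
     <HAnimDisplacer containerField='displacers' DEF='BrowDisp'
                     name='l_eyebrow_raiser'
                     coordIndex='7 8 9'
                     displacements='0 0.005 0 0 0.004 0 0 0.003 0'
                     weight='0'/>
   </HAnimSegment>
   <!-- drive the morph by animating the weight, e.g. from a
        ScalarInterpolator DEF'd RaiseBrow -->
   <ROUTE fromNode='RaiseBrow' fromField='value_changed'
          toNode='BrowDisp' toField='set_weight'/>

How the displacer offsets combine with joint-driven skin deformation
is the part that needs to be specified well enough to implement
consistently.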

And let's not forget that a very important audience may be addressed
by the various MPEG efforts relating to applications of an h-anim
character.

And ooh yes, got quats? We need 'em.

>
> Note again also that I was NOT criticizing h-anim. I reasonably well
> understand what it does and it seems to do it well. However, there 
> are
> also other options. For example, COLLADA (just to cite another, 
> broadly
> used industry standard) has chosen a very different approach for
> describing animations and kinematics.

I was following that but may need an update.
http://www.cs175.org/downloads/FinalProjectPages/Malinowski_Adam/Malinowski_A/implementation.html
Please tell me what is different and does not fit into the h-anim
model when the h-anim character is considered as a prototype instance
and the COLLADA data is imported according to the keywords used in
the standard X3D h-anim model. This has been going on for a while and
a lot of it fits together as expected.

> I am not saying its better, but its different.

If different, then X3D may be a bit behind COLLADA, or may not
include some features of the high-$ seats, but the core is common to
both. I will look for the differences soon.

> Maybe h-anim can just be improved in what we have by looking at
> others or even exploring new territory.

Good, I think that is a continuation of the long development of
h-anim.

> Also, reuse is very good but not the only goal.

I would maintain that the main goal, and really the only reason to
attempt to standardize anything, is reuse of animation data. H-Anim
just happened to include a typical humanoid as the definition of the
skeleton and skin space.

> Funny enough, most of the mocap animation data available on the Web 
> seems to use
> neither h-anim nor COLLADA :-).

Right, all they care about is the coordinates of the sensor button
keyframes for some fixed-fps video compositing. I think, if they want
to actually control the 'skin', then they somehow compute a
joint/segment structure, connect the skin, then animate the joints to
get behaviors that have not been recorded. Exactly how to synthesize
the joint/segment structure might be a candidate for standardization
if anyone cares.

>
> The discussion was, however, not about h-anim itself but about 
> sending animations (including but not necessarily limited to human 
> characters) across the net. I dared to state that transmitting 
> h-anim parameters (like joint angles as was suggested) is not the 
> only option. What is wrong about this statement?

Nothing. There are a couple of choices, maybe depending upon how much
smarts are in the client. First (and please forgive gaps): we are
either moving the skin vertices directly, or we are moving the
joints, which results in deformation of the skin, and/or we are
attaching geometries to Segments or Sites that are moved when the
Segments are moved by rotating the Joint(s). X3D doesn't deal with a
character that does not include Joints and Segments in the specified
hierarchies.
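
The segment-attachment case, for instance, is just parenting (a
sketch; the Box is a stand-in for real hand geometry and the center
value is invented):

   <HAnimJoint DEF='hanim_l_wrist' name='l_wrist' center='0.2 0.9 -0.03'>
     <HAnimSegment name='l_hand'>
       <Shape>
         <Box size='0.08 0.18 0.04'/>
       </Shape>
     </HAnimSegment>
   </HAnimJoint>

Rotating hanim_l_wrist carries the attached geometry rigidly; the
deformable-skin path instead binds vertices through skinCoordIndex
and skinCoordWeight as sketched earlier.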

Next, either the client is smart enough to retain and compute
behaviors, or you must tell it what to do frame by frame, or both. If
you wish to send a single command to get into the car, then where
should the computation, or the data store that tells the model what
to do, reside? I think the current h-anim model allows inclusion of
scripts and timers/interpolators to store behaviors 'internally' and
allows 'external' control via the SAI.
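
The 'internal' storage is just the plain X3D event graph (a sketch;
Clock and ElbowBend are illustrative names):

   <TimeSensor DEF='Clock' cycleInterval='2' loop='true'/>
   <OrientationInterpolator DEF='ElbowBend' key='0 0.5 1'
       keyValue='1 0 0 0 1 0 0 1.2 1 0 0 0'/>
   <ROUTE fromNode='Clock' fromField='fraction_changed'
          toNode='ElbowBend' toField='set_fraction'/>
   <ROUTE fromNode='ElbowBend' fromField='value_changed'
          toNode='hanim_l_elbow' toField='set_rotation'/>

The 'external' path drives the very same set_rotation field through
the SAI instead of a ROUTE.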

Of course what we really want is a deformable skin with the option of 
attaching geometries to segments and sites as well as control of each 
vertex of the skin.

> I would note that h-anim Joint parameters might just be too detailed
> or not detailed enough depending on the application or the current
> task.
> See my first email for a list of situations where using h-anim
> parameters to describe animations is just not the best option.

OK, I will look back there but please bring forward any important 
points.

>
> I also mentioned that with XFlow we are trying a different route 
> than
> the more monolithic h-anim standard. Our building blocks approach 
> will
> allow us to implement h-anim but also to use the same building 
> blocks to
> describe other types of animations.

Sounds fine and I will look more. Is X3D binary OK? Sorry I haven't
followed up on looking at XFlow. If XFlow works, then surely it would
work for a 'standard' h-anim character and in any X3D scene.

> Since we are writing this up already, allow me to refer you to the 
> upcoming paper for the details.
> It's not optimal, I know, but it saves me a lot of time.

I will be very honored to have a look when links are available.

>
>
> Thanks,
>
> Philipp
>

Thanks again, Philipp for your work and thoughts on this very 
important part of X3D.
Best Regards,
Joe





>
> On 05.03.2011 18:53, Joe D Williams wrote:
>>> Different tasks sometimes need different abstractions to describe 
>>> and
>>> encode them well.
>>
>> Yes, and while there are many ways, ISO has picked one. It defines 
>> a
>> best practice for design and implementation of a humanoid character
>> where the end goal is transport of animation routines between 
>> characters.
>>
>>> I just wish that this community would be a bit more open and 
>>> receptive
>>> to new ideas or issues raised
>>
>> Misinterpreted. Sorry if there was a new idea or issue raised that
>> I did not respond to, or if it sounds defensive or whatever. The
>> topic is
>> features of a character that can represent a humanoid in the scene, 
>> and
>> its animation. We must have a standard at some level if you wish to
>> reuse animations. At this point we can walk into the h-anim spec
>> and see that what we have is a realistic representation of a
>> humanoid character. It gets this way because we define a coordinate
>> space for the humanoid skeleton and connect it to vertices of the
>> skin in skin space. The player does the skin deformation when the
>> joints are transformed or rotated.
>> If you don't wish to do that, then attach shapes to segments.
>>
>> As you can see from the operation of this character in BSContact
>> and Instant, the math to animate the skin works very fast because
>> the code is close to the metal, maybe even using parallel hardware
>> to do some of the math.
>> So, once you figure out how many joints you wish to use, depending
>> upon the level of articulation you need, and connect up the skin,
>> you have a working model.
>> Now you create animations by applying pitch, roll, and yaw to one
>> or more joints. Since a joint is just a transform with some special
>> features, the hierarchical structure of child joints and segments
>> produces movement like you would expect from analysis of a real
>> joint and related structures in a real human.
>>
>> http://www.web3d.org/x3d/workgroups/h-anim/
>>
>> "The Humanoid Animation (H-Anim) standard supports representative 
>> human
>> models incorporating open best practices of haptic and kinematic
>> interfaces and a certain default skeleton pose in order to enable 
>> shared
>> animations."
>>
>> I believe all that introductory stuff describes h-anim as pointing
>> to the best open solution out there, and we considered hooking it
>> up with physics end effectors from the beginning. Even though there
>> may be a different standards-track approach in applications where a
>> generalized blob character is sufficient, I still think that before
>> we trek in a direction we have to appreciate the completeness of
>> h-anim, even though it is not yet complete.
>>
>>> and not immediately go into defense mode.
>>
>> If defense mode is asking to see some use case(s) that are not 
>> covered
>> by h-anim then ok.
>> Sorry if I am seeming overeager, but I have not been dismissive, I
>> think, just honking from the peanut gallery.
>> But hey, show what you have. I may be able to imagine the amorphous
>> bag/cloud with morphing shapes controlled by tensor fields, but we 
>> need
>> something that works and is transportable here. There should always 
>> be
>> room for the people that will build their own thing and don't care 
>> about
>> some silly library of stock animations. Also, we need a solid 
>> baseline
>> for authors and users with long-range plans for their character. I
>> mean, suppose you want it to become a brain surgeon?
>>
>> Thanks again and Best Regards,
>> Joe
>>
>>
>> ----- Original Message ----- From: "Philipp Slusallek"
>> <slusallek at cs.uni-saarland.de>
>> To: "Joe D Williams" <joedwil at earthlink.net>
>> Cc: <info at 3dnetproductions.com>; <luca.chittaro at uniud.it>;
>> <x3d-public at web3d.org>
>> Sent: Friday, March 04, 2011 10:36 PM
>> Subject: Re: [X3D-Public] remote motion invocation (RMI)
>>
>>
>>> Hi Joe,
>>>
>>> I do not feel like going into another instance of our lengthy
>>> "over-Christmas" discussions on this list. So, here are just a few
>>> general comments.
>>>
>>> This discussion started from the rather general issue of 
>>> describing
>>> animations to be sent via the net. Yes, h-anim is a very useful 
>>> approach
>>> and I was not criticizing h-anim for this. All I did was point out 
>>> that
>>> there is a whole range of issues that might benefit from different
>>> approaches and that a "one size fits all" to me (at least) seems 
>>> not the
>>> right approach for sending animations across the network.
>>>
>>> Just as a starter, please look at the really interesting concept 
>>> of
>>> point-based animation/simulation models for soft bodies and tell 
>>> me how
>>> to describe them with h-anim. Not everything in a scene is 
>>> "humanoid" or
>>> can be mapped to it nicely and efficiently. Different tasks 
>>> sometimes
>>> need different abstractions to describe and encode them well.
>>>
>>> I just wish that this community would be a bit more open and 
>>> receptive
>>> to new ideas or issues raised and not immediately go into defense 
>>> mode.
>>>
>>>
>>> Thanks,
>>>
>>> Philipp
>>>
>>>
>>> On 05.03.2011 02:51, Joe D Williams wrote:
>>>> excuse my typing today but there were some comments embedded in
>>>> Philipp's message.
>>>> Joe
>>>>
>>>> ----- Original Message ----- From: "Joe D Williams"
>>>> <joedwil at earthlink.net>
>>>> To: <info at 3dnetproductions.com>; "'Philipp Slusallek'"
>>>> <slusallek at cs.uni-saarland.de>
>>>> Cc: <luca.chittaro at uniud.it>; <x3d-public at web3d.org>
>>>> Sent: Friday, March 04, 2011 5:41 PM
>>>> Subject: Re: [X3D-Public] remote motion invocation (RMI)
>>>>
>>>>
>>>>> The h-anim model offers parametric modelling of the highest 
>>>>> level.
>>>>> What else is needed? I would like to hear more about it.
>>>>> For example, what in
>>>>>
>>>>>>
>>>>>> Seems to be parametric modeling of the highest level.
>>>>>> We need an app for that; a humanoid modeling app that
>>>>>> would allow export to VRML/X3D. We could then plug H-Anim
>>>>>> into it. Any suggestions? Lauren
>>>>>>
>>>>>>
>>>>>>> -----Original Message-----
>>>>>>> From: Philipp Slusallek [mailto:slusallek at cs.uni-
>>>>>>> saarland.de]
>>>>>>> Sent: Friday, March 04, 2011 5:06 PM
>>>>>>> To: Joe D Williams
>>>>>>> Cc: info at 3dnetproductions.com; 'Sven-Erik Tiberg';
>>>>>>> npolys at vt.edu; x3d-public at web3d.org;
>>>>>>> luca.chittaro at uniud.it
>>>>>>> Subject: Re: [X3D-Public] remote motion invocation (RMI)
>>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> I have roughly followed this exchange, so let me
>>>>>>> (carefully) add a few
>>>>>>> comments from my point of view. My position in short:
>>>>>>> There simply is
>>>>>>> not a single solution to this problem (such as "use h-
>>>>>>> anim").
>>>>>>>
>>>>>>> Sometime you want/need all the compression you can get
>>>>>>> such that sending
>>>>>>> a command like "walk from point A to B" (maybe even
>>>>>>> "while avoiding
>>>>>>> obstacles") is the right thing with something else
>>>>>>> filling in the
>>>>>>> blanks,
>>>>>
>>>>> Fine, something has to fill in the blanks. That means your command
>>>>> generates joint rotations and displacements to animate the skin,
>>>>> or just animates some mesh. Either way, the combination of skin
>>>>> deformation is handled by use of joint animation and by
>>>>> displacers. I am anxious to find out what else is needed. If the
>>>>> use case is listed in detail maybe I can respond.
>>>>>
>>>>>
>>>>>> sometime animating joints of a hierarchical
>>>>>>> skeleton with or
>>>>>>> without skinning and morphing is the right thing,
>>>>>
>>>>> OK
>>>>>
>>>>>>> sometime all you have
>>>>>>> is the end position of an effector/hand where you need
>>>>>>> Inverse
>>>>>>> Kinematics to work out the rest,
>>>>>
>>>>> That is covered in h-anim by designating Site features that can 
>>>>> serve
>>>>> as sensors.
>>>>>
>>>>>> sometimes you have soft
>>>>>>> body animations
>>>>>>> that do not have simple skeletons in the first place and
>>>>>>> you need to
>>>>>>> resolve to animating all vertices or some coarse mesh
>>>>>>> from which you
>>>>>>> interpolate the rest,
>>>>>
>>>>> That is a big drawback when realism is required. As I have seen,
>>>>> these are generally data produced by one form of animation and
>>>>> then exported as timed keyframes for the mesh.
>>>>>
>>>>> sometimes you also need to preserve
>>>>>>> the volume of
>>>>>>> the object in some way -- and sometime you need a
>>>>>>> combination of
>>>>>>> all/some of the above.
>>>>>
>>>>>
>>>>> Seeing an application like that would help me see the need and 
>>>>> how it
>>>>> could be solved with h-anim.
>>>>>
>>>>>> And I am sure there are even more
>>>>>>> options that
>>>>>>> people have come up with and use :-).
>>>>>
>>>>> Yes, the history of this is long and there are many very old
>>>>> styles of doing this. Now, I think the processing at the client
>>>>> is capable of mesh deformation using joint connections, or
>>>>> geometry attached to segments, or by displacers.
>>>>>
>>>>>>>
>>>>>>> So my conclusion is that instead of a single spec like h-
>>>>>>> anim, what we
>>>>>>> really need are building blocks that implement the basic
>>>>>>> functionality
>>>>>
>>>>>
>>>>> Great, please show how to break up this functionality into
>>>>> blocks. In the h-anim spec it is a given that a skeleton exists.
>>>>> Without that there is no need for a standard humanoid.
>>>>> You can either attach the geometry to segments, or go seamless
>>>>> with a continuous mesh.
>>>>>
>>>>>>> and can be flexibly combined with each other to provide
>>>>>>> the
>>>>>>> functionality that a scene designer needs for the
>>>>>>> specific task at hand.
>>>>>
>>>>>
>>>>> Right, let's put together a set of use cases that show this.
>>>>> Otherwise, I believe the pertinent use cases are covered in X3D
>>>>> h-anim.
>>>>>
>>>>>>> We can then still put them together with something like a
>>>>>>> prototype/template/class mechanism for reuse in a scene
>>>>>>> (any build
>>>>>>> something like h-anim).
>>>>>
>>>>> I think if you study and understand the h-anim spec and work with
>>>>> some implementations, you will drop the idea that these cases are
>>>>> not covered.
>>>>>
>>>>>> Note that you also need a
>>>>>>> flexible protocol that
>>>>>>> allows for sending these different commands and data.
>>>>>
>>>>> I do not think you will find a shortcoming in the protocol.
>>>>>
>>>>>>>
>>>>>>> BTW: The above is what we are trying to provide via the
>>>>>>> XFlow extension
>>>>>>> to XML3D. We presented the basic ideas already at the
>>>>>>> last Web3D
>>>>>>> conference -- a paper with the details is in the works
>>>>>>> and should be
>>>>>>> available on xml3d.org soon (as soon as it is done).
>>>>>
>>>>> Great, once we understand the data we want to send to the avatar,
>>>>> then we can see if XFlow can work.
>>>>> At present, once you have the avatar, there are many options for
>>>>> developing the animation data internally, closely coupled with
>>>>> the specific avatar, or sending it in from the outside.
>>>>>
>>>>> So, please give some examples and let's see how it works.
>>>>> In my mind, a character without an LOA3 skeleton loses points 
>>>>> for
>>>>> animatability.
>>>>>
>>>>>>>
>>>>>>>
>>>>>>> Philipp
>>>>>
>>>>>>>>> ATM all of our avatars are from Avatar Studio 1 & 2. I
>>>>>
>>>>> let's see some source code for an animated figure and let's 
>>>>> evaluate.
>>>>>
>>>>> All the best,
>>>>> Joe
>>>>>
>>>>>
>>>>>>>
>>>>>>> On 04.03.2011 22:10, Joe D Williams wrote:
>>>>>>>>
>>>>>>>>
>>>>>>>>> ATM all of our avatars are from Avatar Studio 1 & 2. I
>>>>>>>> completely agree that they are simple and there is
>>>>>>> nothing
>>>>>>>> we want more than implementing H-Anim. But we need to
>>>>>>>> figure out the best way to do it first.
>>>>>>>>
>>>>>>>> I think there is basically two different ways:
>>>>>>>>
>>>>>>>> 1. create a 'skin' and perform the animation by
>>>>>>> directly rewriting each
>>>>>>>> skin vertex each 'frame' of the animation.
>>>>>>>> This is characterized by animation scripts that take a
>>>>>>> set of vertices,
>>>>>>>> like the skin for the arm, and move each vertex to
>>>>>>> make the gesture.
>>>>>>>> In this case, the animation can easily be transferred to
>>>>>>> any avatar skin
>>>>>>>> that has the same number of vertices in the same
>>>>>>> coordinate space.
>>>>>>>> In this case, the avatar is relatively simple because
>>>>>>> all it consists of
>>>>>>>> is a 'skin' or outer surface that has no real
>>>>>>> hierarchy.
>>>>>>>>
>>>>>>>> 2. create a skeleton with a known hierarchy of joints
>>>>>>> and bones so that
>>>>>>>> when you move the shoulder joint, then the child elbow,
>>>>>>> wrist, fingers,
>>>>>>>> etc move as expected.
>>>>>>>> Now create the skin and connect each vertex to one or
>>>>>>> more joints with a
>>>>>>>> weighting so that as the joint is rotated the skin is
>>>>>>> deformed as expected.
>>>>>>>> Now you have an avatar you can control by rotating and
>>>>>>> displacing
>>>>>>>> joints. This animation can be transported between
>>>>>>> avatars when the joint
>>>>>>>> structure is similar and given that the initial binding
>>>>>>> positions of the
>>>>>>>> skeleton and skin spaces are similar.
>>>>>>>>
>>>>>>>>> That is why I thought John was onto something in his
>>>>>>> May 2010
>>>>>>>> thread/questions. I think Joe you're the best person I
>>>>>>>> know that can answer this for us. Let me quote him back
>>>>>>>> here:
>>>>>>>>
>>>>>>>>> "I agree, if I want my avatar to dance as an end user,
>>>>>>> I
>>>>>>>> shouldn't have to deal with transforms, points etc. I
>>>>>>>> should be able to download the dancing behavior I want
>>>>>>> and
>>>>>>>> apply it to my avatar. Let's make a market out of
>>>>>>> selling
>>>>>>>> motion. Cars are a wonderful example of this. Here's
>>>>>>>> another:
>>>>>>> http://playstation.joystiq.com/2009/06/19/quantic-dream-selling-motion-capture-libraries/
>>>>>>>>
>>>>>>>> Actually, the H-Anim spec is built with that as the
>>>>>>> main purpose - to
>>>>>>>> transport animations between avatars.
>>>>>>>> Note that the spec is really set up so that the avatar
>>>>>>> is a prototype
>>>>>>>> that accepts data from 'internal' or external'
>>>>>>> generators. Each action
>>>>>>>> is controlled by a set of data generated by
>>>>>>> interpolators or scripts.
>>>>>>>> So, if you build the h-anim avatar and animate it,
>>>>>>> anyone who is using a
>>>>>>>> similar avatar (and at the high end they all are,
>>>>>>> except for initial
>>>>>>>> binding positions) will be able to use it in X3D. Like
>>>>>>> any X3D scene,
>>>>>>>> the SAI allows importing sets of these data
>>>>>>> dynamically, so libraries of
>>>>>>>> gestures could be stored anywhere.
>>>>>>>>
>>>>>>>>> Let's convert free
>>>>>>>> mocap library data to stuff that X3D and H-Anim can
>>>>>>>> use...see: http://mocap.cs.cmu.edu/ I'm such a newbie,
>>>>>>> I
>>>>>>>> don't even know what kind of mocap data H-Anim, X3D or
>>>>>>>> VRML takes! Can someone fill me in?
>>>>>>>>
>>>>>>>> Well, with mocap data, you may be only capturing skin
>>>>>>> vertex motions. If
>>>>>>>> you want to take control of the skin to have it perform
>>>>>>> stuff that it has
>>>>>>>> no mocap for, then the X3D way would be to create the
>>>>>>> skeleton and some
>>>>>>>> mapping between the skin verts and your skeleton
>>>>>>> joints.
>>>>>>>>
>>>>>>>> The only other thing is that X3D is really realtime,
>>>>>>> conceptually, so
>>>>>>>> mocap stuff data set up for fixed frame per second
>>>>>>> rendition can usually
>>>>>>>> be greatly simplified by deleting unnecessary
>>>>>>> keyframes.
>>>>>>>>
>>>>>>>>> I see 123 .bvh files
>>>>>>>> on the web. Can they be converted to h-anim? It looks
>>>>>>>> like you can go from .asf/.amc to .bvh to something
>>>>>>>> standard (from NIST) Here's all the asf and amc files
>>>>>>> on
>>>>>>>> the CMU site:
>>>>>>> http://mocap.cs.cmu.edu:8080/allasfamc.zip"
>>>>>>>>
>>>>>>>>
>>>>>>>> I have no idea. If it is intended for video, it may be
>>>>>>> great lists of
>>>>>>>> skin keyframes shown at each video frame.
>>>>>>>> Get samples of the thing and put up some data that is
>>>>>>> contained there
>>>>>>>> and maybe we can interpret.
>>>>>>>>
>>>>>>>> Thank You and Best Regards,
>>>>>>>> Joe
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>> -----Original Message-----
>>>>>>>>> From: Joe D Williams [mailto:joedwil at earthlink.net]
>>>>>>>>> Sent: Thursday, March 03, 2011 11:04 PM
>>>>>>>>> To: info at 3dnetproductions.com; 'Sven-Erik Tiberg';
>>>>>>>>> npolys at vt.edu; x3d-public at web3d.org
>>>>>>>>> Cc: luca.chittaro at uniud.it
>>>>>>>>> Subject: Re: [X3D-Public] remote motion invocation
>>>>>>> (RMI)
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> I've got to start back at the beginning and ask how
>>>>>>> are
>>>>>>>>> you
>>>>>>>>> constructing your avatar?
>>>>>>>>> Is it just a skin or bones and skin? Do you animate by
>>>>>>>>> changing the
>>>>>>>>> joints or just moving skin around?
>>>>>>>>>
>>>>>>>>>> but having said the above, I hope
>>>>>>>>>> you'll see why I would be reluctant to implement the
>>>>>>>>>> previously suggested approach.
>>>>>>>>>
>>>>>>>>> I haven't done much on this, just hearing some ideas,
>>>>>>> but
>>>>>>>>> many avatars
>>>>>>>>> I have seen are just simple skins where the movement
>>>>>>> is
>>>>>>>>> made just by
>>>>>>>>> directly moving sets of vertices around. Of course
>>>>>>> this
>>>>>>>>> leads to
>>>>>>>>> relatively simple avatars and low client processing
>>>>>>>>> requirements, but
>>>>>>>>> seems not sufficient to me.
>>>>>>>>>
>>>>>>>>> Good Luck,
>>>>>>>>> Joe
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>> How much bandwidth is needed to send rotation and
>>>>>>>>>>> maybe
>>>>>>>>>>> position data to 20 joints? 20 SFVec3f points is Not
>>>>>>>>> that
>>>>>>>>>>> much.
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> It may not seem much data in itself for one avatar
>>>>>>> but,
>>>>>>>>>> even with efficient coding, you'd still have 20 times
>>>>>>>>> as
>>>>>>>>>> much data at the very least... No? Probably a lot
>>>>>>> more.
>>>>>>>>>> Concentrating only on gestures here, I'd rather have
>>>>>>>>> for
>>>>>>>>>> example 'action=run', or 'a=r' (3 bytes) in a stream,
>>>>>>>>> than
>>>>>>>>>> say 'translation=20 1 203&rotation=1 0 0 4.712', or
>>>>>>>>> 't=20
>>>>>>>>>> 1 203&r=1 0 0 4.712' (24 bytes X 20 = 480 bytes).
>>>>>>>>> That's a
>>>>>>>>>> factor of 16,000%. Considering that most decently
>>>>>>>>> featured
>>>>>>>>>> stand-alone MU servers can optimistically handle up
>>>>>>> to
>>>>>>>>>> just about 100 simultaneous avatars and/or shared
>>>>>>>>> objects,
>>>>>>>>>> I think that would translate in a huge waste of
>>>>>>>>> resources.
>>>>>>>>>> In other words, with that setup, you might only be
>>>>>>> able
>>>>>>>>> to
>>>>>>>>>> run say 3 to 5 avatars instead of 100. Those are
>>>>>>>>>> admittedly very gross calculations, but MU servers
>>>>>>> have
>>>>>>>>> to
>>>>>>>>>> be able to send data to clients in rapid-fire
>>>>>>> fashion,
>>>>>>>>> and
>>>>>>>>>> being able to keep as much of that data in low level
>>>>>>>>>> memory is crucial. It is difficult to know what
>>>>>>> exactly
>>>>>>>>> is
>>>>>>>>>> happening down there, but don't forget, other apps
>>>>>>> will
>>>>>>>>> be
>>>>>>>>>> running too, consuming much of the memory, perhaps
>>>>>>>>> forcing
>>>>>>>>>> more data from L1 cache to L2 and back (or L2 to L3,
>>>>>>>>> etc.
>>>>>>>>>> depending your setup, circumstances, running apps,
>>>>>>>>> etc.),
>>>>>>>>>> all of which contributing to slow down the entire
>>>>>>>>> process.
>>>>>>>>>> And we haven't yet really touched on the data
>>>>>>>>> processing
>>>>>>>>>> itself to support it. I believe the server should be
>>>>>>> as
>>>>>>>>>> bare and as close to the metal as possible. That's
>>>>>>> the
>>>>>>>>>> point I was making. The simpler we can keep it on the
>>>>>>>>>> server side, the more efficient it's going to be,
>>>>>>>>>> obviously. I don't think there is much disagreement,
>>>>>>>>> but
>>>>>>>>>> no matter how much parallel hardware you have, the
>>>>>>>>> memory
>>>>>>>>>> bandwidth issue never really goes away due to all the
>>>>>>>>>> inefficiencies that implies. The way we are planning
>>>>>>> to
>>>>>>>>>> solve this with X3Daemon at Office Towers, is to
>>>>>>>>> dedicate
>>>>>>>>>> servers to specific zones that are part of the same
>>>>>>>>> world.
>>>>>>>>>> So for example, if you go to this or that room, or
>>>>>>>>>> building, or area, then this or that server will take
>>>>>>>>>> over. Since it is unlikely that 100 avatars will be
>>>>>>> in
>>>>>>>>> the
>>>>>>>>>> same room/building/area all at once, that should work
>>>>>>>>> for
>>>>>>>>>> us. In other words, we conceive that, let say, 10
>>>>>>>>> servers
>>>>>>>>>> or more could in theory process and supply the MU
>>>>>>> data
>>>>>>>>> for
>>>>>>>>>> a single world as required, for say 1000 simultaneous
>>>>>>>>>> users or whatever the case may be. Like on the web,
>>>>>>> you
>>>>>>>>>> can seamlessly go from one server to the other, but,
>>>>>>> in
>>>>>>>>>> our case, while staying in the same world. We see
>>>>>>> here
>>>>>>>>>> real potential for scalability. Instead of piling
>>>>>>> data
>>>>>>>>> in
>>>>>>>>>> parallel hardware (which we did consider but would
>>>>>>>>> rather
>>>>>>>>>> go with symmetric multiprocessing) however connected,
>>>>>>>>> each
>>>>>>>>>> machine would run independently of each other. Trying
>>>>>>>>> not
>>>>>>>>>> to get into details, but having said the above, I
>>>>>>> hope
>>>>>>>>>> you'll see why I would be reluctant to implement the
>>>>>>>>>> previously suggested approach.
>>>>>>>>>>
>>>>>>>>>> Cheers,
>>>>>>>>>> Lauren
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>> -----Original Message-----
>>>>>>>>>>> From: Joe D Williams [mailto:joedwil at earthlink.net]
>>>>>>>>>>> Sent: Thursday, March 03, 2011 1:10 PM
>>>>>>>>>>> To: info at 3dnetproductions.com; 'Sven-Erik Tiberg';
>>>>>>>>>>> npolys at vt.edu; x3d-public at web3d.org
>>>>>>>>>>> Cc: luca.chittaro at uniud.it
>>>>>>>>>>> Subject: Re: [X3D-Public] remote motion invocation
>>>>>>>>> (RMI)
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> Hi Lauren,
>>>>>>>>>>> I was thinking about it in terms of essentially
>>>>>>>>>>> substituting
>>>>>>>>>>> 'external' 'streams' of data instead of using
>>>>>>>>> 'internal'
>>>>>>>>>>> data
>>>>>>>>>>> generated by scripts, timers, and interpolators. The
>>>>>>>>>>> point is, when
>>>>>>>>>>> the joint receives a joint translation or rotation,
>>>>>>>>> then
>>>>>>>>>>> it is up to
>>>>>>>>>>> the client player to move the child segments and
>>>>>>> joints
>>>>>>>>>>> and deform the
>>>>>>>>>> skin to achieve the movement. So, the data needed
>>>>>>> to
>>>>>>>>>>> animate the
>>>>>>>>>>> humanoid is really not that much; the client does
>>>>>>> all
>>>>>>>>> the
>>>>>>>>>>> actual
>>>>>>>>>>> moving. How much bandwidth is needed to send
>>>>>>> rotation
>>>>>>>>> and
>>>>>>>>>>> maybe
>>>>>>>>>>> position data to 20 joints? 20 SFVec3f points is Not
>>>>>>>>> that
>>>>>>>>>>> much.
>>>>>>>>>>> However, still probably more reliable and maybe
>>>>>>>>> efficient
>>>>>>>>>>> and even
>>>>>>>>>>> faster to just use 'internal' optimized timers and
>>>>>>>>>>> interpolators. The
>>>>>>>>>>> secret of course seems to be getting as much of that
>>>>>>>>>>> computation as
>>>>>>>>>>> possible done in the parallel hardware.
>>>>>>>>>>>
>>>>>>>>>>> Note that I am not even considering the idea of just
>>>>>>>>>>> sending new
>>>>>>>>>>> fields of vertices to replace the ones currently in
>>>>>>>>>>> memory. I think
>>>>>>>>>>> this is essentially done in fixed frame per second
>>>>>>>>>>> applications where
>>>>>>>>>>> the backroom app does all the computation to achieve
>>>>>>>>> the
>>>>>>>>>>> mesh for the
>>>>>>>>>>> next timed frame, then just 'streams' the mesh to
>>>>>>> the
>>>>>>>>>>> renderer where
>>>>>>>>>>> the picture is made.
>>>>>>>>>>> Thanks and Best Regards,
>>>>>>>>>>> Joe
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>> On the other hand, I could see a setup where live
>>>>>>>>> joint
>>>>>>>>>>>> rotation and
>>>>>>>>>>>> displacement data was streamed into the scene and
>>>>>>>>> routed
>>>>>>>>>>>> to joints to
>>>>>>>>>>>> affect the humanoid in real time, rather than using
>>>>>>>>>>>> client-side
>>>>>>>>>>>> interpolators and combiners.
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> Thanks for your reply. Yes, that is better;
>>>>>>> 'streaming'
>>>>>>>>>>> data does seem closer to something conceivable, even
>>>>>>>>>>> interesting. However, since talking about humanoids
>>>>>>>>>>> likely
>>>>>>>>>>> implies multi-users in most applications, I'd start
>>>>>>>>>>> getting worried about processors' cache/memory
>>>>>>>>> bandwidth;
>>>>>>>>>>> I don't think we are there yet to support a great
>>>>>>> many
>>>>>>>>>>> users this way. Even with high clock speed, multi-
>>>>>>> core
>>>>>>>>>>> processors and of course plenty of network
>>>>>>> bandwidth,
>>>>>>>>> low
>>>>>>>>>>> level memory management will remain an issue as the
>>>>>>>>>>> number
>>>>>>>>>>> of users increases, IMO. It already is with much
>>>>>>>>> simpler
>>>>>>>>>>> systems. That is why I like to have as much done on
>>>>>>> the
>>>>>>>>>>> client side as possible. Other than helping lowly
>>>>>>>>>>> devices,
>>>>>>>>>>> I'd be curious about the main advantages of the
>>>>>>> above
>>>>>>>>>>> approach? Open to ideas but, as far as mobile
>>>>>>> devices,
>>>>>>>>>>> hopefully Moore's law will continue to hold true for
>>>>>>>>> some
>>>>>>>>>>> time...
>>>>>>>>>>>
>>>>>>>>>>> Cheers,
>>>>>>>>>>> Lauren
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>   _____  ___      __   __ ___  ______   ___   ___
>>>>>>>>> ___
>>>>>>>>>>>  /__  / / _ \    /  | / // _ \/_  __/  / _ \ / _ \ /
>>>>>>> _
>>>>>>>>> \
>>>>>>>>>>> _/_  / / // /   / /||/ //  __/ / /    / ___//   _//
>>>>>>> //
>>>>>>>>> /
>>>>>>>>>>> /____/ /____/   /_/ |__/ \___/ /_/    /_/   /_/\_\
>>>>>>>>> \___/
>>>>>>>>>>>
>>>>>>> ______________________________________________________
>>>>>>>>>>> * * Interactive Multimedia - Internet Management * *
>>>>>>>>>>>  * * Virtual Reality - Application Programming  * *
>>>>>>>>>>>   * 3D Net Productions  www.3dnetproductions.com *
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>> -----Original Message-----
>>>>>>>>>>>> From: Joe D Williams [mailto:joedwil at earthlink.net]
>>>>>>>>>>>> Sent: Tuesday, March 01, 2011 7:49 PM
>>>>>>>>>>>> To: info at 3dnetproductions.com; 'Sven-Erik Tiberg';
>>>>>>>>>>>> npolys at vt.edu; x3d-public at web3d.org
>>>>>>>>>>>> Cc: luca.chittaro at uniud.it
>>>>>>>>>>>> Subject: Re: [X3D-Public] remote motion invocation
>>>>>>>>> (RMI)
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> Hello Joe,
>>>>>>>>>>>>>
>>>>>>>>>>>>> While we are on the subject, I have asked you this
>>>>>>>>>>>> before
>>>>>>>>>>>>> but I think you missed it. There is a link at
>>>>>>>>>>>>>
>>>>>>>>>>>>> http://www.web3d.org/x3d/workgroups/h-anim/
>>>>>>>>>>>>>
>>>>>>>>>>>>> for ISO-IEC-19775 X3D Component 26 Humanoid
>>>>>>>>> Animation
>>>>>>>>>>>>> (H-Anim) which describes H-Anim interfaces in
>>>>>>> terms
>>>>>>>>> of
>>>>>>>>>>>> the
>>>>>>>>>>>>> X3D scenegraph...
>>>>>>>>>>>>>
>>>>>>>>>>>>> That link is broken. Where else can I obtain this?
>>>>>>>>>>>>
>>>>>>>>>>>> goto:
>>>>>>>>>>>>
>>>>>>>>>>>> http://www.web3d.org/x3d/specifications/ISO-IEC-19775-1.2-X3D-AbstractSpecification/Part01/components/hanim.html
>>>>>>>>>>>>
>>>>>>>>>>>> the latest spec changed base and we lost the link.
>>>>>>>>>>>> I will tell the webmaster
>>>>>>>>>>>>
>>>>>>>>>>>>> If I am
>>>>>>>>>>>>> asking the wrong person then, can someone else
>>>>>>> help?
>>>>>>>>>>>>
>>>>>>>>>>>> I think the most current spec should have the same
>>>>>>>>> link
>>>>>>>>>>>> forever.
>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> Apologies for hijacking this thread, maybe I am
>>>>>>> not
>>>>>>>>>>>>> getting this right, but, I have difficulties
>>>>>>> getting
>>>>>>>>>>>>> around my head how server side rasterization can
>>>>>>>>> work
>>>>>>>>>>>> with
>>>>>>>>>>>>> X3D other than with the MovieTexture node.
>>>>>>>>>>>>
>>>>>>>>>>>> Well, I think server-side rasterization could work
>>>>>>>>> when
>>>>>>>>>>>> you have a
>>>>>>>>>>>> simple mesh avatar and all you want to do is just
>>>>>>> show
>>>>>>>>>>>> apparent
>>>>>>>>>>>> vertices move so you send sort of a set of gifs on
>>>>>>> a
>>>>>>>>>>>> billboard down to
>>>>>>>>>>>> the client. Don't get it? me either:).
>>>>>>>>>>>> On the other hand, I could see a setup where live
>>>>>>>>> joint
>>>>>>>>>>>> rotation and
>>>>>>>>>>>> displacement data was streamed into the scene and
>>>>>>>>> routed
>>>>>>>>>>>> to joints to
>>>>>>>>>>>> affect the humanoid in real time, rather than using
>>>>>>>>>>>> client-side
>>>>>>>>>>>> interpolators and combiners. I think even mobile
>>>>>>> will
>>>>>>>>>>>> always have
>>>>>>>>>>>> power to do reasonable h-anim skin and bone
>>>>>>> characters
>>>>>>>>>>>> but low end
>>>>>>>>>>>> performance devices may need a more simple
>>>>>>> character
>>>>>>>>>>> with
>>>>>>>>>>>> fewer joints
>>>>>>>>>>>> and more open mesh and different textures.
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>> Lauren
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> Thanks and Best Regards,
>>>>>>>>>>>> Joe
>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>> -----Original Message-----
>>>>>>>>>>>>>> From: x3d-public-bounces at web3d.org [mailto:x3d-
>>>>>>>>> public-
>>>>>>>>>>>>>> bounces at web3d.org] On Behalf Of Joe D Williams
>>>>>>>>>>>>>> Sent: Tuesday, March 01, 2011 12:09 PM
>>>>>>>>>>>>>> To: Sven-Erik Tiberg; npolys at vt.edu; x3d-
>>>>>>>>>>>> public at web3d.org
>>>>>>>>>>>>>> Cc: luca.chittaro at uniud.it
>>>>>>>>>>>>>> Subject: Re: [X3D-Public] remote motion
>>>>>>> invocation
>>>>>>>>>>>> (RMI)
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> ----- Original Message -----
>>>>>>>>>>>>>> From: "Sven-Erik Tiberg" <Sven-
>>>>>>> Erik.Tiberg at ltu.se>
>>>>>>>>>>>>>> To: <npolys at vt.edu>; <x3d-public at web3d.org>
>>>>>>>>>>>>>> Cc: <luca.chittaro at uniud.it>
>>>>>>>>>>>>>> Sent: Tuesday, March 01, 2011 6:14 AM
>>>>>>>>>>>>>> Subject: Re: [X3D-Public] remote motion
>>>>>>> invocation
>>>>>>>>>>>> (RMI)
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Hi
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> A note.
>>>>>>>>>>>>>>> Would it be possible to create an object similar to
>>>>>>>>>>>>>>> dynamic h-anim,
>>>>>>>>>>>>>>> looking and behaving like a moose?
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> The basis for h-anim comes from general-purpose
>>>>>>>>>>>>>> hierarchical animation
>>>>>>>>>>>>>> needs. That is, the shoulder joint is connected
>>>>>>> to
>>>>>>>>> the
>>>>>>>>>>>>>> elbow joint,
>>>>>>>>>>>>>> for example, such that when the shoulder joint is
>>>>>>>>>>>> moved,
>>>>>>>>>>>>>> then the
>>>>>>>>>>>>>> elbow joint is translated accordingly as
>>>>>>> expected.
>>>>>>>>> To
>>>>>>>>>>>>>> follow the same
>>>>>>>>>>>>>> technique for a different character, just
>>>>>>> position
>>>>>>>>> the
>>>>>>>>>>>>>> joints and
>>>>>>>>>>>>>> segments you chose to use depending upon the
>>>>>>> level
>>>>>>>>> of
>>>>>>>>>>>>>> articulation you
>>>>>>>>>>>>>> need in skeleton space then build the skin in
>>>>>>> skin
>>>>>>>>>>>> space
>>>>>>>>>>>>>> (hopefully
>>>>>>>>>>>>>> the same as skeleton space), connect it up and
>>>>>>>>> animate
>>>>>>>>>>>>>> the thing.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> No I'm not kidding, in the future we hope to
>>>>>>>>> create
>>>>>>>>>>> a
>>>>>>>>>>>>>> X3D scene for
>>>>>>>>>>>>>>> driver simulator with "living" moose and
>>>>>>> reindeers
>>>>>>>>> (
>>>>>>>>>>>>>> they are not so
>>>>>>>>>>>>>>> exotic up here but more real and a hazard on the
>>>>>>>>>>>> roads
>>>>>>>>>>>>>> ).
>>>>>>>>>>>>>>> Could be that the same motion pattern can be
>>>>>>> used
>>>>>>>>>>> for
>>>>>>>>>>>>>> deers and elks
>>>>>>>>>>>>>>> and ..
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Possibly. Note that the selection of initial binding
>>>>>>> binding
>>>>>>>>>>>> time
>>>>>>>>>>>>>> joint
>>>>>>>>>>>>>> rotation is important.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Good Luck and Best Regards,
>>>>>>>>>>>>>> Joe
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> /Sven-Erik Tiberg
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> -----Original Message-----
>>>>>>>>>>>>>>> From: x3d-public-bounces at web3d.org
>>>>>>>>>>>>>>> [mailto:x3d-public-bounces at web3d.org] On Behalf
>>>>>>> Of
>>>>>>>>>>>>>> Sven-Erik Tiberg
>>>>>>>>>>>>>>> Sent: den 1 mars 2011 08:51
>>>>>>>>>>>>>>> To: npolys at vt.edu; x3d-public at web3d.org
>>>>>>>>>>>>>>> Subject: Re: [X3D-Public] remote motion
>>>>>>> invocation
>>>>>>>>>>>>>> (RMI)
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> If there is a need for extending the h-anim
>>>>>>>>>>>> kinematic
>>>>>>>>>>>>>> capacity I
>>>>>>>>>>>>>>> would have a look at Modelica
>>>>>>>>>>>> https://www.modelica.org/
>>>>>>>>>>>>>> And  (
>>>>>>>>>>>>>>> openmodelica ) https://www.modelica.org/ for
>>>>>>>>>>>>>> calculation of dynamic
>>>>>>>>>>>>>>> behavior.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> IMHO it would not be impossible to interface an
>>>>>>>>>>>>>> openmodelica model
>>>>>>>>>>>>>>> that runs in Real Time to an X3D scene.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On animiation of human I would like to suggest
>>>>>>>>> that
>>>>>>>>>>>> you
>>>>>>>>>>>>>> take a look
>>>>>>>>>>>>>>> at http://hcilab.uniud.it/index.html  and it's
>>>>>>>>>>> editor
>>>>>>>>>>>>>> H-Animator
>>>>>>>>>>>>>>> http://hcilab.uniud.it/demos-videos/item11.html
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> /Sven-Erik Tiberg
>>>>>>>>>>>>>>> Lulea Univ of Technology
>>>>>>>>>>>>>>> Sweden
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> -----Original Message-----
>>>>>>>>>>>>>>> From: x3d-public-bounces at web3d.org
>>>>>>>>>>>>>>> [mailto:x3d-public-bounces at web3d.org] On Behalf
>>>>>>> Of
>>>>>>>>>>>>>> Nicholas F. Polys
>>>>>>>>>>>>>>> Sent: den 28 februari 2011 21:34
>>>>>>>>>>>>>>> To: x3d-public at web3d.org
>>>>>>>>>>>>>>> Subject: Re: [X3D-Public] remote motion
>>>>>>> invocation
>>>>>>>>>>>>>> (RMI)
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> One of our members was working on this spec some
>>>>>>>>>>> time
>>>>>>>>>>>>>> ago (2001),
>>>>>>>>>>>>>>> perhaps a good integration target with H-Anim
>>>>>>> for
>>>>>>>>>>>> such
>>>>>>>>>>>>>> a platform or
>>>>>>>>>>>>>>> communication layer: HumanML
>>>>>>>>>>>>>>> http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=humanmarkup
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> it does mention kinesthetics in the overview but
>>>>>>> I
>>>>>>>>>>> am
>>>>>>>>>>>>>> really not
>>>>>>>>>>>>>>> sure where it has been adopted.
>>>>>>>>>>>>>>> ?
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> br,
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> _n_polys
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On 2/28/2011 1:54 PM, Joe D Williams wrote:
>>>>>>>>>>>>>>>>> And what if the script could receive states (
>>>>>>>>>>> ROUTED
>>>>>>>>>>>>>> events ) from
>>>>>>>>>>>>>>>>> the
>>>>>>>>>>>>>>>>> 3D browser and do an animation on the fly using
>>>>>>>>>>>>>> inverse kinematics,
>>>>>>>>>>>>>>>>> that would be cool.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> I think the basic needed structure for that is
>>>>>>> in
>>>>>>>>>>>> the
>>>>>>>>>>>>>> X3D H-Anim
>>>>>>>>>>>>>>>> spec.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Best Regards,
>>>>>>>>>>>>>>>> Joe
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> ----- Original Message ----- From: "Sven-Erik
>>>>>>>>>>>> Tiberg"
>>>>>>>>>>>>>>>> <Sven-Erik.Tiberg at ltu.se>
>>>>>>>>>>>>>>>> To: "John Carlson"
>>>>>>> <john.carlson3 at sbcglobal.net>;
>>>>>>>>>>>>>>>> <x3d-public at web3d.org>
>>>>>>>>>>>>>>>> Sent: Sunday, February 27, 2011 11:50 PM
>>>>>>>>>>>>>>>> Subject: Re: [X3D-Public] remote motion
>>>>>>>>> invocation
>>>>>>>>>>>>>> (RMI)
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> What if using an from the browser external
>>>>>>>>> motion
>>>>>>>>>>>>>> engine as you
>>>>>>>>>>>>>>>>> mention, but let the motion engine host in the
>>>>>>>>>>>> client
>>>>>>>>>>>>>> computer.
>>>>>>>>>>>>>>>>>>> Communicating new states to the 3D-browser (to an
>>>>>>>>>>>>>>>>>>> h-anim model) through a script.
>>>>>>>>>>>>>>>>>>> And what if the script could receive states (
>>>>>>>>>>> ROUTED
>>>>>>>>>>>>>> events ) from
>>>>>>>>>>>>>>>>> the
>>>>>>>>>>>>>>>>>>> 3D browser and do an animation on the fly using
>>>>>>>>>>>>>> inverse kinematics,
>>>>>>>>>>>>>>>>> that would be cool.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Thanks for bringing up this subject.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> /Sven-Erik Tiberg
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> -----Original Message-----
>>>>>>>>>>>>>>>>> From: x3d-public-bounces at web3d.org
>>>>>>>>>>>>>>>>> [mailto:x3d-public-bounces at web3d.org] On
>>>>>>> Behalf
>>>>>>>>> Of
>>>>>>>>>>>>>> John Carlson
>>>>>>>>>>>>>>>>> Sent: den 26 februari 2011 05:49
>>>>>>>>>>>>>>>>> To: <x3d-public at web3d.org> mailing list
>>>>>>>>>>>>>>>>> Subject: [X3D-Public] remote motion invocation
>>>>>>>>>>>> (RMI)
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Instead of downloading motion into a 3D scene,
>>>>>>>>> how
>>>>>>>>>>>>>> about uploading
>>>>>>>>>>>>>>>>> motion into a remote social network server or
>>>>>>>>>>>> robot?
>>>>>>>>>>>>>> One could
>>>>>>>>>>>>>>>>> upload motions to a remote server, and then
>>>>>>>>> invoke
>>>>>>>>>>>>>> them with
>>>>>>>>>>>>>>>>> parameters.  One could create classes
>>>>>>>>> aggregating
>>>>>>>>>>>>>> motion methods,
>>>>>>>>>>>>>>>>> instantiate classes to create behaviors in
>>>>>>>>> avatars
>>>>>>>>>>>> or
>>>>>>>>>>>>>> robots.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> What if H-Anim was used to upload and control
>>>>>>>>>>>> avatars
>>>>>>>>>>>>>> remotely,
>>>>>>>>>>>>>>>>> instead of downloaded models?  What I am
>>>>>>>>> thinking
>>>>>>>>>>>> of
>>>>>>>>>>>>>> is something
>>>>>>>>>>>>>>>>> like Kinect/PrimeSense for primitive motion
>>>>>>>>> input
>>>>>>>>>>> +
>>>>>>>>>>>> a
>>>>>>>>>>>>>> programming
>>>>>>>>>>>>>>>>> language for
>>>>>>>>>>>>>> repetition/parameterization/classes/instantiation.
>>>>>>>>>>>>>>>>> What
>>>>>>>>>>>>>>>>> if I chose a protocol/programming such as
>>>>>>>>>>>> JavaScript?
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Does DIS do this?  Are there any patents to be
>>>>>>>>>>>> aware
>>>>>>>>>>>>>> of?  If we
>>>>>>>>>>>>>>>>> use
>>>>>>>>>>>>>>>>> something like Caja on the server, or sandbox
>>>>>>>>> the
>>>>>>>>>>>>>> motion on the
>>>>>>>>>>>>>>>>> server, could we insure the security of such
>>>>>>> an
>>>>>>>>>>>>>> endeavor?
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Initially, I am thinking only of video
>>>>>>> download.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> John Carlson
>>>>>>>>>>>>>>>>>
>>>>>>> _______________________________________________
>>>>>>>>>>>>>>>>> X3D-Public mailing list
>>>>>>>>>>>>>>>>> X3D-Public at web3d.org
>>>>>>>>>>>>>>>>> http://web3d.org/mailman/listinfo/x3d-
>>>>>>>>>>>>>> public_web3d.org
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>> _______________________________________________
>>>>>>>>>>>>>>>>> X3D-Public mailing list
>>>>>>>>>>>>>>>>> X3D-Public at web3d.org
>>>>>>>>>>>>>>>>> http://web3d.org/mailman/listinfo/x3d-
>>>>>>>>>>>>>> public_web3d.org
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> _______________________________________________
>>>>>>>>>>>>>>>> X3D-Public mailing list
>>>>>>>>>>>>>>>> X3D-Public at web3d.org
>>>>>>>>>>>>>>>> http://web3d.org/mailman/listinfo/x3d-
>>>>>>>>>>>> public_web3d.org
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> -- 
>>>>>>>>>>>>>>> Nicholas F. Polys Ph.D.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Director of Visual Computing
>>>>>>>>>>>>>>> Virginia Tech Information Technology
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Affiliate Research Professor
>>>>>>>>>>>>>>> Virginia Tech Computer Science
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> _______________________________________________
>>>>>>>>>>>>>>> X3D-Public mailing list
>>>>>>>>>>>>>>> X3D-Public at web3d.org
>>>>>>>>>>>>>>> http://web3d.org/mailman/listinfo/x3d-
>>>>>>>>>>>> public_web3d.org
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> _______________________________________________
>>>>>>>>>>>>>>> X3D-Public mailing list
>>>>>>>>>>>>>>> X3D-Public at web3d.org
>>>>>>>>>>>>>>> http://web3d.org/mailman/listinfo/x3d-
>>>>>>>>>>>> public_web3d.org
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> _______________________________________________
>>>>>>>>>>>>>>> X3D-Public mailing list
>>>>>>>>>>>>>>> X3D-Public at web3d.org
>>>>>>>>>>>>>>> http://web3d.org/mailman/listinfo/x3d-
>>>>>>>>>>>> public_web3d.org
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> _______________________________________________
>>>>>>>>>>>>>> X3D-Public mailing list
>>>>>>>>>>>>>> X3D-Public at web3d.org
>>>>>>>>>>>>>> http://web3d.org/mailman/listinfo/x3d-
>>>>>>>>> public_web3d.org
>>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> _______________________________________________
>>>>>>>> X3D-Public mailing list
>>>>>>>> X3D-Public at web3d.org
>>>>>>>> http://web3d.org/mailman/listinfo/x3d-public_web3d.org
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> _______________________________________________
>>>>> X3D-Public mailing list
>>>>> X3D-Public at web3d.org
>>>>> http://web3d.org/mailman/listinfo/x3d-public_web3d.org
>>>>
>>>
>>
> 



