[X3D-Public] remote motion invocation (RMI)

Philipp Slusallek slusallek at cs.uni-saarland.de
Fri Mar 4 22:36:11 PST 2011


Hi Joe,

I do not feel like going into another instance of our lengthy
"over-Christmas" discussions on this list. So, here are just a few
general comments.

This discussion started from the rather general issue of describing
animations to be sent over the network. Yes, h-anim is a very useful
approach and I was not criticizing h-anim for this. All I did was point
out that there is a whole range of issues that might benefit from
different approaches, and that a "one size fits all" solution seems (to
me at least) not the right way to send animations across the network.

Just as a starter, please look at the really interesting concept of
point-based animation/simulation models for soft bodies and tell me how
to describe them with h-anim. Not everything in a scene is "humanoid" or
can be mapped to it nicely and efficiently. Different tasks sometimes
need different abstractions to describe and encode them well.

I just wish that this community would be a bit more open and receptive
to new ideas or issues raised and not immediately go into defense mode.


Thanks,

	Philipp


On 05.03.2011 02:51, Joe D Williams wrote:
> Excuse my typing today, but there are some comments embedded in
> Philipp's message below.
> Joe
> 
> ----- Original Message ----- From: "Joe D Williams" <joedwil at earthlink.net>
> To: <info at 3dnetproductions.com>; "'Philipp Slusallek'"
> <slusallek at cs.uni-saarland.de>
> Cc: <luca.chittaro at uniud.it>; <x3d-public at web3d.org>
> Sent: Friday, March 04, 2011 5:41 PM
> Subject: Re: [X3D-Public] remote motion invocation (RMI)
> 
> 
>> The h-anim model offers parametric modelling at the highest level.
>> What else is needed? I would like to hear more about it.
>> For example, what in the following:
>>
>>>
>>> Seems to be parametric modeling of the highest level.
>>> We need an app for that; a humanoid modeling app that
>>> would allow export to VRML/X3D. We could then plug H-Anim
>>> into it. Any suggestions? Lauren
>>>
>>>
>>>> -----Original Message-----
>>>> From: Philipp Slusallek [mailto:slusallek at cs.uni-saarland.de]
>>>> Sent: Friday, March 04, 2011 5:06 PM
>>>> To: Joe D Williams
>>>> Cc: info at 3dnetproductions.com; 'Sven-Erik Tiberg';
>>>> npolys at vt.edu; x3d-public at web3d.org;
>>>> luca.chittaro at uniud.it
>>>> Subject: Re: [X3D-Public] remote motion invocation (RMI)
>>>>
>>>> Hi,
>>>>
>>>> I have roughly followed this exchange, so let me (carefully) add a
>>>> few comments from my point of view. My position in short: there
>>>> simply is not a single solution to this problem (such as "use
>>>> h-anim").
>>>>
>>>> Sometimes you want/need all the compression you can get, such that
>>>> sending a command like "walk from point A to B" (maybe even "while
>>>> avoiding obstacles") is the right thing, with something else
>>>> filling in the blanks,
>>
>> Fine, something has to fill in the blanks. That means your command
>> generates joint rotations and displacements to animate the skin, or
>> just animates some mesh. Either way, the combination of skin
>> deformations is handled by use of joint animation and by displacers.
>> I am anxious to find out what else is needed. If the use case is
>> listed in detail maybe I can respond.
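>>
>> As a minimal sketch of the 'internal generator' case (an X3D
>> fragment; the DEF names, center, and key values are made up, and the
>> joint would sit inside a full HAnimHumanoid skeleton):
>>
>> <HAnimJoint DEF='hanim_l_shoulder' name='l_shoulder' center='0.18 1.4 0'/>
>> <TimeSensor DEF='CLOCK' cycleInterval='2' loop='true'/>
>> <!-- rotate the shoulder down and back up over one cycle -->
>> <OrientationInterpolator DEF='RAISE' key='0 0.5 1'
>>     keyValue='1 0 0 0, 1 0 0 -1.57, 1 0 0 0'/>
>> <ROUTE fromNode='CLOCK' fromField='fraction_changed' toNode='RAISE' toField='set_fraction'/>
>> <ROUTE fromNode='RAISE' fromField='value_changed' toNode='hanim_l_shoulder' toField='set_rotation'/>
>>
>> An 'external' generator just replaces the interpolator with rotation
>> events sent in over the net; either way the client player does the
>> actual skin deformation.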
>>
>>
>>>> sometimes animating joints of a hierarchical skeleton with or
>>>> without skinning and morphing is the right thing,
>>
>> OK
>>
>>>> sometimes all you have is the end position of an effector/hand
>>>> where you need Inverse Kinematics to work out the rest,
>>
>> That is covered in h-anim by designating Site features that can serve
>> as sensors.
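>>
>> For reference, a Site lives inside a Segment; a short sketch (the
>> coordinates here are illustrative) of an end-effector marker that an
>> IK solver could target:
>>
>> <HAnimJoint DEF='hanim_r_wrist' name='r_wrist' center='-0.7 0.95 0'>
>>   <HAnimSegment DEF='hanim_r_hand' name='r_hand'>
>>     <!-- feature point at the fingertip -->
>>     <HAnimSite DEF='hanim_r_hand_tip' name='r_hand_tip' translation='-0.85 0.95 0'/>
>>   </HAnimSegment>
>> </HAnimJoint>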
>>
>>>> sometimes you have soft body animations that do not have simple
>>>> skeletons in the first place and you need to resort to animating
>>>> all vertices or some coarse mesh from which you interpolate the
>>>> rest,
>>
>> That is a big drawback when realism is required. As I have seen,
>> these are generally data produced by one form of animation and then
>> exported as timed keyframes for the mesh.
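>>
>> For completeness, playing such exported per-vertex keyframes back in
>> X3D is a CoordinateInterpolator driving the mesh coordinates; a
>> minimal sketch with a made-up three-vertex mesh:
>>
>> <Shape>
>>   <IndexedFaceSet coordIndex='0 1 2 -1'>
>>     <Coordinate DEF='MESH_POINTS' point='0 0 0, 1 0 0, 0 1 0'/>
>>   </IndexedFaceSet>
>> </Shape>
>> <TimeSensor DEF='TICK' cycleInterval='1' loop='true'/>
>> <!-- one keyValue group per key, each holding every vertex position -->
>> <CoordinateInterpolator DEF='MORPH' key='0 0.5 1'
>>     keyValue='0 0 0, 1 0 0, 0 1 0,
>>               0 0 0.3, 1 0 0.3, 0 1 0.5,
>>               0 0 0, 1 0 0, 0 1 0'/>
>> <ROUTE fromNode='TICK' fromField='fraction_changed' toNode='MORPH' toField='set_fraction'/>
>> <ROUTE fromNode='MORPH' fromField='value_changed' toNode='MESH_POINTS' toField='set_point'/>
>>
>> Note the data cost: every key carries the whole mesh, which is
>> exactly the bandwidth concern raised elsewhere in this thread.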
>>
>>>> sometimes you also need to preserve the volume of the object in
>>>> some way -- and sometimes you need a combination of all/some of the
>>>> above.
>>
>>
>> Seeing an application like that would help me see the need and how it
>> could be solved with h-anim.
>>
>>>> And I am sure there are even more options that people have come up
>>>> with and use :-).
>>
>> Yes, the history of this is long and there are many very old styles
>> of doing this. Now, I think the processing at the client is capable
>> of mesh deformation using joint connections, or geometry attached to
>> segments, or by displacers.
>>
>>>>
>>>> So my conclusion is that instead of a single spec like h-anim,
>>>> what we really need are building blocks that implement the basic
>>>> functionality
>>
>>
>> Great, please show how to break up this functionality into blocks.
>> In the h-anim spec it is a given that a skeleton exists. Without that
>> there is no need for a standard humanoid.
>> You can either attach the geometry to segments, or go seamless with a
>> continuous mesh.
>>
>>>> and can be flexibly combined with each other to provide the
>>>> functionality that a scene designer needs for the specific task at
>>>> hand.
>>
>>
>> Right, let's put together a set of use cases that show this.
>> Otherwise, I believe the pertinent use cases are covered in X3D
>> h-anim.
>>
>>>> We can then still put them together with something like a
>>>> prototype/template/class mechanism for reuse in a scene (and build
>>>> something like h-anim).
>>
>> I think if you study and understand the h-anim spec and work with
>> some implementations you will drop the idea that these cases are not
>> covered.
>>
>>>> Note that you also need a flexible protocol that allows for
>>>> sending these different commands and data.
>>
>> I do not think you will find a shortcoming in the protocol.
>>
>>>>
>>>> BTW: The above is what we are trying to provide via the
>>>> XFlow extension
>>>> to XML3D. We presented the basic ideas already at the
>>>> last Web3D
>>>> conference -- a paper with the details is in the works
>>>> and should be
>>>> available on xml3d.org soon (as soon as it is done).
>>
>> Great, once we understand the data we want to send to the avatar
>> then we can see if XFlow can work.
>> At present, once you have the avatar, there are many options for
>> developing the animation data internally, closely coupled with the
>> specific avatar, or sent in from the outside.
>>
>> So, please give some examples and let's see how it works.
>> In my mind, a character without an LOA3 skeleton loses points for
>> animatability.
>>
>>>>
>>>>
>>>> Philipp
>>
>>>>>> ATM all of our avatars are from Avatar Studio 1 & 2. I
>>
>> Let's see some source code for an animated figure and evaluate.
>>
>> All the best,
>> Joe
>>
>>
>>>>
>>>> On 04.03.2011 22:10, Joe D Williams wrote:
>>>>>
>>>>>
>>>>>> ATM all of our avatars are from Avatar Studio 1 & 2. I
>>>>>> completely agree that they are simple and there is nothing we
>>>>>> want more than implementing H-Anim. But we need to figure out the
>>>>>> best way to do it first.
>>>>>
>>>>> I think there are basically two different ways:
>>>>>
>>>>> 1. Create a 'skin' and perform the animation by directly
>>>>> rewriting each skin vertex for each 'frame' of the animation.
>>>>> This is characterized by animation scripts that take a set of
>>>>> vertices, like the skin for the arm, and move each vertex to make
>>>>> the gesture. In this case, the animation can easily be transferred
>>>>> to any avatar skin that has the same number of vertices in the
>>>>> same coordinate space. The avatar is relatively simple because all
>>>>> it consists of is a 'skin' or outer surface that has no real
>>>>> hierarchy.
>>>>>
>>>>> 2. Create a skeleton with a known hierarchy of joints and bones
>>>>> so that when you move the shoulder joint, the child elbow, wrist,
>>>>> fingers, etc. move as expected.
>>>>> Now create the skin and connect each vertex to one or more joints
>>>>> with a weighting so that as the joint is rotated the skin is
>>>>> deformed as expected.
>>>>> Now you have an avatar you can control by rotating and displacing
>>>>> joints. This animation can be transported between avatars when the
>>>>> joint structure is similar and the initial binding positions of
>>>>> the skeleton and skin spaces are similar.
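>>>>>
>>>>> In X3D terms, way 2 is the HAnimHumanoid with skinCoordIndex and
>>>>> skinCoordWeight on each Joint; a very reduced sketch (two joints,
>>>>> three skin vertices, all coordinates and weights made up; a full
>>>>> model also lists every joint in the humanoid's joints field):
>>>>>
>>>>> <HAnimHumanoid DEF='AVATAR' name='Avatar' version='1.0'>
>>>>>   <Coordinate DEF='SKIN_POINTS' containerField='skinCoord'
>>>>>       point='0.15 1.45 0, 0.2 1.25 0, 0.22 1.0 0'/>
>>>>>   <HAnimJoint DEF='hanim_l_shoulder' name='l_shoulder' center='0.18 1.4 0'
>>>>>       skinCoordIndex='0 1' skinCoordWeight='1 0.5' containerField='skeleton'>
>>>>>     <HAnimJoint DEF='hanim_l_elbow' name='l_elbow' center='0.2 1.1 0'
>>>>>         skinCoordIndex='1 2' skinCoordWeight='0.5 1'/>
>>>>>   </HAnimJoint>
>>>>>   <!-- the skin mesh shares the same Coordinate node as skinCoord -->
>>>>>   <Shape containerField='skin'>
>>>>>     <IndexedFaceSet coordIndex='0 1 2 -1'>
>>>>>       <Coordinate USE='SKIN_POINTS'/>
>>>>>     </IndexedFaceSet>
>>>>>   </Shape>
>>>>> </HAnimHumanoid>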
>>>>>
>>>>>> That is why I thought John was onto something in his May 2010
>>>>>> thread/questions. I think, Joe, you're the best person I know who
>>>>>> can answer this for us. Let me quote him back here:
>>>>>
>>>>>> "I agree, if I want my avatar to dance as an end user,
>>>> I
>>>>> shouldn't have to deal with transforms, points etc. I
>>>>> should be able to download the dancing behavior I want
>>>> and
>>>>> apply it to my avatar. Let's make a market out of
>>>> selling
>>>>> motion. Cars are a wonderful example of this. Here's
>>>>> another:
>>>> http://playstation.joystiq.com/2009/06/19/quantic
>>>>> -d ream-selling-motion-capture-libraries/
>>>>>
>>>>> Actually, the H-Anim spec is built with that as the main purpose
>>>>> - to transport animations between avatars.
>>>>> Note that the spec is really set up so that the avatar is a
>>>>> prototype that accepts data from 'internal' or 'external'
>>>>> generators. Each action is controlled by a set of data generated
>>>>> by interpolators or scripts.
>>>>> So, if you build the h-anim avatar and animate it, anyone who is
>>>>> using a similar avatar (and at the high end they all are, except
>>>>> for initial binding positions) will be able to use it in X3D. Like
>>>>> any X3D scene, the SAI allows importing sets of these data
>>>>> dynamically, so libraries of gestures could be stored anywhere.
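>>>>>
>>>>> One plain-X3D way to pull in such a stored gesture (the file name
>>>>> and the exported DEF here are hypothetical) is an Inline plus
>>>>> IMPORT, routing the gesture's output into the local avatar's
>>>>> joint:
>>>>>
>>>>> <Inline DEF='GESTURES' url='"wave_gesture.x3d"'/>
>>>>> <!-- wave_gesture.x3d must contain <EXPORT localDEF='WaveRot'/>
>>>>>      on its OrientationInterpolator -->
>>>>> <IMPORT inlineDEF='GESTURES' importedDEF='WaveRot' AS='WAVE'/>
>>>>> <ROUTE fromNode='WAVE' fromField='value_changed' toNode='hanim_r_shoulder' toField='set_rotation'/>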
>>>>>
>>>>>> Let's convert free mocap library data to stuff that X3D and
>>>>>> H-Anim can use... see: http://mocap.cs.cmu.edu/ I'm such a
>>>>>> newbie, I don't even know what kind of mocap data H-Anim, X3D or
>>>>>> VRML takes! Can someone fill me in?
>>>>>
>>>>> Well, with mocap data, you may be only capturing skin vertex
>>>>> motions. If you want to take control of the skin to have it
>>>>> perform stuff that it has no mocap for, then the X3D way would be
>>>>> to create the skeleton and some mapping between the skin verts and
>>>>> your skeleton joints.
>>>>>
>>>>> The only other thing is that X3D is really realtime, conceptually,
>>>>> so mocap data set up for fixed frame-per-second rendition can
>>>>> usually be greatly simplified by deleting unnecessary keyframes.
>>>>>
>>>>>> I see 123 .bvh files on the web. Can they be converted to
>>>>>> h-anim? It looks like you can go from .asf/.amc to .bvh to
>>>>>> something standard (from NIST). Here's all the asf and amc files
>>>>>> on the CMU site: http://mocap.cs.cmu.edu:8080/allasfamc.zip"
>>>>>
>>>>>
>>>>> I have no idea. If it is intended for video, it may be great
>>>>> lists of skin keyframes shown at each video frame.
>>>>> Get samples of the thing and put up some data that is contained
>>>>> there and maybe we can interpret it.
>>>>>
>>>>> Thank You and Best Regards,
>>>>> Joe
>>>>>
>>>>>
>>>>>
>>>>>> -----Original Message-----
>>>>>> From: Joe D Williams [mailto:joedwil at earthlink.net]
>>>>>> Sent: Thursday, March 03, 2011 11:04 PM
>>>>>> To: info at 3dnetproductions.com; 'Sven-Erik Tiberg';
>>>>>> npolys at vt.edu; x3d-public at web3d.org
>>>>>> Cc: luca.chittaro at uniud.it
>>>>>> Subject: Re: [X3D-Public] remote motion invocation (RMI)
>>>>>>
>>>>>>
>>>>>> I've got to start back at the beginning and ask how you are
>>>>>> constructing your avatar.
>>>>>> Is it just a skin, or bones and skin? Do you animate by changing
>>>>>> the joints or just moving skin around?
>>>>>>
>>>>>>> but having said the above, I hope
>>>>>>> you'll see why I would be reluctant to implement the
>>>>>>> previously suggested approach.
>>>>>>
>>>>>> I haven't done much on this, just hearing some ideas, but many
>>>>>> avatars I have seen are just simple skins where the movement is
>>>>>> made just by directly moving sets of vertices around. Of course
>>>>>> this leads to relatively simple avatars and low client processing
>>>>>> requirements, but seems not sufficient to me.
>>>>>>
>>>>>> Good Luck,
>>>>>> Joe
>>>>>>
>>>>>>
>>>>>>>
>>>>>>>> How much bandwidth is needed to send rotation and maybe
>>>>>>>> position data to 20 joints? 20 SFVec3f points is not that
>>>>>>>> much.
>>>>>>>
>>>>>>>
>>>>>>> It may not seem much data in itself for one avatar but, even
>>>>>>> with efficient coding, you'd still have 20 times as much data at
>>>>>>> the very least... No? Probably a lot more. Concentrating only on
>>>>>>> gestures here, I'd rather have for example 'action=run', or
>>>>>>> 'a=r' (3 bytes) in a stream, than say 'translation=20 1
>>>>>>> 203&rotation=1 0 0 4.712', or 't=20 1 203&r=1 0 0 4.712' (24
>>>>>>> bytes x 20 = 480 bytes). That's a factor of 16,000%. Considering
>>>>>>> that most decently featured stand-alone MU servers can
>>>>>>> optimistically handle up to just about 100 simultaneous avatars
>>>>>>> and/or shared objects, I think that would translate into a huge
>>>>>>> waste of resources. In other words, with that setup, you might
>>>>>>> only be able to run say 3 to 5 avatars instead of 100. Those are
>>>>>>> admittedly very gross calculations, but MU servers have to be
>>>>>>> able to send data to clients in rapid-fire fashion, and being
>>>>>>> able to keep as much of that data in low-level memory is
>>>>>>> crucial. It is difficult to know what exactly is happening down
>>>>>>> there, but don't forget, other apps will be running too,
>>>>>>> consuming much of the memory, perhaps forcing more data from L1
>>>>>>> cache to L2 and back (or L2 to L3, etc., depending on your
>>>>>>> setup, circumstances, running apps, etc.), all of which
>>>>>>> contributes to slowing down the entire process. And we haven't
>>>>>>> yet really touched on the data processing itself to support it.
>>>>>>> I believe the server should be as bare and as close to the metal
>>>>>>> as possible. That's the point I was making. The simpler we can
>>>>>>> keep it on the server side, the more efficient it's going to be,
>>>>>>> obviously. I don't think there is much disagreement, but no
>>>>>>> matter how much parallel hardware you have, the memory bandwidth
>>>>>>> issue never really goes away due to all the inefficiencies that
>>>>>>> implies. The way we are planning to solve this with X3Daemon at
>>>>>>> Office Towers is to dedicate servers to specific zones that are
>>>>>>> part of the same world. So for example, if you go to this or
>>>>>>> that room, or building, or area, then this or that server will
>>>>>>> take over. Since it is unlikely that 100 avatars will be in the
>>>>>>> same room/building/area all at once, that should work for us. In
>>>>>>> other words, we conceive that, let's say, 10 servers or more
>>>>>>> could in theory process and supply the MU data for a single
>>>>>>> world as required, for say 1000 simultaneous users or whatever
>>>>>>> the case may be. Like on the web, you can seamlessly go from one
>>>>>>> server to the other, but, in our case, while staying in the same
>>>>>>> world. We see here real potential for scalability. Instead of
>>>>>>> piling data in parallel hardware (which we did consider but
>>>>>>> would rather go with symmetric multiprocessing) however
>>>>>>> connected, each machine would run independently of the others.
>>>>>>> Trying not to get into details, but having said the above, I
>>>>>>> hope you'll see why I would be reluctant to implement the
>>>>>>> previously suggested approach.
>>>>>>>
>>>>>>> Cheers,
>>>>>>> Lauren
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>> -----Original Message-----
>>>>>>>> From: Joe D Williams [mailto:joedwil at earthlink.net]
>>>>>>>> Sent: Thursday, March 03, 2011 1:10 PM
>>>>>>>> To: info at 3dnetproductions.com; 'Sven-Erik Tiberg';
>>>>>>>> npolys at vt.edu; x3d-public at web3d.org
>>>>>>>> Cc: luca.chittaro at uniud.it
>>>>>>>> Subject: Re: [X3D-Public] remote motion invocation (RMI)
>>>>>>>>
>>>>>>>>
>>>>>>>> Hi Lauren,
>>>>>>>> I was thinking about it in terms of essentially substituting
>>>>>>>> 'external' 'streams' of data instead of using 'internal' data
>>>>>>>> generated by scripts, timers, and interpolators. The point is,
>>>>>>>> when the joint receives a joint translation or rotation, then
>>>>>>>> it is up to the client player to move the child segments and
>>>>>>>> joints and deform the skin to achieve the movement. So, the
>>>>>>>> data needed to animate the humanoid is really not that much;
>>>>>>>> the client does all the actual moving. How much bandwidth is
>>>>>>>> needed to send rotation and maybe position data to 20 joints?
>>>>>>>> 20 SFVec3f points is not that much.
>>>>>>>> However, it is still probably more reliable, and maybe more
>>>>>>>> efficient and even faster, to just use 'internal' optimized
>>>>>>>> timers and interpolators. The secret of course seems to be
>>>>>>>> getting as much of that computation as possible done in the
>>>>>>>> parallel hardware.
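>>>>>>>>
>>>>>>>> A sketch of that substitution (the Script here is just a relay
>>>>>>>> point; the field name is made up, and a network layer would
>>>>>>>> write to it through the SAI):
>>>>>>>>
>>>>>>>> <Script DEF='JOINT_FEED'>
>>>>>>>>   <field name='streamedRotation' accessType='inputOutput'
>>>>>>>>       type='SFRotation' value='1 0 0 0'/>
>>>>>>>> </Script>
>>>>>>>> <ROUTE fromNode='JOINT_FEED' fromField='streamedRotation'
>>>>>>>>     toNode='hanim_l_shoulder' toField='set_rotation'/>
>>>>>>>>
>>>>>>>> No script body should be needed: an inputOutput field forwards
>>>>>>>> the events it receives, so the joint sees the streamed
>>>>>>>> rotations directly. (The SAI could also write to the joint's
>>>>>>>> rotation field directly, with no Script at all.)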
>>>>>>>>
>>>>>>>> Note that I am not even considering the idea of just sending
>>>>>>>> new fields of vertices to replace the ones currently in memory.
>>>>>>>> I think this is essentially done in fixed frame-per-second
>>>>>>>> applications where the backroom app does all the computation to
>>>>>>>> achieve the mesh for the next timed frame, then just 'streams'
>>>>>>>> the mesh to the renderer where the picture is made.
>>>>>>>> Thanks and Best Regards,
>>>>>>>> Joe
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>> On the other hand, I could see a setup where live joint
>>>>>>>>> rotation and displacement data was streamed into the scene and
>>>>>>>>> routed to joints to affect the humanoid in real time, rather
>>>>>>>>> than using client-side interpolators and combiners.
>>>>>>>>
>>>>>>>>
>>>>>>>> Thanks for your reply. Yes, that is better; 'streaming' data
>>>>>>>> does seem closer to something conceivable, even interesting.
>>>>>>>> However, since talking about humanoids likely implies
>>>>>>>> multi-user scenarios in most applications, I'd start getting
>>>>>>>> worried about processors' cache/memory bandwidth; I don't think
>>>>>>>> we are there yet to support a great many users this way. Even
>>>>>>>> with high clock speeds, multi-core processors and of course
>>>>>>>> plenty of network bandwidth, low-level memory management will
>>>>>>>> remain an issue as the number of users increases, IMO. It
>>>>>>>> already is with much simpler systems. That is why I like to
>>>>>>>> have as much done on the client side as possible. Other than
>>>>>>>> helping lowly devices, I'd be curious about the main advantages
>>>>>>>> of the above approach? Open to ideas but, as far as mobile
>>>>>>>> devices, hopefully Moore's law will continue to hold true for
>>>>>>>> some time...
>>>>>>>>
>>>>>>>> Cheers,
>>>>>>>> Lauren
>>>>>>>>
>>>>>>>>
>>>>>>>> ______________________________________________________
>>>>>>>> * * Interactive Multimedia - Internet Management * *
>>>>>>>>  * * Virtual Reality - Application Programming  * *
>>>>>>>>   * 3D Net Productions  www.3dnetproductions.com *
>>>>>>>>
>>>>>>>>
>>>>>>>>> -----Original Message-----
>>>>>>>>> From: Joe D Williams [mailto:joedwil at earthlink.net]
>>>>>>>>> Sent: Tuesday, March 01, 2011 7:49 PM
>>>>>>>>> To: info at 3dnetproductions.com; 'Sven-Erik Tiberg';
>>>>>>>>> npolys at vt.edu; x3d-public at web3d.org
>>>>>>>>> Cc: luca.chittaro at uniud.it
>>>>>>>>> Subject: Re: [X3D-Public] remote motion invocation (RMI)
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Hello Joe,
>>>>>>>>>>
>>>>>>>>>> While we are on the subject, I have asked you this before,
>>>>>>>>>> but I think you missed it. There is a link at
>>>>>>>>>>
>>>>>>>>>> http://www.web3d.org/x3d/workgroups/h-anim/
>>>>>>>>>>
>>>>>>>>>> for ISO-IEC-19775 X3D Component 26 Humanoid Animation
>>>>>>>>>> (H-Anim) which describes H-Anim interfaces in terms of the
>>>>>>>>>> X3D scenegraph...
>>>>>>>>>>
>>>>>>>>>> That link is broken. Where else can I obtain this?
>>>>>>>>>
>>>>>>>>> Go to:
>>>>>>>>>
>>>>>>>>> http://www.web3d.org/x3d/specifications/ISO-IEC-19775-1.2-X3D-AbstractSpecification/Part01/components/hanim.html
>>>>>>>>>
>>>>>>>>> The latest spec changed base and we lost the link.
>>>>>>>>> I will tell the webmaster.
>>>>>>>>>
>>>>>>>>>> If I am asking the wrong person, then can someone else help?
>>>>>>>>>
>>>>>>>>> I think the most current spec should have the same link
>>>>>>>>> forever.
>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Apologies for hijacking this thread; maybe I am not getting
>>>>>>>>>> this right, but I have difficulty getting my head around how
>>>>>>>>>> server-side rasterization can work with X3D other than with
>>>>>>>>>> the MovieTexture node.
>>>>>>>>>
>>>>>>>>> Well, I think server-side rasterization could work when you
>>>>>>>>> have a simple mesh avatar and all you want to do is just show
>>>>>>>>> apparent vertices move, so you send sort of a set of gifs on a
>>>>>>>>> billboard down to the client. Don't get it? Me either. :)
>>>>>>>>> On the other hand, I could see a setup where live joint
>>>>>>>>> rotation and displacement data was streamed into the scene and
>>>>>>>>> routed to joints to affect the humanoid in real time, rather
>>>>>>>>> than using client-side interpolators and combiners. I think
>>>>>>>>> even mobile will always have the power to do reasonable h-anim
>>>>>>>>> skin-and-bone characters, but low-end performance devices may
>>>>>>>>> need a simpler character with fewer joints, a more open mesh,
>>>>>>>>> and different textures.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Thanks,
>>>>>>>>>> Lauren
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Thanks and Best Regards,
>>>>>>>>> Joe
>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>> -----Original Message-----
>>>>>>>>>>> From: x3d-public-bounces at web3d.org
>>>>>>>>>>> [mailto:x3d-public-bounces at web3d.org] On Behalf Of Joe D Williams
>>>>>>>>>>> Sent: Tuesday, March 01, 2011 12:09 PM
>>>>>>>>>>> To: Sven-Erik Tiberg; npolys at vt.edu; x3d-public at web3d.org
>>>>>>>>>>> Cc: luca.chittaro at uniud.it
>>>>>>>>>>> Subject: Re: [X3D-Public] remote motion invocation (RMI)
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> ----- Original Message -----
>>>>>>>>>>> From: "Sven-Erik Tiberg" <Sven-Erik.Tiberg at ltu.se>
>>>>>>>>>>> To: <npolys at vt.edu>; <x3d-public at web3d.org>
>>>>>>>>>>> Cc: <luca.chittaro at uniud.it>
>>>>>>>>>>> Sent: Tuesday, March 01, 2011 6:14 AM
>>>>>>>>>>> Subject: Re: [X3D-Public] remote motion invocation (RMI)
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>> Hi
>>>>>>>>>>>>
>>>>>>>>>>>> A note.
>>>>>>>>>>>> Would it be possible to create an object similar to a
>>>>>>>>>>>> dynamic h-anim figure, looking and behaving like a moose?
>>>>>>>>>>>
>>>>>>>>>>> The basis for h-anim comes from general-purpose hierarchical
>>>>>>>>>>> animation needs. That is, the shoulder joint is connected to
>>>>>>>>>>> the elbow joint, for example, such that when the shoulder
>>>>>>>>>>> joint is moved, then the elbow joint is translated
>>>>>>>>>>> accordingly as expected. To follow the same technique for a
>>>>>>>>>>> different character, just position the joints and segments
>>>>>>>>>>> you choose to use, depending upon the level of articulation
>>>>>>>>>>> you need, in skeleton space, then build the skin in skin
>>>>>>>>>>> space (hopefully the same as skeleton space), connect it up,
>>>>>>>>>>> and animate the thing.
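>>>>>>>>>>>
>>>>>>>>>>> A sketch of that recipe for a quadruped (the H-Anim name
>>>>>>>>>>> tables are humanoid-specific, so the DEF names and centers
>>>>>>>>>>> here are invented for illustration):
>>>>>>>>>>>
>>>>>>>>>>> <HAnimHumanoid DEF='MOOSE' name='Moose' version='1.0'>
>>>>>>>>>>>   <HAnimJoint DEF='moose_root' name='humanoid_root'
>>>>>>>>>>>       center='0 1.8 0' containerField='skeleton'>
>>>>>>>>>>>     <HAnimJoint DEF='moose_l_hip' name='l_hip' center='-0.3 1.6 -0.9'>
>>>>>>>>>>>       <HAnimJoint DEF='moose_l_knee' name='l_knee' center='-0.3 1.0 -0.9'/>
>>>>>>>>>>>     </HAnimJoint>
>>>>>>>>>>>     <HAnimJoint DEF='moose_skullbase' name='skullbase' center='0 2.0 0.8'/>
>>>>>>>>>>>   </HAnimJoint>
>>>>>>>>>>> </HAnimHumanoid>
>>>>>>>>>>>
>>>>>>>>>>> The same interpolator-to-joint ROUTEs then drive the moose
>>>>>>>>>>> exactly as they would a humanoid.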
>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> No, I'm not kidding; in the future we hope to create an
>>>>>>>>>>>> X3D scene for a driving simulator with "living" moose and
>>>>>>>>>>>> reindeer (they are not so exotic up here, but more real and
>>>>>>>>>>>> a hazard on the roads).
>>>>>>>>>>>> Could be that the same motion pattern can be used for deer
>>>>>>>>>>>> and elk and ..
>>>>>>>>>>>
>>>>>>>>>>> Possibly. Note that the selection of the initial
>>>>>>>>>>> binding-time joint rotations is important.
>>>>>>>>>>>
>>>>>>>>>>> Good Luck and Best Regards,
>>>>>>>>>>> Joe
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> /Sven-Erik Tiberg
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> -----Original Message-----
>>>>>>>>>>>> From: x3d-public-bounces at web3d.org
>>>>>>>>>>>> [mailto:x3d-public-bounces at web3d.org] On Behalf Of Sven-Erik Tiberg
>>>>>>>>>>>> Sent: 1 March 2011 08:51
>>>>>>>>>>>> To: npolys at vt.edu; x3d-public at web3d.org
>>>>>>>>>>>> Subject: Re: [X3D-Public] remote motion invocation (RMI)
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> If there is a need for extending the h-anim kinematic
>>>>>>>>>>>> capacity, I would have a look at Modelica
>>>>>>>>>>>> https://www.modelica.org/ and OpenModelica
>>>>>>>>>>>> https://www.modelica.org/ for calculation of dynamic
>>>>>>>>>>>> behavior.
>>>>>>>>>>>>
>>>>>>>>>>>> IMHO it would not be impossible to interface an
>>>>>>>>>>>> OpenModelica model that runs in real time to an X3D scene.
>>>>>>>>>>>>
>>>>>>>>>>>> On animation of humans, I would like to suggest that you
>>>>>>>>>>>> take a look at http://hcilab.uniud.it/index.html and its
>>>>>>>>>>>> editor H-Animator:
>>>>>>>>>>>> http://hcilab.uniud.it/demos-videos/item11.html
>>>>>>>>>>>>
>>>>>>>>>>>> /Sven-Erik Tiberg
>>>>>>>>>>>> Lulea Univ of Technology
>>>>>>>>>>>> Sweden
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> -----Original Message-----
>>>>>>>>>>>> From: x3d-public-bounces at web3d.org
>>>>>>>>>>>> [mailto:x3d-public-bounces at web3d.org] On Behalf Of Nicholas F. Polys
>>>>>>>>>>>> Sent: 28 February 2011 21:34
>>>>>>>>>>>> To: x3d-public at web3d.org
>>>>>>>>>>>> Subject: Re: [X3D-Public] remote motion invocation (RMI)
>>>>>>>>>>>>
>>>>>>>>>>>> One of our members was working on this spec some time ago
>>>>>>>>>>>> (2001); perhaps a good integration target with H-Anim for
>>>>>>>>>>>> such a platform or communication layer: HumanML
>>>>>>>>>>>> http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=humanmarkup
>>>>>>>>>>>>
>>>>>>>>>>>> It does mention kinesthetics in the overview, but I am
>>>>>>>>>>>> really not sure where it has been adopted.
>>>>>>>>>>>>
>>>>>>>>>>>> br,
>>>>>>>>>>>>
>>>>>>>>>>>> _n_polys
>>>>>>>>>>>>
>>>>>>>>>>>> On 2/28/2011 1:54 PM, Joe D Williams wrote:
>>>>>>>>>>>>>> And what if the script could receive states (ROUTEd
>>>>>>>>>>>>>> events) from the 3D browser and do an animation on the
>>>>>>>>>>>>>> fly using inverse kinematics; that would be cool.
>>>>>>>>>>>>>
>>>>>>>>>>>>> I think the basic needed structure for that is in the X3D
>>>>>>>>>>>>> H-Anim spec.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Best Regards,
>>>>>>>>>>>>> Joe
>>>>>>>>>>>>>
>>>>>>>>>>>>> ----- Original Message ----- From: "Sven-Erik Tiberg"
>>>>>>>>>>>>> <Sven-Erik.Tiberg at ltu.se>
>>>>>>>>>>>>> To: "John Carlson" <john.carlson3 at sbcglobal.net>;
>>>>>>>>>>>>> <x3d-public at web3d.org>
>>>>>>>>>>>>> Sent: Sunday, February 27, 2011 11:50 PM
>>>>>>>>>>>>> Subject: Re: [X3D-Public] remote motion invocation (RMI)
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>> What if we used an external motion engine as you mention,
>>>>>>>>>>>>>> but let the motion engine be hosted on the client
>>>>>>>>>>>>>> computer, communicating new states to the 3D browser (to
>>>>>>>>>>>>>> an h-anim model) through a script?
>>>>>>>>>>>>>> And what if the script could receive states (ROUTEd
>>>>>>>>>>>>>> events) from the 3D browser and do an animation on the
>>>>>>>>>>>>>> fly using inverse kinematics; that would be cool.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Thanks for bringing up this subject.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> /Sven-Erik Tiberg
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> -----Original Message-----
>>>>>>>>>>>>>> From: x3d-public-bounces at web3d.org
>>>>>>>>>>>>>> [mailto:x3d-public-bounces at web3d.org] On Behalf Of John Carlson
>>>>>>>>>>>>>> Sent: 26 February 2011 05:49
>>>>>>>>>>>>>> To: <x3d-public at web3d.org> mailing list
>>>>>>>>>>>>>> Subject: [X3D-Public] remote motion invocation (RMI)
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Instead of downloading motion into a 3D scene, how about
>>>>>>>>>>>>>> uploading motion into a remote social network server or
>>>>>>>>>>>>>> robot? One could upload motions to a remote server, and
>>>>>>>>>>>>>> then invoke them with parameters. One could create
>>>>>>>>>>>>>> classes aggregating motion methods, instantiate classes
>>>>>>>>>>>>>> to create behaviors in avatars or robots.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> What if H-Anim was used to upload and control avatars
>>>>>>>>>>>>>> remotely, instead of downloaded models? What I am
>>>>>>>>>>>>>> thinking of is something like Kinect/PrimeSense for
>>>>>>>>>>>>>> primitive motion input + a programming language for
>>>>>>>>>>>>>> repetition/parameterization/classes/instantiation. What
>>>>>>>>>>>>>> if I chose a protocol/programming language such as
>>>>>>>>>>>>>> JavaScript?
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Does DIS do this? Are there any patents to be aware of?
>>>>>>>>>>>>>> If we use something like Caja on the server, or sandbox
>>>>>>>>>>>>>> the motion on the server, could we ensure the security of
>>>>>>>>>>>>>> such an endeavor?
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Initially, I am thinking only of video download.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> John Carlson
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> -- 
>>>>>>>>>>>> Nicholas F. Polys Ph.D.
>>>>>>>>>>>>
>>>>>>>>>>>> Director of Visual Computing
>>>>>>>>>>>> Virginia Tech Information Technology
>>>>>>>>>>>>
>>>>>>>>>>>> Affiliate Research Professor
>>>>>>>>>>>> Virginia Tech Computer Science
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>
>>>>>
>>>>>
>>>
>>>
>>
>>
>> _______________________________________________
>> X3D-Public mailing list
>> X3D-Public at web3d.org
>> http://web3d.org/mailman/listinfo/x3d-public_web3d.org 
> 



