[X3D-Public] remote motion invocation (RMI)

GLG info at 3dnetproductions.com
Fri Mar 4 07:06:28 PST 2011


Christoph Valentin wrote:
>one small addition from my side: it's not only about the
>motion of avatars; their interaction with the (virtual)
>reality also needs to be taken care of:
>
>  - sit down on/stand up from chairs
>  - sit down on a chair lift
>  - enter a vehicle
>  - etc.
>......


Yes, we have all of the above covered, and more, to some
degree, but achieving the realism found in current modern
games is what we are after for the time being. However,
when we can no longer distinguish between a VR scene and a
movie scene, that is when things will start getting really
interesting.  :)

Cheers,
Lauren



-------- Original Message --------
Date: Fri, 4 Mar 2011 01:15:49 -0500
From: "GLG" <info at 3dnetproductions.com>
To: "'Joe D Williams'" <joedwil at earthlink.net>,
"'Sven-Erik Tiberg'" <Sven-Erik.Tiberg at ltu.se>,
npolys at vt.edu, x3d-public at web3d.org
CC: luca.chittaro at uniud.it
Subject: Re: [X3D-Public] remote motion invocation (RMI)


ATM all of our avatars are from Avatar Studio 1 & 2. I
completely agree that they are simple, and there is
nothing we want more than implementing H-Anim. But we need
to figure out the best way to do it first. That is why I
thought John was onto something in his May 2010
thread/questions. I think, Joe, you're the best person I
know who can answer this for us. Let me quote him back
here:

"I agree, if I want my avatar to dance as an end user, I
shouldn't have to deal with transforms, points, etc. I
should be able to download the dancing behavior I want and
apply it to my avatar. Let's make a market out of selling
motion. Cars are a wonderful example of this. Here's
another:
http://playstation.joystiq.com/2009/06/19/quantic-dream-selling-motion-capture-libraries/
Let's convert free mocap library data to stuff that X3D
and H-Anim can use... see: http://mocap.cs.cmu.edu/ I'm
such a newbie, I don't even know what kind of mocap data
H-Anim, X3D or VRML takes! Can someone fill me in? I see
123 .bvh files on the web. Can they be converted to
H-Anim? It looks like you can go from .asf/.amc to .bvh to
something standard (from NIST). Here are all the asf and
amc files on the CMU site:
http://mocap.cs.cmu.edu:8080/allasfamc.zip"
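To John's question about what the .bvh files contain: the HIERARCHY section is plain text, so the joint tree can be read with very little code. A minimal sketch (the sample data and function name are made up for illustration; this is not an H-Anim converter, and real files also carry a MOTION section of per-frame channel values):

```python
# Hypothetical sketch: pull the joint tree out of a BVH HIERARCHY
# section, which is the part you'd need to map onto H-Anim joint
# names. The sample skeleton below is a toy fragment.

SAMPLE_BVH = """HIERARCHY
ROOT Hips
{
    OFFSET 0 0 0
    CHANNELS 6 Xposition Yposition Zposition Zrotation Xrotation Yrotation
    JOINT LeftUpLeg
    {
        OFFSET 0.1 0 0
        CHANNELS 3 Zrotation Xrotation Yrotation
        End Site
        {
            OFFSET 0 -0.4 0
        }
    }
}
"""

def parse_bvh_hierarchy(text):
    """Return {joint_name: parent_name_or_None} from a BVH HIERARCHY."""
    parents = {}
    stack = []          # joints whose '{' is still open
    pending = None      # joint name seen, '{' not yet consumed
    for line in text.splitlines():
        words = line.split()
        if not words:
            continue
        if words[0] in ("ROOT", "JOINT"):
            name = words[1]
            parents[name] = stack[-1] if stack else None
            pending = name
        elif words[0] == "End":          # 'End Site': anonymous leaf
            pending = None
        elif words[0] == "{":
            if pending is not None:
                stack.append(pending)
                pending = None
            else:                        # End Site scope, reuse parent
                stack.append(stack[-1] if stack else None)
        elif words[0] == "}":
            stack.pop()
    return parents

tree = parse_bvh_hierarchy(SAMPLE_BVH)
```

With the sample above, `tree` maps `Hips` to no parent and `LeftUpLeg` to `Hips`; converting to H-Anim would then be a matter of renaming joints and translating the MOTION channels.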



>-----Original Message-----
>From: Joe D Williams [mailto:joedwil at earthlink.net]
>Sent: Thursday, March 03, 2011 11:04 PM
>To: info at 3dnetproductions.com; 'Sven-Erik Tiberg';
>npolys at vt.edu; x3d-public at web3d.org
>Cc: luca.chittaro at uniud.it
>Subject: Re: [X3D-Public] remote motion invocation (RMI)
>
>
>I've got to start back at the beginning and ask: how are
>you constructing your avatar?
>Is it just a skin, or bones and skin? Do you animate by
>changing the joints or just moving skin around?
>
>> but having said the above, I hope
>> you'll see why I would be reluctant to implement the
>> previously suggested approach.
>
>I haven't done much on this, just hearing some ideas, but
>many avatars I have seen are just simple skins where the
>movement is made just by directly moving sets of vertices
>around. Of course this leads to relatively simple avatars
>and low client processing requirements, but it seems not
>sufficient to me.
>
>Good Luck,
>Joe
>
>
>>
>>>How much bandwidth is needed to send rotation and
>>>maybe position data to 20 joints? 20 SFVec3f points
>>>is not that much.
>>
>>
>> It may not seem much data in itself for one avatar but,
>> even with efficient coding, you'd still have 20 times
>> as much data at the very least... No? Probably a lot
>> more. Concentrating only on gestures here, I'd rather
>> have, for example, 'action=run', or 'a=r' (3 bytes) in
>> a stream, than say 'translation=20 1 203&rotation=1 0 0
>> 4.712', or 't=20 1 203&r=1 0 0 4.712' (24 bytes x 20 =
>> 480 bytes). That's a factor of 16,000%. Considering
>> that most decently featured stand-alone MU servers can
>> optimistically handle up to just about 100 simultaneous
>> avatars and/or shared objects, I think that would
>> translate into a huge waste of resources. In other
>> words, with that setup, you might only be able to run
>> say 3 to 5 avatars instead of 100. Those are admittedly
>> very gross calculations, but MU servers have to be able
>> to send data to clients in rapid-fire fashion, and
>> being able to keep as much of that data in low-level
>> memory is crucial. It is difficult to know what exactly
>> is happening down there, but don't forget, other apps
>> will be running too, consuming much of the memory,
>> perhaps forcing more data from L1 cache to L2 and back
>> (or L2 to L3, etc., depending on your setup,
>> circumstances, running apps, etc.), all of which
>> contributes to slowing down the entire process. And we
>> haven't yet really touched on the data processing
>> itself to support it. I believe the server should be as
>> bare and as close to the metal as possible. That's the
>> point I was making. The simpler we can keep it on the
>> server side, the more efficient it's going to be,
>> obviously. I don't think there is much disagreement,
>> but no matter how much parallel hardware you have, the
>> memory bandwidth issue never really goes away, due to
>> all the inefficiencies that implies. The way we are
>> planning to solve this with X3Daemon at Office Towers
>> is to dedicate servers to specific zones that are part
>> of the same world. So for example, if you go to this or
>> that room, or building, or area, then this or that
>> server will take over. Since it is unlikely that 100
>> avatars will be in the same room/building/area all at
>> once, that should work for us. In other words, we
>> conceive that, let's say, 10 servers or more could in
>> theory process and supply the MU data for a single
>> world as required, for say 1000 simultaneous users or
>> whatever the case may be. Like on the web, you can
>> seamlessly go from one server to the other, but, in our
>> case, while staying in the same world. We see here real
>> potential for scalability. Instead of piling data in
>> parallel hardware (which we did consider, but we would
>> rather go with symmetric multiprocessing), however
>> connected, each machine would run independently of the
>> others. Trying not to get into details, but having said
>> the above, I hope you'll see why I would be reluctant
>> to implement the previously suggested approach.
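The byte arithmetic in the quoted paragraph can be checked directly. A small sketch (the payload strings are the examples from the message; the 20-joint count is the example figure, not a measured protocol):

```python
# Compare per-update payload sizes: one symbolic action code
# versus explicit transform data for every joint.

action_msg = "a=r"                        # 'run' as a 3-byte action code
joint_msg = "t=20 1 203&r=1 0 0 4.712"    # one joint's translation+rotation
num_joints = 20

action_bytes = len(action_msg.encode("ascii"))
transform_bytes = len(joint_msg.encode("ascii")) * num_joints

print(action_bytes)                    # 3
print(transform_bytes)                 # 480
print(transform_bytes / action_bytes)  # 160.0, i.e. 16,000%
```

So the "factor of 16,000%" is 480 bytes versus 3 bytes per avatar update, before any per-message framing overhead is counted.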
>>
>> Cheers,
>> Lauren
>>
>>
>>
>>>-----Original Message-----
>>>From: Joe D Williams [mailto:joedwil at earthlink.net]
>>>Sent: Thursday, March 03, 2011 1:10 PM
>>>To: info at 3dnetproductions.com; 'Sven-Erik Tiberg';
>>>npolys at vt.edu; x3d-public at web3d.org
>>>Cc: luca.chittaro at uniud.it
>>>Subject: Re: [X3D-Public] remote motion invocation
>>>(RMI)
>>>
>>>
>>>Hi Lauren,
>>>I was thinking about it in terms of essentially
>>>substituting 'external' 'streams' of data instead of
>>>using 'internal' data generated by scripts, timers, and
>>>interpolators. The point is, when the joint receives a
>>>joint translation or rotation, then it is up to the
>>>client player to move the child segments and joints and
>>>deform the skin to achieve the movement. So, the data
>>>needed to animate the humanoid is really not that much;
>>>the client does all the actual moving. How much
>>>bandwidth is needed to send rotation and maybe position
>>>data to 20 joints? 20 SFVec3f points is not that much.
>>>However, it is still probably more reliable, and maybe
>>>more efficient and even faster, to just use 'internal'
>>>optimized timers and interpolators. The secret of
>>>course seems to be getting as much of that computation
>>>as possible done in the parallel hardware.
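The 'external streams' idea Joe describes amounts to routing incoming joint updates into the local joint state instead of computing them with interpolators. A minimal sketch (joint names and the line-based wire format are hypothetical; a real X3D player would route these values into HAnimJoint rotation fields):

```python
# Hypothetical sketch of streaming joint updates instead of running
# client-side interpolators: each incoming line names a joint and a
# rotation, and the client routes it into its local joint table (a
# stand-in for the HAnimJoint nodes a real player would update).

def apply_stream(joint_table, stream_lines):
    """Apply 'joint axis-x axis-y axis-z angle' lines to the table."""
    for line in stream_lines:
        name, *values = line.split()
        if name in joint_table:   # ignore joints this avatar lacks
            joint_table[name] = tuple(float(v) for v in values)
    return joint_table

# Local joint state: rotation as (axis_x, axis_y, axis_z, angle).
joints = {"l_shoulder": (0, 0, 1, 0.0), "l_elbow": (0, 0, 1, 0.0)}

# Two updates as they might arrive over the wire.
stream = ["l_shoulder 1 0 0 0.8", "l_elbow 1 0 0 1.6"]
apply_stream(joints, stream)
```

The client then does all the actual moving, exactly as described: the stream carries only one rotation per joint, and segment motion and skin deformation happen locally.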
>>>
>>>Note that I am not even considering the idea of just
>>>sending new fields of vertices to replace the ones
>>>currently in memory. I think this is essentially done
>>>in fixed frame-per-second applications, where the
>>>backroom app does all the computation to achieve the
>>>mesh for the next timed frame, then just 'streams' the
>>>mesh to the renderer where the picture is made.
>>>Thanks and Best Regards,
>>>Joe
>>>
>>>
>>>
>>>
>>>>On the other hand, I could see a setup where live
>>>>joint rotation and displacement data was streamed
>>>>into the scene and routed to joints to affect the
>>>>humanoid in real time, rather than using client-side
>>>>interpolators and combiners.
>>>
>>>
>>>Thanks for your reply. Yes, that is better; 'streaming'
>>>data does seem closer to something conceivable, even
>>>interesting. However, since talking about humanoids
>>>likely implies multi-user in most applications, I'd
>>>start getting worried about processors' cache/memory
>>>bandwidth; I don't think we are there yet to support a
>>>great many users this way. Even with high clock speeds,
>>>multi-core processors and of course plenty of network
>>>bandwidth, low-level memory management will remain an
>>>issue as the number of users increases, IMO. It already
>>>is with much simpler systems. That is why I like to
>>>have as much done on the client side as possible. Other
>>>than helping lowly devices, I'd be curious about the
>>>main advantages of the above approach? Open to ideas,
>>>but, as far as mobile devices go, hopefully Moore's law
>>>will continue to hold true for some time...
>>>
>>>
>>>______________________________________________________
>>> * * Interactive Multimedia - Internet Management * *
>>> * * Virtual Reality - Application Programming * *
>>> * 3D Net Productions www.3dnetproductions.com *
>>>
>>>
>>>>-----Original Message-----
>>>>From: Joe D Williams [mailto:joedwil at earthlink.net]
>>>>Sent: Tuesday, March 01, 2011 7:49 PM
>>>>To: info at 3dnetproductions.com; 'Sven-Erik Tiberg';
>>>>npolys at vt.edu; x3d-public at web3d.org
>>>>Cc: luca.chittaro at uniud.it
>>>>Subject: Re: [X3D-Public] remote motion invocation
>>>>(RMI)
>>>>
>>>>
>>>>>
>>>>> Hello Joe,
>>>>>
>>>>> While we are on the subject, I have asked you this
>>>>before
>>>>> but I think you missed it. There is a link at
>>>>>
>>>>> http://www.web3d.org/x3d/workgroups/h-anim/
>>>>>
>>>>> for ISO-IEC 19775, X3D Component 26, Humanoid
>>>>> Animation (H-Anim), which describes H-Anim
>>>>> interfaces in terms of the X3D scenegraph...
>>>>>
>>>>> That link is broken. Where else can I obtain this?
>>>>
>>>>go to:
>>>>
>>>>http://www.web3d.org/x3d/specifications/ISO-IEC-19775-1.2-X3D-AbstractSpecification/Part01/components/hanim.html
>>>>
>>>>the latest spec changed base and we lost the link.
>>>>I will tell the webmaster.
>>>>
>>>>> If I am
>>>>> asking the wrong person then, can someone else help?
>>>>
>>>>I think the most current spec should have the same
>>>>link forever.
>>>>
>>>>>
>>>>> Apologies for hijacking this thread, maybe I am not
>>>>> getting this right, but I have difficulties getting
>>>>> my head around how server-side rasterization can
>>>>> work with X3D other than with the MovieTexture node.
>>>>
>>>>Well, I think server-side rasterization could work
>>>>when you have a simple mesh avatar and all you want to
>>>>do is just show apparent vertices move, so you send
>>>>sort of a set of GIFs on a billboard down to the
>>>>client. Don't get it? Me either. :)
>>>>On the other hand, I could see a setup where live
>>>>joint rotation and displacement data was streamed into
>>>>the scene and routed to joints to affect the humanoid
>>>>in real time, rather than using client-side
>>>>interpolators and combiners. I think even mobile will
>>>>always have the power to do reasonable H-Anim skin and
>>>>bone characters, but low-end performance devices may
>>>>need a simpler character with fewer joints, a more
>>>>open mesh, and different textures.
>>>>
>>>>
>>>>>
>>>>> Thanks,
>>>>> Lauren
>>>>>
>>>>
>>>>Thanks and Best Regards,
>>>>Joe
>>>>
>>>>>
>>>>>>-----Original Message-----
>>>>>>From: x3d-public-bounces at web3d.org
>>>>>>[mailto:x3d-public-bounces at web3d.org] On Behalf Of
>>>>>>Joe D Williams
>>>>>>Sent: Tuesday, March 01, 2011 12:09 PM
>>>>>>To: Sven-Erik Tiberg; npolys at vt.edu;
>>>>>>x3d-public at web3d.org
>>>>>>Cc: luca.chittaro at uniud.it
>>>>>>Subject: Re: [X3D-Public] remote motion invocation
>>>>>>(RMI)
>>>>>>
>>>>>>
>>>>>>----- Original Message -----
>>>>>>From: "Sven-Erik Tiberg" <Sven-Erik.Tiberg at ltu.se>
>>>>>>To: <npolys at vt.edu>; <x3d-public at web3d.org>
>>>>>>Cc: <luca.chittaro at uniud.it>
>>>>>>Sent: Tuesday, March 01, 2011 6:14 AM
>>>>>>Subject: Re: [X3D-Public] remote motion invocation
>>>>>>(RMI)
>>>>>>
>>>>>>
>>>>>>> Hi
>>>>>>>
>>>>>>> A note.
>>>>>>> Would it be possible to create an object similar
>>>>>>> to dynamic H-Anim, looking and behaving like a
>>>>>>> moose?
>>>>>>
>>>>>>The basis for H-Anim comes from general-purpose
>>>>>>hierarchical animation needs. That is, the shoulder
>>>>>>joint is connected to the elbow joint, for example,
>>>>>>such that when the shoulder joint is moved, the
>>>>>>elbow joint is translated accordingly, as expected.
>>>>>>To follow the same technique for a different
>>>>>>character, just position the joints and segments you
>>>>>>choose to use, depending upon the level of
>>>>>>articulation you need, in skeleton space, then build
>>>>>>the skin in skin space (hopefully the same as
>>>>>>skeleton space), connect it up, and animate the
>>>>>>thing.
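The shoulder-carries-the-elbow behavior Joe describes is plain forward kinematics, and it works the same for a moose leg as for a human arm. A toy 2D sketch (segment lengths and joint names are made up; H-Anim itself uses 3D axis-angle rotations):

```python
import math

# Toy 2D forward kinematics: rotating the shoulder joint carries the
# elbow (and everything below it) along, which is the hierarchical
# behavior described above.

def rotate(point, angle):
    """Rotate a 2D point about the origin by `angle` radians."""
    x, y = point
    c, s = math.cos(angle), math.sin(angle)
    return (c * x - s * y, s * x + c * y)

def arm_positions(shoulder_angle, elbow_angle,
                  upper_len=0.30, fore_len=0.25):
    """Return (elbow_pos, wrist_pos) with the shoulder at the origin."""
    # The elbow hangs off the shoulder; its position depends only on
    # the shoulder rotation.
    elbow = rotate((upper_len, 0.0), shoulder_angle)
    # The wrist hangs off the elbow; both rotations compose.
    wrist_offset = rotate((fore_len, 0.0), shoulder_angle + elbow_angle)
    wrist = (elbow[0] + wrist_offset[0], elbow[1] + wrist_offset[1])
    return elbow, wrist

elbow, wrist = arm_positions(math.pi / 2, 0.0)
```

Rotating only the shoulder by 90 degrees moves the elbow from (0.30, 0) to roughly (0, 0.30) without touching the elbow's own angle, which is exactly the translated-as-expected behavior of the joint hierarchy.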
>>>>>>
>>>>>>>
>>>>>>> No, I'm not kidding; in the future we hope to
>>>>>>> create an X3D scene for a driver simulator with
>>>>>>> "living" moose and reindeer (they are not so
>>>>>>> exotic up here, but quite real and a hazard on the
>>>>>>> roads). Could be that the same motion pattern can
>>>>>>> be used for deer and elk and...
>>>>>>
>>>>>>Possibly. Note that the selection of the initial
>>>>>>binding-time joint rotation is important.
>>>>>>
>>>>>>Good Luck and Best Regards,
>>>>>>Joe
>>>>>>
>>>>>>
>>>>>>>
>>>>>>> /Sven-Erik Tiberg
>>>>>>>
>>>>>>>
>>>>>>> -----Original Message-----
>>>>>>> From: x3d-public-bounces at web3d.org
>>>>>>> [mailto:x3d-public-bounces at web3d.org] On Behalf Of
>>>>>>> Sven-Erik Tiberg
>>>>>>> Sent: 1 March 2011 08:51
>>>>>>> To: npolys at vt.edu; x3d-public at web3d.org
>>>>>>> Subject: Re: [X3D-Public] remote motion invocation
>>>>>>> (RMI)
>>>>>>>
>>>>>>>
>>>>>>> If there is a need for extending the H-Anim
>>>>>>> kinematic capacity, I would have a look at
>>>>>>> Modelica https://www.modelica.org/ and
>>>>>>> (OpenModelica) https://www.modelica.org/ for
>>>>>>> calculation of dynamic behavior.
>>>>>>>
>>>>>>> IMHO it would not be impossible to interface an
>>>>>>> OpenModelica model that runs in real time to an
>>>>>>> X3D scene.
>>>>>>>
>>>>>>> On animation of humans, I would like to suggest
>>>>>>> that you take a look at
>>>>>>> http://hcilab.uniud.it/index.html and its editor
>>>>>>> H-Animator
>>>>>>> http://hcilab.uniud.it/demos-videos/item11.html
>>>>>>>
>>>>>>> /Sven-Erik Tiberg
>>>>>>> Lulea Univ of Technology
>>>>>>> Sweden
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> -----Original Message-----
>>>>>>> From: x3d-public-bounces at web3d.org
>>>>>>> [mailto:x3d-public-bounces at web3d.org] On Behalf Of
>>>>>>> Nicholas F. Polys
>>>>>>> Sent: 28 February 2011 21:34
>>>>>>> To: x3d-public at web3d.org
>>>>>>> Subject: Re: [X3D-Public] remote motion invocation
>>>>>>> (RMI)
>>>>>>>
>>>>>>> One of our members was working on this spec some
>>>>>>> time ago (2001); perhaps a good integration target
>>>>>>> with H-Anim for such a platform or communication
>>>>>>> layer: HumanML
>>>>>>> http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=humanmarkup
>>>>>>>
>>>>>>> It does mention kinesthetics in the overview, but
>>>>>>> I am really not sure where it has been adopted.
>>>>>>> ?
>>>>>>>
>>>>>>> br,
>>>>>>>
>>>>>>> _n_polys
>>>>>>>
>>>>>>> On 2/28/2011 1:54 PM, Joe D Williams wrote:
>>>>>>>>> And what if the script could receive states
>>>>>>>>> (ROUTED events) from the 3D browser and do an
>>>>>>>>> animation on the fly using inverse kinematics;
>>>>>>>>> that would be cool.
>>>>>>>>
>>>>>>>> I think the basic needed structure for that is
>>>>>>>> in the X3D H-Anim spec.
>>>>>>>>
>>>>>>>> Best Regards,
>>>>>>>> Joe
>>>>>>>>
>>>>>>>> ----- Original Message ----- From: "Sven-Erik
>>>>>>>> Tiberg" <Sven-Erik.Tiberg at ltu.se>
>>>>>>>> To: "John Carlson" <john.carlson3 at sbcglobal.net>;
>>>>>>>> <x3d-public at web3d.org>
>>>>>>>> Sent: Sunday, February 27, 2011 11:50 PM
>>>>>>>> Subject: Re: [X3D-Public] remote motion
>>>>>>>> invocation (RMI)
>>>>>>>>
>>>>>>>>
>>>>>>>>> What if we used a motion engine external to the
>>>>>>>>> browser, as you mention, but hosted the motion
>>>>>>>>> engine in the client computer, communicating new
>>>>>>>>> states to the 3D browser (to an H-Anim model)
>>>>>>>>> through a script?
>>>>>>>>> And what if the script could receive states
>>>>>>>>> (ROUTED events) from the 3D browser and do an
>>>>>>>>> animation on the fly using inverse kinematics;
>>>>>>>>> that would be cool.
>>>>>>>>>
>>>>>>>>> Thanks for bringing up this subject.
>>>>>>>>>
>>>>>>>>> /Sven-Erik Tiberg
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> -----Original Message-----
>>>>>>>>> From: x3d-public-bounces at web3d.org
>>>>>>>>> [mailto:x3d-public-bounces at web3d.org] On Behalf
>>>>>>>>> Of John Carlson
>>>>>>>>> Sent: 26 February 2011 05:49
>>>>>>>>> To: <x3d-public at web3d.org> mailing list
>>>>>>>>> Subject: [X3D-Public] remote motion invocation
>>>>>>>>> (RMI)
>>>>>>>>>
>>>>>>>>> Instead of downloading motion into a 3D scene,
>>>>>>>>> how about uploading motion into a remote social
>>>>>>>>> network server or robot? One could upload
>>>>>>>>> motions to a remote server, and then invoke them
>>>>>>>>> with parameters. One could create classes
>>>>>>>>> aggregating motion methods, and instantiate
>>>>>>>>> classes to create behaviors in avatars or
>>>>>>>>> robots.
>>>>>>>>>
>>>>>>>>> What if H-Anim was used to upload and control
>>>>>>>>> avatars remotely, instead of downloaded models?
>>>>>>>>> What I am thinking of is something like
>>>>>>>>> Kinect/PrimeSense for primitive motion input + a
>>>>>>>>> programming language for
>>>>>>>>> repetition/parameterization/classes/instantiation.
>>>>>>>>> What if I chose a protocol/programming language
>>>>>>>>> such as JavaScript?
>>>>>>>>>
>>>>>>>>> Does DIS do this? Are there any patents to be
>>>>>>>>> aware of? If we use something like Caja on the
>>>>>>>>> server, or sandbox the motion on the server,
>>>>>>>>> could we ensure the security of such an
>>>>>>>>> endeavor?
>>>>>>>>>
>>>>>>>>> Initially, I am thinking only of video download.
>>>>>>>>>
>>>>>>>>> Thanks,
>>>>>>>>>
>>>>>>>>> John Carlson
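John's "classes aggregating motion methods" invoked remotely with parameters can be sketched in a few lines. Everything here is hypothetical illustration (the class, the JSON wire format, and the `wave` motion are invented; none of it comes from DIS, H-Anim, or any existing server):

```python
import json

# Hypothetical sketch of "remote motion invocation": motions are
# registered on the server by name and invoked with parameters sent
# as small JSON messages, instead of streaming raw joint data.

class MotionLibrary:
    """Server-side registry of named, parameterized motions."""

    def __init__(self):
        self._motions = {}

    def register(self, name, func):
        self._motions[name] = func

    def invoke(self, message):
        """Dispatch a message like {"motion": "wave", "params": {...}}."""
        request = json.loads(message)
        func = self._motions[request["motion"]]
        return func(**request.get("params", {}))

# Example motion: returns (time, joint, angle) keyframes the server
# could route to an avatar's joints.
def wave(cycles=1, joint="r_elbow"):
    keys = []
    for i in range(cycles):
        keys.append((i + 0.0, joint, 0.0))
        keys.append((i + 0.5, joint, 1.2))
    return keys

library = MotionLibrary()
library.register("wave", wave)
frames = library.invoke('{"motion": "wave", "params": {"cycles": 2}}')
```

Sandboxing, as John suggests, would then amount to restricting what registered motion functions are allowed to do on the server.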
>>>>>>>
>>>>>>> --
>>>>>>> Nicholas F. Polys Ph.D.
>>>>>>>
>>>>>>> Director of Visual Computing
>>>>>>> Virginia Tech Information Technology
>>>>>>>
>>>>>>> Affiliate Research Professor
>>>>>>> Virginia Tech Computer Science
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>
>>
>>



_______________________________________________
X3D-Public mailing list
X3D-Public at web3d.org
http://web3d.org/mailman/listinfo/x3d-public_web3d.org






