[X3D-Public] remote animation invocation (RAI)

John Carlson john.carlson3 at sbcglobal.net
Sat Feb 26 09:33:24 PST 2011


When I say "uploading motion"  I mean creating a library of motions available inside a robot (or say on roboearth.org) or inside avatar on a social network server where collision detection, rendering, etc would take place on the server.  The motion designer would take primitive motions and compose them into more complex behaviors (kind of like scripting in MMOs, with the addition of transforms and interpolators).

When I say "downloading motion"  I mean the typical X3D animation, perhaps H-Anim (but I don't know if H-Anim actually supports animation, or if it uses X3D animation).

I think that Kinect might have some advantages for assigning a sign (identifier) to a sequence of behaviors.  That's why I mentioned it.  I'd still have to figure out some way to pass parameters, etc.  I am thinking of Kinect as a programming tool.
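
To make the parameter-passing question concrete: a sign recognized by Kinect could map to a behavior name, and the invocation could be a small message sent to the server.  Something like the sketch below, where the /invoke endpoint, the JSON shape and the behavior name are all made up for illustration:

// Hypothetical sketch only: invoking a named, parameterized motion on a
// remote avatar/robot server.  Endpoint and message shape are invented.
interface MotionInvocation {
  behavior: string;                // identifier the sign maps to
  params: Record<string, number>;  // e.g. repetitions, speed
}

async function invokeMotion(serverUrl: string, call: MotionInvocation): Promise<void> {
  const response = await fetch(`${serverUrl}/invoke`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(call),
  });
  if (!response.ok) throw new Error(`invocation failed: ${response.status}`);
}

// A recognized sign becomes a named behavior plus parameters.
invokeMotion("https://example.org/avatars/john", {
  behavior: "waveRight",
  params: { repetitions: 3, speed: 1.5 },
}).catch(console.error);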

I might be wrong that video download is the way to go.  It just seems very natural.

On Feb 26, 2011, at 6:10 AM, Christoph Valentin wrote:

> I'm not sure if I understand what you mean by "downloading motion" and "uploading motion".
> 
> In a classic virtual world, you download an avatar with some potential(!) gestures/motions.
> You use some input device (e.g. a mouse) to trigger a concrete(!) gesture/motion.
> The concrete gesture/motion is uploaded to a server and distributed from the pilot to the slaves.
> Imho, Kinect does not change this situation qualitatively, but it does quantitatively (the set of potential gestures/motions is larger).
> 
>> -------- Original Message --------
>> Date: Fri, 25 Feb 2011 20:49:18 -0800
>> From: John Carlson <john.carlson3 at sbcglobal.net>
>> To: "<x3d-public at web3d.org> mailing list" <x3d-public at web3d.org>
>> Subject: [X3D-Public] remote motion invocation (RMI)
>> 
>> 
>> Instead of downloading motion into a 3D scene, how about uploading motion into a remote social network server or robot? One could upload motions to a remote server and then invoke them with parameters. One could create classes aggregating motion methods and instantiate those classes to create behaviors in avatars or robots.
>> 
>> What if H-Anim were used to upload and control avatars remotely, instead of just downloading models? What I am thinking of is something like Kinect/PrimeSense for primitive motion input, plus a programming language for repetition/parameterization/classes/instantiation. What if I chose a protocol/programming language such as JavaScript?
>> 
>> Does DIS do this? Are there any patents to be aware of? If we use something like Caja on the server, or sandbox the motion on the server, could we ensure the security of such an endeavor?
>> 
>> Initially, I am thinking only of video download.
>> 
>> Thanks,
>> 
>> John Carlson
>> _______________________________________________
>> X3D-Public mailing list
>> X3D-Public at web3d.org
>> http://web3d.org/mailman/listinfo/x3d-public_web3d.org


