[X3D-Public] remote animation invocation (RAI)

GLG info at 3dnetproductions.com
Sat Mar 19 02:08:29 PDT 2011

I just came across this document, which I am including in
this thread because of its relevance to what was discussed.



Cong Ye
Computer Graphics and Virtual Reality Research Group
The Department of Computer Science, Regent Court
211 Portobello, Sheffield
S1 4DP. UK


>-----Original Message-----
>From: x3d-public-bounces at web3d.org [mailto:x3d-public-
>bounces at web3d.org] On Behalf Of Joe D Williams
>Sent: Thursday, March 10, 2011 12:51 PM
>To: John Carlson
>Cc: x3d-public at web3d.org
>Subject: Re: [X3D-Public] remote animation invocation
>> Is there something wrong with sending motion/animation?
>Hi again John and All,
>How about proposing that the 'animation' for an avatar will
>naturally be produced using hardware acceleration (parallel
>processing) as the main means of moving whatever items
>constitute the character? So, an animation 'routine' might be
>sent in highly compressed form, ready to use with relatively
>light loading of the client. With current and future access to
>WebGL, OGL, and OCL, along with integration of the H-Anim model
>with physics processing, this seems a developing technical path.
>Thanks to All and Best Regards,
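[Editor's note: as an illustration of the compact "animation routine" idea above, here is a minimal sketch. The keyframe format, joint names, and function are hypothetical, invented for this example; they are not part of H-Anim or X3D. A real player would interpolate quaternions per H-Anim Joint, likely on the GPU.]

```javascript
// Hypothetical compact animation "routine": per-joint keyframe tracks.
// Times are in seconds; values are single rotation angles in radians,
// kept scalar here purely for brevity.
const routine = {
  duration: 2.0,
  tracks: {
    l_shoulder: { times: [0.0, 1.0, 2.0], angles: [0.0, 1.0, 0.0] },
    r_elbow:    { times: [0.0, 2.0],      angles: [0.0, 0.5] },
  },
};

// Sample every track at time t with linear interpolation, producing
// the joint angles a client would feed to its animated skeleton.
function sampleRoutine(routine, t) {
  const pose = {};
  for (const [joint, track] of Object.entries(routine.tracks)) {
    const { times, angles } = track;
    if (t <= times[0]) { pose[joint] = angles[0]; continue; }
    if (t >= times[times.length - 1]) {
      pose[joint] = angles[angles.length - 1];
      continue;
    }
    let i = 0;
    while (times[i + 1] < t) i++; // find the bracketing keyframes
    const f = (t - times[i]) / (times[i + 1] - times[i]);
    pose[joint] = angles[i] + f * (angles[i + 1] - angles[i]);
  }
  return pose;
}
```

The point of the sketch is the payload shape: a few keyframes per joint is far smaller than per-frame pose data, and the client expands it locally.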
>----- Original Message -----
>From: "John Carlson" <john.carlson3 at sbcglobal.net>
>To: "Christoph Valentin" <christoph.valentin at gmx.at>
>Cc: <x3d-public at web3d.org>
>Sent: Saturday, February 26, 2011 3:39 PM
>Subject: Re: [X3D-Public] remote animation invocation
>> On Feb 26, 2011, at 12:38 PM, Christoph Valentin wrote:
>>> Thanks for the clarification - so you mean "upload" more or
>>> less as part of avatar authoring, but not as part of avatar
>>> operation,
>> "upload" is like teaching.  "download" is like learning.  I see
>> operating as teaching primitives (this is for the people).  I
>> see authoring as teaching collections which contain primitives.
>> There will always be primitive operations and data.  And we can
>> always author more complex operations and data structures on
>> top of them.  Also, authoring (a live performance) is something
>> you typically do in private; publishing (a performance or a
>> production) is done in public.  I'm divided on canned
>> performance versus live performance--seems like the lines are
>> blurring.  How about a play (or virtual world) where the actors
>> are interacting with video?  Example: MST3K.
>>> Additionally, when you write
>>> >>>> I might be wrong that video download is the way to go.
>>> >>>> It just seems very natural.
>>> What do you mean:
>>> (a) Rendering in the network as opposed to (b) rendering in
>>> the user equipment?
>> Rendering on a super computer or massive GPU farm (see
>> http://www.onlive.com), then sending compressed/lossy video to
>> the user's TV.  I believe the PC/Desktop is going to be
>> obsolete soon--only kept around for people like me with speech
>> recognition.  If you insist, we can send
>> JavaScript/WebGL/XML3D/X3DOM to the user equipment--whatever
>> floats your boat.  However, let me point out that vector and
>> character displays aren't used very much any more.  Raster is
>> where it's at.  You might make an argument for variable bit
>> rates or frames, which I think is interesting.  Generally,
>> there are three types of visual data--pixels, shapes, and
>> text--with a number of dimensions for each.  Popular formats
>> (PostScript, PDF, Flash, HTML w/ SVG) use all three (can you
>> see using closed captioning for a chat system?).  I believe
>> MPEG already has X3D in it.
>>> (a) downloading (stereo) video from the network, uploading
>>> raw user input, rendering in the network?
>>> (b) downloading scene (updates) from the network, exchanging
>>> state updates between user equipment and network, rendering
>>> in the user equipment?
>>> If you mean this, and if you assume bandwidth between user
>>> equipment and network being a scarce resource, then in my
>>> humble opinion the conclusion is:
>>> Both (a) and (b) are necessary, (a) for complex scenery and
>>> high speed movement, (b) for simple scenery and low speed
>>> movement.
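[Editor's note: a back-of-envelope calculation makes the (a)-versus-(b) bandwidth trade-off concrete. All figures below are illustrative assumptions chosen for the example, not measurements.]

```javascript
// (a) Streaming compressed video: a 720p H.264-class stream is on the
// order of a few megabits per second, regardless of scene complexity.
const videoBitsPerSecond = 3_000_000; // ~3 Mbit/s, assumed

// (b) Exchanging state updates: assume 50 animated joints, each sending
// a rotation of 4 floats (16 bytes), at 30 updates per second.
const joints = 50;
const bytesPerJoint = 16;
const updatesPerSecond = 30;
const stateBitsPerSecond = joints * bytesPerJoint * updatesPerSecond * 8;

console.log(`video: ${videoBitsPerSecond / 1e6} Mbit/s`);
console.log(`state: ${stateBitsPerSecond / 1e6} Mbit/s`); // 0.192 Mbit/s
```

Under these assumptions, state updates need roughly 1/15th the bandwidth of video, but push the rendering cost onto the user equipment; video streaming holds bandwidth constant while rendering complexity grows, which matches the "(a) for complex scenery" conclusion above.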
>> I think that updates can be handled well with MPEG.  However,
>> I don't know how MPEG works with panning and zooming.  Perhaps
>> the user equipment will be able to reconstruct the 3D scene
>> from video.
>> I don't see bandwidth between user equipment and the network
>> being a scarce resource.  If necessary, we can use a
>> combination of satellite, cellular, fiber optics, cable, etc.
>> to solve the problem.  I believe the next boom will be a
>> network boom as people demand higher network bandwidth.  We
>> can't really speed up the user's PCs any more than they are
>> now (except make things more efficient on the software side).
>> Hopefully, our governments will work toward more competition
>> in the network to speed things up.
>> It's likely that servers will move closer to the users--that
>> there will be a concept of "community servers"--I'm speaking
>> geographically now.  I'm not predicting the demise of 3D
>> graphics--the world we see is 3D objects--it's just that our
>> impression of them is video (eyesight).
>> Can we think about sending something besides text across the
>> network?  Is there something wrong with sending
>> motion/animation?
>> John
>> _______________________________________________
>> X3D-Public mailing list
>> X3D-Public at web3d.org
>> http://web3d.org/mailman/listinfo/x3d-public_web3d.org
-------------- next part --------------
A non-text attachment was scrubbed...
Type: application/pdf
Size: 184990 bytes
Desc: not available
URL: <http://web3d.org/pipermail/x3d-public_web3d.org/attachments/20110319/5c76934a/attachment-0001.pdf>
