[X3D-Public] remote animation invocation (RAI)

John Carlson john.carlson3 at sbcglobal.net
Sat Feb 26 15:39:46 PST 2011

On Feb 26, 2011, at 12:38 PM, Christoph Valentin wrote:

> Thanks for the clarification - so you mean "upload" more or less as part of avatar authoring, but not as part of avatar operation, true?
"upload" is like teaching.  "download" is like learning.  I see operating as teaching primitives (this is for the hardware/system people).  I see authoring as teaching collections and conditionals which contain primitives.  There will always be primitive operations and data.  And we can always author more complex operations and data structures on top of them.  Also, authoring (live performance) is something you typically do in private, publishing (canned performance or a production) is done in public.  I'm divided on canned performance versus live performance--seems like the lines are blurring.  How about a play (or virtual world) where the actors are interacting with video?  Example: MST3K.

> Additionally, when you write
> >>>> I might be wrong that video download is the way to go.  It just seems very natural.
> What do you mean:
> (a) Rendering in the network as opposed to (b) rendering in the user equipment?
Rendering on a supercomputer or massive GPU farm (see http://www.onlive.com), then sending compressed/lossy video to the user's TV.  I believe the PC/Desktop is going to be history soon--only kept around for people like me with speech difficulty.  If you insist, we can send JavaScript/WebGL/XML3D/X3DOM to the user equipment--whatever floats your boat.  However, let me point out that vector and character displays aren't used very much anymore.  Raster is where it's at.  You might make an argument for variable bit rates or frame rates, which I think is interesting.  Generally, there are three types of visual data--pixels, shapes, and text--with a number of dimensions for each.  Popular formats (PostScript, PDF, Flash, HTML w/ SVG) use all three (can you see using closed captioning for a chat system?).  I believe MPEG already has X3D in it.
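To get a feel for the trade-off between streaming rendered video and sending scene data, here is a back-of-the-envelope comparison.  All of the numbers are illustrative assumptions, not measurements: a typical H.264-class bitrate for 1080p30 video on one side, and a made-up scene of 200 animated objects each sending a 4x4 float32 transform 30 times per second on the other.

```javascript
// (a) server-side rendering streamed as compressed video, vs.
// (b) client-side rendering driven by per-object transform updates.

// Assumed video stream: 1080p at 30 fps, H.264-class compression.
const videoBitsPerSecond = 5e6; // ~5 Mbit/s, a common streaming figure

// Assumed scene updates: 200 animated objects, each sending a
// 4x4 float32 transform (16 values * 4 bytes) 30 times per second.
const objects = 200;
const bytesPerTransform = 16 * 4;
const updatesPerSecond = 30;
const updateBitsPerSecond =
  objects * bytesPerTransform * updatesPerSecond * 8;

console.log(videoBitsPerSecond);  // 5000000 (~5 Mbit/s)
console.log(updateBitsPerSecond); // 3072000 (~3 Mbit/s)
```

Under these particular assumptions the two are within a factor of two of each other, which is one reason the choice is not obvious: compression, object count, and frame rate can push either side ahead.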
> (a) downloading (stereo) video from network, uploading raw user input, rendering in the network?
> (b) downloading scene (updates) from network, exchanging state updates between user equipment and network, rendering in user equipment?
> If you mean this, and if you assume bandwidth between user equipment and network being a scarce resource, then in my humble opinion the conclusion is:
> Both (a) and (b) are necessary, (a) for complex scenery and high-speed movement, (b) for simple scenery and low-speed movement.

I think that updates can be handled well with MPEG.  However, I don't know how MPEG works with panning and zooming.  Perhaps the user equipment will be able to reconstruct the 3D scene from video.  I don't see bandwidth between user equipment and the network being a scarce resource.  If necessary, we can use a combination of satellite, cellular, fiber optics, cable, etc. to solve the problem.  I believe the next boom will be a network boom as people demand higher network bandwidth.  We can't really speed up users' PCs much more than they are now (except by making things simpler/more efficient on the software side).  Hopefully, our governments will work toward more competition in the network to speed things up.  It's likely that servers will move closer to the users--perhaps there will be a concept of "community servers"--I'm thinking geographically now.  I'm not predicting the demise of 3D graphics--the world we see is 3D objects--it's just that our first impression of them is video (eyesight).
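Christoph's hybrid of (a) and (b) could be decided per scene.  As a toy sketch of such a policy (the thresholds and the metres-per-second units are invented for illustration, not taken from any system):

```javascript
// Toy policy for the hybrid approach: stream video for complex,
// fast-moving scenes; send scene updates otherwise.
// Both thresholds below are made-up illustrative values.
function chooseTransport(objectCount, avgMotionSpeed) {
  const COMPLEX_OBJECTS = 10000; // objects in the scene
  const FAST_MOTION = 5.0;       // average speed, say metres/second
  return (objectCount > COMPLEX_OBJECTS || avgMotionSpeed > FAST_MOTION)
    ? "video-stream"    // (a) render in the network
    : "scene-updates";  // (b) render in the user equipment
}

console.log(chooseTransport(50000, 1.0)); // "video-stream"
console.log(chooseTransport(200, 0.5));   // "scene-updates"
```

A real system would presumably switch dynamically as the scene and the user's movement change, rather than picking once.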

Can we think about sending something besides text across the network?  Is there something wrong with sending motion/animation?
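One possible shape for such a "remote animation invocation" message: instead of chat text, send keyframes targeting a node/field in the shared scene.  The `key`/`keyValue` pair follows X3D interpolator conventions, but the message envelope itself (and the `AvatarLeftArm` node name) are invented here purely for illustration.

```javascript
// Sketch of an animation message sent over the network instead of text.
const raiMessage = {
  target: "AvatarLeftArm",     // DEF name of an X3D node (hypothetical)
  field: "rotation",           // field to animate
  key: [0.0, 0.5, 1.0],        // normalized times, as in X3D interpolators
  keyValue: [                  // axis-angle rotations (x, y, z, angle)
    [0, 0, 1, 0.0],
    [0, 0, 1, 0.9],
    [0, 0, 1, 0.0],
  ],
};

// Serialize for the wire; the receiver would feed this to a local
// interpolator rather than replaying video.
const wire = JSON.stringify(raiMessage);
console.log(wire.length > 0); // true
```

The point is that a few hundred bytes of keyframes can drive seconds of motion on the receiving end, which is the bandwidth argument for sending animation rather than pixels.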

