[X3D-Public] remote animation invocation (RAI)

Joe D Williams joedwil at earthlink.net
Sun Mar 20 11:49:08 PDT 2011



>" The other issue is converting VRML to X3D. Because human figure 
>model contains large number of vertex and
> polygons, it is diffult to transfer it to X3D. Most of converting 
> softwarw will meet out of memory problem."

First, I forgot to deal with this sentence: "The other issue is 
converting VRML to X3D," which I took to mean the issue of 
converting Poser and other formats to X3D. I think I explained why 
the files are giant in the conversion from mb and some Poser 
exports to VRML or X3D.
Joe


>
>
> My understanding and experience is that while the internal
> animation details may work by moving joints and bones, the final
> export outputs the entire skin for each 'frame' of the animation.
> While the internal rendering machine may use interpolations of
> joint rotations to compute the skin movement, the animation is
> output as coordinate interpolators, needing the entire skin
> vertex field for each frame. Since the animation is aimed at
> recording, there will be the standard number of video frames per
> second in the animation. This leads to giant files because each
> frame of the animation needs a duplicate of the entire skin
> field.
>
> I hope that is fairly clear. By the time the character is
> exported, the file is giant because, instead of animating joint
> fields, the skin is animated directly in the final output. I
> would suggest using tools designed for realtime animation instead
> of film animation, or changing the settings of the export tool to
> produce joint animations instead of skin coordinate animations.
>
> So maybe this is hard to explain, but if you just look at the
> exported code you can easily see two things: first, that all the
> excess code you have is simple repetition of the skin
> coordinates, one copy for each rendered frame; second, that this
> is not an acceptable technique for realtime animation.
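For contrast, a realtime-friendly export drives the joints rather than the skin: a sparse OrientationInterpolator routed to an HAnimJoint, instead of a CoordinateInterpolator replaying every vertex. Below, a minimal sketch of the kind of X3D fragment such an export could emit is built and checked for well-formedness; the joint name 'r_shoulder' follows H-Anim naming, and the timing and key values are purely illustrative:

```python
# Build and sanity-check a minimal X3D joint-animation fragment:
# a TimeSensor clocks an OrientationInterpolator, whose rotations
# are routed to a single H-Anim joint. Values are illustrative.
import xml.etree.ElementTree as ET

joint_clip = """<Group>
  <TimeSensor DEF='Clock' cycleInterval='2.0' loop='true'/>
  <OrientationInterpolator DEF='ShoulderRot'
      key='0 0.5 1'
      keyValue='0 0 1 0  0 0 1 1.57  0 0 1 0'/>
  <ROUTE fromNode='Clock' fromField='fraction_changed'
         toNode='ShoulderRot' toField='set_fraction'/>
  <ROUTE fromNode='ShoulderRot' fromField='value_changed'
         toNode='r_shoulder' toField='set_rotation'/>
</Group>"""

# Parse to confirm the fragment is well-formed XML.
root = ET.fromstring(joint_clip)
routes = root.findall('ROUTE')
print(len(routes))  # prints 2
```

The whole clip here is a handful of rotation keys and two ROUTEs; the skin mesh itself is stored once and deformed by the browser at runtime.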
>
> Thanks and Best Regards,
> Joe
>
>
>
>
> Also, the output is timed; thus we end up with a complete H-Anim
> skin for each frame.
>
> ----- Original Message ----- 
> From: "GLG" <info at 3dnetproductions.com>
> To: "'Joe D Williams'" <joedwil at earthlink.net>; "'John Carlson'" 
> <john.carlson3 at sbcglobal.net>
> Cc: <x3d-public at web3d.org>
> Sent: Saturday, March 19, 2011 2:08 AM
> Subject: RE: [X3D-Public] remote animation invocation (RAI)
>
>
>>
>> I just came across this document, which I am including in
>> this thread because of its relevancy to what was discussed
>> here.
>>
>> Cheers,
>> Lauren
>>
>>
>> EXPERIMENT OF CONVERTING ANIMATED
>> VIRTUAL CHARACTER INTO X3D/VRML
>>
>> Cong Ye
>> Computer Graphics and Virtual Reality Research Group
>> The Department of Computer Science, Regent Court
>> 211 Portobello, Sheffield
>> S1 4DP. UK
>>
>> http://staffwww.dcs.shef.ac.uk/people/C.Ye/pdf/X3D%20report.pdf
>>
>>
>>>-----Original Message-----
>>>From: x3d-public-bounces at web3d.org [mailto:x3d-public-
>>>bounces at web3d.org] On Behalf Of Joe D Williams
>>>Sent: Thursday, March 10, 2011 12:51 PM
>>>To: John Carlson
>>>Cc: x3d-public at web3d.org
>>>Subject: Re: [X3D-Public] remote animation invocation
>>>(RAI)
>>>
>>>> Is there something wrong with sending motion/animation?
>>>
>>>Hi again John and All,
>>>How about proposing that the 'animation' for an avatar will
>>>naturally be produced using hardware acceleration (parallel
>>>processing) as the main means of moving whatever items
>>>constitute the character? So, an animation 'routine' might be
>>>sent in highly compressed form, ready to use with relatively
>>>light loading of the client. With current and future access to
>>>WebGL, OGL, and OCL, along with integration of the H-Anim model
>>>with physics processing, this seems a naturally developing
>>>technical path.
>>>
>>>Thanks to All and Best Regards,
>>>Joe
>>>
>>>----- Original Message -----
>>>From: "John Carlson" <john.carlson3 at sbcglobal.net>
>>>To: "Christoph Valentin" <christoph.valentin at gmx.at>
>>>Cc: <x3d-public at web3d.org>
>>>Sent: Saturday, February 26, 2011 3:39 PM
>>>Subject: Re: [X3D-Public] remote animation invocation
>>>(RAI)
>>>
>>>
>>>>
>>>> On Feb 26, 2011, at 12:38 PM, Christoph Valentin wrote:
>>>>
>>>>> Thanks for clarification - so you mean "upload" quasi as
>>>>> part of avatar authoring, but not as part of avatar
>>>>> operation, true?
>>>>>
>>>> "upload" is like teaching.  "download" is like
>>>learning.  I see
>>>> operating as teaching primitives (this is for the
>>>hardware/system
>>>> people).  I see authoring as teaching collections and
>>>conditionals
>>>> which contain primitives.  There will always be
>>>primitive operations
>>>> and data.  And we can always author more complex
>>>operations and data
>>>> structures on top of them.  Also, authoring (live
>>>performance) is
>>>> something you typically do in private, publishing
>>>(canned
>>>> performance or a production) is done in public.  I'm
>>>divided on
>>>> canned performance versus live performance--seems like
>>>the lines are
>>>> blurring.  How about a play (or virtual world) where
>>>the actors are
>>>> interacting with video?  Example: MST3K.
>>>>
>>>>> Additionally, when you write
>>>>> >>>> I might be wrong that video download is the way to go.
>>>>> >>>> It just seems very natural.
>>>>>
>>>>> What do you mean:
>>>>> (a) Rendering in the network as opposed to (b) rendering in
>>>>> the user equipment?
>>>>>
>>>> Rendering on a super computer or massive GPU farm (see
>>>> http://www.onlive.com), then sending compressed/lossy video to
>>>> the user's TV.  I believe the PC/Desktop is going to be
>>>> history soon--only kept around for people like me with speech
>>>> difficulty.  If you insist, we can send
>>>> JavaScript/WebGL/XML3D/X3DOM to the user equipment--whatever
>>>> floats your boat.  However, let me point out that vector and
>>>> character displays aren't used very much any more.  Raster is
>>>> where it's at.  You might make an argument for variable bit
>>>> rates or frames, which I think is interesting.  Generally,
>>>> there are three types of visual data: pixels, shapes, and
>>>> text, with a number of dimensions for each.  Popular formats
>>>> (PostScript, PDF, Flash, HTML w/ SVG) use all three (can you
>>>> see using closed captioning for a chat system?).  I believe
>>>> MPEG already has X3D in it.
>>>>> (a) downloading (stereo) video from the network, uploading
>>>>> raw user input, rendering in the network?
>>>>>
>>>>> (b) downloading scene (updates) from the network, exchanging
>>>>> state updates between user equipment and network, rendering
>>>>> in the user equipment?
>>>>>
>>>>> If you mean this, and if you assume bandwidth between user
>>>>> equipment and network being a scarce resource, then in my
>>>>> humble opinion the conclusion is:
>>>>>
>>>>> Both (a) and (b) are necessary, (a) for complex scenery and
>>>>> high-speed movement, (b) for simple scenery and low-speed
>>>>> movement.
>>>>
>>>> I think that updates can be handled well with MPEG.  However,
>>>> I don't know how MPEG works with panning and zooming.  Perhaps
>>>> the user equipment will be able to reconstruct the 3D scene
>>>> from video.  I don't see bandwidth between user equipment and
>>>> the network being a scarce resource.  If necessary, we can use
>>>> a combination of satellite, cellular, fiber optics, cable,
>>>> etc. to solve the problem.  I believe the next boom will be a
>>>> network boom as people demand higher network bandwidth.  We
>>>> can't really speed up users' PCs any more than they are now
>>>> (except make things simpler/more efficient on the software
>>>> side).  Hopefully, our governments will work toward more
>>>> competition in the network to speed things up.  It's likely
>>>> that servers will move closer to the users--perhaps there will
>>>> be a concept of "community servers"--I'm thinking
>>>> geographically now.  I'm not predicting the demise of 3D
>>>> graphics--the world we see is 3D objects--it's just that our
>>>> first impression of them is video (eyesight).
>>>>
>>>> Can we think about sending something besides text
>>>across the
>>>> network?  Is there something wrong with sending
>>>motion/animation?
>>>>
>>>> John
>>>> _______________________________________________
>>>> X3D-Public mailing list
>>>> X3D-Public at web3d.org
>>>> http://web3d.org/mailman/listinfo/x3d-public_web3d.org
>>>
>>>
>>
>
>



