[X3D-Public] X3D multi-user VR and WebRTC

Christoph Valentin christoph.valentin at gmx.at
Sun Jan 26 14:52:17 PST 2014


Hi Cecile

Thanks for this information.

By the way, my intuition would be the other way round.

It's just a conclusion by analogy, but when comparing MU services with classical telephony services, I would

a) compare a multiuser session with a conference call (all participants of the call get the "same" virtual reality)

b) compare a collaboration server (which mixes event streams) with a conference bridge (which mixes audio streams)

Hence I would rather assign the collaboration server to the media plane.

Just a gut feeling, as you said.
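
(To illustrate what I mean by "mixing event streams", here is a rough sketch of my own; the names are made up and it is not any existing API: the collaboration server accepts state-change events from each participant and fans them back out to everyone else, much like a conference bridge mixes audio.)

interface SharedStateEvent {
  objectId: string;   // e.g. "door_1"
  field: string;      // e.g. "open" or "velocity"
  value: unknown;     // e.g. true, or a velocity vector
  sender: string;     // participant that produced the event
}

class CollaborationBridge {
  private participants = new Map<string, (e: SharedStateEvent) => void>();

  join(id: string, deliver: (e: SharedStateEvent) => void): void {
    this.participants.set(id, deliver);
  }

  leave(id: string): void {
    this.participants.delete(id);
  }

  // "Mix" the event streams: forward every event to all other participants,
  // so that everybody ends up with the same shared virtual reality.
  publish(event: SharedStateEvent): void {
    for (const [id, deliver] of this.participants) {
      if (id !== event.sender) deliver(event);
    }
  }
}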

Thanks again for the information

All the best
Christoph

--------------------------------------------
> Let me get back to this WebRTC issue. I have a question, and probably somebody on this list knows the answer, therefore I'm cc'ing the list :-)
> WebRTC defines the media plane and a description of the media plane (voice, video, ...).
> WebRTC leaves the signalling plane open to the application as much as possible (Jingle, SIP, ...).
> [...]
> When seen as part of the media plane, it would mean that WebRTC should feel responsible for defining a standard, imho.
 
It seems WebRTC aims to be agnostic about what it transports, so I'm not sure they'd want to add something specifically for 3D MU (although they might be receptive to an abstract multiuser proposal if there is strong enough demand).
 
 
> Now, a question from the network perspective: when we talk about a 3D MU scene, e.g. games, do we see the shared state of the scene
> (door open, door closed, velocity of a car, ...) rather as a part of the media plane or as a part of the signalling plane?
 
One thing to consider is that the signalling could go either through a WebSocket or a WebRTC DataChannel, whereas you'd probably send the media plane via WebRTC itself (either using a MediaStream, to take advantage of the already-optimized audio/video codecs over UDP, or a DataChannel for arbitrary binary data over SCTP).
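
To make that concrete, here is a rough browser-side sketch (my own illustration; the signalling URL, the message shapes, the STUN server and the channel label are all made up): signalling messages travel over a WebSocket, the shared scene state over an RTCDataChannel (SCTP), and audio/video over MediaStream tracks.

// Rough sketch only; the signalling endpoint and message format are invented.
async function joinMultiuserSession(): Promise<void> {
  // Signalling plane: left to the application, here a plain WebSocket.
  const signalling = new WebSocket("wss://example.org/signalling");
  await new Promise<void>((resolve) => { signalling.onopen = () => resolve(); });

  // Placeholder STUN server.
  const pc = new RTCPeerConnection({ iceServers: [{ urls: "stun:stun.example.org" }] });

  // Shared scene state as arbitrary data over SCTP.
  const stateChannel = pc.createDataChannel("scene-state");
  stateChannel.onmessage = (ev) => console.log("shared-state event:", ev.data);

  // Audio/video on the media plane, using the built-in, already-optimized codecs.
  const media = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
  for (const track of media.getTracks()) pc.addTrack(track, media);

  // Offer/answer and ICE candidates go over the WebSocket,
  // since WebRTC leaves the signalling plane to the application.
  pc.onicecandidate = (ev) => {
    if (ev.candidate) signalling.send(JSON.stringify({ type: "candidate", candidate: ev.candidate }));
  };
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  signalling.send(JSON.stringify({ type: "offer", sdp: offer.sdp }));
}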
 
A gut feeling makes me lean towards signalling, given that it could be influenced by the distribution architecture (e.g. how to check that a user has permission to modify an object's shared state), but there are arguments either way, and I'm sure the people from the WebRTC working group would have a much smarter answer :)
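
For what it's worth, such a permission check might look roughly like this (a hypothetical sketch with an invented ACL layout and function names, not anything from a spec): the distribution side only applies and rebroadcasts a state change if the sender is allowed to modify that object.

// Hypothetical sketch of a permission check on the distribution side.
type Permission = "read" | "modify";

// Per-object access-control list: which participant may do what (invented layout).
const acl = new Map<string, Map<string, Set<Permission>>>();

function mayModify(participantId: string, objectId: string): boolean {
  return acl.get(objectId)?.get(participantId)?.has("modify") ?? false;
}

function applySharedStateChange(
  participantId: string,
  objectId: string,
  change: { field: string; value: unknown },
  broadcast: (objectId: string, change: { field: string; value: unknown }) => void
): void {
  // Only accepted changes are forwarded, so the shared state stays consistent.
  if (!mayModify(participantId, objectId)) return;
  broadcast(objectId, change);
}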
 
 
By the way, Microsoft has been pushing its Object RTC (http://www.html5labs.com) alternative/wrapper to WebRTC, so they might also have some ideas on the topic, although from what I've gathered they seem more focused on the audio/video part than on data.
 
 
See you,
Cecile


