[X3D-Public] X3D multi-user VR and WebRTC
newsletter at wildpeaks.fr
Sat Jan 25 22:42:15 PST 2014
> Let me get back to this WebRTC issue. I have a question that somebody on
> this list can probably answer, so I'm cc'ing the list :-)
> WebRTC defines the media plane and the description of the media plane
> (voice, ...). WebRTC leaves the signalling plane as open to the
> application as possible (Jingle, SIP, ...).
> When seen as part of the media plane, it would mean WebRTC should feel
> responsible for defining a standard, imho.
It seems WebRTC aims to be agnostic regarding what it transports, so I'm
not sure they'd want to add something specifically for 3D MU (although they
might be receptive to an abstract multiuser proposal if there is a strong
...).
> Now, a question from the network perspective. When we talk about a 3D MU
> scene, e.g. games, do we see the shared state of the scene (door open,
> door closed, velocity of a car, ...) as part of the media plane or as
> part of the signalling plane?
One thing to consider is that the signalling could go through either a
WebSocket or a WebRTC DataChannel, whereas you'd probably send the media
plane via WebRTC itself (either using a MediaStream to take advantage of
the already-optimized audio/video codecs over UDP, or a DataChannel for
arbitrary binary data over SCTP).
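To make the "arbitrary binary over a DataChannel" option concrete, here is a
minimal sketch of packing a shared-state update (say, "door 42 is now open")
into a compact binary message. The opcode, field layout, and channel name are
all assumptions for illustration, not any existing standard:

```javascript
// Hypothetical wire format: 1-byte opcode, 4-byte object id, 1-byte value.
const OP_SET_BOOL = 1; // assumed opcode for a boolean state change

function encodeBoolUpdate(objectId, value) {
  const buf = new ArrayBuffer(6);
  const view = new DataView(buf);
  view.setUint8(0, OP_SET_BOOL);
  view.setUint32(1, objectId);        // big-endian by default
  view.setUint8(5, value ? 1 : 0);
  return buf;
}

function decodeBoolUpdate(buf) {
  const view = new DataView(buf);
  if (view.getUint8(0) !== OP_SET_BOOL) throw new Error("unexpected opcode");
  return { objectId: view.getUint32(1), value: view.getUint8(5) === 1 };
}

// In a browser, a low-latency channel for such updates might be created with:
//   const channel = pc.createDataChannel("state", { ordered: false, maxRetransmits: 0 });
//   channel.send(encodeBoolUpdate(42, true));
```

The `{ ordered: false, maxRetransmits: 0 }` options are standard RTCDataChannel
settings that trade reliability for latency, which suits frequently-overwritten
state like velocities; the channel name "state" is just a placeholder.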
My gut feeling leans towards signalling, given that it could be influenced
by the distribution architecture (e.g. how to check that a user has
permission to modify an object's shared state), but there are arguments
either way, and I'm sure the people from the WebRTC working group would
have a much smarter answer :)
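As a rough illustration of that permission argument: if state changes travel
through the signalling server, it can validate them against an access-control
list before relaying them to the other peers. This is only a sketch under
assumed names (the ACL shape, `makeStateRelay`, and the update fields are all
hypothetical):

```javascript
// Hypothetical signalling-side check: relay a state update only if the
// sending user is allowed to modify that object.
function makeStateRelay(acl) {
  // acl: Map from objectId to a Set of userIds allowed to modify it
  return function relay(update, broadcast) {
    const allowed = acl.get(update.objectId);
    if (!allowed || !allowed.has(update.userId)) {
      return false;            // reject: user may not modify this object
    }
    broadcast(update);         // accept: forward to the other peers
    return true;
  };
}
```

A pure peer-to-peer media-plane path would make this kind of central check
harder, which is one reason the shared state feels more like signalling.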
By the way, Microsoft has been pushing Object RTC
(http://www.html5labs.com), an alternative/wrapper to WebRTC, so they might
also have some ideas on the topic, although from what I've gathered they
seem more focused on the audio/video side than on data.