[X3D-Public] interest in network sensor / client based server software

Christoph Valentin christoph.valentin at gmx.at
Fri Jan 10 10:41:41 PST 2014


> Summary of multi-user paradigms:
> MU,SU - multi-user, single user
> 
> X3D specs have 1 viewer per scene, one scene per browser. That's fine for single user (SU).
> For multi-user (MU) there are three possible scenarios:
> 
> 1. Console MU: 
> browser 1:1 scene 1:M user/viewer
> For a single browser to support multiple users (MU) in a single scene, the X3D node specs would need to be extended to allow multiple users per scene. For example, sensor nodes would need to also emit which viewer/user/avatar is interacting with the sensor, and browser developers would need to extend their code to support multiple 'viewers', each attached to a separate input device and window.

Note: for each viewer/user, one of the avatars is a "special" avatar, because it is "his/her" avatar; he/she is "within" that avatar, and it need not/should not be rendered for this viewer/user (even here, an avatar is "something special" and not just "scenery like any other scenery").
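
Just to sketch the kind of sensor extension meant in the quoted paragraph (purely illustrative; the "userId" output and its type are my assumption, nothing like this exists in the current spec):

  <TouchSensor DEF='Lever' description='pull the lever'/>
  <!-- hypothetical extra output, e.g. Lever.userId_changed (SFString), -->
  <!-- telling which viewer/user/avatar produced isActive/touchTime -->

A Script or ROUTE could then dispatch the event to the correct user's logic.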


> 
> 2. HyperBrowser/Mashup MU:  
> browser 1:M scene 1:1 user/viewer
> Browser developers could modify the browser code to render more than one scene into the backbuffer in a mashup. Static scenery could be in one scene, and each user/viewer would be represented in another scene. For a given browser, one scene would be for the current user, and the other user scenes would be rendered as passive geometry, with avatars in place of viewers, and those passive user scenes synchronized across networks directly to active peers or through a server. This scenario would make it easier to show different content in different browsers, with a publish/subscribe paradigm for each scene. No changes would be needed to the X3D specifications, but a browser would render some nodes differently depending on whether they are in the active scene or a passive scene.

quoting: [...]to show different content in different browsers[...]
Question: do you mean: "to show different content in different scenes/for different users"?

> 
> 3. Teleported Avatar MU: 
> browser 1:1 scene 1:1 user/viewer 1:M avatar
> A special avatar Proto/node is included in the user's scene for each other user, and that node is synchronized across networks to the viewers in peer browsers, directly or via a server.
> 
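A rough sketch of how paradigm 3 could look in the local scene (the "RemoteAvatar" proto name and the streamName value are just examples, following the naming I use in my implementation):

  <ProtoInstance name='RemoteAvatar' DEF='AvatarOfUserB'>
    <fieldValue name='streamName' value='lobby-avatar.userB'/>
  </ProtoInstance>
  <!-- inside the proto body, a Network Sensor with the same streamName would -->
  <!-- synchronize the remote user's position/orientation into local Transforms -->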

Question: Would a combination of paradigms 1 + 2, or of paradigms 2 + 3, be possible in one multi-user session? Are other combinations possible?

> 
> >>
> >>>
> >>>> SU issues
> >>>> You have a lot going on. Some of it isn't MU per se. If you want to compose trains of train cars via 'module' protos, or you want to pick up and carry something, or be carried by something, that can be worked out in single-user SU mode.
> >>>> Q. How do you pick up and carry something, or attach one thing to another? Are there GrabSensor / GrabSocket node types?
> >>>
> >>> I'm not sure if containment is an MU issue or an SU issue. In my implementation, I have "moduleName"s and "objId"s that are used in the "streamName" of the network sensor and hence used for identification on the net.
> >>> Example:
> >>> - a car with objId="car" in module "district_12": extended object Id = "district_12-car"
> >>> - the steering of the car: extended object Id = "district_12-car.steering"
> >>> Regarding trains and rail vehicles I'm not yet sure (it's under construction)
> >>>
> >>> Maybe it's possible to derive streamNames from the position in the scene graph; however, I doubt this is easily possible when you add models via Browser.createVrmlFromURL() to <Group> nodes (is it the same child of the Group node in all scene instances? What if you want to use different browsers in different scene instances of the same multi-user session?)
> >>>
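To make this concrete, a minimal sketch of how such an extended object Id ends up in the scene code (the exact Network Sensor proto interface is simplified here; only "streamName" and the example value are meant literally):

  <!-- MIDAS Object "car.steering" in module "district_12" -->
  <ProtoInstance name='NetworkSensor'>
    <fieldValue name='streamName' value='district_12-car.steering'/>
  </ProtoInstance>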
> >
> > MODULES/PROTOS vs SCENE PROTOTYPES
> > A missing concept in web3d is the idea of a scene prototype.
> > <SceneDeclaration>
> > (what we normally have in <Scene/>)
> > </SceneDeclaration>
> > <SceneInstance/> (would instance the scene declaration)
> >
> > This would allow you to define multiple scenes in one scenefile
> > <SceneDeclaration name="Scene1">..</SceneDeclaration>
> > and Anchor around to different scenes, while holding the binary scene prototypes in memory for quick instancing. An Anchor URL would need to express the idea that the scene is ready and waiting for instancing, that it was declared in the same file.
> > <Anchor url='./Scene2' or url='.Scene2' or url='Scene2' /> or something like that.
> >
> > There could be something one level above Scene, like world or universe:
> > <UniverseDeclare name='U1'>
> > <SceneDeclare name='Scene1'/>
> > <SceneDeclare name='Scene2'/>
> > <SceneInstance name='Scene1'/> (needs to be a singleton for normal browsers)
> > </UniverseDeclare>
> > <UniverseInstance name='U1'/>
> >
> > This is a bit more like a mashup browser, or the publish/subscribe model applied to scenes.
> >
> >>
> >> EXTENDED USE SYNTAX:
> >> I haven't looked at NetworkSensor. Is it analogous to SU ProtoInstance, or to SU USE? (USE being when something has already been instanced, and you want to re-use the same instance elsewhere in the scenegraph)
> >> Let's say it's like USE, for example <ProtoInstance USE='DEFname' />
> >> Then syntactically in a .x3d file:
> >> USE='DEFname'
> >> could be extended to:
> >> USE='network_location.browser_instance.scene.DEFname'
> >> or if the thing you want is buried inside a protobody in the target scene:
> >> USE='network_location.browser_instance.scene.PROTOINSTANCE_DEFNAME.PROTOBODY_DEFname'
> >> or if you're not sure where the object is, maybe a jQuery-like search syntax:
> >> USE='$("*TRAINCAR*")[0]'
> >>
> >> Or if the idea behind NetworkSensor is like USE except don't bomb if you can't find the target, then NetworkSensor's URL could be generalized to work like the generalized USE syntax above?
> >>
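If NetworkSensor (or a successor node) accepted such a generalized address, a sketch could look like this (the 'url' field and the address format are hypothetical, reusing the placeholders quoted above and my "district_12-car.steering" example):

  <NetworkSensor url='network_location.browser_instance.scene.district_12-car.steering'/>

with the "don't bomb if not found" behaviour meaning the node simply stays silent until the target becomes reachable.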
> >>
> >>>
> >>>>
> >>>> MBMU - Multiple-Browser, Multiple-User
> >>>> And when we've been talking about MU we've been implying that there are 2
> >>>> processes / 2 web3dbrowser instances running on one or more computers,
> >>>> requiring network communication and scenegraph synchronization between
> >>>> them.
> >>>>
> >>>> SBMU - Single Browser, Multiple-User
> >>>> (is this what you mean by native multiuser?)
> >>>
> >>> No, with "native" multiuser I meant there are no "special" X3D nodes just for MU purposes.
> >>>
> >>>> This is the game-Console-like scenario where a single process/single webbrowser could be made to support multiple users:
> >>>> - multiple windows showing the same scene from a different camera/avatar (not unlike cave or stereo rendering)
> >>>> - multiple input devices each with an ID, and associated with a window/avatar
> >>>> - each window/viewer/viewpoint is navigated by a different input device
> >>>>
> >>>> That would break the x3d specs which talk about viewer, user and avatar 1:1:1 as a singleton in the scene:
> >>>> http://www.web3d.org/files/specifications/19775-1/V3.3/index.html
> >>>
> >>> I meant the use case of a server (running the Web3D browser on a server) and connecting to the users via video streams.
> >>> This would probably save memory on the server when multiple users share the same scene graph.
> >>>
> >>>>
> >>>> Q. what would need to change to support SBMU?
> >>>> - internal to the browser:
> >>>> -- multiple viewer/avatar instances
> >>>> -- multiple input devices with IDs, associate each 1:1 with a viewer/avatar instance
> >>>> - NavigationInfo (associate 1:1 with viewer)
> >>>> - Sensors, i.e. proximity, touch, grab sensor - extend to tell which avatar it's sensing
> >>>> (it might be handy to have an Avatar node type that combines a few things like NavigationInfo, ProximitySensor, GrabSocket(?), currentlyBoundViewpoint, Presentation)
> >>>>
> >>>> Then after getting this SBMU working with your train scenes, whatever is leftover would probably be network stuff.
> >>>>
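Picking up the Avatar node idea from the quoted list (everything here is speculative - node name, fields and children are my assumptions, not spec material):

  <Avatar DEF='User1' inputDeviceId='1'>
    <NavigationInfo type='"WALK"'/>
    <Viewpoint position='0 1.6 10'/>
    <ProximitySensor size='2 2 2'/>
    <!-- plus something like a GrabSocket and the avatar's visible geometry -->
  </Avatar>

One such instance per window/input device would roughly cover the SBMU points listed above.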
> >>>>
> >>>>>
> >>>>>>> another two cents (I know, I am carrying coals to Newcastle)
> >>>>>> No apologies necessary - we love your coal here.
> >>>>> [Christoph:] Thank you, that's inspiring to me :-)
> >>>>>
> >>>>>>> maybe starting with requirements:
> >>>>>>>
> >>>>>>> What could the requirements look like? (just a suggestion, not a proposal)
> >>>>>>>
> >>>>>>> a) A model can run in different MU scenes from different authors without change
> >>>>>>> b) Usage of a scene can be restricted to subscribed scene users
> >>>>>>> c) Usage of a model can be restricted to subscribed scene authors
> >>>>>>> d) Usage of a model can be restricted to subscribed scene users
> >>>>>>> e) A multiuser session can simultaneously run on different Web3D Browsers for different users
> >>>>>>> f) ....and so on
> >>>>>>
> >>>>>> When I was thinking about these different scenarios, I thought for different apps you'd want something different, or perhaps whoever starts a new empty world could decide, and more flexible apps could pick up on it parametrically rather than hard-coding it. I called it 'publish/subscribe policy'. I was thinking it wouldn't be 'secure' and enforced against intruders, but rather each peer/client would enforce the policy on itself. Let's say a peer publishes content but without an interactivity attribute. The subscribing client would still get the sensor nodes, but when rendering the scenegraph for that content, it would check the content flag for interactivity and, if false, not allow the client avatar to click it.
> >>>>>
> >>>>>
> >>>>> [Christoph:] This way, the publisher of the content must rely on the Browser enforcing the policy. If I were a publisher, I would probably trust the server that manages my content, but I would never trust any browser.
> >>>>> Maybe better: modify the content on the server side and download only the features of the content that are approved by the policy.
> >>>>>
> >>>>>
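To illustrate the client-side check described in the quote: published content could carry its policy as metadata, e.g. with the standard MetadataBoolean node (the field name 'interactive' and the browser actually honouring it are assumptions of this sketch):

  <Group DEF='ContentFromPeerB'>
    <MetadataBoolean containerField='metadata' name='interactive' value='false'/>
    <!-- the subtree still contains its TouchSensors etc., but a subscribing -->
    <!-- browser that honours the flag would ignore clicks from the local avatar -->
  </Group>

Server-side filtering, as I suggested above, would instead strip or disable those sensors before download.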
> >>>>>>
> >>>>>> You have scene and model, and you mentioned modules before. So there are different possible granularities of publish/subscribe content aggregation. I was thinking of coarse granularity - content unit ~= web3d scene file - for easy coding in freewrl. But perhaps it could work like nested nodes in a scenegraph: each node inherits publish/subscribe attributes from its parent node.
> >>>>>
> >>>>>
> >>>>> [Christoph:] A few words about "modules", as I defined them in the experimental SMUOS Framework.
> >>>>> A module is "something" that provides a minimum interface (specified fields) and that contains
> >>>>> and initializes an instance of the "Module Coordinator" (X3D prototype).
> >>>>> The Module Coordinator coordinates all MIDAS Objects of a module.
> >>>>> a) I came to the idea of modules because model railroads are sometimes built from modules, too.
> >>>>> Each module can be provided by a different person, and sometimes the model railroaders gather and
> >>>>> plug their modules together.
> >>>>> b) Same idea here: each module comes from a different author, and modules can be "plugged" together by the person
> >>>>> who provides the overall scene to their users.
> >>>>> c) The person who provides the overall scene to their users has to set "moduleName".
> >>>>> d) Models contain MIDAS Objects for instrumenting the functionality (e.g. steering of a car)
> >>>>> e) MIDAS Objects contain Network Sensors
> >>>>> f) The streamName of the Network Sensor (necessary for identification on the net) is derived from
> >>>>> moduleName + objectId
> >>>>> g) A scene is restricted to the "MMF-paradigm" (model / module / frame)
> >>>>> scene 1:1 frame 1:N module 1:N model
> >>>>> h) future idea (not yet implemented) "Moving Modules":
> >>>>> enable modules to be parts of models
> >>>>> scene 1:1 frame 1:N module 1:N model 1:N module 1:N model 1:N module 1:N model ad inf.
> >>>>> this future idea is necessary for train ferries, turntables, "model railroad in the model railroad".
> >>>>>
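For readers who don't know the SMUOS Framework, a skeleton of such a module might look roughly like this (proto names and interfaces are simplified; only moduleName, objId and the streamName derivation are meant as described above):

  <!-- one module, plugged into the frame by the scene author, who sets moduleName -->
  <ProtoInstance name='Module' DEF='District_12'>
    <fieldValue name='moduleName' value='district_12'/>
  </ProtoInstance>
  <!-- the module body initializes its Module Coordinator and contains the models, -->
  <!-- whose MIDAS Objects (e.g. objId 'car', 'car.steering') derive their Network -->
  <!-- Sensor streamNames as moduleName + '-' + objId -->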
> >>>>>>
> >>>>>>> regarding Synchronization:
> >>>>>>> Maybe taking a step back and thinking "what would it be like if we had no DIS, no Network Sensor?"
> >>>>>>>
> >>>>>>> Option A) Avoid differences between single-user-scenes and multi-user-scenes
> >>>>>>
> >>>>>> - a coarse-granularity content aggregation could apply publish/subscribe attributes to a scene-file-level aggregation, and the scene file itself would look the same, except the code guts of the browser would get the attributes during subscription, one level higher in the technology stack.
> >>>>>> This would hide the details of synchronization, which would mean they aren't a standard in the X3D scene description. So either different browsers wouldn't cooperate, or a separate standard for MU sync, publish/subscribe etc. could be needed.
> >>>>>>
> >>>>>>> Option B) Provide special means for multi-user-scenes
> >>>>>>>
> >>>>>>> ad A) X3D Time Sensor, Collision Detection, ..... maybe could be defined for "native" multi-user-mode
> >>>>>>> ad B) use X3D DIS, network sensor, ..... maybe improve them
> >>>>>>>
> >>>>>>
> >>>>>> I don't have enough experience with MU to have ideas at this level - no neurons fire.
> >>>>>>
> >>>>> [Christoph:] I would prefer option B, not only because I already did it this way, but because this option leaves more freedom to the scene author / model author (on the other hand, it's not "easy to use")
> >>>>>
> >>>>>


