[X3D-Public] interest in network sensor / client based server software

Eric Maranne eric at geovrml.com
Tue Jan 7 05:59:51 PST 2014


On 07/01/2014 14:45, Christoph Valentin wrote:
> Hi Eric
>
> Thank you for this very detailed explanation. And of course, the 
> "Simple Multiuser Scenes" do not aim to replace professional 
> simulation environments :-)
>
> Just aiming for simple(!) scenes and standardized communication protocols 
> to be used by telecoms and by small (possibly closed) groups of users 
> familiar with each other, each one telling his friends: "Look, here's my 
> new scene, let's try it out".
>
> I picked a few of your statements and added some comments/questions 
> from my point of view.
>
> >>Each object in the simulation manages its synchronisation with 
> other instances of itself. Only user interaction needs to be shared on 
> the global layer (local -> global).
>  [Christoph:] It's similar in Simple Multiuser Scenes; however, each 
> object defines one instance that is "responsible" for the shared state 
> (but the shared state is stored persistently in each instance and on the 
> collaboration server). If the instance "responsible" for 
> the state gets lost, another instance takes over the responsibility.
>
>  [Eric] Looks like my Token system. Now, what will be triggering a 
> state change? Most of the time, it's user interaction; if not, it 
> may be a timed event (or a functionally time-bound one) that doesn't have 
> to be shared, for it can be produced locally. In your scheme, this 
> means: getting the event from the interaction, conveying it to the application 
> where the 'responsible instance' lies, then propagating it to all: a 
> distributed server approach, unless all 'responsibles' live on the 
> same application. In the latter case, the application's load balancing 
> may imply lag on the server app. Why do you need a 'responsible'? Or, 
> to put it the other way round, why wouldn't any instance be responsible for 
> propagating user interaction?
>
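Neither codebase is shown in this thread, so here is a minimal, hypothetical Python sketch of the "responsible instance" failover described above, assuming every scene instance sees the same ordered membership list (all names are invented, not SMUOS or Token-system API):

```python
# Hypothetical sketch: shared state is replicated everywhere, but one
# deterministic rule (lowest surviving member) picks the "responsible"
# instance, so a lost responsible is replaced without a central server.

class SharedObject:
    def __init__(self, name):
        self.name = name
        self.state = {}     # shared state, replicated in every scene instance
        self.members = []   # session IDs, kept in the same order everywhere

    def join(self, session_id):
        self.members.append(session_id)

    def leave(self, session_id):
        # a departing session is simply removed; the rule below re-evaluates
        self.members.remove(session_id)

    def responsible(self):
        # deterministic: every instance computes the same answer locally
        return self.members[0] if self.members else None

obj = SharedObject("door")
for sid in ("alice", "bob", "carol"):
    obj.join(sid)
assert obj.responsible() == "alice"
obj.leave("alice")                  # the responsible instance gets lost
assert obj.responsible() == "bob"   # another instance takes over
```

Because the rule is evaluated independently by each instance against identical membership data, no election messages are needed; only the membership change itself has to be propagated.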

> >>All sessions share all the data and the whole scenegraph; this is the 
> price to pay to get rid of servers or 'controllers'.
>
> [Christoph:] In principle it's the same for Simple Multiuser Scenes, 
> however I define "dynamic" modules and "dynamic" models, which can be 
> "loaded" or "unloaded" in different instances of the scene, respectively.
>
> >>Any user is considered as an object too, able to send and receive 
> messages.
>
> [Christoph:] This sounds interesting. Does this mean one instance of 
> the scene can serve multiple users? (I miss this feature in X3D.)
>
The first message 'sent' by the user (call it a login) defines its 
capabilities and overall UI. Not every user may pilot a mechanical ladder, 
though anyone may sit in the pilot seat. Not everyone may be able to fly a 
helicopter, even if anyone may step aboard while it is on the ground.
We do have 'duos' (groups of two firefighters) sharing the same view (like 
pilot and copilot), as well as freely available simulation posts: they can be 
used for transitory roles and serve different users.
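As a hedged illustration of that login idea, here is a minimal Python sketch in which the first message declares a role whose capabilities gate what the user may do (role names and the message shape are invented for the example, not taken from the actual system):

```python
# Illustrative only: the 'login' message defines the user's capabilities;
# later interaction is checked against them. Names are hypothetical.

ROLE_CAPABILITIES = {
    "firefighter": {"board_helicopter", "sit_pilot_seat"},
    "helicopter_pilot": {"board_helicopter", "sit_pilot_seat",
                         "fly_helicopter"},
}

def login(user, role):
    # the first message 'sent' by the user defines its capabilities
    return {"msg": "login", "user": user, "caps": ROLE_CAPABILITIES[role]}

def may(session, capability):
    return capability in session["caps"]

session = login("eric", "firefighter")
assert may(session, "sit_pilot_seat")       # anyone may sit in the seat
assert not may(session, "fly_helicopter")   # but not everyone may fly
```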
>
> >>and we don't want servers (I don't like single points of failure)
>
> [Christoph:] At the latest, when we start to control private drones from VR 
> worlds, we will need a kind of "lawful interception" for 
> trajectories. Having single points of access has proven advantageous 
> for interception.
>
> Happy new year
>
> Christoph
>
> *Sent:* Tuesday, 07 January 2014 at 12:47
> *From:* "Eric Maranne" <eric at geovrml.com>
> *To:* x3d-public at web3d.org
> *Subject:* Re: [X3D-Public] interest in network sensor / client based 
> server software
> Hi,
>
> My 0.2 cents on an architecture I've been using for years now.
>
> I implemented MU on a peer-to-peer basis, each application being 
> plugged into a 'message bus'.
> The 'message bus' is layered: a global synchronisation layer (low speed) 
> and localised high-rate layers (typically LAN-based synched displays, 
> aka low-cost Caves or virtual windows).
> Each object in the simulation manages its synchronisation with 
> other instances of itself. Only user interaction needs to be shared on 
> the global layer (local -> global).
> All sessions share all the data and the whole scenegraph; this is the price 
> to pay to get rid of servers or 'controllers'. The whole scenegraph 
> doesn't have to be rendered, though, but every object has to be 
> 'alive' in order to stay synched.
> Not all objects are necessarily 3D; some applications plugged into the 
> message bus have no 3D at all, and act more as physics engines or as 
> interfaces to real objects (GPS, a database, a web server, an external 
> application API, you name it ...): AMAF, in the message bus framework, 
> each connected application is considered and synched like any other 
> object, 3D or not.
> Any user is considered an object too, able to send and receive 
> messages.
> All messages are designed at object level, so each message is defined at 
> an object's semantic level: this makes the messaging protocol's grammar, 
> lexicology and ontology completely open and mostly undefined globally. 
> This is the price to pay for complete flexibility, compatibility 
> with historical content and incremental complexification. Clearly, an 
> 'open' message will not have the same meaning if sent to a door object 
> or to an OpenOffice API. Similarly, 'save' won't have the same 
> parameters if sent to the 3D renderer (considered an object too) or 
> if sent to a firefighter avatar facing a victim in a fire.
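That object-level semantics can be sketched as follows; a minimal, hypothetical Python example in which the meaning of a verb is decided entirely by the receiving object (class and method names are invented, not the actual bus API):

```python
# Sketch: the bus carries only (verb, params); each object interprets
# the verb in its own semantic context, so the global grammar stays open.

class Door:
    def __init__(self):
        self.open_state = False

    def receive(self, verb, **params):
        if verb == "open":          # 'open' on a door means: swing open
            self.open_state = True

class Renderer:
    def __init__(self):
        self.saved = None

    def receive(self, verb, **params):
        if verb == "save":          # 'save' on the renderer means: screenshot
            self.saved = params.get("path")

door, renderer = Door(), Renderer()
door.receive("open")
renderer.receive("save", path="shot.png")
assert door.open_state
assert renderer.saved == "shot.png"
```

Nothing outside the receiving object needs to know what "open" or "save" means, which is exactly why the protocol can stay undefined globally.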
> Some objects need singular atoms and can hardly be synched, like some 
> physics engines that use heuristics or handle time-bound chaotic 
> behaviours that can't cope with lag. In these few cases, we use 
> single 'controllers', implemented either as stand-alone single objects 
> (COM or application) or as singularities designated by a token mechanism 
> distributed amongst instances.
> Each object may issue messages to other instances of itself, to a 
> specific instance of any object, to all other objects (locally, 
> globally or in a given object container; typically a connected 
> application is an object container), to a singularity (the master 
> instance) of a given object, or to its container.
> A container is an object responsible for catering for other objects. For 
> example, a connected application is an object typically in charge of 
> catering for objects; another application may be in charge of catering for 
> one physics engine object, or whatever.
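Those addressing scopes can be summed up as a small message envelope; field names below are invented for illustration and are not the actual bus protocol:

```python
# Hypothetical envelope covering the scopes listed above: other instances
# of oneself, one instance, a container, the master (singularity), or all.

VALID_SCOPES = {"instances", "instance", "container", "master", "all"}

def envelope(sender, verb, target, scope, **params):
    if scope not in VALID_SCOPES:
        raise ValueError("unknown scope: " + scope)
    return {"from": sender, "verb": verb, "to": target,
            "scope": scope, "params": params}

# a door instance synchronising its other selves on the global layer
m1 = envelope("door#3", "open", "door", "instances")
# a trainer addressing the master instance of an object
m2 = envelope("trainer", "drop", "cat", "master", where="scene")
assert m1["scope"] == "instances"
assert m2["to"] == "cat" and m2["params"]["where"] == "scene"
```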
>
> Each simulation environment is defined by:
> - static scenery generated by GIS and infrastructure modelling
> - a set of objects :
>     * dynamic 3D rendered objects (train :), cars, fires, avatars, 
> pipes, valves, doors, buildings, cats, dogs, RPG launchers and panties 
> ... more than 2000 different objects waiting to be instantiated in the 
> repository)
>     * dynamic non-rendered objects (triggers, sensors, logic links, 
> AI, FSMs, physics engines ... )
>     * applications
>     * initial conditions (day of year, weather, leak in LPG storage, ...)
>     * possibly users ... or more exactly, roles.
>     * possibly sets of timed events (amaf messages), which may be 
> triggered by any object, from a 3D trigger to an AI application or a 
> trainer on the staff.
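Such an environment definition could plausibly be captured as a plain data structure; every key and value below is invented for illustration, not taken from the actual repository format:

```python
# Hypothetical serialization of the environment parts listed above.
environment = {
    "static_scenery": "gis_export/city_center.x3d",   # invented path
    "objects": {
        "rendered": ["car", "fire", "avatar", "valve", "door"],
        "non_rendered": ["trigger", "sensor", "fsm", "physics_engine"],
    },
    "applications": ["renderer", "physics", "gps_bridge"],
    "initial_conditions": {"day_of_year": 187, "weather": "rain"},
    "roles": ["trainer", "firefighter_duo"],
    "timed_events": [
        {"at": 120.0, "verb": "ignite", "target": "fire#1"},
    ],
}
assert "physics_engine" in environment["objects"]["non_rendered"]
```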
>
>
> A network sensor means scene-level design, and a server-side application 
> means application-level design.
> Object-level design means we can, at any time, drag and drop any objects 
> into the simulation and get a new 'application' or 'simulation'. I 
> don't have to plan beforehand which functionalities are to be 
> implemented server side or even client side, at application level or 
> at scene level, if a trainer decides to drop a cat in front of a dog 
> in a radiologic spray incident: flexibility is very important for this 
> kind of unexpected use. And semantic-level interaction is more fun to 
> cope with :) .
>
> From architecture to protocol, everything depends on what is designed.
> For us, DIS is application level: too restricted, and not flexible or 
> versatile ... but we live in our own 'world'. We had a hard time 
> bridging with HLA systems (SOGITEC / Dassault Systèmes).
> NetworkSensor is too much 'scene level': field-level semantics don't 
> allow for lag handling, elaborating 'unexpectedly dropping the cat' is 
> very complex, and we don't want servers (I don't like single points of 
> failure).
> Both the DIS and NetworkSensor approaches could be used for our 
> purposes, but would introduce too many constraints.
>
>
> Oh, BTW, happy new year everybody.
>
> Eric.
> www.vr-crisis.com <http://www.vr-crisis.com>
> YouTube <http://www.youtube.com/user/vrcrisis?feature=watch>
> FB 
> <http://www.facebook.com/pages/Crisis-Simulation-Engineering/335234416564855>
>
>
>
> On 06/01/2014 20:32, Christoph Valentin wrote:
>
>     Hi Doug
>
>     There are a lot of possible solutions about "client based server software".
>
>     Let me explain the *experimental* approach of the SMUOS Framework: http://smuos.sourceforge.net
>
>         Q. Session 1:1 avatar 1:M Controllers 1:1 ControllableObjects?
>         Q. should each object being created get the sessionID of the creating user, perhaps as metadata or mangled into the DEF name? Is the user that creates an object always 'the owner'/'the controller'? And so when that user session ends all the objects associated need to be cleaned out?
>
>     1) avatar 1:1 user 1:1 session N:1 multiuser session 1:1 chat room
>        (this is according to MU example from BM homepage)
>     2) scene 1:K module 1:L static object
>          static objects are children of modules
>     3) scene 1:M universal object class 1:P dynamic model (not yet implemented)
>          dynamic objects are children of "universal object classes"
>          and may change the module (not yet implemented)
>
>     Now to the controllers:
>     4) scene 1:1 central controller for overall aspects of the scene
>     5) static object 1:1 object controller (static) - if the object is active in any scene instance
>     6) dynamic model 1:1 object controller (dynamic) - if the model is active in any scene instance
>
>     Now where the controllers are located:
>     7) Avatars are controlled by the scene instance of their user
>        (this is according to MU example from BM homepage)
>     8) Central controller is rearranged during startup and teardown of session
>     9) Central controller assigns module controller role to modules
>     10) Object controller roles follow the module controller of their parent module
>     11) Objects and models are controlled by the scene instance
>          that holds the module controller of the parent module
>          This does not depend on any user (objects and models have "their own life")
>     12) Controllers of objects and models might be rearranged from time to time
>          E.g. if a session leaves the game or if a module becomes inactive
>     13) The SMUOS Framework takes care of assigning controller roles, so this is no problem for the author
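Points 8) through 12) can be sketched as a small Python model; this is a hypothetical illustration, not actual SMUOS code (all names are invented):

```python
# Sketch: module controller roles are assigned to sessions, object
# controllers follow their parent module (points 9-11), and roles are
# rearranged when a controlling session leaves (point 12).

class Scene:
    def __init__(self):
        self.module_controller = {}   # module -> controlling session
        self.parent_module = {}       # object -> parent module

    def assign_module(self, module, session):
        self.module_controller[module] = session

    def object_controller(self, obj):
        # objects are controlled by whoever controls their parent module,
        # independently of any particular user
        return self.module_controller[self.parent_module[obj]]

    def session_left(self, session, survivors):
        # rearrange controller roles when a session leaves the game
        for module, ctrl in list(self.module_controller.items()):
            if ctrl == session:
                self.module_controller[module] = survivors[0]

scene = Scene()
scene.parent_module["door_A"] = "building_1"
scene.assign_module("building_1", "session_1")
assert scene.object_controller("door_A") == "session_1"
scene.session_left("session_1", survivors=["session_2"])
assert scene.object_controller("door_A") == "session_2"
```

The author never assigns roles directly; the framework's reassignment rule keeps every object controlled by exactly one live session.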
>
>
>     Comments any time welcome :-)
>
>     Kind regards
>     Christoph
>
>     _______________________________________________
>     X3D-Public mailing list
>     X3D-Public at web3d.org
>     http://web3d.org/mailman/listinfo/x3d-public_web3d.org
>
>


