[X3D-Public] interest in network sensor / client based server software
eric at geovrml.com
Tue Jan 7 03:47:40 PST 2014
My 0.2 cents on an architecture I've been using for years now.
I implemented MU on a peer-to-peer basis, each application being plugged
into a 'message bus'.
The 'message bus' is layered: a global synchronisation layer (low
speed), and localised high-rate layers (typically LAN-based synched
displays, aka low-cost Caves or virtual windows).
Each object in the simulation manages its own synchronisation with the
other instances of itself. Only user interaction needs to be shared on
the global layer (local -> global).
All sessions share all the data and the whole scenegraph; this is the
price to pay to get rid of servers or 'controllers'. Not all of the
scenegraph has to be rendered, though, but every object has to be
'alive' in order to stay synched.
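To make the idea concrete, here is a minimal toy sketch of that scheme (all names are illustrative, not from any real SDK): every session holds a replica of every object, only user interactions are published on the bus, and every replica applies them to stay 'alive' even when it is not rendered.

```python
class MessageBus:
    """Toy in-process stand-in for the global synchronisation layer."""
    def __init__(self):
        self.subscribers = []

    def publish(self, message):
        for obj in self.subscribers:
            obj.on_message(message)


class SyncedObject:
    """An object instance that keeps itself synched with its peers."""
    def __init__(self, object_id, bus, rendered=False):
        self.object_id = object_id
        self.state = {}
        self.rendered = rendered          # rendering is optional...
        self.bus = bus
        bus.subscribers.append(self)

    def user_interaction(self, key, value):
        # Only user interaction is shared on the global layer.
        self.bus.publish({"object_id": self.object_id, key: value})

    def on_message(self, message):
        # ...but every instance stays alive and applies the update.
        if message["object_id"] == self.object_id:
            self.state.update(
                {k: v for k, v in message.items() if k != "object_id"})


bus = MessageBus()
door_a = SyncedObject("door-42", bus, rendered=True)   # session A renders it
door_b = SyncedObject("door-42", bus, rendered=False)  # session B does not
door_a.user_interaction("open", True)                  # both replicas update
```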
Not all objects are necessarily 3D; some applications plugged into the
message bus have no 3D at all and act more as physics engines or as
interfaces to real objects (GPS, database, web server, external
application API, you name it ...). As a matter of fact, in the message
bus framework, each connected application is considered and synched like
any other object, 3D or not.
Any user is considered as an object too, able to send and receive messages.
All messages are designed at the object level, so each message is
defined at an object semantic level: this leaves the messaging
protocol's grammar, lexicon and ontology completely open and mostly
undefined globally. This is the price to pay for complete flexibility,
compatibility with historical content, and incremental complexification.
Clearly, an 'open' message will not have the same meaning if sent to a
door object or to an OpenOffice API. Similarly, 'Save' won't have the
same parameters if sent to the 3D renderer (considered as an object too)
or to a fire-fighter avatar facing a victim in a fire.
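A tiny sketch of what "semantics live in the receiver" can look like (both classes are invented for illustration): the same 'open' verb swings a door but loads a document in an office application, and neither interpretation is fixed by the protocol.

```python
class Door:
    """A 3D object: 'open' means swinging on its hinges."""
    def __init__(self):
        self.angle = 0.0

    def receive(self, verb, **params):
        if verb == "open":
            self.angle = params.get("angle", 90.0)


class OfficeApp:
    """A non-3D object on the bus: 'open' means loading a document."""
    def __init__(self):
        self.documents = []

    def receive(self, verb, **params):
        if verb == "open":
            self.documents.append(params["path"])


door, office = Door(), OfficeApp()
door.receive("open", angle=120.0)        # same verb ...
office.receive("open", path="report.odt")  # ... different meaning
```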
Some objects need singular atoms and can hardly be synched, like some
physics engines that use heuristics or handle time-bound chaotic
behaviours that can't cope with lag. In these few cases, we use single
'controllers', implemented either as stand-alone single objects (COM or
application) or as singularities designated by a token mechanism
distributed amongst the instances.
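One possible shape for that token mechanism, sketched very loosely (the class and function names are mine, not from the described system): only the instance currently holding the token actually runs the non-deterministic step, and the token can migrate so no fixed server is needed.

```python
class PhysicsInstance:
    """One replica of a physics engine object; at most one holds the token."""
    def __init__(self, name):
        self.name = name
        self.has_token = False
        self.steps_run = 0

    def step(self, world):
        if not self.has_token:
            return                    # followers wait for the master's results
        self.steps_run += 1
        world["tick"] = world.get("tick", 0) + 1


def pass_token(holder, successor):
    """Move the singularity to another instance, e.g. when a session leaves."""
    holder.has_token = False
    successor.has_token = True


world = {}
a, b = PhysicsInstance("A"), PhysicsInstance("B")
a.has_token = True
a.step(world)      # A is the singularity and advances the simulation
pass_token(a, b)   # A's session leaves; B takes over
a.step(world)      # no effect any more
b.step(world)      # B is now the master instance
```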
Each object may issue messages to other instances of itself, to a
specific instance of any object, to all other objects (locally, globally
or in a given object container; i.e. typically a connected application
is an object container), to a singularity (the master instance) of a
given object, or to its own container.
A container is an object responsible for catering for other objects. For
example, a connected application is typically an object in charge of
catering for objects; another application may be in charge of catering
for a single physics engine object, or whatever.
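The addressing scopes above could be modelled as data along these lines (a purely hypothetical encoding; the enum and field names are made up for illustration):

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class Scope(Enum):
    PEERS = auto()         # other instances of the sender itself
    INSTANCE = auto()      # one specific instance of any object
    ALL_LOCAL = auto()     # all objects, local layer only
    ALL_GLOBAL = auto()    # all objects, global layer
    CONTAINER = auto()     # all objects in a given container
    SINGULARITY = auto()   # the master instance of a given object
    OWN_CONTAINER = auto() # the sender's own container

@dataclass
class Address:
    scope: Scope
    target: Optional[str] = None   # object/container id, when one is needed

# e.g. ask the master instance of the physics engine to save its state
msg = {"to": Address(Scope.SINGULARITY, target="physics-engine"),
       "verb": "save"}
```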
Each simulation environment is defined by:
- static scenery generated by GIS and infrastructure modelling
- a set of objects :
* dynamic 3D rendered objects (train :), cars, fires, avatars,
pipes, valves, doors, buildings, cats, dogs, RPG launchers and panties
... more than 2000 different objects waiting to be instantiated in the
* dynamic non-rendered objects (triggers, sensors, logic links, AI,
FSM, physics engines ...)
* initial conditions (day of year, weather, leak in LPG storage, ...)
* possibly users ... or more exactly, roles.
* possibly sets of timed events (amaf messages) that may be
triggered by any object, from a 3D trigger to an AI application or a
trainer on the staff.
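Serialised as plain data, such an environment definition might look roughly like this (every key mirrors a bullet from the list above; all values are invented examples):

```python
environment = {
    # static scenery generated by GIS and infrastructure modelling
    "static_scenery": ["gis_terrain.x3d", "infrastructure.x3d"],
    # dynamic 3D rendered objects
    "rendered_objects": ["train", "car", "fire", "avatar", "valve", "door"],
    # dynamic non-rendered objects
    "non_rendered_objects": ["trigger", "sensor", "fsm", "physics_engine"],
    # initial conditions
    "initial_conditions": {"day_of_year": 186, "weather": "storm"},
    # roles rather than concrete users
    "roles": ["trainer", "fire-fighter", "observer"],
    # timed event (message) sets any object or the trainer may trigger
    "timed_events": [
        {"at": 120.0, "to": "lpg_storage", "verb": "leak"},
    ],
}
```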
Network sensor means scene-level design, and server-side application
means application-level design.
Object-level design means we can, at any time, drag and drop any object
into the simulation and have a new 'application' or 'simulation'. I
don't have to plan beforehand which functionalities are to be
implemented server side, or even client side, at the application level
or at the scene level, if a trainer decides to drop a cat in front of a
dog during a radiologic spray incident: flexibility is very important
for this kind of unexpected use. And semantic-level interaction is more
fun to cope with :) .
From architecture to protocol, everything depends on what is designed.
For us, DIS is application level: too restricted, and not flexible or
versatile ... but we live in our own 'world'. We had a hard time
bridging with HLA systems (SOGITEC/Dassault systems).
NetworkSensor is too much 'scene level': field-level semantics don't
allow for lag handling, elaborating 'unexpectedly dropping the cat' is
very complex, and we don't want servers (I don't like single points of
failure).
Both the DIS approach and the NetworkSensor approach could be used for
our purposes, but would introduce too many constraints.
Oh, BTW, happy new year everybody.
On 06/01/2014 20:32, Christoph Valentin wrote:
> Hi Doug
> There are a lot of possible solutions about "client based server software".
> Let me explain the *experimental* approach of the SMUOS Framework http://smuos.sourceforge.net
>> Q. Session 1:1 avatar 1:M Controllers 1:1 ControllableObjects?
>> Q. should each object being created get the sessionID of the creating user, perhaps as metadata or mangled into the DEF name? Is the user that creates an object always 'the owner'/'the controller'? And so when that user session ends all the objects associated need to be cleaned out?
> 1) avatar 1:1 user 1:1 session N:1 multiuser session 1:1 chat room
> (this is according to MU example from BM homepage)
> 2) scene 1:K module 1:L static object
> static objects are children of modules
> 3) scene 1:M universal object class 1:P dynamic model (not yet implemented)
> dynamic objects are children of "universal object classes"
> and may change the module (not yet implemented)
> Now to the controllers:
> 4) scene 1:1 central controller for overall aspects of the scene
> 5) static object 1:1 object controller (static) - if the object is active in any scene instance
> 6) dynamic model 1:1 object controller (dynamic) - if the model is active in any scene instance
> Now where the controllers are located:
> 7) Avatars are controlled by the scene instance of their user
> (this is according to MU example from BM homepage)
> 8) Central controller is rearranged during startup and teardown of session
> 9) Central controller assigns module controller role to modules
> 10) Object controller roles follow the module controller of their parent module
> 11) Objects and models are controlled by the scene instance
> that holds the module controller of the parent module
> This does not depend on any user (objects and models have "their own life")
> 12) Controllers of objects and models might be rearranged from time to time
> E.g. if a session leaves the game or if a module gets inactive
> 13) The SMUOS Framework cares about assigning controller roles, no problem for the author
> Comments any time welcome :-)
> Kind regards
> X3D-Public mailing list
> X3D-Public at web3d.org