[x3d-public] x3d-public Digest, Vol 120, Issue 34

GL info at 3dnetproductions.com
Sun Mar 10 19:38:32 PDT 2019


>It may be time to come up with some proof of concept implementations of nodes or infrastructure around a scene using web technology before more thought experiments.



Andreas, 

I don't mean to interject in your conversation, but my earlier comments (below) were not merely "thought experiments". Most of what I've talked about is already implemented and working at officetowers.com. (The server is currently suffering from a database malfunction, so login will not be possible until next week, but I can assure you it works. In the meantime, videos are available if you'd like to see them.)

That world and its associated X3Daemon multiuser server have been ready for a network sensor for years. But before potentially spending hundreds of hours implementing one in a new version based on more modern practices and available technologies, I'd like to see what we can come up with as a consensus.

What exactly is a network sensor? What are its functions? Within what boundaries and parameters? Do we refer to the earlier draft (from when work on it stopped) as a basis? IMO, those are some of the questions that need answers before we can contemplate creating specs for an NS.

Cheerz,
Gina-Lauren 


________________________________________________________
* * * Interactive Multimedia - Internet Management * * *
  * *  Virtual Reality -- Application Programming  * *
    *   3D Net Productions  3dnetproductions.com   *





Hello Andreas, Christoph and all,


>I looked at the network sensor, and BS Collaborate nodes. I think the idea is to explicitly forward all events which need sharing to the server which then distributes those to connected clients. 


I'd say that's a fair assessment, though there are also a few more behind-the-scenes events, such as avatar authentication, login/logout, standby/resume status, and other similar network interactions that are not necessarily shared in the scene per se.
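That fan-out model, where the server simply redistributes each shared event to every other connected client, can be sketched in a few lines of Python. All names here (EventHub, Client) are illustrative, not from any X3D draft:

```python
# Minimal sketch of the relay idea: clients forward events they want
# shared to a central hub, which rebroadcasts them to every other
# connected client.

class Client:
    def __init__(self, name):
        self.name = name
        self.inbox = []          # events received from the hub

    def receive(self, event):
        self.inbox.append(event)


class EventHub:
    """Server-side fan-out: distribute each event to all other clients."""
    def __init__(self):
        self.clients = []

    def join(self, client):
        self.clients.append(client)

    def publish(self, sender, event):
        for c in self.clients:
            if c is not sender:      # don't echo back to the originator
                c.receive(event)


hub = EventHub()
alice, bob = Client("alice"), Client("bob")
hub.join(alice)
hub.join(bob)
hub.publish(alice, ("touchTime", 12.5))
```

In a real system the hub would sit behind a socket or WebSocket transport; the in-memory version only shows the distribution logic.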



>This leaves the definition of the shared state to the scene, and therefore requires careful design and code for MU. Perhaps there is a way that the browser can better assist with making a scene MU capable.


X3D is very capable of doing this internally, but I'd definitely want it to have its own script node. It may be possible to pass simple events through the browser, but that solution would impose severe limits on the ability to perform multiuser tasks in general. It can get convoluted enough as it is, without adding the extra complexity of passing events and network connections back and forth through the browser. I understand it isn't easy, but if it can be done, we probably should do it.




>What is the complete state a client needs when it is admitted to a shared scene ?


From my experience, that will depend very much on the use case, but generally, the scene is complete within each client, though avatars and/or objects may be hidden and not necessarily visible at any given time. When the client connects to the shared network, that is pretty much when state is established. There may be exceptions, for example when an avatar has a specific identity and/or status in a game, or accumulated points; that information will typically first be obtained from the server's database before being passed on to the client and, in turn, to the network.



>Since most fields of most nodes accept input, and can therefore potentially change, complete state probably means just the value of all fields of all root nodes which means all nodes. The complete state may only need to be transferred when a client joins.


Yes, exactly. See above.
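The "complete state on join" idea could look like this in Python: the server holds the current value of every shared field, a joining client receives one snapshot, and afterwards only incremental events are applied. The field names are made up for the example:

```python
# Hedged sketch of complete-state-on-join: one snapshot at admission,
# then incremental updates only.

world_state = {
    "Door_1.rotation": (0, 1, 0, 1.57),
    "Elevator.translation": (0, 4.0, 0),
    "Lamp.on": True,
}

def snapshot():
    """What a joining client receives: every shared field, once."""
    return dict(world_state)

def apply_event(state, field, value):
    """After joining, clients only apply incremental updates."""
    state[field] = value

client_state = snapshot()
apply_event(client_state, "Lamp.on", False)
```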



>Plus the offsets of avatars from Viewpoints. And the clock. Perhaps other values.


Again yes, so avatars don't end up on top of each other, for example.

As for a clock, I have never shared one or felt I needed one thus far. In my implementation, each client simply polls the server at regular intervals of one, two, or more seconds (depending on what other subsystems are in place). We sometimes cannot allow a client that is experiencing latency, for one reason or another, to hold back all of the other clients. The show must go on, whether a client gets disconnected or happens to fall out of sync. It generally seems to have little impact since, at least in some cases, users are not together in one room, so they are in essence unaware that they are out of sync, as the case may be.
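A minimal sketch of that fixed-interval polling, assuming a hypothetical PollingClient that simply skips polls until its interval has elapsed, so a slow client never blocks the others:

```python
# Each client polls independently; a lagging client only delays itself.

class PollingClient:
    def __init__(self, interval=2.0):
        self.interval = interval            # seconds between polls
        self.last_poll = float("-inf")      # poll immediately on first call
        self.pending = []                   # events fetched from the server

    def maybe_poll(self, now, server_events):
        """Poll only if the interval has elapsed; otherwise do nothing."""
        if now - self.last_poll >= self.interval:
            self.pending.extend(server_events)
            self.last_poll = now

client = PollingClient(interval=2.0)
client.maybe_poll(0.0, ["evt1"])   # first poll happens immediately
client.maybe_poll(1.0, ["evt2"])   # too soon: skipped
client.maybe_poll(2.5, ["evt3"])   # interval elapsed: polled
```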

But I can see how specific applications may need and want a clock. In that case, I believe we should take a good look at what the MIDI standard has to offer. MIDI is a well-established and solid standard that has many uses for timing and control, not just music, and is more than likely a very good fit for virtual reality.
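For reference, the MIDI beat clock sends 24 timing pulses per quarter note; a shared-world clock borrowing that scheme would convert ticks to seconds like this (a sketch for illustration, not part of any NS proposal):

```python
# MIDI System Real-Time clock: 24 pulses per quarter note (PPQN).

PPQN = 24

def tick_interval(bpm):
    """Seconds between MIDI clock ticks at the given tempo."""
    return 60.0 / (bpm * PPQN)

def ticks_to_seconds(ticks, bpm):
    """Elapsed time represented by a tick count at a fixed tempo."""
    return ticks * tick_interval(bpm)

# At 120 bpm, 48 ticks = two quarter notes = one second.
elapsed = ticks_to_seconds(48, 120)
```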



>I was thinking about avatar initiated event cascades since the state of everything else should be deterministic and only depend on time. These other things should update themselves. There are exceptions such as scripts which generate random values.


Avatars with gestures, and also all kinds of visible objects, such as doors, cars, elevators (that's a tough one), a cup of coffee, a guitar, clouds, stars, and anything else that moves. But various more subtle things, like, say, group chat, or items inside pockets or bags, money, ammunition, and things of that nature, also need shared states, sometimes at various levels of privacy.
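Those levels of privacy could be modeled by tagging each shared update with a scope and letting the server compute the recipients; the scope names below are invented for illustration:

```python
# Server-side recipient selection by event scope.

def recipients(event_scope, sender, all_clients, groups):
    scope, target = event_scope
    if scope == "world":                  # e.g. a door opening: everyone
        return [c for c in all_clients if c != sender]
    if scope == "group":                  # e.g. group chat: members only
        return [c for c in groups.get(target, []) if c != sender]
    if scope == "private":                # e.g. items in a pocket: owner only
        return [target] if target in all_clients else []
    return []

clients = ["alice", "bob", "carol"]
groups = {"team1": ["alice", "bob"]}

world = recipients(("world", None), "alice", clients, groups)
chat  = recipients(("group", "team1"), "alice", clients, groups)
bag   = recipients(("private", "alice"), "alice", clients, groups)
```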



>Avatar initiated event cascades generally start with environment sensors. Not sure if there are other ways. The idea would be to just have to transmit this event cascade to other clients which then can replay it to update their scene instances. 


Sure. General user interactions with the world and its interface, such as buttons and in-world devices, often need to be shared as well.
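The "replay the cascade" idea quoted above amounts to recording the avatar-initiated events as an ordered list and applying the same list on every other client; a rough Python sketch (node and field names are hypothetical):

```python
# Transmit the event cascade once; every remote client replays it
# against its own scene instance to reach the same state.

def replay(scene, cascade):
    """Apply an ordered list of (target, field, value) events to a scene."""
    for target, field, value in cascade:
        scene.setdefault(target, {})[field] = value

cascade = [
    ("TouchSensor_1", "touchTime", 3.2),
    ("DoorRotator", "set_fraction", 1.0),
    ("Door_1", "rotation", (0, 1, 0, 1.57)),
]

remote_scene = {}
replay(remote_scene, cascade)
```

Order matters here: replaying the triples in sequence reproduces the cascade deterministically, which is why non-deterministic sources (random values in scripts) need special handling.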


>I also was reading some of the github open-dis documentation. Interesting background, and pretty accurate discussion on coordinate systems. It may be possible to set up somewhere a very simple distributed simulation to join and test clients with.


I remember the DIS-Java-VRML work. That was very good. I will take a closer look at Open-DIS as you mentioned. It sounds interesting. As I began to touch on above, each MU implementation will have its own particularities. Personally, I would very much like to see a general NetworkSensor node that is flexible enough to accommodate different protocols, but also one that can be adapted to specific use cases and needs.



>Single browser MU: I think an additional touchedByUser field would be required but perhaps there is a way to generalize tracking of Users.



I don't know if it's possible, but I'd like to see the ability to add any number of fields to the network node: fields that would be defined by each implementation as needed, since we may not always be able to predict every application.
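One hypothetical way to allow implementation-defined fields on such a node, sketched in Python (the class and field names are not from any spec):

```python
# A NetworkSensor-like node whose fields are declared per implementation
# rather than fixed by the specification.

class NetworkSensorNode:
    def __init__(self, **declared_fields):
        # Each implementation declares whatever fields it needs.
        self._fields = dict(declared_fields)

    def set_field(self, name, value):
        if name not in self._fields:
            raise KeyError(f"field {name!r} was not declared on this node")
        self._fields[name] = value

    def get_field(self, name):
        return self._fields[name]

sensor = NetworkSensorNode(avatarName="", score=0, roomId="lobby")
sensor.set_field("score", 42)
```

Rejecting undeclared fields keeps the open-ended design honest: the set of shared fields is still an explicit contract per implementation, not a free-for-all.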


I feel there is much more to discuss, and I'd reiterate my suggestion to open a NetworkSensor working group or similar, so as to allow for more detailed exchanges and to begin building some basis for VR networking.

Gina-Lauren 


P.S. Your work with Python is very interesting and falls in line with some of my own thoughts.  


