[x3d-public] Netw. Sens. Protocol - WAS: 3d-public Digest, Vol120, Issue 34

GL info at 3dnetproductions.com
Thu Mar 14 21:55:43 PDT 2019


>Now coming to the idea of the HyperScene and to Gina-Lauren's suggestions. I think the idea of a HyperScene that handles the networking is a good one

   - as long as the shared state is not too complex (each value has to be transported via the "?")

   - as long as the HyperScene sticks to a standard (which would need to be defined, too)

 

I've read Doug's description of his HS idea, though I'm not sure I fully understand it. I'd imagine it working for sprite-type games, where the top layer is not actually inside the underlying scene.

 

 

>If the scene becomes too complex, or if the scene is compiled from different sources, e.g. "module by module" and "model by model", then I would prefer to "bury" a network sensor within each model, broadcasting the "network connection" to every model via an SFNode and defining the "streamName" for the network sensor; the rest would be done by the model internally.

 

The object and avatar coordinates are relative to the browser. If you switch scenes or remove the scene altogether, the objects/avatars will remain in exactly the same position, as controlled by the script node, which is at the center of it all and stays loaded at all times regardless of scenes or models. The script node resides at the core in a client/server relationship, because its functions act as the client that makes connections to the server. The objects/avatars move according to the coordinates that are passed to them from that core script. That is the case, at least, in the implementation we have at OT, where the 3D browser is not required to handle any MU duties other than passing along the HTTP requests/responses to and from a script node.
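
To give a rough idea, here is a minimal sketch of that core-script behaviour in a web/JavaScript setting. Every name below (URL, fields, variables) is made up for illustration; it is not our actual code at OT.

    // Hypothetical core script acting as the client side of the client/server link.
    // It polls the server over HTTP and routes the returned coordinates to the
    // avatar/object Transforms, regardless of which scene is currently loaded.
    var avatarNodes = {};          // id -> Transform node, registered as avatars load
    var sessionId   = "";          // obtained from the authenticate/login step

    function pollServer() {
      fetch("/mu/update?session=" + encodeURIComponent(sessionId))
        .then(function (response) { return response.json(); })
        .then(function (state) {
          // assume the server answers with { avatars: [{ id, position, orientation }, ...] }
          state.avatars.forEach(function (a) {
            var node = avatarNodes[a.id];
            if (node) {
              node.translation = a.position;     // pass coordinates into the scene
              node.rotation    = a.orientation;
            }
          });
        })
        .catch(function () { /* a missed poll is fine; the next one resyncs */ });
    }

    setInterval(pollServer, 2000);   // poll every couple of seconds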

 

But what you are describing reminds me more of Java 3D. It can be a very powerful object-oriented approach, but its implementation paradigm is an entirely different animal from that of X3D. I played with it some back in the day, before deciding to opt out of Java altogether, so my experience is limited. But how you would make it work with X3D is intriguing. Is that the direction you are taking with your Train project?

 

 

>Now to the misunderstanding which I have with Gina-Lauren. I often used the term "network protocol" in the wrong way. Of course I do NOT mean a "network LAYER protocol", such as IPv4 or IPv6; of course I mean an "application LAYER protocol".

 

Ok.

 

>I think this application layer protocol could be sketched with message flow charts and a textual description: which messages are defined, how the messages are combined into procedures, and which parameters the messages can carry. How are the messages routed?

 

 

As I've mentioned earlier, in my case, avatars must first authenticate (to the server) and log in (to the scene). Same for logout (leave scene) and delete (from server). So that's one type of message. Then you have your coordinates and orientation messages for avatars and vehicles, plus action messages for avatar gestures and the motion of more static but moveable objects such as doors and furniture. Text chat is another type of message, though all of these can be passed at once within the same HTTP request. But I think that is all too application-specific for a general standard. An author should probably be able to expose any number of fields in the NS to configure it as needed, which I believe is pretty much what the current draft entails.
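
Just to illustrate the bundling (the field names here are invented for the example, not what we actually send), one request body might carry something like:

    // Hypothetical bundle of several message types in a single HTTP request.
    var requestBody = {
      session: "abc123",                          // from the earlier authenticate step
      pose: {                                     // avatar coordinates and orientation
        position:    [12.4, 0.0, -3.1],
        orientation: [0, 1, 0, 1.57]
      },
      actions: [                                  // gestures and moveable objects
        { target: "door_7", state: "open" },
        { target: "avatar_self", gesture: "wave" }
      ],
      chat: [                                     // text chat
        { to: "room", text: "hello" }
      ]
    };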

 

>How are the messages routed?

 

They are routed through TCP now because that's what we have available, though I'd like the ability to use other protocols, especially UDP-based ones in the short term and quite possibly SCTP eventually. The draft also calls for an X3D protocol (X3DP), which is not defined but which I think is very much at the center of this discussion.

 

Do we really need the NS to be implemented in a 3D browser? Or can it at least be emulated with JavaScript? I have high hopes for that... ;)

 

GL

 

 

________________________________________________________

* * * Interactive Multimedia - Internet Management * * *

  * *  Virtual Reality -- Application Programming  * *

    *   3D Net Productions  3dnetproductions.com   *

 

 

 

This application layer protocol should be open to (m)any transport protocol(s), such as XMPP, SIP, RTP, TCP, UDP, SCTP, MSRP, 3GPP MCX, ............

 

Such a stable description and the promise to support any transport protocol would create trust in X3D MU.

 

All the Best

Christoph

  

Sent: Thursday, 14 March 2019 at 17:53
From: "GPU Group" <gpugroup at gmail.com>
To: "X3D Graphics public mailing list" <x3d-public at web3d.org>
Subject: Re: [x3d-public] Netw. Sens. Protocol - WAS: 3d-public Digest, Vol120, Issue 34

One very fuzzy idea: HyperScene (HS) 

- one layer above scene

- networking would/could work on this layer, i.e.

Scene -?- HS - network - HS - ? - Scene

- with the ? being maybe EAI/SAI and/or a DISTransform-like node updater

- the LayeringComponent: some have been asking if it could/should be dropped or fixed, and maybe, if it is fixed, it might relate to a layer above the scene as well

HS - Layering - Scene

And maybe two scene instances rendering on the same web page, or side by side in one desktop browser instance, might be on some layer above the scene, such as Layering or HyperLayering.

But sorry, I can't explain it any deeper. It's all fuzzy thinking.

-Doug Sanden

  

On Tue, Mar 12, 2019 at 2:37 PM GL <info at 3dnetproductions.com> wrote:



Christoph,

Personally, I don't think the NS should impose a network protocol standard at all. Instead, it should allow for as much flexibility as possible, remembering that we don't necessarily work within a web browser, and that present and future applications may involve different types of hardware and/or protocols, whether not yet thought of or already widely in use and available today. We don't have to define implementations, but just leave the door open, so to speak, so that vendors and authors may build what they need without the standard getting in the way.

However, at a higher level, if we want different 3D implementations to be able to communicate with each other, there would be a need for a standard way of making worlds, avatars, objects, etc. compatible at the network level. In that case, we could envision an X3D application layer protocol (think FTP, SMTP, HTTP, etc., but specific to X3D), independent of the network transport layer, while the emphasis remains on flexibility. That is to say, provide a set of mechanisms, a structure for conduits and data tunnels, but what passes through them should probably be left largely undefined as far as the standard is concerned, as long as the format stays true to X3D form.
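
As a very rough sketch only (all names here are placeholders of mine, not a proposal), such a standard might define nothing more than an envelope, with the payload left to the application:

    // Hypothetical X3DP-style envelope: the standard defines the outer structure,
    // while the payload remains application-defined.
    var envelope = {
      protocol: "X3DP/0.1",                  // placeholder version string
      world:    "urn:example:officetowers",  // which shared scene this refers to
      sender:   "avatar:gina",
      type:     "event",                     // e.g. join | leave | event | state
      payload:  {}                           // opaque to the standard; true-to-X3D content
    };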

It would be nice to have data transmissions more fully defined, but the world being what it is and constantly evolving, I doubt we'd ever get there. I'd say "let market forces play and the chips fall where they may".

So I am not too sure how to answer your question. I think that is what we are trying to figure out. Cheerz

GL


________________________________________________________
* * * Interactive Multimedia - Internet Management * * *
  * *  Virtual Reality -- Application Programming  * *
    *   3D Net Productions  3dnetproductions.com   *




From: x3d-public [mailto:x3d-public-bounces at web3d.org] On Behalf Of Christoph Valentin
Sent: Monday, March 11, 2019 4:21 AM
To: x3d-public at web3d.org
Subject: Re: [x3d-public] Netw. Sens. Protocol - WAS: 3d-public Digest, Vol120, Issue 34

Hi Gina-Lauren, Hi John

>>>>>>But yeah, whatever the browser vendors support as a standard would be OK with me.

John, you speak as a scene author, don't you? So you don't care which network protocol the browser uses.

I understand that, because I am a scene author, too.

Now, what is the position of the browser vendor?

Wouldn't the browser vendor say the same: "Whatever the server vendor supports as a standard would be OK with me"?

@Gina-Lauren: What would you like to support as a standard? Just to have an example.

Just my 2c

:-) Christoph



--
This message was sent from my Android mobile phone with GMX Mail.
On 11.03.19, 08:09, Christoph Valentin <christoph.valentin at gmx.at> wrote:
Hi Gina-Lauren, John, Andreas,

Now seriously.

Given that the Network Sensor (or whatever it ends up being called), i.e. the syntax for instantiating the related nodes in a Web3D scene, were specified sufficiently (e.g. in X3Dv4),

there would still be a lot left to define regarding how to connect to the server, whether to connect to a server at all, whether to use a star topology or support a server hierarchy, and so on.

If those decisions were left to the implementers of Web3D browsers (and libraries), then the market would fall apart again as soon as it starts to bloom.

The standard doesn't need to be technically perfect, but it must be well regarded.

Just philosophy:-)

Just my personal opinion :-)

(And meanwhile we can use open-dis ;-) )
--
This message was sent from my Android mobile phone with GMX Mail.
On 11.03.19, 07:41, John Carlson <yottzumm at gmail.com> wrote:
I wish there were a reviewing site for open source software, with both developer and end-user reviews (kept separate).

John

Sent from Mail for Windows 10

From: Christoph Valentin
Sent: Monday, March 11, 2019 12:36 AM
To: John Carlson; x3d-public at web3d.org
Subject: RE: [x3d-public] Netw. Sens. Protocol - WAS: 3d-public Digest, Vol120, Issue 34


>>>>>>May I suggest a CSV-like protocol across the https or TLS/SSL network, similar to:



(SSL-initiation)

COMMAND1,TIME1,PARAMETER1,PARAMETER2,..\r\n

COMMAND2,TIME2,PARAMETER3,PARAMETER4,…\r\n

…

[Christoph:] I think it depends on the KIND of state/event you want to transmit. If the state does not change very often (let's say at most once a second), then a text-based protocol will be the right choice. If you had state that changed in real time (the position of a jet plane), then I'd think about something like an RTP stream (Real-time Transport Protocol). Anyway, the first step is to agree THAT the protocol should be standardized.




>>>>>>Will that satisfy the network nazis?

 [Christoph] I always thought the term Network Sensor (NS) was a bad choice :-)



>>>>>>>What type of hoops do we have to jump through to satisfy network ops?  Should we make the protocol look just like an Excel file? 😊😊  Just wait until the first executive gets his Excel file stopped by a network filter 😊.

 [Christoph]: I'd suggest using SIP as the signaling protocol, but this is my personal opinion, endorsed neither by my employer nor by my former employer.


--
This message was sent from my Android mobile phone with GMX Mail.

On 11.03.19, 06:07, John Carlson <yottzumm at gmail.com> wrote:

May I suggest a CSV-like protocol across the https or TLS/SSL network, similar to:



(SSL-initiation)

COMMAND1,TIME1,PARAMETER1,PARAMETER2,..\r\n

COMMAND2,TIME2,PARAMETER3,PARAMETER4,…\r\n

…
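
For what it's worth, reading such lines on the client side could be as simple as the sketch below (the command name is made up; it assumes no commas inside parameters and one record per \r\n-terminated line):

    // Minimal parser for the CSV-like protocol lines sketched above.
    function parseLine(line) {
      var parts = line.replace(/\r?\n$/, "").split(",");
      return {
        command: parts[0],
        time:    parseFloat(parts[1]),
        params:  parts.slice(2)
      };
    }

    // parseLine("SETPOS,12.5,avatar1,4.0,0.0,-2.5\r\n")
    //   -> { command: "SETPOS", time: 12.5, params: ["avatar1", "4.0", "0.0", "-2.5"] }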



Will that satisfy the network nazis?



Also, we might use a STEP/EXPRESS-like format, if we want a hierarchical format.



Can we leverage X3DUOM-style XSD and our tools to create APIs?



What type of hoops do we have to jump through to satisfy network ops?  Should we make the protocol look just like an Excel file? 😊 😊  Just wait until the first executive gets his Excel file stopped by a network filter 😊.



Maybe not, huh?



What types of interactions do we want to support?



John





Sent from Mail for Windows 10



From: Christoph Valentin
Sent: Sunday, March 10, 2019 10:34 PM
To: x3d-public at web3d.org
Subject: Re: [x3d-public] x3d-public Digest, Vol 120, Issue 34



>>>>>>It may be time to come up with some proof of concept implementations of nodes or infrastructure around a scene using web technology before more thought experiments.



>>>>>Andreas,

I don't mean to interject in your conversation, but my earlier comments (below) were not merely "thought experiments". Most of what I've talked about is already implemented and working at officetowers.com. (The server is currently suffering from a database malfunction, so login is not possible until next week, but I can assure you it works. Meanwhile, videos are available if you'd like to see them.)

That world and its associated X3Daemon multiuser server have been ready for a network sensor for years. But before potentially spending hundreds of hours implementing one in a new version based on more modern practices and available technologies, I'd like to see what we can come up with as a consensus.

What exactly is a network sensor? What are its functions? Within what boundaries and parameters? Do we refer to the earlier draft (from when work on it stopped) as a basis? IMO, those are some of the questions that need answers before we can contemplate creating specs for an NS.

Cheerz,
Gina-Lauren



[Christoph]

Hi Gina-Lauren, Hi Andreas,
It's me again, the one who shouldn't disturb the fruitful discussion here, but while waiting for Andreas' response I'd like to add my two cents.

Just assume the members could agree on a test setup and could muster enough manpower to "do it".

A) What could such a test setup look like?

B) Assuming the test setup and the specs would be developed in parallel, what would we need from the specs?

As I said, just my 2c.

Good luck :-)
Christoph

On 11.03.19, 03:38, GL <info at 3dnetproductions.com> wrote:


>It may be time to come up with some proof of concept implementations of nodes or infrastructure around a scene using web technology before more thought experiments.



Andreas,

I don't mean to interject in your conversation, but my earlier comments (below) were not merely "thought experiments". Most of what I've talked about is already implemented and working at officetowers.com. (The server is currently suffering from a database malfunction, so login is not possible until next week, but I can assure you it works. Meanwhile, videos are available if you'd like to see them.)

That world and its associated X3Daemon multiuser server have been ready for a network sensor for years. But before potentially spending hundreds of hours implementing one in a new version based on more modern practices and available technologies, I'd like to see what we can come up with as a consensus.

What exactly is a network sensor? What are its functions? Within what boundaries and parameters? Do we refer to the earlier draft (from when work on it stopped) as a basis? IMO, those are some of the questions that need answers before we can contemplate creating specs for an NS.

Cheerz,
Gina-Lauren


________________________________________________________
* * * Interactive Multimedia - Internet Management * * *
* * Virtual Reality -- Application Programming * *
* 3D Net Productions 3dnetproductions.com *





Hello Andreas, Christoph and all,


>I looked at the network sensor, and BS Collaborate nodes. I think the idea is to explicitly forward all events which need sharing to the server which then distributes those to connected clients.


I'd say that's a fair assessment, though there are also a few more behind-the-scenes events, such as avatar authentication, login/logout, standby/resume status, and other similar network connections that are not necessarily shared in the scene per se.



>This leaves the definition of the shared state to the scene, and therefore requires careful design and code for MU. Perhaps there is a way that the browser can better assist with making a scene MU capable.


X3D is very capable of doing this internally, but I'd definitely want it to have its own script node. It may be possible to pass simple events through the browser, but that solution would impose severe limits on the ability to perform multiuser tasks in general. I mean, it can get convoluted enough as it is, without adding the extra complexity of passing events and network connections back and forth through the browser. I understand it isn't easy, but if it can be done, we probably should do it.




>What is the complete state a client needs when it is admitted to a shared scene ?


From my experience, that will depend very much on the use case, but generally the scene is complete within each client, though avatars and/or objects may be hidden and not necessarily seen at any given time. When the client connects to the shared network, that's pretty much when state is established. There may be exceptions to that; for example, when an avatar has a specific identity and/or status in a game, or accumulated points, that information will typically first be obtained from the server's database before being passed on to the client and in turn to the network.



>Since most fields of most nodes accept input, and can therefore potentially change, complete state probably means just the value of all fields of all root nodes which means all nodes. The complete state may only need to be transferred when a client joins.


Ditto. Yes exactly. See above.
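
For illustration only (all names below are invented), the one-time snapshot sent at join could look something like this, with only changes exchanged afterwards:

    // Hypothetical "complete state on join" message.
    var joinSnapshot = {
      type: "snapshot",
      nodes: [
        { id: "door_7",   fields: { rotation:    [0, 1, 0, 1.2] } },
        { id: "avatar_3", fields: { translation: [5, 0, -2], gesture: "wave" } }
      ]
    };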



>Plus the offsets of avatars from Viewpoints. And the clock. Perhaps other values.


Again yes, so avatars don't end up on top of each other for example.

As far as a clock goes, I have never shared one or felt I needed to thus far. In my implementation, each client simply polls the server at regular intervals of one, two or more seconds (depending on what other subsystems are in place). We cannot allow a client that is experiencing latency, for one reason or another, to hold back all of the other clients. The show must go on, whether a client gets disconnected or happens to get out of sync. It generally seems to have little impact since, at least in some cases, users are not together in one room, so they are in essence unaware that they are out of sync, as the case may be.
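
One simple way to stay robust in that situation (just a sketch of the idea, not something a spec would need to mandate) is to tag each poll response with a server-side sequence number and let each client keep only the newest one it has seen:

    // Latest-wins handling of poll responses: a late or out-of-order response is
    // simply dropped, so a lagging poll never rolls the scene state backwards.
    var lastSeq = 0;

    function applyResponse(response) {   // response = { seq, avatars: [...] }, names invented
      if (response.seq <= lastSeq) {
        return;                          // stale: newer state has already been applied
      }
      lastSeq = response.seq;
      response.avatars.forEach(function (a) {
        moveObject(a.id, a.position);
      });
    }

    function moveObject(id, position) {
      // forward to the Transform node registered for this id
    }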

But I can see how specific applications may need and want a shared clock. In that case, I believe we should take a good look at what the MIDI standard has to offer. MIDI is a well-established and solid standard that has many uses for timing and control, not just music, and is more than likely a very good fit for virtual reality.



>I was thinking about avatar initiated event cascades since the state of everything else should be deterministic and only depend on time. These other things should update themselves. There are exceptions such as scripts which generate random values.


Avatars with gestures, and also all kinds of visible objects, such as doors, cars, elevators (that's a tough one), a cup of coffee, a guitar, clouds, stars and anything else that moves. But various more subtle things, like group chat, items inside pockets or bags, money, ammunition and things of that nature, also need shared state, sometimes with various levels of privacy.



>Avatar initiated event cascades generally start with environment sensors. Not sure if there are other ways. The idea would be to just have to transmit this event cascade to other clients which then can replay it to update their scene instances.


Sure. General user interactions with the world and its interface, such as buttons and inworld devices, often need to be shared as well.


>I also was reading some of the github open-dis documentation. Interesting background, and pretty accurate discussion on coordinate systems. It may be possible to set up somewhere a very simple distributed simulation to join and test clients with.


I remember the DIS-Java-VRML work. That was very good. I will take a closer look at Open DIS as you mentioned. It sounds interesting. As I began to touch on above, each MU implementation will have its own particularities. Personally, I would very much like to see a general NetworkSensor node that would be flexible enough to accommodate different protocols, but also one that has the ability to be adapted to specific use cases and needs.



>Single browser MU: I think an additional touchedByUser field would be required but perhaps there is a way to generalize tracking of Users.



I don't know if it's possible, but I'd like to see the possibility of adding any number of fields to the network node; fields that would be defined by each implementation as needed, as we may not always be able to predict its applications.


I feel there is much more to discuss, and I'd reiterate my suggestion to open a NetworkSensor group or similar, so as to allow for more detailed exchanges and begin to build some basis for VR networking.

Gina-Lauren


P.S. Your work with Python is very interesting and falls in line with some of my own thoughts.

________________________________________________________
* * * Interactive Multimedia - Internet Management * * *
* * Virtual Reality -- Application Programming * *
* 3D Net Productions 3dnetproductions.com *








_______________________________________________
x3d-public mailing list
x3d-public at web3d.org
http://web3d.org/mailman/listinfo/x3d-public_web3d.org





