[x3d-public] event tracing; HTML5/DOM/X3D parallelization

Don Brutzman brutzman at nps.edu
Wed Nov 9 08:37:20 PST 2016


Thank you for doing this Andreas.  It definitely bears further inspection and improvement.  Attached again for convenience.

On today's weekly X3D Working Group call, I've recommended that we elevate this diagram as an important product for thinking, exploration and specification updates.

It will be really interesting to align our terminology and functionality with DOM and other formal references.

As before, I expect that we will find that our existing work is pretty solid.  It was a major, successful effort some years ago to correlate and integrate the internal event model with the External Authoring Interface (EAI).  Loose coupling is possible, rather than forcing one loop to meet all purposes.

Rendering at 90 frames per second might be nice.  Selecting 90 HTML links per second will not be a user need.


On 11/2/2016 11:28 AM, Andreas Plesch wrote:
> Hi Don,
>
> I took a stab at amending the execution model figure you referred to (http://www.web3d.org/documents/specifications/19775-1/V3.3/Part01/concepts.html#ExecutionModel) with some boxes and arrows to visualize conceptual event flow between the DOM and the X3D browser.
>
> https://docs.google.com/drawings/d/1lfBz686JbCzntBnSM5449oocTrmEIDa8hEyYGTEqW_s/edit?usp=sharing
>
> I made the drawing editable by everyone and would welcome any revisions, redesigns or refinements. This is just a straw man.
>
> For example, there could be an arrow from input events into the DOM (similar to the one for output events). There could also be an arrow from the scene-graph into DOM to symbolize synchronization of internal changes to the scene-graph with the DOM node tree.
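>
> Not in the drawing, but a few lines of pseudo-Javascript might make those two proposed arrows concrete. All names here are hypothetical: queueInputEvent and addFieldListener stand in for whatever a given player actually exposes, and the element ids are made up.
>
>     // (1) input event arriving in the DOM, forwarded into the X3D browser
>     document.getElementById('x3dCanvas').addEventListener('pointermove', function (ev) {
>         x3dBrowser.queueInputEvent(ev.clientX, ev.clientY, ev.timeStamp);
>     });
>
>     // (2) internal scene-graph change mirrored back into the DOM node tree
>     x3dBrowser.addFieldListener('TRANS', 'translation', function (value) {
>         document.getElementById('TRANS').setAttribute('translation', value.join(' '));
>     });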
>
> Also, the specification's Execution Model section mentions the SAI, but it is not shown in the original figure.
>
> Andreas
>
>
> On Mon, Oct 31, 2016 at 6:34 PM, Don Brutzman <brutzman at nps.edu> wrote:
>
>     On 10/31/2016 2:43 PM, Andreas Plesch wrote:
>
>         Here is what I found:
>         [...]
>         It was good for me to get a sense of how Cobweb works anyway. Perhaps it is helpful in general,
>
>
>     Yes thanks.  8)
>
>     Here is a functional description from the X3D abstract specification, along with a generic figure for X3D players.
>
>     ==============================================================================================
>     4.4.8.3 Execution model
>     http://www.web3d.org/documents/specifications/19775-1/V3.3/Part01/concepts.html#ExecutionModel
>
>         Once a sensor or Script has generated an initial event, the event is propagated from the field producing the event along any ROUTEs to other nodes. These other nodes may respond by generating additional events, continuing until all routes have been honoured. This process is called an event cascade. All events generated during a given event cascade are assigned the same timestamp as the initial event, since all are considered to happen instantaneously.
>
>         Some sensors generate multiple events simultaneously. Similarly, it is possible that asynchronously generated events could arrive at the identical time as one or more sensor generated event. In these cases, all events generated are part of the same initial event cascade and each event has the same timestamp. The order in which the events are applied is not considered significant. Conforming X3D worlds shall be able to accommodate simultaneous events in arbitrary order.
>
>         After all events of the initial event cascade are honored, post-event processing performs actions stimulated by the event cascade. The browser shall perform the following sequence of actions during a single timestamp:
>
>             a. Update camera based on currently bound Viewpoint's position and orientation.
>             b. Evaluate input from sensors.
>             c. Evaluate routes.
>             d. If any events were generated from steps b and c, go to step b and continue.
>             e. If particle system evaluation is to take place, evaluate the particle systems here.
>             f. If physics model evaluation is to take place, evaluate the physics model.
>
>         For profiles that support Script nodes and the Scene Access Interface, the above order may have several intermediate steps. Details are described in 29 Scripting and [I.19775-2].
>
>         Figure 4.3 provides a conceptual illustration of the execution model.
>
>         Figure 4.3 — Conceptual execution model
>
>
>             http://www.web3d.org/documents/specifications/19775-1/V3.3/Images/ConceptualExecutionModel.png
>
>         Nodes that contain output events shall produce at most one event per field per timestamp. If a field is connected to another field via a ROUTE, an implementation shall send only one event per ROUTE per timestamp. This also applies to scripts where the rules for determining the appropriate action for sending output events are defined in 29 Scripting component.
>
>     ==============================================================================================
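>
>     As a reading aid only (not from the specification: every function name below is invented, and each helper is assumed to return the number of events it generated), that per-timestamp sequence and its event cascade might be sketched in Javascript roughly as follows.
>
>         // rough sketch of one timestamp of the 4.4.8.3 sequence above; all names invented
>         function advanceFrame(now) {
>             updateCameraFromBoundViewpoint();          // a. camera from bound Viewpoint
>             let generated;
>             do {
>                 generated  = evaluateSensorInput(now); // b. sensor input
>                 generated += evaluateRoutes(now);      // c. routes; every event in the cascade
>             } while (generated > 0);                   // d. carries the same timestamp 'now',
>                                                        //    at most one event per field; loop
>                                                        //    until the cascade is quiet
>             evaluateParticleSystems(now);              // e. if particle evaluation applies
>             evaluatePhysicsModel(now);                 // f. if physics evaluation applies
>         }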
>
>
>     Meanwhile, looking at HTML5 and HTML5.1
>
>     10 Rendering
>     https://www.w3.org/TR/html5/rendering.html#rendering
>
>     10. Rendering
>     https://www.w3.org/TR/html51/rendering.html#rendering
>
>         User agents are not required to present HTML documents in any particular way. However, this section provides a set of suggestions for rendering HTML documents that, if followed, are likely to lead to a user experience that closely resembles the experience intended by the documents' authors.
>
>
>     [plus spec-wonk legalese]
>
>          So as to avoid confusion regarding the normativity of this section, RFC2119 terms have not been used. Instead, the term "expected" is used to indicate behavior that will lead to this experience. For the purposes of conformance for user agents designated as supporting the suggested default rendering, the term "expected" in this section has the same conformance implications as the RFC2119-defined term "must".
>
>
>     and looking at the Document Object Model (DOM) used by HTML5/5.1:
>
>     W3C DOM4; W3C Recommendation 19 November 2015
>     https://www.w3.org/TR/dom/
>
>     3 Events
>     3.1 Introduction to "DOM Events"
>
>         Throughout the web platform events are dispatched to objects to signal an occurrence, such as network activity or user interaction.
>
>
>     Numerous functionality descriptions for DOM event passing are found there, but no timing or sequence diagrams.  Has anyone seen "render loop" diagrams for HTML anywhere else?
>
>     Clarity challenge: it would be quite interesting to come up with a figure or two that *illustrates DOM events interacting with X3D events and rendering a shared web page.*
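>
>     As a strawman answer in code rather than a figure: the DOM calls below (addEventListener, dispatchEvent, CustomEvent) are real, while the X3D-side hooks (onOutputEvent, sendInputEvent) and the element ids are invented placeholders for whatever a player's SAI binding might provide.
>
>         // X3D output event surfaced to the shared page as a DOM event
>         x3dBrowser.onOutputEvent('TOUCH', 'touchTime', function (value, timestamp) {
>             document.getElementById('x3dCanvas').dispatchEvent(
>                 new CustomEvent('x3doutput', {
>                     detail: { value: value, timestamp: timestamp },
>                     bubbles: true
>                 }));
>         });
>
>         // DOM interaction on the same page routed into the X3D event model
>         document.getElementById('startButton').addEventListener('click', function () {
>             x3dBrowser.sendInputEvent('CLOCK', 'startTime', Date.now() / 1000);
>         });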
>
>     Meanwhile again... rendering at 60fps stereo or better is certainly a wonderful goal.  More is better.  Perception of smooth motion occurs at 7-8 Hz, and frame rates above 15 fps are hard to distinguish... whatever works.  However I've seen nothing published that indicates whether such performance actually avoids the physiological and psychological triggers causing simulator sickness.
>
>     In general, the event loop for DOM can be connected to, yet decoupled from, the event loop for X3D.  Such a situation already exists in the X3D architecture and Scene Access Interface (SAI) design, which allows both internal-within-scene-graph and external-in-HTML-browser scripting and event passing.
>
>     Rephrasing, to answer Mike's related concerns regarding frame rate: parallelization allows each loop to proceed at its own pace, while careful, deliberate event exchange allows the two to stay loosely synchronized.  Our current Javascript-based X3D players benefit from the same browser optimizations being pursued for WebGL programs, so X3D player performance can float right along and utilize the same performance-improvement advantages being pursued by everyone else.  Thus headset motion-sensitive rendering performance can be decoupled from web-browser user interactions, whether for HTML5 Canvas or for X3D.
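>
>     A dozen lines of Javascript might say that decoupling more plainly than prose.  Here requestAnimationFrame is the real browser API that paces rendering, while everything on the x3dBrowser object is an invented placeholder.
>
>         const inboundEvents = [];                   // DOM-side interactions, buffered
>
>         document.addEventListener('click', function (ev) {
>             inboundEvents.push({ target: ev.target, time: ev.timeStamp });
>         });                                         // DOM loop: just enqueue and return
>
>         function renderLoop(now) {
>             while (inboundEvents.length > 0) {      // deliberate exchange, once per frame
>                 x3dBrowser.applyExternalEvent(inboundEvents.shift(), now);
>             }
>             x3dBrowser.advanceFrame(now);           // internal cascade + render, own pace
>             requestAnimationFrame(renderLoop);      // browser/headset sets the frame rate
>         }
>         requestAnimationFrame(renderLoop);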
>
>     Definitely worth continued study, illustration with diagrams, confirmation with implementations, and (likely) inclusion in a future X3D v4 specification.
>
>     all the best, Don
>     --
>     Don Brutzman  Naval Postgraduate School, Code USW/Br       brutzman at nps.edu
>     Watkins 270,  MOVES Institute, Monterey CA 93943-5000 USA   +1.831.656.2149
>     X3D graphics, virtual worlds, navy robotics http://faculty.nps.edu/brutzman
>
>
>
>
> --
> Andreas Plesch
> 39 Barbara Rd.
> Waltham, MA 02453


all the best, Don
-- 
Don Brutzman  Naval Postgraduate School, Code USW/Br       brutzman at nps.edu
Watkins 270,  MOVES Institute, Monterey CA 93943-5000 USA   +1.831.656.2149
X3D graphics, virtual worlds, navy robotics http://faculty.nps.edu/brutzman
-------------- next part --------------
A non-text attachment was scrubbed...
Name: DOM Integration.svg
Type: image/svg+xml
Size: 121773 bytes
Desc: not available
URL: <http://web3d.org/pipermail/x3d-public_web3d.org/attachments/20161109/b61c6dca/attachment-0001.svg>

