[x3d-public] event tracing; HTML5/DOM/X3D parallelization

Andreas Plesch andreasplesch at gmail.com
Mon Oct 31 21:08:02 PDT 2016


On Oct 31, 2016 6:34 PM, "Don Brutzman" <brutzman at nps.edu> wrote:
>
> On 10/31/2016 2:43 PM, Andreas Plesch wrote:
>>
>> Here is what I found:
>> [...]
>> It was good for me to get a sense of how cobweb works anyways. Perhaps
it is helpful in general,
>
>
> Yes thanks.  8)
>
> Here is a functional description from the X3D abstract specification,
along with a generic figure for X3D players.
>
>
==============================================================================================
> 4.4.8.3 Execution model
>
http://www.web3d.org/documents/specifications/19775-1/V3.3/Part01/concepts.html#ExecutionModel
>
>> Once a sensor or Script has generated an initial event, the event is
propagated from the field producing the event along any ROUTEs to other
nodes. These other nodes may respond by generating additional events,
continuing until all routes have been honoured. This process is called an
event cascade. All events generated during a given event cascade are
assigned the same timestamp as the initial event, since all are considered
to happen instantaneously.
>>
>> Some sensors generate multiple events simultaneously. Similarly, it is
possible that asynchronously generated events could arrive at the identical
time as one or more sensor generated event. In these cases, all events
generated are part of the same initial event cascade and each event has the
same timestamp. The order in which the events are applied is not considered
significant. Conforming X3D worlds shall be able to accommodate
simultaneous events in arbitrary order.
>>
>> After all events of the initial event cascade are honored, post-event
processing performs actions stimulated by the event cascade. The browser
shall perform the following sequence of actions during a single timestamp:
>>
>>     a. Update camera based on currently bound Viewpoint's position and
orientation.
>>     b. Evaluate input from sensors.
>>     c. Evaluate routes.
>>     d. If any events were generated from steps b and c, go to step b and
continue.
>>     e. If particle system evaluation is to take place, evaluate the
particle systems here.
>>     f. If physics model evaluation is to take place, evaluate the physics
model.
>>
>> For profiles that support Script nodes and the Scene Access Interface,
the above order may have several intermediate steps. Details are described
in 29 Scripting and [I.19775-2].
>>
>> Figure 4.3 provides a conceptual illustration of the execution model.
>>
>> Conceptual execution model
>>
>> Figure 4.3 — Conceptual execution model
>
>
>
http://www.web3d.org/documents/specifications/19775-1/V3.3/Images/ConceptualExecutionModel.png
>
>> Nodes that contain output events shall produce at most one event per
field per timestamp. If a field is connected to another field via a ROUTE,
an implementation shall send only one event per ROUTE per timestamp. This
also applies to scripts where the rules for determining the appropriate
action for sending output events are defined in 29 Scripting component.
>
>
==============================================================================================
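The cascade described in 4.4.8.3 can be sketched as a toy loop: starting from
an initial event, follow ROUTEs until no new events are produced, stamping
every event with the initial time, and letting each ROUTE fire at most once
per timestamp. The route and field names below are invented for illustration;
a real player routes between typed node fields.

```javascript
// Toy simulation of the 4.4.8.3 event cascade. Route and field names are
// invented; a real player routes between typed node fields.
function runCascade(routes, initialEvent) {
  const timestamp = initialEvent.time; // all cascade events share this timestamp
  const queue = [initialEvent];
  const delivered = [];
  const fired = new Set(); // a ROUTE sends at most one event per timestamp
  while (queue.length > 0) {
    const ev = queue.shift();
    delivered.push({ field: ev.field, time: timestamp });
    routes.forEach((route, i) => {
      if (route.from === ev.field && !fired.has(i)) {
        fired.add(i);
        queue.push({ field: route.to, time: timestamp }); // propagate along ROUTE
      }
    });
  }
  return delivered;
}

const routes = [
  { from: 'sensor.touchTime', to: 'script.startTime' },
  { from: 'script.startTime', to: 'timer.startTime' }
];
const out = runCascade(routes, { field: 'sensor.touchTime', time: 1.0 });
// every event in `out` carries the initial timestamp 1.0
```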
>
>
> Meanwhile, looking at HTML5 and HTML5.1
>
> 10 Rendering
> https://www.w3.org/TR/html5/rendering.html#rendering
>
> 10. Rendering
> https://www.w3.org/TR/html51/rendering.html#rendering
>
>> User agents are not required to present HTML documents in any particular
way. However, this section provides a set of suggestions for rendering HTML
documents that, if followed, are likely to lead to a user experience that
closely resembles the experience intended by the documents' authors.
>
>
> [plus spec-wonk legalese]
>
>>  So as to avoid confusion regarding the normativity of this section,
RFC2119 terms have not been used. Instead, the term "expected" is used to
indicate behavior that will lead to this experience. For the purposes of
conformance for user agents designated as supporting the suggested default
rendering, the term "expected" in this section has the same conformance
implications as the RFC2119-defined term "must".
>
>
> and looking at the Document Object Model (DOM) used by HTML5/5.1:
>
> W3C DOM4; W3C Recommendation 19 November 2015
> https://www.w3.org/TR/dom/
>
> 3 Events
> 3.1 Introduction to "DOM Events"
>>
>> Throughout the web platform events are dispatched to objects to signal
an occurrence, such as network activity or user interaction.
>
>
> Numerous functionality descriptions for DOM event passing are found
there.  But: no timing or sequence diagrams found there.  Has anyone seen
"render loop" diagrams for HTML anywhere else?

The closest may be this:

https://www.w3.org/TR/animation-timing/

https://www.w3.org/TR/html51/webappapis.html#animation-frames

The idea is 'render on demand' in regular but adjustable intervals.

Events do not have a predetermined sequence relative to each other; there is
no built-in cascade. Listeners receive events in the order in which they were
generated, so an event listener has to manage its own sequencing and
inter-event dependencies. For example, it is common to wrap a whole script,
including its listeners, in the handler of a 'load' event listener to ensure
that the script has access to the completely loaded document.
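That 'load' wrapping might look like the following minimal sketch; a browser
environment is assumed, and the element id 'scene' is a placeholder.

```javascript
// Sketch of wrapping initialization in a 'load' handler (browser
// environment assumed; the element id 'scene' is a placeholder).
function init() {
  // The document is fully loaded here, so element lookups are safe.
  const container = document.getElementById('scene');
  // ... attach further event listeners to `container` ...
}
// window.addEventListener('load', init); // register in a browser
```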

So the concept of a render loop does not directly apply, but the most common
implementation of one is requestAnimationFrame(callback), where the callback
itself calls requestAnimationFrame again to schedule the next frame.
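A minimal sketch of that self-scheduling pattern; a browser environment is
assumed, and drawFrame is a placeholder for the actual redraw.

```javascript
// Self-scheduling render loop: the callback redraws, then requests the
// next frame. drawFrame is a placeholder for the real canvas repaint.
function drawFrame(timestamp) {
  // placeholder: repaint the WebGL canvas for this frame
}
function renderLoop(timestamp) {
  drawFrame(timestamp);
  requestAnimationFrame(renderLoop); // schedule the next frame on demand
}
// requestAnimationFrame(renderLoop); // start the loop (browser only)
```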

>
> Clarity challenge: it would be quite interesting to come up with figure
or two that *illustrates DOM events interacting with X3D events and
rendering a shared web page.*
>

I can give it a try for cobweb but there may be many ways.

Let's see.

DOM events only produce X3D events when a DOM event handler calls a SAI
function, either with the help of cobweb_dom or directly. [ Cobweb_dom only
translates DOM mutations into SAI function calls. ]

The SAI function may then modify an X3D node field or create a node, thereby
causing a field_changed event. I believe for cobweb this event is included
in the current cascade, which may have other events being processed. After
this cascade is completed, the WebGL canvas is redrawn, fulfilling the
current request for an animation frame by the web browser (user agent).
Then the web browser may do other things, such as repainting other elements
on the page if necessary, until it decides it is time for another frame
request, say when another 30 ms have passed.
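As a hedged sketch of that direction, a DOM handler reaching into the scene
might look like the following. The entry points (X3D.getBrowser,
currentScene, getNamedNode) follow the abstract SAI naming and the DEF name
'MAT' is invented, so cobweb's actual API may differ.

```javascript
// Hypothetical DOM -> X3D path: a click handler changes a DEF'd Material's
// diffuseColor through SAI-style calls, which raises a field_changed event
// that joins the current cascade. All names below are assumptions.
function wireRedButton(button) {
  button.addEventListener('click', function () {
    const browser = X3D.getBrowser();                          // assumed SAI entry point
    const material = browser.currentScene.getNamedNode('MAT'); // DEF name invented
    material.diffuseColor = new X3D.SFColor(1, 0, 0);          // fires field_changed
  });
}
```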

Conversely, X3D events only produce DOM events when a callback is added to
the event which actually produces and dispatches a DOM event, using the SAI
addFieldCallback function. Cobweb_dom does that automatically for all
output events. I believe dispatching is essentially instantaneous, e.g. the
DOM event can be caught quickly by installed handlers. If there are no
handlers, nothing happens and the DOM event ends its life cycle without
having accomplished much. Apparently there is not a significant penalty for
useless dispatching of events, because cobweb_dom does that a lot.
Handlers can then do anything with or without the payload of the event,
which includes the value of the X3D event. If the handler uses SAI
functions, directly or via cobweb_dom, to access the scene, the resulting
X3D events will be included in the next cascade.
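The reverse direction with addFieldCallback (the SAI function mentioned
above) could be sketched as follows; the callback key, the touchTime field,
and the DOM event name are assumptions for illustration.

```javascript
// Hypothetical X3D -> DOM path: forward a TouchSensor's touchTime output
// event to the DOM as a CustomEvent carrying the X3D value. The callback
// key and event name are invented for illustration.
function forwardTouchTime(touchSensorNode, targetElement) {
  touchSensorNode.touchTime.addFieldCallback('toDOM', function (value) {
    targetElement.dispatchEvent(
      new CustomEvent('touchTime', { detail: { value: value } })
    );
  });
}
```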

There is not really a need to share a web page since cobweb is just another
DOM script. In theory, other scripts could grab the same webgl canvas and
do things on it. There is just the web page.

X3dom probably works similarly although details differ significantly.

> Meanwhile again... rendering at 60fps stereo or better is certainly a
wonderful goal. More is better.  Perception of smooth motion occurs at 7-8
Hz, framerates above 15fps are hard to distinguish... whatever works.
However I've seen nothing published that indicates whether such performance
actually avoids the physiological and psychological triggers causing
simulator sickness.

I would suspect that there are systematic experiments showing that low
latency and high fps help to mitigate, but cannot remove, the disconnect
between visual and bodily sensation.

>
> In general, the event loop for DOM can be connected yet decoupled from
the event loop for X3D.  Such a situation exists already in the X3D
architecture and Scene Access Interface (SAI) design that allows both
internal-within-scene-graph and external-in-HTML-browser scripting and
event passing.
>
> Rephrase, answering Mike's related concerns regarding frame rate:
parallelization allows each to proceed at their own pace, carefully
deliberate event exchange allows each to stay loosely synchronized.  Our
current Javascript-based X3D players take advantage of the same features
being optimized for WebGL programs.
Thus X3D player performance can float right along and utilize the same
browser performance-improvement advantages being pursued by everyone else.
Thus headset motion-sensitive rendering performance can be decoupled from
web-browser user interactions, for HTML5 Canvas or for X3D.
>
> Definitely worth continued study, illustration with diagrams,
confirmation with implementations, and (likely) description inclusion in
future X3D v4 specification.
>
> all the best, Don
> --
> Don Brutzman  Naval Postgraduate School, Code USW/Br
brutzman at nps.edu
> Watkins 270,  MOVES Institute, Monterey CA 93943-5000 USA
 +1.831.656.2149
> X3D graphics, virtual worlds, navy robotics
http://faculty.nps.edu/brutzman