[x3d-public] ScreenSensor, was: LineSensor

Leonard Daly Leonard.Daly at realism.com
Sat Apr 6 21:05:07 PDT 2019


Andreas,

Sorry for not responding sooner on this thread -- it has been a busy 
week for me.

I am concerned about the use of a screen-aligned sensor when displaying 
3D content in a browser. Other environments may be useful and 
interesting, but my focus right now is browser-based displays.

A screen-aligned sensor (either line or rectangular) occupies a 
rectangular region (a degenerate rectangle is a line) on the screen. 
This is the same as an HTML5 div tag that is overlaid on the screen. 
HTML will generate various events when the pointing device interacts 
with that region. These events differ for cursor (mouse) and touch 
(finger or otherwise) input [for example, there is no mouseover when 
there is only a touch-sensitive screen without a cursor].
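The mouse/touch split described above can be sketched with a small 
normalizer. This is an illustrative helper, not part of any proposal; 
the name getPointerPosition is hypothetical, but the shape difference 
(touch events carry a `touches` list, mouse events expose clientX/clientY 
directly) is how the two HTML event families actually differ:

```javascript
// Normalize a mouse or touch event to a single {x, y} in client pixels.
// Touch events carry a `touches` list of contact points (and have no
// hover/mouseover concept), while mouse events expose clientX/clientY
// directly on the event object.
function getPointerPosition(ev) {
  if (ev.touches && ev.touches.length > 0) {
    // Touch: use the first active contact point.
    return { x: ev.touches[0].clientX, y: ev.touches[0].clientY };
  }
  // Mouse: coordinates are on the event itself.
  return { x: ev.clientX, y: ev.clientY };
}
```

An application handling both input styles would attach this to both 
mousemove and touchmove listeners on the region.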

The HTML events generated by the interaction all use screen coordinates 
(pixels, some of which are relative to the div origin). The X3D events 
are created using geometry-fractional coordinates. An X3D sensor exists 
in 3D space (sort of, as I understand it) but is not subject to rotation 
or skewing. If I am wrong, then how does a screen-aligned sensor differ 
from an existing TouchSensor node? This allows the X3D screen-aligned 
sensor to change in size as the camera changes position, a feature that 
would need to be programmed into an HTML div for the same effect. So the 
X3D screen-aligned sensor exists in 3D space that is displayed on the 
canvas, whereas the HTML div exists in an HTML layer positioned relative 
(in some manner) to the document and may be in front of or behind the 
canvas.
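The coordinate-system gap can be made concrete. HTML hands a handler 
page-pixel coordinates, while an X3D sensor reports fractions of the 
sensed geometry. A minimal conversion sketch, assuming the sensed region 
is an axis-aligned rectangle on screen (in a real page the rect would 
come from element.getBoundingClientRect(); here it is a plain object so 
the function is self-contained):

```javascript
// Convert a pixel position (HTML convention) into fractional
// coordinates in [0, 1] relative to a rectangular region
// (the X3D geometry-fractional convention).
// `rect` mimics {left, top, width, height} of getBoundingClientRect().
function pixelsToFraction(px, py, rect) {
  return {
    u: (px - rect.left) / rect.width,  // 0 at left edge, 1 at right edge
    v: (py - rect.top) / rect.height   // 0 at top edge, 1 at bottom edge
  };
}
```

Note that when the camera moves and the on-screen rectangle changes 
size, the same pixel offset maps to a different fraction, which is the 
behavior the HTML div would have to reimplement.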

In any case, there are two constructs that take the same sort of user 
interaction but produce significantly different results. That will lead 
to confusion on the part of X3D developers and perhaps users. It is not 
clear what the official solution will be. The Wiki page at 
http://www.web3d.org/wiki/index.php/X3D_version_4.0_Development 
discusses X3D/DOM integration and many issues related to it, but it does 
not appear to contain a definitive, explicit statement of capabilities 
with specific regard to DOM integration.

Consider the experience of a virtual world (to fit into X3D V4.0, not 
V4.1) that runs in a browser. There is content external to the 3D 
display region that needs to be updated from the 3D content and 
vice versa. The experience has a heads-up display (HUD) containing 
important information and features about the experience. It is generally 
easier (for the developer) and produces a cleaner-looking interface if 
the HUD is done in 2D; however, it could be done in 3D. One of the 
features is a 2D picker (aka TouchSensor). There are now (with the 
screen-aligned sensor) two different means of interacting with the user, 
each of which does things a little differently. As a developer I would 
wonder which to use:

 1. HTML5 div (easy to fix to the display)
 2. X3D TouchSensor (because it's attached to the HUD that is fixed in
    the display)
 3. X3D screen-aligned sensor (because it's better somehow?)

As a user I might wonder why the interaction does not support 
multi-touch. As a maintainer, I would need to figure out which 
sensor/events were the source/primary so that I could perform updates, 
etc.

I am having a very hard time figuring out what this sensor provides that 
is not available through other means. For example, doesn't a TouchSensor 
on a Billboard (with the right field values) stay screen aligned?
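For reference, the Billboard idea reads roughly like this in X3D. With 
axisOfRotation='0 0 0' the Billboard rotates to face the viewer, so the 
TouchSensor's sibling geometry stays viewer-aligned. This is a sketch of 
the alternative being asked about, not a tested scene:

```xml
<Billboard axisOfRotation='0 0 0'>
  <TouchSensor DEF='ScreenTouch'/>
  <Shape>
    <Rectangle2D size='2 1'/>
    <Appearance><Material/></Appearance>
  </Shape>
</Billboard>
```

The region would still scale on screen as the camera distance changes, 
which may or may not be the behavior the new sensor is meant to avoid.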


Leonard Daly





> Hi Leonard,
>
> I kept puzzling over your comment. The main function of the
> dragsensors is to translate from pixel/screen units to world
> units/coordinate systems which requires some kind of 2d to 3d
> translation. Why would you not count this difference? This
> translation is quite difficult to do and benefits from consistency
> across scenes and systems. So the screen-aligned PlaneSensor is not
> about the event system and in fact does not introduce any new events.
> So I am not sure how the question about events applies? Is it about
> the DragSensors in general?
>
> -Andreas
>
> On Thu, Apr 4, 2019 at 9:59 AM Leonard Daly <Leonard.Daly at realism.com> wrote:
>> [Removed all but the most recent post in thread. Previous post at http://web3d.org/mailman/private/x3d-public_web3d.org/2019-April/010460.html]
>>
>> Andreas,
>>
>> Reading over the explainer (github link), this looks a lot like an HTML mousedown/mousemove or touchstart/touchmove set of events. So (when running in a browser), is there any substantial difference between the two? I am not counting differences in the output coordinate system (pixels vs. scaled units). If no, then why introduce a new node and events for something that already exists. If yes, please explain what problem this solves that HTML's event system does not.
>>
>> Independent of the above, how would this work in a stereo immersive environment (headset)?
>>
>> Leonard Daly
>>
>>
>>
>> I am working on a spec. comment on screen aligned PlaneSensor
>> functionality (aka PointSensor in freeWrl or SpaceSensor in Instant)
>> and developed an example, a description and suggested spec. language
>> here:
>>
>> https://bl.ocks.org/andreasplesch/f196e98c86bc9dc9686a7e5b4acede7d
>> https://gist.github.com/andreasplesch/f196e98c86bc9dc9686a7e5b4acede7d
>>
>> This is an important but currently not available feature which cannot
>> be implemented with a Proto, at least I cannot think of a way to do so.
>>
>> Any feedback will be welcome. Actually, I have a couple comments
>> myself in a follow-up message.
>>
>> -Andreas
>>
>>
>>
>> --
>> Leonard Daly
>> 3D Systems & Cloud Consultant
>> LA ACM SIGGRAPH Past Chair
>> President, Daly Realism - Creating the Future
>
>

-- 
*Leonard Daly*
3D Systems & Cloud Consultant
LA ACM SIGGRAPH Past Chair
President, Daly Realism - /Creating the Future/
