<div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr">> 1. Current 'standard' navigation methods, how do implementations differ?</div><div dir="ltr">> 2. Current extensions, for example the TURNTABLE. Can they be put in spec?</div><div dir="ltr"><div dir="ltr"></div><div dir="ltr">In freewrl, press the ? button, then mouse over the menu bar to see the names of the menu buttons.</div>Anything you see that you like, you can put in the spec: no patents, and I'm a member of <a href="http://web3d.org">web3d.org</a>.</div><div dir="ltr">> 3. What methods are usable for environments not mouse-driven , for example haptic devices; and mobile touchscreens,<br>> </div><div>A. touch screen</div><div>I had freewrl running as an Android app, and worked up a desktop configuration to emulate that touchscreen style.</div><div>There are a few things about touch screens:</div><div>1) no mouse over / isOver</div><div>-- for that I put a HOVER mode button, which emulates the desktop mode of mouse x/y motion with no buttons pressed</div><div>2) your finger might obscure the screen where you are trying to touch</div><div>-- for that I made a PEDAL button that lets you offset an in-scene cursor from your touch position</div><div>3) no right mouse button (RMB) to drag to adjust EXAMINE distance</div><div>-- for that I made an explicit DIST button, so a touch drag (in the Y direction) adjusts the distance for EXAMINE and TURNTABLE (I think EXPLORE runs in EXAMINE mode; you click the EXPLORE button twice to put it into recenter mode)</div><div>4) if a Sensor node is filling up your screen and you want to navigate away without touching the sensor, you need a way to temporarily turn off sensors. On the desktop I think the standard is to press the SHIFT key on the keyboard.
For mobile, such as Android, you might not have a keyboard handy without it taking up half your screen, so in freewrl I put a SHIFT button on the menu so that a touch toggles it.</div><div>So in general, any reliance on the RMB, mouse-up drags, or the keyboard should be reworked for touch screens.</div><div><br></div><div>B. headset</div><div>I also did a 'cardboard' stereo mode on Android with freewrl, using just the orientation sensor on the device to change the avatar look angle (but no position sensor). If someone sends me more sophisticated headsets, I'll see what I can get working.</div><div><br></div><div>C. space mouse</div><div>Something I didn't try: a 3D positioning device / space mouse / 3dconnexion, sometimes used for CAD. I haven't thought through how that would work.</div><div><br></div><div>-Doug Sanden</div><div><br></div></div></div></div><br><div class="gmail_quote"><div class="gmail_attr" dir="ltr">On Tue, Feb 4, 2020 at 4:01 PM Christoph Valentin <<a href="mailto:christoph.valentin@gmx.at">christoph.valentin@gmx.at</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;padding-left:1ex;border-left-color:rgb(204,204,204);border-left-width:1px;border-left-style:solid"><div>
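The DIST and PEDAL behaviours described above can be sketched in a few lines of ECMAScript. This is an illustrative sketch only, not freewrl's actual code; the function names, the exponential mapping, and the 0.005 sensitivity constant are all invented for the example.

```javascript
// Sketch: map a vertical touch drag to an EXAMINE-style viewing distance.
// Exponential scaling keeps the zoom rate proportional to the current
// distance, so a drag feels the same whether you are near or far.
// (Hypothetical names; the real freewrl implementation may differ.)
function adjustExamineDistance(distance, dragDeltaY, sensitivity) {
  if (sensitivity === undefined) sensitivity = 0.005;
  // Dragging down (positive deltaY) moves the viewpoint away;
  // dragging up moves it closer.
  return distance * Math.exp(dragDeltaY * sensitivity);
}

// Sketch of the PEDAL idea: the in-scene cursor is the touch point plus a
// user-adjustable offset, so your finger no longer hides what you pick.
function pedalCursor(touch, offset) {
  return { x: touch.x + offset.x, y: touch.y + offset.y };
}
```

With these mappings, a drag of a few hundred pixels changes the distance by a comfortable factor, and the cursor offset can be set once and reused for every subsequent touch.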
<div><div style="padding:0.5em;line-height:1">May I suggest a few ideas? I don't know whether they're new, and I won't be able to contribute to their realization myself.<br>
<br>
1) One should be able to implement custom navigation methods, e.g. by scripting<br>
<br>
2) One should be able to introduce a navigation method (or methods) as an integral part of an avatar<br>
<br>
3) What is an avatar? An avatar consists of one or more dynamic models that represent a virtual identity<br>
<br>
4) A dynamic model is a model that can be loaded or unloaded on demand during the lifetime of a multiuser session<br>
<br>
5) One virtual identity can be related to one or more real identities (users). In the latter case, the virtual identity is called a "crew"<br>
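A scripted custom navigation mode (point 1 above) could be as small as a per-frame orbit update. The sketch below is illustrative only; the function name and fields are invented, not taken from any X3D spec or browser. It shows the kind of math a Script could run to drive a viewpoint in a TURNTABLE-like mode: convert azimuth, elevation, and distance into a position orbiting a center point.

```javascript
// Hypothetical per-frame update for a scripted "turntable" navigation
// mode: spherical coordinates around a center of rotation, converted to
// a Cartesian viewpoint position. Angles are in radians.
function turntablePosition(center, azimuth, elevation, distance) {
  const cosEl = Math.cos(elevation);
  return {
    x: center.x + distance * cosEl * Math.sin(azimuth),
    y: center.y + distance * Math.sin(elevation),
    z: center.z + distance * cosEl * Math.cos(azimuth)
  };
}
```

In an X3D Script node the result would be routed to a Viewpoint's position field each frame; it is written here as plain ECMAScript so the math can be tried anywhere.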
<br>
Kind regards<br>
Christoph<br>
<br>
<br>
<br>
<br>
-- <br>
This message was sent from my Android mobile phone with GMX Mail.</div><div style="padding:0.3em;line-height:1">On 04.02.20, 15:55, KShell Email <<a href="mailto:vmarchetti@kshell.com" target="_blank">vmarchetti@kshell.com</a>> wrote:<blockquote class="gmail_quote" style="margin:0.8ex 0pt 0pt 0.8ex;padding-left:1ex;border-left-color:rgb(204,204,204);border-left-width:1px;border-left-style:solid">
I propose a review of X3D navigation methods for the Feb 7, 2020 X3D Working Group call: 11:00 AM EST, 8:00 AM PST, 16:00 UTC
<br>
<br> On Zoom:
<a href="https://zoom.us/j/148206572" target="_blank">https://zoom.us/j/148206572</a>
<br>
<br> Preliminary outline
<br>
<br> 1. Current 'standard' navigation methods, how do implementations differ?
<br> 2. Current extensions, for example the TURNTABLE. Can they be put in spec?
<br> 3. What methods are usable for environments that are not mouse-driven, for example haptic devices and mobile touchscreens?
<br>
<br>
<br> All suggestions welcome
<br>
<br> Vince Marchetti
<br>
<br>
<br>
<br>
<br>
<br> _______________________________________________
<br> x3d-public mailing list
<br> <a href="mailto:x3d-public@web3d.org" target="_blank">x3d-public@web3d.org</a>
<br>
<a href="http://web3d.org/mailman/listinfo/x3d-public_web3d.org" target="_blank">http://web3d.org/mailman/listinfo/x3d-public_web3d.org</a>
<br>
</blockquote></div></div>
</div>
</blockquote></div>