[x3d-public] screen parallel PlaneSensor, was: LineSensor

Andreas Plesch andreasplesch at gmail.com
Sat Mar 23 06:02:45 PDT 2019


Hi Leonard,

I think your question is about the specific situation when
min/maxPosition is used to restrict dragging with a PlaneSensor in
screen orientation mode (aka PointSensor).

Without restriction, in VR, I would expect such a PlaneSensor to
define the screen-parallel tracking plane by using the view
direction (of the right eye, or the average of both eyes) as the
plane normal, anchored at the point indicated with a pointing
device (a wand) when dragging starts. This tracking plane is then
frozen for as long as dragging occurs. This is all the same as on
the desktop. In VR it is almost inevitable that the view direction
changes after dragging begins, whereas on the desktop it usually
would not, but this does not affect the definition of the tracking
plane. I think this would work reasonably well in VR, but see below
for another idea.
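
A minimal sketch of that rule in plain JavaScript (untested; the
helper and function names are placeholders, not x3dom's actual API):

  const dot = (a, b) => a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
  const sub = (a, b) => [a[0]-b[0], a[1]-b[1], a[2]-b[2]];
  const add = (a, b) => [a[0]+b[0], a[1]+b[1], a[2]+b[2]];
  const scale = (a, s) => [a[0]*s, a[1]*s, a[2]*s];
  const normalize = a => scale(a, 1 / Math.sqrt(dot(a, a)));

  function beginScreenDrag(hitPoint, viewDirection) {
    // Capture the tracking plane once, when dragging starts: anchored
    // at the initial intersection point, with the view direction as
    // the plane normal. The plane stays frozen while dragging.
    return { anchor: hitPoint.slice(), normal: normalize(viewDirection) };
  }

  function trackScreenDrag(plane, rayOrigin, rayDirection) {
    // Standard ray/plane intersection against the frozen plane; later
    // view changes (head movement in VR) do not affect it.
    const denom = dot(plane.normal, rayDirection);
    if (Math.abs(denom) < 1e-6) return null; // ray parallel to plane
    const t = dot(plane.normal, sub(plane.anchor, rayOrigin)) / denom;
    return add(rayOrigin, scale(rayDirection, t));
  }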

So far so good, I think. With the min/maxPosition restriction,
dragging on the desktop would be restricted to a rectangle aligned
with the x and y axes of the canvas, screen, or window, as proposed.
Since there is no screen in VR, I think the question is what the
equivalent rectangle on the tracking plane in VR would be.
The answer is clear if one decomposes the VR stereo view back into
left and right eye views, which are still rendered on screens. As
long as the head is not tilted, the rectangle would follow the
horizontals and verticals of the world. If the head is tilted, the
rectangle would follow the orientation of the head, e.g. the x axis
is the direction between the eyes, and the y axis is the direction
down the nose. Actually, I think this could work well, and may be
what would be expected, but it comes down to testing it (see the
sketch below).
Restricting dragging may actually be more important in VR, since the
view is larger and can be extended by turning the head while
dragging.
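
Concretely, the clamping could work like this (untested sketch,
reusing the vector helpers from the sketch above; headRight and
headUp are assumed to be unit vectors sampled when dragging starts,
just like the tracking plane itself):

  // minPosition > maxPosition means no clamping, as in PlaneSensor
  const clamp = (v, lo, hi) => (hi < lo ? v : Math.min(Math.max(v, lo), hi));

  function clampToHeadRectangle(translation, headRight, headUp,
                                minPos, maxPos) {
    // decompose the unclamped translation into head-aligned x/y ...
    const x = clamp(dot(translation, headRight), minPos[0], maxPos[0]);
    const y = clamp(dot(translation, headUp), minPos[1], maxPos[1]);
    // ... and recompose; the component normal to the tracking plane
    // is zero anyway since the translation lies in that plane
    return add(scale(headRight, x), scale(headUp, y));
  }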

Here is another dragging mode which may feel natural in VR. You grab
something with the wand (laser controller). The thing then becomes
rigidly attached to the laser. Dragging then means using the updated
orientation (and position, for 6DoF) of the controller while keeping
the distance to the thing constant. In effect, what is sensed is a
(moving) sphere around the hand (or, on the desktop, the avatar)
with the radius of the original distance to the initially indicated
point. It could be another planeOrientation mode, "sphere", or
another node, since it may be useful to output a rotation_changed
along with translation_changed which aligns the dragged object with
the tangential plane of the large tracking sphere.
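
In code the mode would be almost trivial (untested sketch, same
placeholder helpers as above):

  function beginSphereDrag(controllerPos, hitPoint) {
    // radius of the (moving) tracking sphere: the distance from the
    // hand to the initially indicated point, held constant afterwards
    const d = sub(hitPoint, controllerPos);
    return Math.sqrt(dot(d, d));
  }

  function trackSphereDrag(radius, controllerPos, controllerDir) {
    // the dragged point follows the controller ray at the fixed
    // radius, so the thing stays rigidly attached to the laser
    const n = normalize(controllerDir);
    const p = add(controllerPos, scale(n, radius));
    // n is also the normal of the tangential plane at p, from which a
    // rotation_changed aligning the dragged object could be derived
    return { translation_changed: p, tangentNormal: n };
  }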

Should we consider such VR-oriented sensors for v4? Maybe not immediately.

Andreas

> Date: Fri, 22 Mar 2019 20:12:21 -0700
> From: Leonard Daly <Leonard.Daly at realism.com>
> To: x3d-public at web3d.org
> Subject: Re: [x3d-public] LineSensor
>
> Andreas and Doug,
>
> I haven't experimented with the fiddle yet, but I was wondering what
> happens when the virtual world is displayed in a head-mounted display
> (HMD). I think it would be OK in the normal upright position (head
> straight up, looking at the horizon).
>
> Now the user tilts her head. Do the "lines" continue to follow the HMD
> coordinate system, or is there something that keeps them aligned with
> the vertical objects in the world? Note that vertical virtual objects
> will need to stay vertical relative to the external coordinate system;
> otherwise she will start to experience significant virtually-induced real
> motion sickness.
>
> Leonard Daly
>
>
> > Hi Doug,
> >
> > based on the bracketed strategy, I added min/maxPosition for screen
> > plane orientation here:
> >
> > https://jsfiddle.net/andreasplesch/mzg9dqyc/201/
> >
> > Dragging the box is now restricted to a rectangle aligned with the
> > screen edges.
> >
> > It feels natural, and I think it would be a good way to use the fields.
> >
> > Thinking about the semantics, another option would be to make
> > planeOrientation an SFVec3f field holding the normal to the tracking
> > plane, 0 0 1 by default. A 0 0 0 value would be special and mean
> > screen parallel.
> >
> > But offering this flexibility may be more confusing than helpful, so I
> > am not convinced.
> >
> > There may be another way, but introducing a new field may still be
> > necessary.
> >
> > Another question is what to do with the axisRotation field, but I think
> > we discovered a while ago that browsers already interpret it
> > differently.
> >
> > Andreas
> >
> >
> > On Fri, Mar 22, 2019 at 4:45 PM Andreas Plesch
> > > <andreasplesch at gmail.com> wrote:
> > >
> > > Hi Doug,
> > >
> > > I took the plunge and added a planeOrientation='screen' field
> > > in this exploration:
> > >
> > > https://jsfiddle.net/andreasplesch/mzg9dqyc/158/
> > >
> > > The idea seems to work as advertised. In addition to dragging the
> > > colored axis arrows, it is now possible to drag the grey box itself in
> > > the screen plane whatever the current view may be.
> > >
> > > One thing I noticed is that the min/maxPosition field would need to
> > > have a slightly different meaning, since the X and Y axes of the local
> > > coordinate system are not useful in this case.
> > >
> > > Did you have a min/maxPosition field for PointSensor?
> > >
> > > One possible meaning would be to limit translation in the screen plane
> > > along the horizontal and vertical axes of the screen. I thought a bit
> > > about that and I think it may be very doable, but I could not quite
> > > finish it. [The problem I am a bit stuck on is how to project the
> > > intersection point from local coordinates to the new coordinate
> > > system. Perhaps back to world, then world to camera space using the
> > > view matrix, then just setting z=0. Presumably this could work for the
> > > translation directly. Then clamp the translation, and project back:
> > > inverse view matrix, then world to local.]
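> > >
> > > Roughly, in code, this strategy might look like the following
> > > (untested sketch; transformPoint, inverse, clamp and sub are
> > > placeholder helpers, not any engine's actual API; clamp leaves the
> > > value unchanged when min > max, as in PlaneSensor):
> > >
> > >   function clampScreenTranslation(startHit, hit, viewMat, localMat,
> > >                                   minPos, maxPos) {
> > >     // local -> world -> camera space
> > >     const toCam = p => transformPoint(viewMat, transformPoint(localMat, p));
> > >     const a = toCam(startHit), b = toCam(hit);
> > >     // both points lie on a screen-parallel plane, so they share z;
> > >     // clamp the translation in camera x and y
> > >     const dx = clamp(b[0] - a[0], minPos[0], maxPos[0]);
> > >     const dy = clamp(b[1] - a[1], minPos[1], maxPos[1]);
> > >     // project back: inverse view matrix, then world to local
> > >     const fromCam = p => transformPoint(inverse(localMat),
> > >                                         transformPoint(inverse(viewMat), p));
> > >     return sub(fromCam([a[0] + dx, a[1] + dy, a[2]]), startHit);
> > >   }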
> > >
> > > Another option is to just ignore the field in this case.
> > >
> > > Cheers,
> > >
> > > -Andreas
> > >
> > >
> > > > On Fri, Mar 22, 2019 at 11:25 AM GPU Group <gpugroup at gmail.com> wrote:
> > > >
> > > > Great ideas, and planeOrientation "screen" sounds intuitive; I like
> > > > it. It also saves a node definition, and since you, Andreas, have
> > > > solved the LineSensor via PlaneSensor, it makes sense to keep going
> > > > that way, packing more into the PlaneSensor node. Keep up the great work.
> > > > -Doug
> > > >
> > > > On Fri, Mar 22, 2019 at 9:12 AM Andreas Plesch
> > > > <andreasplesch at gmail.com> wrote:
> > > >>
> > > >> Hi Doug,
> > > >>
> > > >> I think a PointSensor is a good idea if there is currently no other
> > > >> way to drag a point around in a plane parallel to the screen. Since
> > > >> dragging is still restricted to a plane, perhaps ScreenSensor would
> > > >> be a better name? This is a common and useful dragging method,
> > > >> especially if you have well-defined viewpoints, like a perfect map
> > > >> view.
> > > >>
> > > >> To be more specific, the tracking plane is defined by the point on
> > > >> the sibling geometry first indicated when dragging starts and by the
> > > >> avatar viewing direction as the plane normal. The avatar viewing
> > > >> direction is the current viewpoint orientation, or the orientation of
> > > >> a ray to the center of the screen.
> > > >>
> > > >> autoOffset, I think, works the same by just accumulating offsets for
> > > >> each drag action.
> > > >>
> > > >> And translation_changed and trackpoint_changed, I think, would still
> > > >> be reported in the local coordinate system of the sensor.
> > > >>
> > > >> Would it make sense to have a minPosition and maxPosition for it?
> > > >> I think so.
> > > >>
> > > >> So I think it would be pretty straightforward to implement based on
> > > >> the existing PlaneSensor. Perhaps it should become a "screen" mode
> > > >> for PlaneSensor, since it would have pretty much the same fields?
> > > >>
> > > >> Then it may suffice to add only a little prose to the PlaneSensor
> > > >> spec:
> > > >>
> > > >> "If the mode field is set to "screen", the tracking plane is not the
> > > >> XY plane of the local sensor coordinate system. Instead, the tracking
> > > >> plane is the plane parallel to screen anchored by the initial
> > > >> intersection point."
> > > >>
> > > >> On the other hand, such mode fields are generally not used in any X3D
> > > >> nodes, probably for good reason. Perhaps planeOrientation would be a
> > > >> better name, with values "XY" (default), "screen", "XZ", "YZ".
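> > > >>
> > > >> For the fixed orientations the mapping to a tracking plane normal
> > > >> in local sensor coordinates would simply be (sketch):
> > > >>
> > > >>   const planeNormals = {
> > > >>     "XY": [0, 0, 1],  // default, the current PlaneSensor behavior
> > > >>     "XZ": [0, 1, 0],
> > > >>     "YZ": [1, 0, 0]
> > > >>     // "screen" is view dependent and handled separately
> > > >>   };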
> > > >>
> > > >> Although there is no screen in VR, the vertical plane directly ahead
> > > >> in the viewing direction is still an important plane and would be
> > > >> useful for dragging things around in.
> > > >>
> > > >> More ideas,
> > > >>
> > > >> -Andreas
> > > >>
> > > >> On Fri, Mar 22, 2019 at 9:34 AM GPU Group <gpugroup at gmail.com> wrote:
> > > >> >
> > > >> > Freewrl also does a PointSensor. I think I submitted that to spec
> > > >> > comments, or I should have/meant to, for v4 consideration.
> > > >> > Internally, PointSensor is a PlaneSensor aligned to the screen. So
> > > >> > if you EXAMINE around something that has a PointSensor, you can drag
> > > >> > it in a plane parallel to the screen from the current avatar viewing
> > > >> > angle, and it feels like you can drag it in any direction you want
> > > >> > by changing the avatar pose. That is unlike a PlaneSensor, which
> > > >> > does not follow or move with the avatar.
> > > >> > And confirming what Andreas said: the freewrl code for drag sensors
> > > >> > mentions Max Limper complaining about the same thing with the x3dom
> > > >> > PlaneSensor, that it fails when viewed on edge.
> > > >> > -Doug Sanden
> > > >> >
> > > >> >
> > > >> > On Thu, Mar 21, 2019 at 9:46 PM Andreas Plesch
> > > >> > <andreasplesch at gmail.com> wrote:
> > > >> >>
> > > >> >> I rethought the LineSensor idea and instead opted to add an
> > > >> >> internal line sensor mode to PlaneSensor. The idea is that if the
> > > >> >> x or y components of both minPosition and maxPosition are 0,
> > > >> >> PlaneSensor acts as a LineSensor and is free to use a better
> > > >> >> intersection plane than the default XY plane, which can become
> > > >> >> completely unusable when it is viewed on edge.
> > > >> >>
> > > >> >> In the implementation used in the fiddle below, the intersection
> > > >> >> plane normal in line sensor mode is computed as the cross product
> > > >> >> of the line axis (x or y axis) with the cross product of the line
> > > >> >> axis and the view direction when dragging starts:
> > > >> >>
> > > >> >> https://jsfiddle.net/andreasplesch/mzg9dqyc/111/
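> > > >> >>
> > > >> >> In other words (untested sketch in plain JavaScript; cross and
> > > >> >> normalize are local helpers):
> > > >> >>
> > > >> >>   const cross = (a, b) => [a[1]*b[2] - a[2]*b[1],
> > > >> >>                            a[2]*b[0] - a[0]*b[2],
> > > >> >>                            a[0]*b[1] - a[1]*b[0]];
> > > >> >>   const normalize = a => {
> > > >> >>     const l = Math.hypot(a[0], a[1], a[2]);
> > > >> >>     return [a[0]/l, a[1]/l, a[2]/l];
> > > >> >>   };
> > > >> >>
> > > >> >>   // normal = axis x (axis x view): perpendicular to the line
> > > >> >>   // axis, so the plane contains the free axis, while facing
> > > >> >>   // the viewer as much as possible for robust intersections
> > > >> >>   function lineModePlaneNormal(lineAxis, viewDir) {
> > > >> >>     return normalize(cross(lineAxis, cross(lineAxis, viewDir)));
> > > >> >>   }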
> > > >> >>
> > > >> >> It seems to work well and allows programming of a translation
> > > >> >> widget in X3D without scripts, as shown. An on-edge view of the
> > > >> >> PlaneSensor XY plane still works because the XY plane is not
> > > >> >> actually used.
> > > >> >>
> > > >> >> This approach is more spec-friendly as it does not require a new
> > > >> >> node and just allows the de facto use as a LineSensor to become
> > > >> >> robust. However, since it requires the use of a plane other than
> > > >> >> the XY plane, it may be necessary to add some language. Perhaps:
> > > >> >>
> > > >> >> "This technique provides a way to implement a line sensor that
> > maps
> > > >> >> dragging motion into a translation in one dimension. In this
> > case, the
> > > >> >> tracking plane can be any plane that contains the one
> > unconstrained
> > > >> >> axis (X or Y). A browser is therefore free to choose a
> > tracking plane
> > > >> >> suitable to robustly determine intersections, in particular
> > when the
> > > >> >> XY plane is unusable. In this case, the generated
> > trackpoint_changed
> > > >> >> value becomes implementation dependent."
> > > >> >>
> > > >> >> I would prefer this over a LineSensor node, and it was pretty
> > > >> >> straightforward to add to the x3dom PlaneSensor implementation.
> > > >> >>
> > > >> >> I think it would be possible to check for other equal values
> > > >> >> than 0 and then use the same approach, but I am not sure whether
> > > >> >> it would be useful.
> > > >> >>
> > > >> >> If there is feedback, especially on the suggested prose, I will
> > > >> >> be glad to add it to a potential standard comment.
> > > >> >>
> > > >> >> For example, one could tighten the spec by giving a formula for
> > > >> >> the orientation of a suitable tracking plane in the
> > > >> >> one-dimensional case, but that may be overreach.
> > > >> >>
> > > >> >> Cheers, -Andreas
> > > >> >>
> > > >> >> On Thu, Mar 21, 2019 at 6:06 PM Andreas Plesch
> > > >> >> <andreasplesch at gmail.com> wrote:
> > > >> >> >
> > > >> >> > I now understand the need for LineSensor and may give it a try
> > > >> >> > for x3dom. I know freewrl has it. It may make sense to replicate
> > > >> >> > it.
> > > >> >> >
> > > >> >> > My initial idea would be to use the x axis of the local
> > > >> >> > coordinate system as the line orientation. Or maybe the y axis,
> > > >> >> > since X3D geometries tend to align along y (cylinder, extrusion).
> > > >> >> >
> > > >> >> > Then one can construct an intersection plane which includes the
> > > >> >> > x axis and another direction at a high angle to the viewing
> > > >> >> > direction, to get a clean intersection for the initial point and
> > > >> >> > subsequent points during dragging.
> > > >> >> >
> > > >> >> > Is that how freewrl does it? I think I saw that it may use an
> > > >> >> > additional field for the line orientation, but it may be best to
> > > >> >> > avoid that.
> > > >> >> >
> > > >> >> > Andreas
> > > >> >> >
> > > >> >> > --
> > > >> >> > Andreas Plesch
> > > >> >> > Waltham, MA 02453
> > > >> >>
> > > >> >>
> > > >> >>
> > > >> >> --
> > > >> >> Andreas Plesch
> > > >> >> Waltham, MA 02453
> > > >> >>
> > > >>
> > > >>
> > > >>
> > > >> --
> > > >> Andreas Plesch
> > > >> Waltham, MA 02453
> > >
> > >
> > >
> > > --
> > > Andreas Plesch
> > > Waltham, MA 02453
> >
> >
> >
> > --
> > Andreas Plesch
> > Waltham, MA 02453
> >
>
>
> --
> Leonard Daly
> 3D Systems & Cloud Consultant
> LA ACM SIGGRAPH Past Chair
> President, Daly Realism - Creating the Future



--
Andreas Plesch
Waltham, MA 02453


