[x3d-public] LineSensor
Leonard Daly
Leonard.Daly at realism.com
Fri Mar 22 20:12:21 PDT 2019
Andreas and Doug,
I haven't experimented with the fiddle yet, but I was wondering what
happens when the virtual world is displayed in a head-mounted display
(HMD). I think it would be OK in the normal upright position (head
straight up, looking at the horizon).

Now the user tilts her head. Do the "lines" continue to follow the HMD
coordinate system, or is there something that keeps them aligned with
the vertical objects in the world? Note that vertical virtual objects
will need to stay vertical relative to the external coordinate system;
otherwise she will start to experience significant, very real motion
sickness induced by the virtual world.
Leonard Daly
> Hi Doug,
>
> Based on the bracketed strategy, I added min/maxPosition support for
> screen plane orientation here:
>
> https://jsfiddle.net/andreasplesch/mzg9dqyc/201/
>
> Dragging the box is now restricted to a rectangle aligned with
> the screen edges.
>
> It feels natural, and I think it would be a good way to use the fields.
>
> Thinking about the semantics, another option would be to make
> planeOrientation an SFVec3f holding the normal of the tracking
> plane, 0 0 1 by default. A value of 0 0 0 would be special and
> mean screen-parallel.
>
> But offering this flexibility may be more confusing than helpful, so I
> am not convinced.
>
> There may be another way, but introducing a new field may still be
> necessary.
>
> Another question is what to do with the axisRotation field, but I
> think we discovered a while ago that browsers already interpret it
> differently.
>
> Andreas
>
>
> On Fri, Mar 22, 2019 at 4:45 PM Andreas Plesch
> <andreasplesch at gmail.com> wrote:
> >
> > Hi Doug,
> >
> > I took the plunge and added a planeOrientation='screen' field
> > in this exploration:
> >
> > https://jsfiddle.net/andreasplesch/mzg9dqyc/158/
> >
> > The idea seems to work as advertised. In addition to dragging the
> > colored axis arrows, it is now possible to drag the grey box itself in
> > the screen plane whatever the current view may be.
> >
> > One thing I noticed is that the min/maxPosition fields would need to
> > have a slightly different meaning, since the X and Y axes of the
> > local coordinate system are not useful in this case.
> >
> > Did you have a min/maxPosition field for PointSensor?
> >
> > One possible meaning would be to limit translation in the screen
> > plane along the horizontal and vertical axes of the screen. I
> > thought a bit about that, and I think it may be very doable, but I
> > could not quite finish. [The problem I am a bit stuck on is how to
> > project the intersection point from local coordinates to the new
> > coordinate system. Perhaps back to world, then from world to camera
> > space using the view matrix, then just setting z=0. Presumably this
> > could work for the translation directly. Then clamp the translation
> > and project back: inverse view matrix and local transform.]
> >
> > Another option is to just ignore the field in this case.
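The bracketed project-clamp-project strategy above could be sketched roughly as follows. This is only an illustration of the idea, not x3dom code; the matrix helpers and the `localToView`/`viewToLocal` names are made up for the sketch, and the matrices are assumed to be precomputed column-major 4x4 arrays.

```javascript
// Sketch: take a drag translation in sensor-local coordinates, move it
// into camera space, clamp its screen-horizontal and screen-vertical
// components against min/maxPosition, drop z, and transform back.

function mulMat4Vec4(m, v) {          // column-major 4x4 times vec4
  const r = [0, 0, 0, 0];
  for (let i = 0; i < 4; i++)
    for (let j = 0; j < 4; j++)
      r[i] += m[j * 4 + i] * v[j];
  return r;
}

// Per PlaneSensor semantics, min > max disables clamping in that axis.
const clamp = (x, lo, hi) => (lo > hi ? x : Math.min(Math.max(x, lo), hi));

// localToView: sensor-local -> camera coords; viewToLocal: its inverse.
function clampScreenTranslation(t, localToView, viewToLocal, minPos, maxPos) {
  const tView = mulMat4Vec4(localToView, [t[0], t[1], t[2], 0]); // direction
  const cx = clamp(tView[0], minPos[0], maxPos[0]); // screen-horizontal
  const cy = clamp(tView[1], minPos[1], maxPos[1]); // screen-vertical
  const back = mulMat4Vec4(viewToLocal, [cx, cy, 0, 0]); // z dropped
  return [back[0], back[1], back[2]];
}
```

With identity matrices this reduces to plain 2D clamping of the translation, which makes it easy to sanity-check.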
> >
> > Cheers,
> >
> > -Andreas
> >
> >
> > > On Fri, Mar 22, 2019 at 11:25 AM GPU Group
> > > <gpugroup at gmail.com> wrote:
> > >
> > > Great ideas, and planeOrientation "screen" sounds intuitive; I
> > > like it. It also saves a node definition, and since you, Andreas,
> > > have solved the LineSensor via PlaneSensor, it makes sense to keep
> > > going that way, packing more into the PlaneSensor node. Keep up
> > > the great work.
> > > -Doug
> > >
> > > On Fri, Mar 22, 2019 at 9:12 AM Andreas Plesch
> > > <andreasplesch at gmail.com> wrote:
> > >>
> > >> Hi Doug,
> > >>
> > >> I think a PointSensor is a good idea if there is currently no other
> > >> way to drag a point around in a plane parallel to the screen. Since
> > >> dragging is still restricted to a plane, perhaps ScreenSensor would
> > >> be a better name? This is a common and useful dragging method,
> > >> especially if you have well-defined viewpoints, like a perfect map
> > >> view.
> > >>
> > >> To be more specific, the tracking plane is defined by the point on
> > >> the sibling geometry first indicated when dragging starts and by
> > >> the avatar viewing direction as the plane normal. The avatar
> > >> viewing direction is the current viewpoint orientation, or the
> > >> orientation of a ray to the center of the screen.
> > >>
> > >> autoOffset, I think, works the same way, just accumulating offsets
> > >> for each drag action.
> > >>
> > >> And translation_changed and trackpoint_changed, I think, would still
> > >> be reported in the local coordinate system of the sensor.
> > >>
> > >> Would it make sense to have minPosition and maxPosition fields for
> > >> it? I think so.
> > >>
> > >> So I think it would be pretty straightforward to implement based
> > >> on the existing PlaneSensor. Perhaps it should become a "screen"
> > >> mode for PlaneSensor, since it would have pretty much the same
> > >> fields?
> > >>
> > >> Then it may suffice to add only a little prose to the PlaneSensor
> > >> spec:
> > >>
> > >> "If the mode field is set to "screen", the tracking plane is not the
> > >> XY plane of the local sensor coordinate system. Instead, the tracking
> > >> plane is the plane parallel to screen anchored by the initial
> > >> intersection point."
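The screen-parallel tracking plane proposed above could be sketched as a plane through the initial intersection point with the viewing direction as its normal, intersected by the pointer ray on each drag update. This is a hedged illustration under those assumptions; the function names are invented for the sketch and are not x3dom API.

```javascript
// Sketch of a "screen" tracking plane: passes through the initial
// intersection point (anchor) and uses the unit view direction as its
// normal, so dragging moves the point parallel to the screen.

function dot(a, b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }

// Plane in point-normal form: { normal, d } with dot(normal, p) = d.
function screenPlane(anchor, viewDir) {
  return { normal: viewDir, d: dot(viewDir, anchor) };
}

// Intersect a pointer ray (origin o, unit direction r) with the plane;
// returns null when the ray is parallel to the plane.
function intersectRayPlane(o, r, plane) {
  const denom = dot(plane.normal, r);
  if (Math.abs(denom) < 1e-9) return null;
  const t = (plane.d - dot(plane.normal, o)) / denom;
  return [o[0] + t * r[0], o[1] + t * r[1], o[2] + t * r[2]];
}
```

Because the plane is rebuilt from the current view direction each time a drag starts, it can never be viewed on edge, which is the property the thread is after.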
> > >>
> > >> On the other hand, such mode fields are generally not used in any X3D
> > >> nodes, probably for good reason. Perhaps planeOrientation would be a
> > >> better name, with values "XY" (default), "screen", "XZ", "YZ".
> > >>
> > >> Although there is no screen in VR, the vertical plane directly
> > >> ahead in the viewing direction is still an important plane and
> > >> would be useful to drag things around in.
> > >>
> > >> More ideas,
> > >>
> > >> -Andreas
> > >>
> > >> On Fri, Mar 22, 2019 at 9:34 AM GPU Group
> > >> <gpugroup at gmail.com> wrote:
> > >> >
> > >> > Freewrl also does a PointSensor. I think I submitted that to
> > >> > spec comments for v4 consideration, or I should have/meant to.
> > >> > Internally, PointSensor is a PlaneSensor aligned to the screen.
> > >> > So if you EXAMINE around something that has a PointSensor, you
> > >> > can drag it in a plane parallel to the screen from the current
> > >> > avatar viewing angle, and it feels like you can drag it in any
> > >> > direction you want by changing the avatar pose. That's unlike a
> > >> > PlaneSensor, which does not follow/move with the avatar.
> > >> > And confirming what Andreas said: the freewrl code for drag
> > >> > sensors mentions Max Limper making the same complaint about the
> > >> > x3dom PlaneSensor failing when viewed on edge.
> > >> > -Doug Sanden
> > >> >
> > >> >
> > >> > On Thu, Mar 21, 2019 at 9:46 PM Andreas Plesch
> > >> > <andreasplesch at gmail.com> wrote:
> > >> >>
> > >> >> I rethought the LineSensor idea and instead opted to add an
> > >> >> internal line sensor mode to PlaneSensor. The idea is that if
> > >> >> the x or y components of both minPosition and maxPosition are 0,
> > >> >> PlaneSensor acts as a LineSensor and is free to use a better
> > >> >> intersection plane than the default XY plane, which can become
> > >> >> completely unusable when viewed on edge.
> > >> >>
> > >> >> In the implementation used in the fiddle below, the
> > >> >> intersection plane normal in line sensor mode is computed as the
> > >> >> cross product of the line axis (x or y axis) with the cross
> > >> >> product of the line axis and the view direction when dragging
> > >> >> starts:
> > >> >>
> > >> >> https://jsfiddle.net/andreasplesch/mzg9dqyc/111/
> > >> >>
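The line-sensor mode and the double-cross-product plane normal described above could be sketched as follows. This is a reading of the description in the email, not the actual fiddle code; the function names are invented for illustration. Note that minPosition/maxPosition are 2-component (SFVec2f) values.

```javascript
// Sketch: detect line-sensor mode and build the tracking-plane normal
//   n = axis x (axis x viewDir)
// which is perpendicular to the free axis (so the plane contains it)
// and faces the viewer as much as possible.

function cross(a, b) {
  return [a[1] * b[2] - a[2] * b[1],
          a[2] * b[0] - a[0] * b[2],
          a[0] * b[1] - a[1] * b[0]];
}

// If the x (or y) components of both minPosition and maxPosition are 0,
// translation is pinned to 0 in that dimension, leaving the other axis
// free. Returns the free axis, or null when not in line-sensor mode.
function lineAxis(minPos, maxPos) {
  if (minPos[0] === 0 && maxPos[0] === 0) return [0, 1, 0]; // slide along y
  if (minPos[1] === 0 && maxPos[1] === 0) return [1, 0, 0]; // slide along x
  return null;
}

function trackingPlaneNormal(axis, viewDir) {
  return cross(axis, cross(axis, viewDir));
}
```

For example, with the x axis free and the default view direction along -z, the constructed normal comes out as the z axis, i.e. the usual XY tracking plane; as the viewer orbits, the plane rotates about the free axis so it is never hit on edge.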
> > >> >> It seems to work well and allows programming of a translation
> > >> >> widget in X3D without scripts, as shown. An on-edge view of the
> > >> >> PlaneSensor XY plane still works because the XY plane is not
> > >> >> actually used.
> > >> >>
> > >> >> This approach is more spec friendly as it does not require a
> > >> >> new node and just allows the de facto use as a LineSensor to
> > >> >> become robust. However, since it requires use of a plane other
> > >> >> than the XY plane, it may be necessary to add some language.
> > >> >> Perhaps:
> > >> >>
> > >> >> "This technique provides a way to implement a line sensor that
> > >> >> maps dragging motion into a translation in one dimension. In
> > >> >> this case, the tracking plane can be any plane that contains the
> > >> >> one unconstrained axis (X or Y). A browser is therefore free to
> > >> >> choose a tracking plane suitable to robustly determine
> > >> >> intersections, in particular when the XY plane is unusable. In
> > >> >> this case, the generated trackpoint_changed value becomes
> > >> >> implementation dependent."
> > >> >>
> > >> >> I would prefer this over a LineSensor node, and it was pretty
> > >> >> straightforward to add to the x3dom PlaneSensor implementation.
> > >> >>
> > >> >> I think it would be possible to check for other equal values
> > >> >> besides 0 and then use the same approach, but I am not sure if
> > >> >> it would be useful.
> > >> >>
> > >> >> If there is feedback, especially on the suggested prose, I will
> > >> >> be glad to add it to a potential standards comment.
> > >> >>
> > >> >> For example, one could tighten the spec by giving a formula for
> > >> >> the orientation of a suitable tracking plane in the
> > >> >> one-dimensional case, but that may be overreach.
> > >> >>
> > >> >> Cheers, -Andreas
> > >> >>
> > >> >> On Thu, Mar 21, 2019 at 6:06 PM Andreas Plesch
> > >> >> <andreasplesch at gmail.com> wrote:
> > >> >> >
> > >> >> > I now understand the need for LineSensor and may give it a
> > >> >> > try for x3dom. I know freewrl has it. It may make sense to
> > >> >> > replicate it.
> > >> >> >
> > >> >> > My initial idea would be to use the x axis of the local
> > >> >> > coordinate system as the line orientation. Or maybe the y
> > >> >> > axis, since X3D geometries tend to align along y (Cylinder,
> > >> >> > Extrusion).
> > >> >> >
> > >> >> > Then one can construct an intersection plane which includes
> > >> >> > the x axis and another direction at a high angle to the
> > >> >> > viewing direction, to get a clean intersection for the initial
> > >> >> > point and subsequent points during dragging.
> > >> >> >
> > >> >> > Is that how freewrl does it? I think I saw that it may use an
> > >> >> > additional field for the line orientation, but it may be best
> > >> >> > to avoid it.
> > >> >> >
> > >> >> > Andreas
> > >> >> >
> > >> >> > --
> > >> >> > Andreas Plesch
> > >> >> > Waltham, MA 02453
> > >> >>
> > >> >>
> > >> >>
> > >> >> --
> > >> >> Andreas Plesch
> > >> >> Waltham, MA 02453
> > >> >>
> > >> >> _______________________________________________
> > >> >> x3d-public mailing list
> > >> >> x3d-public at web3d.org
> > >> >> http://web3d.org/mailman/listinfo/x3d-public_web3d.org
> > >>
> > >>
> > >>
> > >> --
> > >> Andreas Plesch
> > >> Waltham, MA 02453
> >
> >
> >
> > --
> > Andreas Plesch
> > Waltham, MA 02453
>
>
>
> --
> Andreas Plesch
> Waltham, MA 02453
>
> _______________________________________________
> x3d-public mailing list
> x3d-public at web3d.org
> http://web3d.org/mailman/listinfo/x3d-public_web3d.org
--
*Leonard Daly*
3D Systems & Cloud Consultant
LA ACM SIGGRAPH Past Chair
President, Daly Realism - /Creating the Future/