[x3d-public] x3d-public Digest, Vol 128, Issue 30

Andreas Plesch andreasplesch at gmail.com
Thu Dec 5 07:01:41 PST 2019


Hi Michalis,

thanks for thinking this through.

To me, then, there are really two options concerning normal maps for PointSet:

(A) Allow normal maps for PointSets, and introduce MFVec3f Tangent (and
probably Bitangent) nodes, similar to the existing Normal node. As with
the Normal node, values would be computed automatically for continuous
surfaces using the MikkTSpace method when not provided; a rough sketch
of this kind of tangent computation follows below.
(B) If introducing Tangent nodes is not feasible at this point, do not
allow normal maps for PointSets and rely exclusively on the Normal node
to provide per-vertex normals.
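
For option (A), here is a rough sketch of the kind of automatic
per-vertex tangent computation a browser could perform for a
continuous, triangulated surface. This is deliberately not MikkTSpace
itself, just the common UV-gradient construction, and all names are
mine, for illustration only. The point to note is that it relies on
triangle connectivity, which a raw point set does not provide:

import numpy as np

def compute_tangents(positions, uvs, normals, triangles):
    """positions (N,3), uvs (N,2), unit normals (N,3), triangles (M,3) index triples."""
    tangents = np.zeros_like(positions)
    for i0, i1, i2 in triangles:
        e1, e2 = positions[i1] - positions[i0], positions[i2] - positions[i0]
        d1, d2 = uvs[i1] - uvs[i0], uvs[i2] - uvs[i0]
        det = d1[0] * d2[1] - d2[0] * d1[1]
        if abs(det) < 1e-12:          # degenerate UV mapping, skip this triangle
            continue
        t = (e1 * d2[1] - e2 * d1[1]) / det   # direction of growing U
        tangents[[i0, i1, i2]] += t           # accumulate on shared vertices
    # Gram-Schmidt: make each accumulated tangent orthogonal to its unit normal.
    tangents -= normals * np.sum(tangents * normals, axis=1, keepdims=True)
    lengths = np.linalg.norm(tangents, axis=1, keepdims=True)
    return tangents / np.where(lengths > 0.0, lengths, 1.0)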

Moving on to colors, I do think global color textures (in addition to
sprite textures) for PointSets could be quite useful for coloring
massive point clouds. In many cases, providing a 2D color lookup table
in the form of a texture would be more efficient than providing a color
for each point directly, especially if default, bounding-box-derived
texture coordinates can be used.
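
To make the default texture coordinates concrete, here is a small
sketch (Python/NumPy, function name mine) of bounding-box-derived
coordinates following the IndexedFaceSet convention that comes up later
in this thread: U along the longest bounding-box dimension, V along the
next longest, both scaled by the longest dimension:

import numpy as np

def default_texcoords(points):
    """points: (N,3) array -> (N,2) texture coordinates."""
    lo, hi = points.min(axis=0), points.max(axis=0)
    size = hi - lo
    u_axis, v_axis = np.argsort(size)[::-1][:2]   # longest and next-longest axes
    longest = max(size[u_axis], 1e-12)
    u = (points[:, u_axis] - lo[u_axis]) / longest
    # V uses the same scale factor as U (the IndexedFaceSet rule), so it
    # generally spans less than [0,1].
    v = (points[:, v_axis] - lo[v_axis]) / longest
    return np.column_stack([u, v])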

The use case for point clouds that I am mostly interested in is massive
(millions of points), static clouds, where it is advantageous to
visualize the raw scanning results directly instead of going through
full surface reconstruction and mesh building.

For this use case, efficient level of detail becomes relevant, but I do
not think PointSet or PointProperties play a role in that. An X3D
browser could do some automatic LOD, perhaps by preprocessing (a sketch
of one such approach follows below), or an LOD node can be used if
necessary.
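
As an illustration of the kind of preprocessing I have in mind, a
minimal voxel-grid decimation that produces a few progressively coarser
versions of a cloud, which could then feed an LOD node. Function names
are mine and this is purely a sketch; nothing here is mandated by X3D:

import numpy as np

def voxel_downsample(points, cell_size):
    """Keep roughly one point per cubic cell of edge length cell_size."""
    cells = np.floor(points / cell_size).astype(np.int64)
    # np.unique on the cell indices keeps the first point seen in each cell.
    _, keep = np.unique(cells, axis=0, return_index=True)
    return points[np.sort(keep)]

def build_lod_levels(points, cell_sizes=(0.01, 0.05, 0.25)):
    """Return a list of progressively coarser versions of the cloud."""
    return [voxel_downsample(points, s) for s in cell_sizes]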

There are probably scanners now that can record larger dynamic point
clouds at good frame rates, for animations, but I do not know whether
there are additional requirements for smooth X3D-based playback. The
existing interpolator nodes may suffice? Is something needed for
efficient GPU use? I could imagine that a float 3D texture could be
used internally for vertex positions, where u and v are used to look up
the x, y, z coordinates (x and y perhaps merged into a single channel,
possibly quantized) and w indexes time/frames. This would provide
automatic interpolation between frames. Three float 2D textures, one
per spatial dimension, would be easier, but that would only work for up
to 4k to 16k points, the limit of texture dimensions. Would ideas like
this need more explicit support in the spec? (16k frames at 30 frames/s
is about 533 s, or roughly 9 minutes, which is quite long.)
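
To make the float-texture idea a bit more concrete, here is a rough
CPU-side sketch (Python/NumPy; the layout and names are mine, only for
illustration) of packing per-frame point positions into a 3D float
array and blending between adjacent frames along the time axis, which
is the interpolation a GPU would provide automatically with linear
filtering on such a texture:

import numpy as np

def pack_frames(frames_of_points, width):
    """frames_of_points: list of (N,3) arrays (same N) -> (F,H,W,3) float texture."""
    n = frames_of_points[0].shape[0]
    height = -(-n // width)                       # ceil(n / width)
    tex = np.zeros((len(frames_of_points), height, width, 3), dtype=np.float32)
    for f, pts in enumerate(frames_of_points):
        tex[f].reshape(-1, 3)[:n] = pts           # point index maps to a (u,v) texel
    return tex

def sample_positions(tex, t):
    """Point positions at fractional frame t, linearly blended between frames."""
    f0 = int(np.floor(t)) % tex.shape[0]
    f1 = (f0 + 1) % tex.shape[0]
    a = t - np.floor(t)
    return (1.0 - a) * tex[f0] + a * tex[f1]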

-Andreas

On Thu, Dec 5, 2019 at 5:26 AM Michalis Kamburelis
<michalis.kambi at gmail.com> wrote:
>
> (Adding x3d-public to Cc.)
>
> Indeed, when animating topology (like animating "coord" using
> CoordinateInterpolator) one also has to animate other vectors in local
> space -- like normals (in the "normal" field) or (if we add it) vectors
> in the "tangent" field. In this sense, the burden is indeed just shifted.
>
> Still, it seems easier to animate a set of vectors expressed as
> MFVec3f than to animate a set of vectors expressed in a texture.
> Outside of X3D, others animate vectors too -- Unity's AddBlendShapeFrame
> ( https://docs.unity3d.com/ScriptReference/Mesh.AddBlendShapeFrame.html
> ) takes vertices, normals and tangents as parameters for a new "blend
> shape".
>
> I can't comment on the use case of "shading a point set" -- I mean,
> my knowledge is more about what is common for normal maps on
> polygons. That said, glTF treats normal maps consistently on
> polygons, points and lines -- they are always in tangent space,
> according to the glTF spec. Unity also only allows normal maps in
> tangent space, as far as I know.
>
> Regards,
> Michalis
>
> czw., 5 gru 2019 o 04:48 Andreas Plesch <andreasplesch at gmail.com> napisał(a):
> >
> > I had intended to share this also on x3d-public. Please feel free to do so.
> >
> > As a tangent map would then be necessary for points, would the tangents be in local space?
> >
> > And if this is the case, would an animation then require animating the tangent map?
> >
> > So, in that sense, the burden would be just shifted, would it not?
> >
> > On another note, in my mind the purpose of using a normal map for a dense point cloud would not be to provide a sense of detailed morphology, as it is for a continuous surface. It would simply be a compact way of delivering normal data to each point, for shading purposes.
> >
> > Best, Andreas
> >
> >
> >
> > ---on the phone---
> >
> > On Wed, Dec 4, 2019, 5:07 PM Michalis Kamburelis <michalis.kambi at gmail.com> wrote:
> >>
> >> Normals in normalmaps are usually expressed in tangent space, because
> >> this makes them useful even after mesh deformation (using
> >> CoordinateInterpolator or H-Anim skinning). I think that this argument
> >> also applies to PointSet (as far as I understand, the rationale
> >> for a "lit" PointSet is that it approximates a surface, e.g. when
> >> scanned by a 3D scanner; so we may want to animate it, just like we
> >> animate IndexedFaceSet).
> >>
> >> Requiring normalmaps in another space (like local space) in X3D is
> >> possible, but indeed would be somewhat "exotic".
> >>
> >> - It would be surprising to asset authors, I think, since so much
> >> other software (that creates or renders normalmaps) defaults to
> >> "normalmaps in tangent space".
> >>
> >> - It would also be incompatible with glTF, which says normalmaps are
> >> in tangent space. So we could not use normalmaps from glTF.
> >>
> >> - And we would lose the advantages I mentioned at the beginning:
> >> animating the mesh (with CoordinateInterpolator etc.) would break normalmaps.
> >>
> >> Regards,
> >> Michalis
> >>
> >>
> >> śr., 4 gru 2019 o 15:28 Andreas Plesch <andreasplesch at gmail.com> napisał(a):
> >> >
> >> > Hi Michalis,
> >> >
> >> > thanks for the thoughtful explanations. Agreed on both points. I had
> >> > forgotten about tangent space.
> >> >
> >> > Explicit normals in a Normal node are provided in local space, not
> >> > tangent space. Could we require that normals in normal maps for points
> >> > also be provided in local space, to avoid the requirement for
> >> > tangents? Maybe this is too exotic? I could imagine that point clouds
> >> > from lidar and image reconstruction would actually come with normals
> >> > in world space rather than tangent space, but that would be something
> >> > to learn more about.
> >> >
> >> > Best, -Andreas
> >> >
> >> > On Wed, Dec 4, 2019 at 6:09 AM Michalis Kamburelis
> >> > <michalis.kambi at gmail.com> wrote:
> >> > >
> >> > > sob., 30 lis 2019 o 06:00 Andreas Plesch <andreasplesch at gmail.com> napisał(a):
> >> > > > Yes, agreed. UV mapped from the axis-aligned bounding box would be a good default: U from X and V from Y.
> >> > >
> >> > > To be more precise, I proposed to use the same scheme as
> >> > > IndexedFaceSet (
> >> > > https://www.web3d.org/documents/specifications/19775-1/V3.3/Part01/components/geometry3D.html#IndexedFaceSet
> >> > > ). It means:
> >> > >
> >> > > - U along the longest dimension
> >> > > - V along the next dimension
> >> > >
> >> > > I agree that "U from X, V from Y" would be simpler (maybe also more
> >> > > useful)... but IndexedFaceSet already defines it a bit differently.
> >> > > And being consistent with IndexedFaceSet seems proper here.
> >> > >
> >> > > >
> >> > > > Are tangents really necessary for Phong shading? See
> >> > > > https://en.m.wikipedia.org/wiki/Blinn%E2%80%93Phong_reflection_model
> >> > >
> >> > > Tangents are not really related to Phong shading. Tangents are
> >> > > necessary to interpret the information from the normal map. This, in
> >> > > turn, is necessary to actually use normal maps at all.
> >> > >
> >> > > Explanation: the vectors in a normal map are defined in "tangent space".
> >> > > In this space,
> >> > >
> >> > > - The +Z vector is pointing outward from the polygon. (To be more
> >> > > precise, +Z is pointing in the direction indicated by per-vertex or
> >> > > per-face normal vectors.)
> >> > >
> >> > > - The +X vector should point in the direction where the U texture
> >> > > coordinate grows (and be fixed to be orthogonal to +Z). This is called
> >> > > the "tangent vector".
> >> > >
> >> > > - Similarly, the +Y vector should point in the direction where the V
> >> > > texture coordinate grows (and again be fixed to be orthogonal to +Z).
> >> > > This is called the "bitangent vector".
> >> > >
> >> > > A browser can calculate the tangent and bitangent if your mesh is
> >> > > composed of polygons (using the MikkTSpace algorithm, or CGE
> >> > > TAbstractBumpMappingGenerator.CalculateTangentVectors). But in the case
> >> > > of a point set, you don't know which points should be considered
> >> > > "adjacent", so you cannot calculate it.
> >> > >
> >> > > And without knowing where the "tangent vector" points, we don't know
> >> > > how to interpret a vector in the normalmap like (1, 0, 0).
> >> > >
> >> > > Regards,
> >> > > Michalis
> >> >
> >> >
> >> >
> >> > --
> >> > Andreas Plesch
> >> > Waltham, MA 02453



-- 
Andreas Plesch
Waltham, MA 02453


