[x3d-public] remove SF/MFInt32 ?

Andreas Plesch andreasplesch at gmail.com
Tue Apr 10 05:25:23 PDT 2018


On Tue, Apr 10, 2018, 12:15 AM Michalis Kamburelis <michalis.kambi at gmail.com>
wrote:

> 2018-04-10 4:40 GMT+02:00 Andreas Plesch <andreasplesch at gmail.com>:
> > Another way to approach the question of whether there is an
> > opportunity here, or whether such an idea is just a distraction, is
> > to consider why there is no SFInt16 or SFInt8 type. The thinking at
> > the time may have been that integers are needed for indices, but
> > also that it is best to keep things simple and have only a single
> > integer type, int32 -- while for floats, both 32-bit and 64-bit
> > variants were provided.
>
> Note that it is a reasonable optimization to pack mesh indexes into
> 8-bit or 16-bit integers, instead of full 32-bit integers. Even today,
> with incredibly fast GPUs :)
>

The drive to utilize hardware to the fullest will always be there. The
trade-off is the increased programming effort.
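
For concreteness, here is a minimal WebGL sketch of the 16-bit packing
Michalis describes (a hypothetical illustration; gl is assumed to be an
existing WebGLRenderingContext, and the mesh is assumed small enough
that all indices fit in 16 bits):

    // Two triangles sharing an edge; every index fits in 16 bits.
    const indices = new Uint16Array([0, 1, 2, 2, 1, 3]);

    const indexBuffer = gl.createBuffer();
    gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indexBuffer);
    gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, indices, gl.STATIC_DRAW);

    // The type argument must match the array: gl.UNSIGNED_SHORT for
    // Uint16Array, gl.UNSIGNED_INT for Uint32Array (the latter needs
    // the OES_element_index_uint extension in WebGL 1).
    gl.drawElements(gl.TRIANGLES, indices.length, gl.UNSIGNED_SHORT, 0);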


> Looking at others:
>
> - glTF 2.0 allows providing indexes as 8-bit or 16-bit integers
> (https://github.com/KhronosGroup/glTF/tree/master/specification/2.0
> permits UNSIGNED_SHORT and UNSIGNED_BYTE as alternatives to
> UNSIGNED_INT for indexes).
>
> - And the APIs -- like OpenGL[ES], WebGL, Three.js -- all allow
> indexes to be 8-bit or 16-bit integers, not only 32-bit integers.
>
> - Unity3d explicitly advises using 16-bit indexes for meshes, not 32:
> https://docs.unity3d.com/Manual/FBXImporter-Model.html : "Note: For
> bandwidth and memory storage size reasons, you generally want to keep
> 16 bit indices as default, and only use 32 bit when necessary." Note
> that this is a new setting -- before 2017, Unity3d *forced* the use
> of 16-bit indices, limiting all meshes to 64k chunks, and there was
> not even an option to use 32-bit indices.
>
> - In Castle Game Engine, we use 16-bit indexes for rendering on
> OpenGLES (mobile) and for all 2D user-interface rendering. We keep
> using 32-bit indexes on desktop OpenGL for 3D rendering.
>
> I'm not proposing to introduce MFInt16 to X3D :) But I wanted to note
> that the size of integers still matters for GPUs. While the hardware
> is vastly faster than it was 20 years ago, some of the "old"
> optimizations still matter. The gain from using smaller types still
> matters when you have large mesh data and need to send it to the GPU
> fast. ("coordIndex" expressed with 16-bit ints is 2x smaller than
> with 32-bit ints, "Coordinate.point" expressed with half-floats is 2x
> smaller than with 32-bit floats, and so on.)
>
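
To put numbers on the 2x claim above, a quick sketch (the 100000-entry
coordIndex is just an illustrative size):

    const n = 100000;                     // hypothetical coordIndex length
    const indices32 = new Uint32Array(n); // 4 bytes per index
    const indices16 = new Uint16Array(n); // 2 bytes per index
    console.log(indices32.byteLength);    // 400000 bytes
    console.log(indices16.byteLength);    // 200000 bytes, exactly half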

Well, it actually does sound like an explicit MFInt16 may be a good
idea for mobile targets.

Alternatively, it would be more transparent to have a generic MFInt
type, which would make it obvious that it is up to the browser how many
bits to use: e.g. 16 bits on mobile, perhaps with automatic subdivision
of large meshes into fitting chunks, or 32 bits on desktop OpenGL.
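
As a rough sketch of that idea (purely hypothetical; chooseIndexArray
is not part of any X3D browser API), the browser could pick the
narrowest typed array that holds every index:

    // Pick 16-bit indices when every index fits, else fall back to 32-bit.
    // A real browser targeting 16-bit-only hardware would additionally
    // have to subdivide large meshes into chunks, as suggested above.
    function chooseIndexArray(indices: number[]): Uint16Array | Uint32Array {
      const max = indices.reduce((a, b) => Math.max(a, b), 0);
      return max <= 0xffff ? new Uint16Array(indices)
                           : new Uint32Array(indices);
    }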

Regards,
Andreas


> Regards,
> Michalis
>