[x3d-public] remove SF/MFInt32 ?

Michalis Kamburelis michalis.kambi at gmail.com
Fri Apr 6 10:41:56 PDT 2018


2018-04-06 18:49 GMT+02:00 GPU Group <gpugroup at gmail.com>:
> related: SF float32 vs double64: would all double be simpler on x3d specs,
> and internally, up to opengl interface?
>

OpenGL (and GPUs internally) performs calculations using the
precision indicated in the shader code, which is usually float
(32-bit) or half-float (a 16-bit floating-point type, not natively
available on CPUs), and very seldom double (64-bit).
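
To illustrate how the shader source itself selects the precision,
here is a minimal sketch: a GLSL ES fragment shader, embedded as a C
string (the names below are made up, just for illustration). highp
corresponds to 32-bit floats, while mediump may be, and on many
mobile GPUs actually is, implemented as 16-bit half-floats:

  /* A minimal sketch (illustrative names only): GLSL ES source where
     precision qualifiers choose the width the GPU computes with. */
  static const char *fragment_shader_source =
      "#version 300 es\n"
      "precision mediump float;  // default: half precision, fine for colors\n"
      "in highp vec3 v_position; // positions often need the full 32 bits\n"
      "in mediump vec3 v_normal;\n"
      "uniform highp vec3 light_position;\n"
      "out mediump vec4 frag_color;\n"
      "void main() {\n"
      "  mediump vec3 n = normalize(v_normal);\n"
      "  mediump vec3 l = normalize(light_position - v_position);\n"
      "  frag_color = vec4(vec3(max(dot(n, l), 0.0)), 1.0);\n"
      "}\n";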

And you should not use double in shader code unless you really have
a good reason for it (i.e. you have tested that this particular
calculation is too imprecise otherwise). In my experience it matters
for performance: you can degrade the speed of your calculations
noticeably by carelessly using more precision than necessary. I
actually introduced a bug like this once, causing any shape with bump
mapping in Castle Game Engine to kill the rendering speed :) And that
was on desktop.
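
As a sketch of how this can happen (made-up names, GLSL 4.00 syntax):
a single double-typed uniform silently drags a whole expression into
64-bit arithmetic, which most consumer GPUs execute at a small
fraction of their 32-bit float throughput:

  /* A sketch with made-up names: GLSL 4.00 source (as a C string)
     where one double uniform forces the surrounding math into fp64. */
  static const char *fp64_shader_snippet =
      "#version 400\n"
      "uniform double time_seconds; // fp64: everything it touches goes 64-bit\n"
      "uniform vec3 base_color;\n"
      "out vec4 frag_color;\n"
      "void main() {\n"
      "  // Evaluated in double precision because of time_seconds;\n"
      "  // float(time_seconds) up front would keep the math in 32 bits.\n"
      "  float pulse = float(fract(time_seconds * 0.25lf));\n"
      "  frag_color = vec4(base_color * pulse, 1.0);\n"
      "}\n";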

And you should not send a list of double values when you can instead
send a list of floats. The list of doubles will be twice the size,
which matters for a long list, and would matter especially if we also
made all vectors use double precision, to be consistent with scalars.
That is why it is even a reasonable optimization to send your mesh
coordinates as 16-bit integers, when possible (Unity3d can compress
meshes like this, and it gives a significant speedup on Android).
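
A minimal sketch of that 16-bit trick with standard OpenGL 3.3 calls
(the function and type names are made up for illustration; the GL
entry points are assumed to come from a loader such as GLEW):

  #include <GL/glew.h>  /* or any other OpenGL function loader */
  #include <stdint.h>

  typedef struct { int16_t x, y, z; } QuantizedPosition;

  /* Map a coordinate from [box_min, box_max] to the full int16 range. */
  static int16_t quantize_coord(float value, float box_min, float box_max)
  {
    float normalized = (value - box_min) / (box_max - box_min); /* 0..1 */
    if (normalized < 0.0f) normalized = 0.0f;
    if (normalized > 1.0f) normalized = 1.0f;
    return (int16_t)(normalized * 65535.0f - 32768.0f);
  }

  static void upload_quantized_positions(GLuint vbo,
                                         const QuantizedPosition *data,
                                         GLsizei vertex_count)
  {
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER,
                 vertex_count * (GLsizeiptr)sizeof(QuantizedPosition),
                 data, GL_STATIC_DRAW);
    /* normalized = GL_TRUE: the GPU expands int16 back to floats in
       [-1, 1], so the vertex shader still sees a plain vec3 -- at half
       the bandwidth of floats and a quarter of doubles.  The bounding
       box scale/offset must be re-applied, e.g. in the model matrix. */
    glVertexAttribPointer(0, 3, GL_SHORT, GL_TRUE,
                          sizeof(QuantizedPosition), (const void *) 0);
    glEnableVertexAttribArray(0);
  }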

Some applications for the same reason convert floats to half-floats
on the CPU (half-float is not supported natively, but there are
libraries to work with it). If the mesh is static and large, it is
better to run a preprocessing step and deliver it to the GPU as a
list of half-floats, not floats.
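
For illustration, a naive CPU-side conversion could look like this
(made-up names; it ignores NaN, infinity and subnormal inputs, which
a real library or the F16C instructions would handle properly):

  #include <stdint.h>
  #include <string.h>

  /* Pack a 32-bit float into a 16-bit half-float.  Naive: ordinary
     values only, truncating the mantissa instead of rounding. */
  static uint16_t float_to_half(float value)
  {
    uint32_t bits;
    memcpy(&bits, &value, sizeof bits);

    uint32_t sign     = (bits >> 16) & 0x8000u;             /* sign to bit 15 */
    int32_t  exponent = (int32_t)((bits >> 23) & 0xFFu) - 127 + 15; /* rebias */
    uint32_t mantissa = (bits >> 13) & 0x3FFu;              /* top 10 bits */

    if (exponent <= 0)  return (uint16_t)sign;              /* underflow: +/-0 */
    if (exponent >= 31) return (uint16_t)(sign | 0x7C00u);  /* overflow: +/-Inf */
    return (uint16_t)(sign | ((uint32_t)exponent << 10) | mantissa);
  }

On the OpenGL side the attribute is then declared with GL_HALF_FLOAT
as its type, analogous to the GL_SHORT case above.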

See also https://www.khronos.org/opengl/wiki/Data_Type_(GLSL) and
https://en.wikibooks.org/wiki/GLSL_Programming/Vector_and_Matrix_Operations#Precision_Qualifiers .

So, in my experience, forcing "double" everywhere would be a bad idea
for performance on the GPU... Especially since float precision is
enough in most cases.

And as for CPU calculations: for a long time, I had a switchable (by
a compiler directive) implementation of many collision routines in
Castle Game Engine (the CastleVectors and CastleTriangles units). You
could calculate everything using double, if you needed to. My
measurements consistently showed that it causes a big performance
drop, and it is also not necessary for the majority of the code. Most
of the "switchable" feature was even removed around CGE 6.4 (it was
complicating the code for no gain). Of course you can still use
double for some calculations, and we have double-precision vectors in
CGE, but they are only used when really necessary. I think there were
only 2 such cases in a large engine :)
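
The idea of that switch, sketched in C instead of the actual Pascal
(made-up names, not CGE code): a compile-time define picks the scalar
type used by the geometry routines, so both variants can be
benchmarked with the same source:

  #include <math.h>

  #ifdef GEOMETRY_USE_DOUBLE
  typedef double Scalar;
  #define SCALAR_SQRT sqrt
  #else
  typedef float Scalar;
  #define SCALAR_SQRT sqrtf
  #endif

  typedef struct { Scalar x, y, z; } Vector3;

  /* Squared distance avoids the square root when callers only compare. */
  static Scalar vector3_distance_squared(Vector3 a, Vector3 b)
  {
    Scalar dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
  }

  static Scalar vector3_distance(Vector3 a, Vector3 b)
  {
    return SCALAR_SQRT(vector3_distance_squared(a, b));
  }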

So, in my experience, even on the CPU we benefit noticeably from
using floats instead of doubles.

Regards,
Michalis


