[x3d-public] X3D features missing, but desired, by a game engine

Michalis Kamburelis michalis.kambi at gmail.com
Fri Apr 14 11:40:53 PDT 2017


2017-04-14 15:26 GMT+02:00 doug sanden <highaspirations at hotmail.com>:
> I had a theory/hypothesis about how the spec's tables 17.2 and 17.3 could end up a bit awkward:
> back in the old days, when the internet was just starting and bandwidth was low (remember
> dial-up modems? and when 19.2 kbaud was hot - a big improvement over 9600?),
> image files would be the last thing to show up, if ever.
> So the material color was what it would fall back to while waiting. In other words,
> the color would be a placeholder for the image texture, in case the image file
> never showed up. So 'replace' made sense - the image and the color were doing the same job.
> Fancy blending modes were the last thing they were thinking of.

This makes sense, indeed.

My other theory was that someone thought (back in the old days) that
the "texture * material" multiplication had a non-zero cost.

In any case, neither of these arguments is valid anymore, I think.
*Everyone* right now multiplies "texture * material color" by default
(the Blender default material, the Unity3d default material, the
Collada default material, Spine... every format I can think of);
anything else seems like a wasted opportunity. And this multiplication
has zero cost on the GPU, as far as my experience shows :)
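
For illustration, a minimal fragment shader sketch of what this default
amounts to (the uniform/varying names below are made up for the
example, they do not come from any particular engine); the "multiply"
is a single shader instruction:

/* Sketch of the default behavior of Blender / Unity3d / Collada
   materials: the texture sample is multiplied by the material color.
   The names here are hypothetical. */
static const char *fragment_shader_source =
  "uniform sampler2D diffuseTexture;\n"
  "uniform vec4 materialDiffuse;\n"
  "varying vec2 texCoord;\n"
  "void main()\n"
  "{\n"
  "  /* X3D 'replace' would be just texture2D(diffuseTexture, texCoord); */\n"
  "  gl_FragColor = texture2D(diffuseTexture, texCoord) * materialDiffuse;\n"
  "}\n";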

So, I would like to update X3D to also do this by default. It also
makes the spec more consistent (bringing multi-texturing in line
with single-texturing; the multi-texturing nodes always multiply by
default, for both grayscale and RGB).

> Detecting a grayscale image after the image library has loaded it, in a generic way:
> recent FreeWRL runs a CPU-intensive test for R==G==B when loading image files, in a separate loading thread
> - starts with the first pixel, runs till it finds a difference or the end of the image
> - if no difference, sets the number of color channels to 1 (else 3) (plus alpha). Then in the above code
>                         channels = getImageChannelCountFromTTI(node->appearance);
> returns the channel count for use in the shader

I am doing this in Castle Game Engine too, to deliver images to the
GPU more efficiently (a grayscale image without alpha is, after all,
4x smaller than an RGBA image :).
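
For concreteness, a minimal sketch of such an R==G==B scan (assuming an
8-bit RGBA buffer already loaded in memory; the function name is made
up for this example):

/* Returns 1 if every pixel has R==G==B (grayscale), 3 otherwise.
   Alpha is handled separately, as in doug's description. */
int detect_color_channels(const unsigned char *rgba, int width, int height)
{
  int i;
  for (i = 0; i < width * height; i++)
  {
    const unsigned char *p = rgba + 4 * i;
    /* Stop at the first pixel whose components differ. */
    if (p[0] != p[1] || p[1] != p[2])
      return 3;
  }
  return 1;
}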

But I don't think that such detection should be used to decide whether
the texture is an "Intensity" or "RGB" texture (with respect to
http://www.web3d.org/documents/specifications/19775-1/V3.3/Part01/components/lighting.html#t-Litcolourandalpha
). Using content detection there makes it impossible for a texture
author to mark the texture as RGB in GIMP/Photoshop (thus forcing the
"multiply" behavior in X3D by storing the image using RGB).

See also "18.2.2 Texture map image formats" ,
http://www.web3d.org/documents/specifications/19775-1/V3.3/Part01/components/texturing.html#TextureMapImageFormats
. It describes how to decide whether texture is grayscale / RGB, and
it seems to suggest to me that one should look at the image header
(how the image is encoded), and *not* analyze image contents for this
purpose.
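
For example, with PNG files this decision can be read directly from
the file header, without touching any pixel data. A sketch (this
follows the standard PNG layout; error handling kept minimal):

#include <stdio.h>

/* Returns 1 for a grayscale PNG, 0 for RGB/palette, -1 on error.
   The color type byte in the IHDR chunk records how the author
   saved the image, regardless of the actual pixel values. */
int png_is_grayscale(const char *filename)
{
  unsigned char header[26];
  FILE *f = fopen(filename, "rb");
  if (f == NULL)
    return -1;
  if (fread(header, 1, sizeof(header), f) != sizeof(header))
  {
    fclose(f);
    return -1;
  }
  fclose(f);
  /* Byte 25 = color type: PNG signature (8 bytes) + IHDR chunk
     length and name (8) + width (4) + height (4) + bit depth (1).
     Color type 0 = grayscale, 4 = grayscale + alpha. */
  return header[25] == 0 || header[25] == 4;
}

This way the texture author's choice of encoding, not the pixel
values, decides the behavior.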

So, analyzing the image contents may still be done by CGE or FreeWRL
for optimization purposes (it takes longer to load, but may render
faster), but it should not cause a difference in the behavior observed
by the user.

I think that this would be best, and also friendly (not surprising) to
X3D / texture authors.

Best regards,
Michalis


