<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
</head>
<body text="#000000" bgcolor="#FFFFFF">
<div class="moz-cite-prefix">Michalis,<br>
<br>
You did hit on the difficulty of expressing geometry or complex
texture in a single node. This is certainly the case when you are
using a model in X3D (as opposed to glTF or another model format).
What if there were a requirement that Mesh only supported simple
models and textures, and anything complex had to be referenced by
URL (to a non-X3D file)? The external file could also include
internal animation of the model (e.g., pistons, wheels, walk, run,
jump).<br>
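<br>
For example (purely illustrative; the attribute syntax follows the
Mesh sketch from my earlier message, quoted below, and the file name
is hypothetical):<br>
<br>
<Mesh translation='0 1 0' geometry='url(excavator.gltf)' ... /><br>
<br>
where excavator.gltf would carry its own materials and its internal
piston/wheel animation, and the X3D side would only place the model
and trigger its animations.<br>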
<br>
I believe that would work for all non-dynamic models or textures.
If the geometry changes according to user input (real-time
geometry changes), then there might be problems (probably an
understatement).<br>
<br>
You also brought up accessing various subparts of Mesh that are
currently separate nodes (e.g., reusing Appearance, Geometry,
Coordinate, etc.). The current standard practice when modeling (*)
is to create a complete model for each non-repeated instance. The
parts that may be reused (texture, animation) could be represented
as the equivalent of CSS classes and reused between instances of
Mesh. For example:<br>
<br>
.texture-color {diffuse:orange; specular:blue; }<br>
.texture-ball {appearance: hard-surface.shader; }<br>
.texture-plush {appearance: fuzzy-surface.shader; }<br>
<br>
<Mesh class='texture-color texture-ball'
geometry='url(dragon.gltf)' ... /><br>
<br>
This does bring up the question of how the colors and shaders
actually make it into the node. I don't have an answer for that
right now; I'm just beginning to explore the concepts.<br>
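<br>
One possibility (just a sketch on my part, not a worked-out
proposal): the browser could expand the classes into the equivalent
of today's nodes, so the Mesh above would behave roughly like:<br>
<br>
<Shape><br>
<Appearance><br>
<Material diffuseColor='1 0.5 0' specularColor='0 0 1'/><br>
<ComposedShader language='GLSL'><br>
<ShaderPart type='FRAGMENT' url='"hard-surface.shader"'/><br>
</ComposedShader><br>
</Appearance><br>
(geometry loaded from dragon.gltf by whatever mechanism Mesh uses)<br>
</Shape><br>
<br>
Here the orange/blue colors would come from .texture-color and the
shader from .texture-ball; treating hard-surface.shader as a
fragment-only ShaderPart is only an assumption, and how the external
geometry gets attached is still an open question.<br>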
<br>
Leonard Daly<br>
<br>
(*) My experience is mostly with the entertainment industry,
including computer graphics for movies, TV, and games. It also
includes "gamification" of training, architecture, and other
fields. The primary tools used for these operations are Maya,
Blender, Unity, and Unreal. Broadcast (including film) rendering
is done with a separate application (e.g., RenderMan). Compositing
is also done separately.<br>
<br>
Using Maya or Blender, an artist creates a model and animates it.
Textures are also created, but not permanently applied to the
model. When the model is imported into the next tool in the chain,
the texture is applied. This allows reuse of textures, shaders,
etc. The animation is connected to various stimuli. Note that
nearly all animation is done on a rigged model -- even simple
animation like a spinning wheel or piston. Animation reuse is set
up at the modeling stage, when the rig is created. If the rig is
identical, then the animation can be reused -- even if the surface
geometry is completely different.<br>
<br>
<br>
<br>
</div>
<blockquote type="cite"
cite="mid:CAKzBGZNV+vOGU680m0Si5_5dy3vNis--uUJUHPjiAQJMr2fKwA@mail.gmail.com">
<pre wrap="">2017-04-26 6:50 GMT+02:00 Leonard Daly <a class="moz-txt-link-rfc2396E" href="mailto:Leonard.Daly@realism.com"><Leonard.Daly@realism.com></a>:
</pre>
<blockquote type="cite">
<pre wrap="">I would like to know what people think of the concept of flattening out
portions of the scene graph. Specifically the Shape node and its contents.
The current structure for an object is something like:
<Transform translation='1 2 3' orientation='.707 .707 0 1' scale='.6 1.2 1.8'>
<Shape>
<Box size='3 2 1'/>
<Appearance>
<Material diffuseColor='1 .5 0'/>
<ImageTexture url='wrap.jpg'/>
</Appearance>
</Shape>
</Transform>
Of course there are many different ways to build a Shape node, but using this
as an example...
Create a new node called Mesh and put all of the children's attributes and
the parent Transform's attributes (if one exists) into the node, as such:
<Mesh
translation='1 2 3'
orientation='.707 .707 0 1'
scale='.6 1.2 1.8'
geometry='Box:size (3 2 1)'
diffuseColor='orange'
texture='url (wrap.jpg)' />
These may not be the best choices for the various attributes, and something
more complex would need to be done for the non-primitive geometries (e.g.,
IndexedFaceSet, ElevationGrid).
</pre>
</blockquote>
<pre wrap="">
It looks nice in a simple case, but the inevitable question is how you
would express the complicated cases. There are some really complex
things inside Shape and Appearance that would require inventing some
new syntax to "squeeze" them into a single attribute (if you were to
go that route).
- You mention one example, non-primitive geometries. Besides simple
lists (of vertices, indices) they also have texture coordinates (which
may be 2D, 3D, or 4D, may be generated or explicit, and may be
MultiTextureCoordinate) and shader attributes (which are a list of
xxxAttribute nodes). Even the vertex coordinates have options (single
or double precision, normal or Geospatial system).
- There are also complicated texture setups, using MultiTexture,
ComposedTexture3D, or ComposedCubeMapTexture. Each of them consists of
a number of other texture nodes, and these other texture nodes may
contain TextureProperties inside.
- The Appearance.shaders field is complicated too; it may contain 5 of
the nodes from the "Programmable Shaders" component (
<a class="moz-txt-link-freetext" href="http://www.web3d.org/documents/specifications/19775-1/V3.3/Part01/components/shaders.html">http://www.web3d.org/documents/specifications/19775-1/V3.3/Part01/components/shaders.html</a>
) in various nesting configurations. Some of these nodes also allow
user-defined fields ("interface declarations", just like the Script
node).
And most of these are not features that we want to lose from X3D, I
think. E.g. I will definitely defend cube maps, 3D textures, and
generated texture coordinates. They are common things in modern
graphics APIs (and in implementations using shaders), and they allow
some really cool things (like easy real-time mirrors,
<a class="moz-txt-link-freetext" href="https://castle-engine.sourceforge.io/x3d_implementation_cubemaptexturing.php#section_example">https://castle-engine.sourceforge.io/x3d_implementation_cubemaptexturing.php#section_example</a>
, or volumetric fog).
- Also, you would need to invent a new syntax for DEF / USE, to enable
sharing subparts of the Mesh. E.g. sometimes we want to share the
geometry but use a different appearance; sometimes we want to share
the appearance but not the geometry.
It seems to me a really complicated task... The resulting Mesh
specification would surely be the largest node ever described in the
X3D specification :)
Unless you would introduce Mesh only as a shortcut for "common usage"
of other nodes. This is the easy way, as you don't close off any
possibility then.
Another alternative that I see is something that merges <Transform> +
<Shape> + <Appearance> but leaves the rest (like the geometry, and all
the appearance children) as child XML elements. This will still raise
the question of "how to share only a subset of Mesh", but is otherwise
good. Like this:
<Mesh translation='1 2 3' orientation='.707 .707 0 1' scale='.6 1.2 1.8'>
<Box size="3 2 1" />
<Material diffuseColor='orange' />
<ImageTexture url='"wrap.jpg"' />
</Mesh>
It is a less drastic change than yours. Indeed, it does not look as
nice as yours and is not as concise. But it retains all the power of
the existing nodes' combinations.
Regards,
Michalis
</pre>
</blockquote>
<p><br>
</p>
<div class="moz-signature">-- <br>
<font class="tahoma,arial,helvetica san serif" color="#333366">
<font size="+1"><b>Leonard Daly</b></font><br>
3D Systems & Cloud Consultant<br>
LA ACM SIGGRAPH Chair<br>
President, Daly Realism - <i>Creating the Future</i>
</font></div>
</body>
</html>