[x3d-public] Semantic Scene Description
yottzumm at gmail.com
Mon Sep 30 19:18:45 PDT 2019
The goals of the X3D semantic working group are elsewhere. I am a skeptic of the Semantic Web as well, and would prefer that it were on a firmer footing. Doing that for X3D will require some non-trivial work. We have at least four capabilities we are working on. I am currently interested in the second and third:
• An XML Schema/XML Object Model/JSON Schema/X3D Ontology capability. I call this the non-rendered abstract level.
• The rendered visualization of the above. I call this the rendered abstract level. OSNAP falls under this.
• The rendered, browser display of a scene. I call this the rendered concrete level.
NSM, https://en.wikipedia.org/wiki/Natural_semantic_metalanguage has several capabilities:
• A set of atomic meanings/words which are cross language
• A DALIA “parser”. The question is, what do we get out of this parser? My belief is that you get a translation from one language to another. You have declared that syntax and semantics are inseparable. I don’t disagree. There are also semiotics, pragmatics, etc.
• Sets of molecules which define higher or composite meanings.
X3D is similar to NSM in that it has
• A set of atomic tags and attributes which are currently single language, but might be considered cross-language (like HTML5 is cross-language).
• PROTOs as a way to define new classes. PROFILEs/components are a way to add atoms to a core set of features.
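As a sketch of how a PROTO composes core atoms into a reusable “molecule” (the prototype name and its field are my own illustrative choices, not from any standard library):

```xml
<!-- Hypothetical PROTO: wraps the atomic Shape/Appearance/Material/Box tags
     into a single higher-level node with one configurable field. -->
<ProtoDeclare name='ColoredBox'>
  <ProtoInterface>
    <field name='boxColor' type='SFColor' value='0.5 0.5 0.5' accessType='initializeOnly'/>
  </ProtoInterface>
  <ProtoBody>
    <Shape>
      <Appearance>
        <Material>
          <!-- IS/connect routes the proto field onto the atomic node field -->
          <IS>
            <connect nodeField='diffuseColor' protoField='boxColor'/>
          </IS>
        </Material>
      </Appearance>
      <Box/>
    </Shape>
  </ProtoBody>
</ProtoDeclare>
```

Once declared, `<ProtoInstance name='ColoredBox'/>` can be used like any built-in tag, which is roughly the molecule-from-primes move NSM makes.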
I know that some languages are not hierarchical. I need more information about how NSM is structured through molecules, that is, how NSM primes are collected. I’ve seen sentences, and collections of sentences, used to create meaning. Those are two levels of structure. So let’s compare an NSM sentence to an XML element. I believe an NSM prime may be analogous to an XML tag, for instance. Thus if we were to construct the most primitive X3D without attributes, we would get default shapes and colors, likely all present at the center of the scene. A Transform without attributes acts like a Group. We probably couldn’t get much visual information from that.
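To make the “X3D without attributes” thought experiment concrete, here is a scene built only from tags, with no attributes at all (setting aside that the root X3D element normally requires a profile attribute; a browser fills in all the field defaults):

```xml
<X3D>
  <Scene>
    <Transform>  <!-- no attributes: identity transform, behaves like a Group -->
      <Shape>
        <Appearance>
          <Material/>  <!-- defaults to a plain grey material -->
        </Appearance>
        <Box/>  <!-- defaults to a 2x2x2 box centered at the origin -->
      </Shape>
    </Transform>
  </Scene>
</X3D>
```

As predicted, you get one default shape in a default color at the center of the scene: the bare tags carry only the prime meanings, and all the distinguishing information lives in the attributes we left out.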
So I think we’re both kind of grasping for 3D meanings which are primitive. I believe I brought up Xtranormal, where you put in scripts and out comes a movie after some time. Xtranormal background scenes are chosen in advance, I think, not created. You can have some number of actors in the scene whose mouths and standing bodies follow the script. Geometric primitives might be considered primitives in 3D, but often they are broken down into faces or triangles. Triangles would be broken down into 3 line segments or 3 points (although the newer graphics cards are doing raytracing); this is the primitive Shape. There are two potential primitives in Material: Color (one color over a whole triangle) and Texture (more than one color per triangle). These primitives are affected by shading. There are additional primitives: camera/eye point, light, … my brain ran out of steam. Let’s please collect these for discussion.
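As an illustration of the triangle-as-primitive idea, a single triangle in X3D reduces to exactly the pieces named above, three points plus one Color over the whole face (the coordinates here are arbitrary):

```xml
<Shape>
  <Appearance>
    <Material diffuseColor='1 0 0'/>  <!-- one Color applied over the whole triangle -->
  </Appearance>
  <IndexedFaceSet coordIndex='0 1 2 -1'>  <!-- one face from points 0, 1, 2 -->
    <Coordinate point='0 0 0  1 0 0  0 1 0'/>  <!-- the 3 points of the triangle -->
  </IndexedFaceSet>
</Shape>
```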
Triangle (partial plane)
The above seem very sight-oriented, and thus not prime. I tried again:
Sides which contact at a place,
3 sides that contact in three places,
Inside 3 sides see/feel.
Where something sees,
Many Particles/waves that reveal somethings we can see (help here?).
Oh, I see that SEE and HEAR are primes. Hmm. How does a blind person have a meaning of see? How does a deaf person have a meaning of hear?
I think we need some way of labelling molecules for bandwidth savings.
I haven’t figured out zero in terms of primes yet.
I think it would be good to put things in terms of NSM, if only for blind people.
I am reminded of the book that Brian Lopez read that defined all its terms, then proceeded with the rest of the book using those terms.
Should any effort of the X3D semantics group go towards defining stuff up from the ground? What terms do the standards assume people understand? How much of the understanding is in appinfo and description and thus not currently computable?
What is the appropriate technology to build up from NSM into X3D (XML Schema, for example)? I’m guessing that XML3D would be a good choice. What do you think?
Note that I have no interest in developing this. I am interested in the rendered abstract and rendered concrete levels. I am interested in rendering and exploring the results of building a firm *semantic* foundation that X3D will sit upon.
I am interested in NSM as far as sign language goes, but unless we have a decomposition of signs into NSM (a very aggressive work), I am more interested in working on the sign/phoneme level (BVH, mocap, interpolation between key frames) than the meaning level. In other words, if we come up with NSM primes in sign language (ASL is fine), I would be very happy!
While NSM is rigorous, we can also provide a lot of value with status quo technology. In other words, I don’t want to build a Peano arithmetic; I want to use what’s already in mathematics and language (I’m an application programmer). I’m with you on not inventing a new language. I am very wary of what happened to Frege when Russell came up with his paradox, and of Gödel’s theorems.
In other words, read “I Am a Strange Loop” by Douglas Hofstadter. Or look at the symbol for infinity, or a Möbius strip.
I believe a worthy task would be to better define NSM abstract levels. That is, how do you define the labels for the categories of NSM primes?
I will continue conversing on this, but am mentally tired now.