<div><div><div dir="auto">•step up on stump•</div><div dir="auto">I think what I was proposing was NSM for a semantic language of X3D. I don’t think I made that clear. Again, movies, symbols and shapes; what languages or mishmashes can handle all of them without a serious amount of work? No, I don’t want to go backwards like the web did. Now that the web has begun to catch up, what can we do with it? What steps can we take to improve it?</div><div dir="auto"><br></div><div dir="auto">My goal here is to integrate vector, character and raster, among other things, or at least raise awareness. PostScript/PDF has done it. 3D graphics has not typically supported text as a first class citizen, and the web and NSM have not typically supported 3D graphics as a first class citizen. I don’t want to see further “mistakes” like that. I want to lower barriers, not raise them.</div><div dir="auto"><br></div><div dir="auto">I believe that one of the things Bruce Garner, Nik Mitschkowitz, Don Vickers and Carolyn Wimple were working on was the integration of STEP and CGM with SGML. I do not know why that was not made widespread. Perhaps there was competition with HP and Adobe? I am sorry I was not more clued into their project. I understand now!</div><div dir="auto"><br></div><div dir="auto">NSM has more history than OWL/RDF/Turtle and should be considered!</div><div dir="auto"><br></div><div dir="auto">John</div><div dir="auto"><br></div></div></div><div><div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, Jul 17, 2020 at 1:17 PM Michael Turner <<a href="mailto:michael.eugene.turner@gmail.com" target="_blank">michael.eugene.turner@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;padding-left:1ex;border-left-color:rgb(204,204,204)">So I skipped to the last paragraph, as John suggested. To the rest of<br>
you: John's understanding of Natural Semantic Metalanguage (NSM) is<br>
limited. From my more informed stance on NSM (and from some very<br>
limited acquaintance with X3D) I can say categorically: X3D is not "the<br>
NSM of graphics."<br>
<br>
While I'm interested in attempting what I call Natural Semantic<br>
Programming (that is, programming built up in a declarative style, in<br>
natural language, and executed with constraint solvers, perhaps Prolog<br>
or ECLiPSe to start), it's hard for me to understand how this would<br>
apply immediately to X3D. Nor can I understand why any X3D standards<br>
developer or user would care.<br>
<br>
Any NSM-based computational framework that's capable of 3D graphical<br>
animation would probably need to start with "what is a bit?" and build up<br>
from there, through floating point computation and matrix math. That's<br>
hardly a good starting point for anything like humanoid animation at<br>
this point. The main advantage of NSM would be to enable programming<br>
that's not only highly accessible to non-programmers, but interlingual<br>
as well -- programs described in one natural language could be readily<br>
translatable to another.<br>
<br>
John tried to interest me in sign language, and fair enough: sign<br>
languages are natural languages. But to go from (say) English straight<br>
to some sign language seemed to freight my Natural Semantic<br>
Programming agenda with an elaborate graphical aspect at a time when<br>
that agenda was still just a twinkle in my (inner) eye.<br>
<br>
Long story short: you can safely ignore all this.<br>
<br>
Regards,<br>
Michael Turner<br>
Executive Director<br>
Project Persephone<br>
1-25-33 Takadanobaba<br>
Shinjuku-ku Tokyo 169-0075<br>
Mobile: +81 (90) 5203-8682<br>
<a href="mailto:turner@projectpersephone.org" target="_blank">turner@projectpersephone.org</a><br>
<br>
Understand - <a href="http://www.projectpersephone.org/" rel="noreferrer" target="_blank">http://www.projectpersephone.org/</a><br>
Join - <a href="http://www.facebook.com/groups/ProjectPersephone/" rel="noreferrer" target="_blank">http://www.facebook.com/groups/ProjectPersephone/</a><br>
Donate - <a href="http://www.patreon.com/ProjectPersephone" rel="noreferrer" target="_blank">http://www.patreon.com/ProjectPersephone</a><br>
Volunteer - <a href="https://github.com/ProjectPersephone" rel="noreferrer" target="_blank">https://github.com/ProjectPersephone</a><br>
<br>
"Love does not consist in gazing at each other, but in looking outward<br>
together in the same direction." -- Antoine de Saint-Exupéry<br>
<br>
On Thu, Jul 16, 2020 at 11:01 PM John Carlson <<a href="mailto:yottzumm@gmail.com" target="_blank">yottzumm@gmail.com</a>> wrote:<br>
><br>
><br>
> Don, all, would it be possible to get the X3DUOM as RDF/Turtle (just as an interesting exercise)? Thanks!<br>
><br>
> Michael,<br>
><br>
> TL;DR: Just read the last paragraph if you like.<br>
><br>
> The subject basically describes all the technology I've been describing to you. I have never achieved a complete compiler to machine code for a general language. It's possible that I've created a bytecode interpreter using the translator flat file format converted to compilable/interpretable C++ variable declarations. At one point I did expand branches into the steps a branch would take as a C++ function, but went no further and backed out, primarily because I wanted to decompile the code. I was able to compare the flat file to the decompiled code by re-persisting the programs, to verify the source code generator.<br>
><br>
> This is how I am a languages guy, I guess. I am not a big Natural Semantic Metalanguage (NSM) fan, but if NSM statements can be couched in terms of persistent objects or class grammar (even list of lists...of primes--the obvious choice), and can fulfill the last paragraph, I would be interested in exploring the NSM concept for whatever purpose you want.<br>
><br>
> As a side note: I have been able to convert documents to lists of lists of words. I merely used something like PDF->HTML, and converted the HTML divs to JSON (JavaScript Object Notation) arrays. I wanted to do JSON translation-by-demonstration and I had an existing data source. We can have a very large supply of documents represented as JSON arrays, if we need it. I realize that NSM-DALIA takes English.<br>
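><br>
> A minimal sketch of that div-to-JSON step, assuming the PDF has already been converted to HTML; the file names and the flat div structure are illustrative assumptions, and BeautifulSoup stands in for whatever HTML parser was actually used:<br>
><br>
> import json<br>
> from bs4 import BeautifulSoup  # pip install beautifulsoup4<br>
><br>
> with open("document.html", encoding="utf-8") as f:<br>
>     soup = BeautifulSoup(f, "html.parser")<br>
><br>
> # One inner list per div: a list of lists of words.<br>
> lists_of_words = [div.get_text(" ", strip=True).split()<br>
>                   for div in soup.find_all("div")]<br>
><br>
> with open("document.json", "w", encoding="utf-8") as f:<br>
>     json.dump(lists_of_words, f, ensure_ascii=False, indent=2)<br>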
><br>
> I guess we're dealing with a *semantic* object model and *semantic* graph with NSM. The Web3D Consortium is currently working with RDF/Turtle, I believe. There's also OWL/OWL2. I should be able to provide you with around 3000 .ttl (RDF/Turtle) files in a single domain (X3D) translated from XML: <a href="https://www.web3d.org/x3d/content/examples/X3dResources.html#Examples" rel="noreferrer" target="_blank">https://www.web3d.org/x3d/content/examples/X3dResources.html#Examples</a> (try the Online link; the .ttl is on the right for individual scenes). Did you send me an NSM Bible at one point? Can we translate RDF/Turtle to NSM?<br>
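><br>
> For a rough idea of what an XML-to-Turtle conversion looks like in code (this is not the Web3D tooling's actual mapping -- the x3d: namespace, property names, and file name below are placeholders), rdflib can walk a scene file and emit triples:<br>
><br>
> import xml.etree.ElementTree as ET<br>
> from rdflib import Graph, Namespace, Literal, BNode, RDF  # pip install rdflib<br>
><br>
> X3D = Namespace("http://example.org/x3d#")  # placeholder namespace<br>
> g = Graph()<br>
> g.bind("x3d", X3D)<br>
><br>
> def emit(element):<br>
>     # One blank node per XML element, typed by its tag, with attributes as literals.<br>
>     node = BNode()<br>
>     g.add((node, RDF.type, X3D[element.tag]))<br>
>     for name, value in element.attrib.items():<br>
>         g.add((node, X3D[name], Literal(value)))<br>
>     for child in element:<br>
>         g.add((node, X3D.hasChild, emit(child)))<br>
>     return node<br>
><br>
> emit(ET.parse("scene.x3d").getroot())<br>
> print(g.serialize(format="turtle"))<br>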
><br>
> Ah yes! <a href="https://openresearch-repository.anu.edu.au/bitstream/1885/155252/2/What%20Christians%20Believe%20for%20Open%20Research%2020190204.pdf" rel="noreferrer" target="_blank">https://openresearch-repository.anu.edu.au/bitstream/1885/155252/2/What%20Christians%20Believe%20for%20Open%20Research%2020190204.pdf</a><br>
><br>
> I'm not saying that RDF/Turtle is even desirable. I just have a bunch of XML files (X3D scenes--non-RDF) I'd like to convert to their processable semantics. We may have to improvise some of the semantics, that is, someone converts animations to semantics, such as "walk." (I know this isn't an NSM prime. I want something like "move from point to point upright at normal speed using legs.") It's very likely we would have to have thousands of walk examples, and many more with "not walk," and do some kind of supervised learning with the current technology I've seen (I need to think about adversarial networks here). It may be possible to convert each NSM prime to an animation, IDK. I tried both the dictionary approach (word->video) and the SignWriting approach (word->icons).<br>
><br>
> I know NSM is about breaking down larger structures into simpler ones. Say I'd like to reduce X3D to VR/AR animations (assume this is for conversion to a 4D printer/animatronics). If NSM-DALIA would help with that, I'm all ears. If we can even create animations from NSM phrases (skipping the X3D), that would be awesome! Can we create an animation of Towers of Hanoi from the NSM code in your Natural Semantic Programming paper?<br>
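><br>
> (Whatever the NSM rendering of Towers of Hanoi looks like, the move sequence an animation would have to realize is just the classic recursion; a minimal sketch, with the peg names as placeholders:)<br>
><br>
> def hanoi(n, source, target, spare):<br>
>     # Yield the solution as (disk, from_peg, to_peg) steps, smallest disk = 1.<br>
>     if n == 0:<br>
>         return<br>
>     yield from hanoi(n - 1, source, spare, target)<br>
>     yield (n, source, target)  # one "someone moves this thing from here to there" step<br>
>     yield from hanoi(n - 1, spare, target, source)<br>
><br>
> for disk, a, b in hanoi(3, "A", "C", "B"):<br>
>     print(f"move disk {disk} from {a} to {b}")<br>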
><br>
> Yes, I realize you assigned me that very goal of translating a single word to an animation, which I have not achieved yet.<br>
><br>
> In other words, we need to extend NSM to handle virtual worlds, or satellite worlds, not only human worlds, right? How does one describe a virtual world in NSM? We have Towers of Hanoi. We can obviously use any language we like as long as it can be reduced to NSM primes.<br>
><br>
> The HAnim (Humanoid Animation) ISO standard: <a href="https://www.web3d.org/documents/specifications/19774/V2.0/index.html" rel="noreferrer" target="_blank">https://www.web3d.org/documents/specifications/19774/V2.0/index.html</a> has been ratified. We now need examples of HAnimMotion (.bvh) elements. This is my other job.<br>
><br>
> The difficulty in all of this is translating from spatiotemporal semantics (geometry, coordinates, etc.) to/from NSM semantics. The NSM primes for this are:<br>
><br>
> Time: WHEN/TIME, NOW, BEFORE, AFTER, A LONG TIME, A SHORT TIME, FOR SOME TIME, MOMENT<br>
><br>
> Space: WHERE/PLACE, HERE, ABOVE, BELOW, FAR, NEAR, SIDE, INSIDE, TOUCH (CONTACT)<br>
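><br>
> (As a very rough illustration of grounding a few of the space primes in X3D-style coordinates -- the distance thresholds and the Y-up convention are assumptions for the sketch, not anything from the NSM literature:)<br>
><br>
> import math<br>
><br>
> def above(a, b):            # ABOVE: a is higher than b (X3D is Y-up)<br>
>     return a[1] > b[1]<br>
><br>
> def below(a, b):            # BELOW<br>
>     return a[1] < b[1]<br>
><br>
> def near(a, b, d=1.0):      # NEAR, with an arbitrary 1-metre threshold<br>
>     return math.dist(a, b) <= d<br>
><br>
> def far(a, b, d=10.0):      # FAR, likewise arbitrary<br>
>     return math.dist(a, b) > d<br>
><br>
> def touch(a, b, eps=0.01):  # TOUCH (CONTACT), within a small tolerance<br>
>     return math.dist(a, b) <= eps<br>
><br>
> head, floor = (0.0, 1.6, 0.0), (0.0, 0.0, 0.0)<br>
> print(above(head, floor), near(head, floor), touch(head, floor))<br>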
><br>
> Thus, there are three kinds of "output": movies, symbols, and shapes. NSM handles symbols. What handles movies and shapes? X3D! I don't care if it's VRML, XML, JSON, Turtle, Python, JavaScript, Java ... X3D is the NSM of graphics! Now, how can NSM and X3D work together?<br>
><br>
> John<br>
</blockquote></div></div>
</div>