- Graph Convolution Networks

Hi John,

Well, if you are looking at ways of organizing paths and data in a live repository:

- dealing with data, discovery, knowledge, and perhaps wisdom,
- along with descriptions of data and knowledge,
- using representations of data,
- finding and showing relationships between elements and collections of data,
- having dynamic, orderly, findable, predictable interactions with the collection,
- allowing obvious, appropriate, planned, and whimsical navigation within and outside of the current environment without getting lost,
- while controlling a versatile, extensible graphical structure that is live on the inside and awake to the outside,

then what better than the X3D realtime, anytime scenegraph and the internal/external SAI (Scene Access Interface)?

The key is that you could do all of the basic functionality with the X3D scenegraph using nothing but metadata. Active or passive multisensory multimedia are, in a way, just add-in inhabitants of our beloved DAG (directed acyclic graph).
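As a minimal sketch of that metadata idea in the X3D XML encoding: the Metadata* node types and containerField conventions come from the standard, while the name/value vocabulary below is invented for illustration.

<X3D profile='Interchange' version='3.3'>
  <Scene>
    <Shape>
      <!-- A MetadataSet bundles typed metadata nodes under one named group -->
      <MetadataSet containerField='metadata' name='knowledge'>
        <MetadataString containerField='value' name='topic'
                        value='"Graph Convolution Networks"'/>
        <MetadataString containerField='value' name='relatedTo'
                        value='"scenegraph" "DAG" "SAI"'/>
      </MetadataSet>
      <Sphere/>
    </Shape>
  </Scene>
</X3D>

A script inside or outside the scene could then read and rewrite those values at runtime through the SAI, which is the "live on the inside, awake to the outside" part.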
How do you improve this with AI? Teach the AI to help you input, output, add, archive, structure, document, navigate, and interact with your current dynamic knowledge repository and with other networks and graphs.

It really is part of the fun with X3D,
Joe

From: John Carlson <yottzumm@gmail.com>
Sent: Saturday, October 1, 2022 9:29 PM
To: Joe D Williams <joedwil@earthlink.net>; X3D Graphics public mailing list <x3d-public@web3d.org>
Subject: Re: Text-to-mesh

Ok.

I imagine some perhaps-future tool where a prompt or description is given and a game/Unreal/Unity world/universe/metaverse is produced. Can we do this with VRML or XML? I understand that XR may not initially be supported.

Originally, in 1986, I envisioned a description of game rules being converted to a game, with something like Cyc.

Eventually, I even wrote a crazy8s game using OpenCyc. The result was way too slow.

I've also envisioned using version space algebra, EGGG, or Ludii to do this. I even wrote a card tabletop to record plays. I tried to find commonalities between moves, but no supercomputer was available.

I guess I want to have an AI cache everything needed for the game, so interaction can be really fast.

Are the Web3D Consortium Standards/Metaverse Standards up to the task?

How do game mechanics link up with rendering? Michalis?

John

On Sat, Oct 1, 2022 at 10:34 PM John Carlson <yottzumm@gmail.com> wrote:

> Even semantics metadata to image or video would be a start. I'm not sure what's possible with Graph Convolution Networks (GCNs).
>
> John
>
> On Sat, Oct 1, 2022 at 9:17 PM John Carlson <yottzumm@gmail.com> wrote:
>
>> I realize image-to-3D and video-to-3D may be possible already.
>>
>> John
>>
>> On Sat, Oct 1, 2022 at 9:05 PM John Carlson <yottzumm@gmail.com> wrote:
>>
>>> Now, with prompt engineering, we can do text-to-image and text-to-video. How about text(description)-to-mesh, text(description)-to-shape, text(description)-to-scene, or even text(description)-to-world?
>>>
>>> Can semantics help this?
>>>
>>> John
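As background on the Graph Convolution Networks question in the thread above, the layer rule for a GCN (Kipf and Welling, 2017) is, in LaTeX:

H^{(l+1)} = \sigma\left( \tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2} H^{(l)} W^{(l)} \right), \qquad \tilde{A} = A + I, \quad \tilde{D}_{ii} = \textstyle\sum_j \tilde{A}_{ij}

Here A is the graph's adjacency matrix, H^{(0)} holds one feature vector per node, W^{(l)} is a learned weight matrix, and \sigma is a nonlinearity, so each layer mixes a node's features with those of its neighbors. Feeding it an X3D scenegraph DAG, with per-node metadata embeddings as the features, is an assumption sketched here for discussion, not an established text-to-mesh pipeline.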