[x3d-public] New Work Statement -- X3D Scenegraph to Semantic Description and back. Semantics to Animation
yottzumm at gmail.com
Mon Sep 16 15:34:46 PDT 2019
In other words, I’m trying to translate virtual worlds into books (bibles) for deaf people. It’s important to realize that standard writing is not a native language for deaf people.
From: John Carlson
Sent: Monday, September 16, 2019 5:24 PM
To: X3D Graphics public mailing list; ali at forkaia.com; Don Brutzman; Joseph D Williams; semantics at web3d.org
Subject: New Work Statement -- X3D Scenegraph to Semantic Description and back. Semantics to Animation
I talked with my wife about this, and she agreed: I need to focus on the 3D/HAnim aspects of my mission more than the translation aspects. Thinking about translating semantics into 400 deaf languages is a bit much at this time, but we may get there. I have asked around for semantics of literature, specifically for the Bible. There may be other semantics sources (upcoming X3D standards) that we already have semantics for and could translate into sign language. The initial idea would be to take a semantics source (an X3D scenegraph semantics document) and translate it into some sign language (ASL/SignWriting might be first), as automatically as we can.
https://www.web3d.org/standards/h-anim (Humanoid Animation -- HAnim standards and draft standards)
https://www.web3d.org/x3d/content/examples/Basic/HumanoidAnimation/ (Examples of Humanoid Animation in various versions of the standards)
https://www.web3d.org/working-groups/x3d-semantic-web/charter (X3D Semantic Web Working Group charter and links)
This means creating a collection of sign-language (BVH?) “Motion” or “Interpolation” HAnim “scripts,” then resolving all sign segments from http://signwriting.org into these motion scripts or combinations of scripts.
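To make the “motion script” idea concrete, here is a minimal sketch of how a sign could be stored as keyframed joint rotations and evaluated the way an X3D ScalarInterpolator evaluates its key/keyValue fields. The sign name, joint name, and key values below are purely illustrative, not from any real SignWriting data.

```python
# Hypothetical sketch: a sign as keyframed joint angles, evaluated like
# an X3D interpolator (linear interpolation between key/keyValue pairs).

from bisect import bisect_right

def interpolate(keys, key_values, t):
    """Linearly interpolate a scalar joint angle at fraction t,
    clamping outside the [first, last] key range."""
    if t <= keys[0]:
        return key_values[0]
    if t >= keys[-1]:
        return key_values[-1]
    i = bisect_right(keys, t) - 1
    frac = (t - keys[i]) / (keys[i + 1] - keys[i])
    return key_values[i] + frac * (key_values[i + 1] - key_values[i])

# A toy "motion script" for one joint of one sign (names are invented).
sign_hello = {
    "r_shoulder": ([0.0, 0.5, 1.0], [0.0, 1.2, 0.0]),
}

keys, vals = sign_hello["r_shoulder"]
angle = interpolate(keys, vals, 0.25)  # halfway up the first segment -> 0.6
```

A real library would hold one such table per HAnim joint per sign, and composing signs would mean concatenating or blending these tables.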
Possibly convert SignWriting to scratch-blocks (they already have icons for this; we would replace the icons).
I have both Kinect and Leap Motion, so I feel these could help.
I think this would help make 3D scenes and schemas more accessible to blind people, if this has not already been done. The advantage for Web3D would be a declarative 2D or 3D encoding of a scenegraph (most of our encodings are textual).
I am looking for people to assist me! Ali may have some resources.
I know we have translation from X3DUOM to semantics. What about an X3D scenegraph translated to semantics? X3dToSemantics.xslt? X3dToRDF.xslt? X3dToOWL.xslt? X3dToTTL.xslt? Don?
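As a rough illustration of one direction of that translation, the sketch below walks a tiny X3D scenegraph with the standard library’s XML parser and emits naive subject/predicate/object triples. A real X3dToRDF.xslt or X3dToTTL.xslt would follow an agreed ontology; the predicate names here (`hasChild`, raw field names) are placeholders, not a proposed vocabulary.

```python
# Hedged sketch: walk an X3D scenegraph and emit RDF-like triples.
# Predicate names are placeholders, not a real X3D ontology.

import xml.etree.ElementTree as ET

X3D = """<X3D><Scene>
  <Transform DEF="Arm" translation="0 1 0">
    <Shape/>
  </Transform>
</Scene></X3D>"""

def x3d_to_triples(xml_text):
    triples = []
    root = ET.fromstring(xml_text)

    def walk(node, parent_id):
        # Use the DEF name as the node identity when present.
        node_id = node.get("DEF") or node.tag
        if parent_id is not None:
            triples.append((parent_id, "hasChild", node_id))
        for field, value in node.attrib.items():
            if field != "DEF":
                triples.append((node_id, field, value))
        for child in node:
            walk(child, node_id)

    walk(root, None)
    return triples

triples = x3d_to_triples(X3D)
# e.g. ("Scene", "hasChild", "Arm") and ("Arm", "translation", "0 1 0")
```

The interesting work, of course, is not the traversal but choosing the ontology those triples point into.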
Possible leverage points: Look for semantic descriptions of 3D scenes or semantic descriptions of movies (I think something was done at Interval Research. I will try to contact Brian L. about that in October).
The overall goal is to get a semantic description into a visual presentation of the semantics (beyond text), either iconic, video, or browser based 3D. This may mean converting a semantic description to X3D, and then rendering the animation.
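The reverse step of that pipeline (semantic description back to X3D markup) could be sketched as below. The triple-to-node mapping is invented for illustration only; a real converter would be driven by the X3D schema/X3DUOM.

```python
# Sketch of the reverse direction: one semantic statement back into an
# X3D fragment. The mapping here is illustrative, not a real converter.

import xml.etree.ElementTree as ET

def triple_to_x3d(subject, predicate, obj):
    """Map a (subject, field, value) triple to a Transform node,
    reusing the subject as the DEF name."""
    node = ET.Element("Transform", {"DEF": subject, predicate: obj})
    ET.SubElement(node, "Shape")
    return ET.tostring(node, encoding="unicode")

fragment = triple_to_x3d("Arm", "translation", "0 1 0")
```

Feeding such generated fragments to an X3D browser would then give the “render the semantics as animation” step.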
I will continue refining this, but early feedback is welcome. It currently seems like I just want conversion of X3D to semantics and back to X3D (possibly with captioned description in sign language and natural language). Is a round trip from X3D to semantics and back to X3D all we want out of this?
Would a 2D or 3D version of X3DUOM be useful? There are already UML 3D versions, right? I would guess that’s not too useful.
What is the value proposition behind a semantic encoding for X3D documents? Would this be a truly international encoding?
So a semantic-based renderer. What’s currently in semantics that can’t be converted to X3D?
Sorry, there’s a lot of repetition in the above document. The point is to create a first-language encoding for deaf people (sign is their primary language).
This was kind of a shotgun approach, but it has really helped focus my ideas.