[x3d-public] New Work Statement -- X3D Scenegraph to SemanticDescription and back. Semantics to Animation
yottzumm at gmail.com
Thu Oct 24 05:46:50 PDT 2019
I got a chance to look at your work recently. More animations would be more convincing. I know you’re working on a more advanced system. I recently got involved in Natural Semantic Metalanguage (NSM) and have been tasked with translating NSM primes and phrases to ASL. The best approach so far has been to translate primes and phrases using:
It seems like most deaf people ignore deaf writing systems and prefer videos. Would there be a service available that can translate English to HamNoSys, SiGML, and VRML/X3D? Would it be possible for me to work on such a thing?
Sent from Mail for Windows 10
From: Richard Kennaway
Sent: Tuesday, September 17, 2019 3:14 AM
To: x3d-public at web3d.org
Subject: Re: [x3d-public] New Work Statement -- X3D Scenegraph to SemanticDescription and back. Semantics to Animation
I've done some work on animating sign language myself, going from the
Hamburg Notation System, HamNoSys
(https://en.wikipedia.org/wiki/Hamburg_Notation_System), to a more
convenient XML-ised version called SiGML (Signing Gesture Markup
Language), and thence to animation data for any given avatar. The
animation data can be generated in a variety of formats, including
VRML, although I've never upgraded it to generate X3D. The most
complete description of the general approach I took is in the
unpublished (except for arXiv) "Avatar-independent scripting for
real-time gesture animation" https://arxiv.org/abs/1502.02961.
-- Richard Kennaway
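
The pipeline Richard describes (HamNoSys symbols, wrapped as SiGML XML, then synthesized into per-avatar animation data) can be sketched roughly as below. This is a minimal illustration under stated assumptions, not his actual software: the element names (`sigml`, `hns_sign`, `hamnosys_manual`) follow published SiGML examples and may not match the real schema exactly, the HamNoSys symbol string is a placeholder, and `sigml_to_keyframes` is a hypothetical stand-in for the avatar-specific synthesis stage.

```python
# Sketch of a HamNoSys -> SiGML -> animation-data pipeline.
# Element names are assumptions based on published SiGML examples;
# the keyframe stage is a hypothetical placeholder, not real synthesis.
import xml.etree.ElementTree as ET

def hamnosys_to_sigml(gloss: str, hamnosys: str) -> str:
    """Wrap a HamNoSys symbol string in a minimal SiGML document."""
    root = ET.Element("sigml")
    sign = ET.SubElement(root, "hns_sign", gloss=gloss)
    manual = ET.SubElement(sign, "hamnosys_manual")
    manual.text = hamnosys
    return ET.tostring(root, encoding="unicode")

def sigml_to_keyframes(sigml: str) -> list:
    """Hypothetical avatar stage: a real system would interpret each
    HamNoSys symbol against the avatar's skeleton; here we only
    enumerate the signs to show where that stage plugs in."""
    root = ET.fromstring(sigml)
    return [{"gloss": sign.get("gloss"), "frame": i}
            for i, sign in enumerate(root.iter("hns_sign"))]

# Usage: "hamflathand hamextfingeru" is an illustrative placeholder,
# not a verified HamNoSys transcription of any sign.
doc = hamnosys_to_sigml("HELLO", "hamflathand hamextfingeru")
frames = sigml_to_keyframes(doc)
```

The point of the intermediate SiGML stage is the one Richard's paper makes: the notation layer stays avatar-independent, and only the final keyframe stage needs to know a particular skeleton, so the same SiGML could drive VRML, X3D, or any other output format.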
Quoting John Carlson <yottzumm at gmail.com>:
It appears that there is quite a bit of research behind translating
sign languages and other movement languages to a graphical form.