[x3d-public] New Work Statement -- X3D Scenegraph to SemanticDescription and back. Semantics to Animation
yottzumm at gmail.com
Thu Oct 24 12:43:34 PDT 2019
Could your technology be used to control a robot (physically realized) with English or sign language?
Sent from Mail for Windows 10
From: Richard Kennaway
Sent: Tuesday, September 17, 2019 3:14 AM
To: x3d-public at web3d.org
Subject: Re: [x3d-public] New Work Statement -- X3D Scenegraph to SemanticDescription and back. Semantics to Animation
I've done some work on animating sign language myself, going from the
Hamburg Notation System, HamNoSys
(https://en.wikipedia.org/wiki/Hamburg_Notation_System), to a more
convenient XML-ised version called SiGML (Signing Gesture Markup
Language), and thence to animation data for any given avatar. The
animation data can be generated in a variety of formats, including
VRML, although I've never upgraded it to generate X3D. The most
complete description of the general approach I took is in the
unpublished (except for arXiv) "Avatar-independent scripting for
real-time gesture animation" https://arxiv.org/abs/1502.02961.
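The pipeline described above, an XML gesture script converted into avatar-independent animation data, can be sketched roughly as follows. This is a minimal illustration only: the element and attribute names are hypothetical and do not reflect the real SiGML schema, and a real system (as the arXiv paper describes) handles far richer hand-shape and motion descriptions.

```python
import xml.etree.ElementTree as ET

# Hypothetical, simplified gesture markup in the spirit of SiGML.
# These tag and attribute names are illustrative, not the actual
# SiGML vocabulary.
GESTURE_XML = """
<gesture name="wave">
  <keyframe time="0.0" joint="r_elbow" angle="0"/>
  <keyframe time="0.5" joint="r_elbow" angle="45"/>
  <keyframe time="1.0" joint="r_elbow" angle="0"/>
</gesture>
"""

def gesture_to_keyframes(xml_text):
    """Parse the markup into (time, joint, angle) tuples -- the kind of
    avatar-independent keyframe data a backend could then map onto a
    specific avatar's skeleton (e.g. via VRML/X3D interpolator nodes)."""
    root = ET.fromstring(xml_text)
    return [
        (float(k.get("time")), k.get("joint"), float(k.get("angle")))
        for k in root.findall("keyframe")
    ]

frames = gesture_to_keyframes(GESTURE_XML)
print(frames)
```

The key design point this illustrates is the separation the paper argues for: the script stage says *what* the gesture is, while a later stage binds it to a particular avatar's joints and output format.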
-- Richard Kennaway
Quoting John Carlson <yottzumm at gmail.com>:
It appears that there is quite a bit of research behind translating
sign languages and other movement languages to a graphical form.
x3d-public mailing list
x3d-public at web3d.org