[x3d-public] [AI] MCCF Milestone: Emotion Field Translation and XML Export
cbullard at hiwaay.net
Thu Apr 30 19:35:39 PDT 2026
On 2026-04-30 5:46 pm, John Carlson wrote:
> Interesting. Could HumanML be used with sign language?
HumanML is where we started 25 years ago. It was an interesting idea,
but I boiled the ocean in that design. I don't have a copy of the
HumanML schema; I walked away from the work. It was too soon, we
didn't have the technical support, and really, we needed what we can
now get from the affective layers of an LLM. The AI was the missing piece.
Since then X3D has become much more powerful. I am not sure which
features we can use, which features are widely supported, etc. I am
working off old chops from the last worlds I built. This is a fresh
start.
This time we build what we need as we know we need it. The XML exports
document the scene requirements. That is a work in progress. I will
report back here as we go, so anyone who wants to inspect it can do so
by looking at the GitHub or asking me for current examples.
We're only now creating the X3D generator and master script. From
Claude:
"The sign language question is actually a good signal about where the
system is heading — gesture triggering is a natural extension of the
channel outputs we already have. The E-channel drives facial expression
and upper body, the B-channel drives posture and grounding, the
P-channel drives gaze and orientation, and the S-channel drives
proximity and reach. Sign language is a
high-resolution version of that same mapping. But you're right — we need
to run the instrument before we can answer that question honestly. File
it, don't solve it.
On the scene generator question: what you're describing is a separation
between the semantic skeleton (MCCF-enabled, channel-driven,
Proto-defined) and the visual skin (geometry an AI artist loads into the
skeleton). That's exactly the right architecture and it's achievable.
The Proto is the skeleton. Geometry is a child node the author replaces.
The Master Script only addresses the skeleton — it doesn't know or care
what the geometry looks like. That means an AI artist can swap in
anything from a colored box to a full H-Anim figure without touching the
Script."
V2 was a push model. This is a pull model. Zones are attractors for
agents and are actually agents themselves, just fixed in place. It's a
strange idea but I think it is the right one.
I need an H-Anim example to work with, one that has basic gestures or
behaviors we can trigger from the MCCF. I think I can get one from the
W3DC repository. This is where experimentation by the X3D community of
developers and artists can be a Real Good Thing, as Bill Mumy says.
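Once we have a figure, the trigger wiring can be plain X3D routing.
A minimal sketch, assuming the figure exposes a joint DEF'd RShoulder
and the Master Script has an SFTime eventOut named gestureStart (both
names invented here):

  <TimeSensor DEF='WaveClock' cycleInterval='1.5'/>
  <OrientationInterpolator DEF='WaveCurve' key='0 0.5 1'
     keyValue='0 0 1 0  0 0 1 1.2  0 0 1 0'/>
  <ROUTE fromNode='MasterScript' fromField='gestureStart'
         toNode='WaveClock' toField='set_startTime'/>
  <ROUTE fromNode='WaveClock' fromField='fraction_changed'
         toNode='WaveCurve' toField='set_fraction'/>
  <ROUTE fromNode='WaveCurve' fromField='value_changed'
         toNode='RShoulder' toField='set_rotation'/>

Same pattern for any canned gesture: the MCCF decides when, the
interpolators decide how it looks.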
We're on new ground. I don't know of other efforts to drive an X3D
simulation with a multi-channel coherence field (say, an emotional
engine++) integrated with an LLM. I am working this out one piece at a
time. Pretty much like rocket design. Build one. Blow it up. Rinse and
repeat. :)
len