[x3d-public] MCCF Issues
John Carlson
yottzumm at gmail.com
Thu Apr 23 16:06:57 PDT 2026
If you like, I can help you get set up with ollama, a local model (maybe
gemma4, though I’m unsure of its coding capabilities), and maybe pi as a
way to build agents on your own. I’ve just started myself. English
generation looks very good. I haven’t set up any *.md files yet.
https://youtu.be/f8cfH5XX-XU?si=ZRfdMmKQWgysB6Zj
Seems like a way forward. Maybe we can find a local coding model. I’ve
used gemma3 in LMStudio.
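[For anyone trying this locally, here is a minimal sketch of talking to an ollama server over its REST API. It assumes a default install listening on localhost:11434; the model name is whatever you have pulled (gemma3 here), not something ollama ships by default.]

```javascript
// Build the JSON body for ollama's /api/generate endpoint.
// stream: false asks for a single complete JSON response.
function buildGenerateRequest(model, prompt) {
  return { model, prompt, stream: false };
}

// POST the request to a locally running ollama daemon.
async function generate(model, prompt) {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildGenerateRequest(model, prompt)),
  });
  const data = await res.json();
  return data.response; // the generated text
}

// Usage (requires ollama running and the model pulled):
// generate("gemma3", "Write a minimal X3D Material node.").then(console.log);
```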
I’m still a fan of web chat (free). Copilot or Codex might be options.
John
On Thu, Apr 23, 2026 at 10:33 AM <cbullard at hiwaay.net> wrote:
>
> Hi John:
>
> MCCF has prompts per waypoint that drive the dynamic state changes of
> the agent, where the agent is constrained by the initial settings of the
> cultivar pattern. So far we are using barcoded prompts and cultivars
> based on the Anthropic Constitution. That was my idea for a starting set
> while we worked out the math and the plumbing. The main friction at the
> moment is the SAI (Scene Access Interface). We are adapting, but getting
> behavior and material settings has been a lot tougher than I anticipated.
> A clean, clear API doc for the X3D engines would be a real good thing.
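[As a hedged illustration of the material-settings pain point: browser engines such as X3DOM reflect node fields as DOM attributes, so a Material's diffuseColor is written as a space-separated "r g b" string. The node id and helper below are illustrative assumptions, not the MCCF code.]

```javascript
// Format an X3D SFColor value: three floats in [0,1], space-separated.
function toSFColor(r, g, b) {
  const clamp = (v) => Math.min(1, Math.max(0, v));
  return [r, g, b].map(clamp).join(" ");
}

// In a browser with an X3DOM scene loaded (id "mat" is hypothetical):
// document.getElementById("mat").setAttribute("diffuseColor", toSFColor(1, 0, 0));
```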
>
> I plan to externalize the prompts and zone definitions in the next
> version. We keep a backlog in the GitHub.
>
> I want to work through that first.
>
> It includes a more complete math model per a recent paper from
> Anthropic but most of what they describe we already had discovered and
> implemented. Berners-Lee’s principle of independent invention is
> holding steady.
>
> I looked into MCP when it was mentioned on this list. Right now there
> are stubs for the foundation models and when we start using multiple
> models in the same scene, MCP is the most promising direction.
>
> Other local models are a good idea.
>
> The voice engine is using the browser-resident voices. Edge is superior
> to Firefox. I have an ElevenLabs account building credits for the
> future: richer emotional expression and XML compatibility, but it will
> require a new API design. TBD. As you say, $$$. Grants would be nice
> but I’m not holding my breath and I don’t need new bosses. The last ten
> years were hell in the MIC gulch.
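[The browser-resident voice path is the Web Speech API (speechSynthesis), which is why Edge and Firefox sound different: each browser ships its own voice inventory. A small sketch; pickVoice is a hypothetical helper, not part of the API.]

```javascript
// Pick the first voice whose BCP-47 tag matches a language prefix,
// e.g. "en" matches "en-US" and "en-GB". Returns null if none match.
function pickVoice(voices, langPrefix) {
  return voices.find((v) => v.lang && v.lang.startsWith(langPrefix)) || null;
}

// In a browser:
// const u = new SpeechSynthesisUtterance("Hello from the MCCF agent.");
// u.voice = pickVoice(speechSynthesis.getVoices(), "en");
// speechSynthesis.speak(u);
```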
>
> I designed and we implemented an XML export for a scene execution. TSV
> is weak and I am not a markdown fan, so one step at a time. The XML
> export is a start toward precisely declaring the content of messages,
> but these are early days and the concept of emotional system evolution
> as field constraints is edge tech. The foundation docs and the semantic
> attractor dynamics theory are interesting reading.
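[On the XML export: the message doesn't give the schema, so the element names below are hypothetical placeholders. The only firm part is that message content has to be escaped to keep the export well-formed.]

```javascript
// Escape the five XML special characters so message text stays well-formed.
function escapeXml(s) {
  return String(s).replace(/[<>&'"]/g, (c) =>
    ({ "<": "&lt;", ">": "&gt;", "&": "&amp;", "'": "&apos;", '"': "&quot;" }[c])
  );
}

// Serialize one scene-execution event; <event> and its attribute are
// hypothetical, not the actual MCCF export schema.
function eventToXml(name, value) {
  return `<event name="${escapeXml(name)}">${escapeXml(value)}</event>`;
}
```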
>
> I looked at your example of emotion behaviors. Very nice and quite an
> advanced X3D implementation. That is precisely what is needed for the
> next steps to go from simple shapes to rich scenes with H-Anim. There
> are some very talented 3D artists on the street as some metaverse
> ventures have collapsed. Getting them to reconsider the now awfully
> apparent advantages of open standards that preserve and protect
> investment in content lifecycles should be a work item.
>
> A lot of new features have been added to X3D. Y’all have been doing very
> good work. My compliments.
>
> I want to focus on the X3D namespace to avoid Swiss Army knife syndrome
> or whack-a-mole. I can feed prompts to image and video generators and
> it would work, but it is not a dynamic design. And it doesn’t give
> emotional flexibility for pause and inspect. And given the expense,
> nyet for now.
>
> We’re back on the bleeding edge, eh? That’s where the fun is.
>
> Thank you.
> len
>
>
>
> 2026-04-22 11:31 pm, John Carlson wrote:
> > 1. Have Claude generate a prompt for you.
> >
> > 2. Use skills to reduce tokens: https://skills.sh/ Requires agents.
> > I have been exploring ollama, pi and Gemma4 (many models) for local
> > processing.
> >
> > 3. Take advantage of X3D MCP if you can.
> >
> > John
> >
> > On Wed, Apr 22, 2026 at 2:21 PM Len Bullard via x3d-public
> > <x3d-public at web3d.org> wrote:
> >
> >> MCCF Progress:
> >>
> >> 1. Using Claude, a new chat loses state precariously. What worked
> >> smoothly in the first chat goes to hell in a new chat. Because chat
> >> context windows are limited, I am forced into a new chat, which is
> >> the 50 First Dates problem. I can use the GitHub to recover state
> >> but it is not a perfect solution.
> >>
> >>
> >
> https://aiartistinprocess.blogspot.com/2026/04/the-seed-crystal-how-artists-can.html
> >>
> >> 2. Rate limits on downloads from GitHub force copying into the
> >> Claude chat window. Second-session limits are much lower, so time
> >> is lost waiting for resets. Frustrating.
> >>
> >> 3. Much session time is wasted because Claude makes lots of
> >> mistakes with the coding, and in the successive chats, as the
> >> project grows more complex, this compounds. Corrections take
> >> several tries and each attempt has to be retested by me. It is
> >> confident of fixes but wrong. Claude can corrupt code. Keep old
> >> working versions. Frustrating.
> >>
> >> 4. If Claude is pulling from GitHub and I am testing the local
> >> files it
> >> gives to me, it is easy to desync and collide. This did not occur
> >> during first chat session. A nightmare.
> >>
> >> AI coding is productive but can be expensive. It is not all it is
> >> cracked up to be. YMMV.
> >>
> >> 5. Working with W3DC projects can be hit or miss. Working groups
> >> are not as responsive as ten years ago, and that is likely because
> >> it is a much smaller group. X_Lite is polymorphic because it is
> >> handling multiple formats: a 3D Swiss Army knife. At minimum this
> >> means API calls are not as documented in the standard. Product
> >> documentation, as far as I can tell, does not include the
> >> product-specific API calls. Two implications:
> >>
> >> a. If a developer is not "in the know" they will waste time and
> >> resources.
> >>
> >> b. AI will not be "in the know" because it scrapes for training
> >> data, and sparse data or legacy non-current data is used for
> >> training. It can't keep up. The results of a and b are frustration,
> >> wasted time and wasted money. Caveat emptor.
> >>
> >> 6. A Harvard researcher forked the GitHub repo. The good news is
> >> interest in an open source project. The bad news is he inherited
> >> all of the SAI problems. It is a prototype and those are the risks
> >> with open source. We shall see if code contributions are made back
> >> to the original project. Fortunately everything is time stamped so
> >> YMMV.
> >>
> >> 7. Reddit articles show that others, including Anthropic, are
> >> researching the emotional attractors in Claude. Because affective
> >> layers are part of every foundation LLM, this is good news. It
> >> validates what we set out to work on with the MCCF. The math
> >> approach we are using is likely unique because I require a dynamic
> >> system. It is working and we benefit from the published results of
> >> other researchers. MCCF is very expressive and pretty much on
> >> target for tools to explore emotional vectors, domains and
> >> trajectories in LLMs. The HumanML work twenty years ago paid off.
> >> I suspect the use of X3D for visualization for MCCF is likely
> >> unique. Because my goals for this project are not explicitly the
> >> same, I anticipate divergence. That's fine.
> >>
> >> Time to go do something else for a few hours.
> >>
> >> len
> >>
> >> _______________________________________________
> >> x3d-public mailing list
> >> x3d-public at web3d.org
> >> http://web3d.org/mailman/listinfo/x3d-public_web3d.org
>