<div dir="auto">If you like, I can help you get set up with ollama, a local model (maybe gemma4, but I’m unsure of coding capabilities) and maybe pi as a way to build agents on your own. I’ve just started myself. English generation looks very good. I haven’t set up any *.md files yet.</div><div dir="auto"><br></div><div dir="auto"><div style="font-size:inherit"><a href="https://youtu.be/f8cfH5XX-XU?si=ZRfdMmKQWgysB6Zj" style="font-size:inherit">https://youtu.be/f8cfH5XX-XU?si=ZRfdMmKQWgysB6Zj</a></div><br></div><div dir="auto"><br></div><div dir="auto">Seems like a way forward. Maybe we can find a local coding model. I’ve used gemma3 in LMStudio.</div><div dir="auto"><br></div><div dir="auto">I’m still a fan of web chat (free). Copilot or Codex might be options.</div><div dir="auto"><br></div><div dir="auto">John</div><div><br><div class="gmail_quote gmail_quote_container"><div dir="ltr" class="gmail_attr">On Thu, Apr 23, 2026 at 10:33 AM <<a href="mailto:cbullard@hiwaay.net">cbullard@hiwaay.net</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;padding-left:1ex;border-left-color:rgb(204,204,204)"><br>
Hi John:<br>
<br>
MCCF has prompts per waypoint that drive the dynamic state changes of <br>
the agent, where the agent is constrained by the initial settings of <br>
the cultivar pattern. So far we are using barcoded prompts and <br>
cultivars based on the Anthropic Constitution: my idea for a starting <br>
set while we worked out the math and the plumbing. The main friction <br>
at the moment is the SAI. We are adapting, but getting behavior and <br>
material settings has been a lot tougher than I anticipated. A clean, <br>
clear API doc for the X3D engines would be a real good thing.<br>
<br>
I plan to externalize the prompts and zone definitions in the next <br>
version. We keep a backlog in the GitHub.<br>
<br>
I want to work through that first.<br>
<br>
It includes a more complete math model per a recent paper from <br>
Anthropic, but most of what they describe we had already discovered <br>
and implemented. Berners-Lee’s principle of independent invention is <br>
holding steady.<br>
<br>
I looked into MCP when it was mentioned on this list. Right now there <br>
are stubs for the foundation models and when we start using multiple <br>
models in the same scene, MCP is the most promising direction.<br>
<br>
Other local models are a good idea.<br>
<br>
The voice engine uses the browser-resident voices; Edge is superior <br>
to Firefox. I have an ElevenLabs account accumulating credits for the <br>
future. It offers richer emotional expression and is XML-compatible, <br>
but it will require a new API design. TBD. As you say, $$$. Grants <br>
would be nice, but I’m not holding my breath and I don’t need new <br>
bosses. The last ten years were hell in the MIC gulch.<br>
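For anyone following along, a minimal sketch of how browser-resident <br>
voices are typically driven. This assumes the standard Web Speech API <br>
(speechSynthesis / SpeechSynthesisUtterance); the helper names are <br>
illustrative, not the MCCF engine's actual code:<br>

```javascript
// Illustrative only: select a browser-resident voice and build an
// utterance configuration. pickVoice/buildUtterance are invented helper
// names; speechSynthesis and SpeechSynthesisUtterance are the standard
// Web Speech API. Voice inventories differ by browser, which is why
// Edge and Firefox sound so different.
function pickVoice(voices, preferredLang) {
  // Prefer a voice matching the language tag; fall back to the first voice.
  return voices.find(v => v.lang.startsWith(preferredLang)) || voices[0] || null;
}

function buildUtterance(text, voices, preferredLang = "en") {
  const voice = pickVoice(voices, preferredLang);
  return { text, voiceName: voice ? voice.name : null, rate: 1.0, pitch: 1.0 };
}

// In a browser you would then do:
//   const cfg = buildUtterance("Hello", speechSynthesis.getVoices());
//   const u = new SpeechSynthesisUtterance(cfg.text);
//   u.voice = speechSynthesis.getVoices().find(v => v.name === cfg.voiceName);
//   u.rate = cfg.rate; u.pitch = cfg.pitch;
//   speechSynthesis.speak(u);
```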
<br>
I designed, and we implemented, an XML export for a scene execution. <br>
TSV is weak and I am not a markdown fan. So, one step at a time. The <br>
XML export is a start toward precisely declaring the content of <br>
messages, but these are early days and the concept of emotional system <br>
evolution as field constraints is edge tech. The foundation docs and <br>
the semantic attractor dynamics theory are interesting reading.<br>
<br>
I looked at your example of emotion behaviors. Very nice, and quite <br>
an advanced X3D implementation. That is precisely what is needed for <br>
the next steps, to go from simple shapes to rich scenes with H-Anim. <br>
There are some very talented 3D artists on the street as some <br>
metaverse ventures have collapsed. Getting them to reconsider the now <br>
awfully apparent advantages of open standards that preserve and <br>
protect investment in content lifecycles should be a work item.<br>
<br>
A lot of new features have been added to X3D. Y’all have been doing very <br>
good work. My compliments.<br>
<br>
I want to focus on the X3D namespace to avoid Swiss Army knife <br>
syndrome, or whack-a-mole. I could feed prompts to image and video <br>
generators and it would work, but it is not a dynamic design, and it <br>
doesn’t give the emotional flexibility to pause and inspect. And given <br>
the expense, nyet for now.<br>
<br>
We’re back on the bleeding edge, eh? That’s where the fun is.<br>
<br>
Thank you.<br>
len<br>
<br>
<br>
<br>
2026-04-22 11:31 pm, John Carlson wrote:<br>
> 1. Have Claude generate a prompt for you.<br>
> <br>
> 2. Use skills to reduce tokens: <a href="https://skills.sh/" rel="noreferrer" target="_blank">https://skills.sh/</a> Requires agents.<br>
> I have been exploring ollama, pi and Gemma4 (many models) for local<br>
> processing.<br>
> <br>
> 3. Take advantage of X3D MCP if you can.<br>
> <br>
> John<br>
> <br>
> On Wed, Apr 22, 2026 at 2:21 PM Len Bullard via x3d-public<br>
> <<a href="mailto:x3d-public@web3d.org" target="_blank">x3d-public@web3d.org</a>> wrote:<br>
> <br>
>> MCCF Progress:<br>
>> <br>
>> 1. Using Claude, a new chat loses state precariously. What worked<br>
>> smoothly in the first chat goes to hell in a new chat. Because chat<br>
>> context windows are limited, I am forced into a new chat, which is<br>
>> the 50 First Dates problem. I can use the GitHub to recover state,<br>
>> but it is not a perfect solution.<br>
>> <br>
>> <br>
> <a href="https://aiartistinprocess.blogspot.com/2026/04/the-seed-crystal-how-artists-can.html" rel="noreferrer" target="_blank">https://aiartistinprocess.blogspot.com/2026/04/the-seed-crystal-how-artists-can.html</a><br>
>> <br>
>> 2. Rate limits on downloads from GitHub force copying into Claude<br>
>> chat<br>
>> window. Second session limits are much lower so time is lost<br>
>> waiting<br>
>> for resets. Frustrating.<br>
>> <br>
>> 3. Much session time is wasted because Claude makes lots of<br>
>> mistakes<br>
>> with the coding and in the successive chats as the project grows<br>
>> more<br>
>> complex this compounds. Corrections take several tries and each<br>
>> attempt<br>
>> has to be retested by me. It is confident of fixes but wrong.<br>
>> Claude<br>
>> can corrupt code. Keep old working versions. Frustrating.<br>
>> <br>
>> 4. If Claude is pulling from GitHub and I am testing the local<br>
>> files it<br>
>> gives to me, it is easy to desync and collide. This did not occur<br>
>> during first chat session. A nightmare.<br>
>> <br>
>> AI coding is productive but can be expensive. It is not all it is<br>
>> cracked up to be. YMMV.<br>
>> <br>
>> 5. Working with W3DC projects can be hit or miss. Working groups<br>
>> are<br>
>> not as responsive as ten years ago and that is likely because it is<br>
>> a<br>
>> much smaller group. X_Lite is polymorphic because it is handling<br>
>> multiple formats. A 3D Swiss Army knife. At minimum this means API<br>
>> calls are not as documented in the standard. Product documentation,<br>
>> as far as I can tell, does not include the product-specific API<br>
>> calls. Two implications:<br>
>> <br>
>> a. If a developer is not "in the know" they will waste time and<br>
>> resources.<br>
>> <br>
>> b. AI will not be "in the know" because it scrapes for training<br>
>> data<br>
>> and sparse data or legacy non-current data is used for training.<br>
>> It<br>
>> can't keep up. Results of a and b are frustration, wasted time and<br>
>> wasted money. Caveat emptor.<br>
>> <br>
>> 6. A Harvard researcher forked the Github. The good news is<br>
>> interest<br>
>> in an open source project. The bad news is he inherited all of the<br>
>> SAI<br>
>> problems. It is a prototype and those are the risks with open<br>
>> source.<br>
>> We shall see if code contributions are made back to the original<br>
>> project. Fortunately everything is time stamped so YMMV.<br>
>> <br>
>> 7. Reddit articles show that others including Anthropic are<br>
>> researching<br>
>> the emotional attractors in Claude. Because affective layers are<br>
>> part<br>
>> of every Foundation LLM, this is good news. It validates what we<br>
>> set<br>
>> out to work on with the MCCF. The math approach we are using is<br>
>> likely<br>
>> unique because I require a dynamic system. It is working and we<br>
>> benefit<br>
>> from the published results of other researchers. MCCF is very<br>
>> expressive and pretty much on target for tools to explore emotional<br>
>> vectors, domains and trajectories in LLMs. The HumanML work twenty<br>
>> years ago paid off. I suspect the use of X3D for visualization for<br>
>> MCCF<br>
>> is likely unique. Because my goals for this project are not<br>
>> explicitly<br>
>> the same, I anticipate divergence. That's fine.<br>
>> <br>
>> Time to go do something else for a few hours.<br>
>> <br>
>> len<br>
>> <br>
>> _______________________________________________<br>
>> x3d-public mailing list<br>
>> <a href="mailto:x3d-public@web3d.org" target="_blank">x3d-public@web3d.org</a><br>
>> <a href="http://web3d.org/mailman/listinfo/x3d-public_web3d.org" rel="noreferrer" target="_blank">http://web3d.org/mailman/listinfo/x3d-public_web3d.org</a><br>
</blockquote></div></div>