[x3d-public] Story-to-Motion

John Carlson yottzumm at gmail.com
Wed Nov 22 18:35:38 PST 2023


Hot air:

What may be more interesting to game developers is tracking many, many
players in a game, and then producing NPCs, based on those traces, that are
capable of playing like humans.  It’s similar to “predict the next word” for
popular AIs.  Then add reinforcement learning to improve successful NPCs.
This is where synthetic motion generation (à la Story-to-Motion) may come in
handy instead of relying solely on humans for training data.  The question
is, can Story-to-Motion provide enough creativity?
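
To make the “predict the next word” analogy concrete, here is a minimal
sketch in Python (purely illustrative: the action vocabulary and traces are
invented, and a real system would use a sequence model plus a
reinforcement-learning loop rather than simple bigram counts):

from collections import Counter, defaultdict
import random

# Hypothetical recorded player traces: each trace is a sequence of discrete
# action tokens (this vocabulary is invented for illustration).
traces = [
    ["walk", "walk", "jump", "attack", "retreat"],
    ["walk", "jump", "jump", "attack", "attack"],
    ["idle", "walk", "attack", "retreat", "idle"],
]

# Count bigrams: how often each action follows the previous one.
next_counts = defaultdict(Counter)
for trace in traces:
    for prev, nxt in zip(trace, trace[1:]):
        next_counts[prev][nxt] += 1

def predict_next(prev_action):
    """Sample the NPC's next action given the previous one."""
    counts = next_counts.get(prev_action)
    if not counts:
        return "idle"  # fallback for contexts never seen in the traces
    actions, weights = zip(*counts.items())
    return random.choices(actions, weights=weights)[0]

# Roll out a short synthetic behaviour sequence from the learned statistics.
action = "walk"
for _ in range(5):
    action = predict_next(action)
    print(action)

Reinforcement learning would then bias which continuations get reinforced,
and Story-to-Motion-style generation could supply extra synthetic traces
where human data is thin.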

On Wed, Nov 22, 2023 at 7:57 PM John Carlson <yottzumm at gmail.com> wrote:

> Please let us know of supporting authoring systems and players.
>
> From what I can see of the video and paper, Story-To-Motion is more
> about the position of the humanoid and the rotation of its joints.  If
> these are “Group”ed by keyword, and smooth transitions are accomplished by
> chaining animations, then some degree of automation can result.  If the
> resulting numbers are available in a potential tool like JAminate, Blender,
> or a text editor, and accessible to human animators for optimization, all
> the better!
>
> I’m thinking more and more about parsing interpolators into JAminate,
> which has been a goal.  Using JAminate or a spreadsheet, one could
> potentially group key fractions and keyValues, then output interpolators
> and ROUTEs, which could be published in addition to chaining moves with
> BooleanSequencers.
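>
> As a rough sketch of that grouping-and-output step (JAminate’s actual data
> model is not assumed here, and the DEF names below are invented), grouped
> key fractions and keyValues for one joint could be emitted as an X3D
> OrientationInterpolator plus the ROUTEs that drive it:
>
> # Keyframes for one HAnim joint: (key fraction, axis-angle rotation).
> # The DEF names "WaveRot", "WaveClock", and "hanim_r_shoulder" are hypothetical.
> keyframes = [
>     (0.0, (0, 0, 1, 0.0)),
>     (0.5, (0, 0, 1, 1.57)),
>     (1.0, (0, 0, 1, 0.0)),
> ]
>
> def emit_orientation_interpolator(def_name, clock_def, joint_def, frames):
>     """Return X3D XML for an OrientationInterpolator and its ROUTEs."""
>     key = " ".join(f"{f:g}" for f, _ in frames)
>     key_value = "  ".join(" ".join(f"{v:g}" for v in rot) for _, rot in frames)
>     return "\n".join([
>         f"<OrientationInterpolator DEF='{def_name}' key='{key}' keyValue='{key_value}'/>",
>         f"<ROUTE fromNode='{clock_def}' fromField='fraction_changed' toNode='{def_name}' toField='set_fraction'/>",
>         f"<ROUTE fromNode='{def_name}' fromField='value_changed' toNode='{joint_def}' toField='set_rotation'/>",
>     ])
>
> print(emit_orientation_interpolator("WaveRot", "WaveClock", "hanim_r_shoulder", keyframes))
>
> The same table could also emit one BooleanSequencer per clip, driven by a
> master TimeSensor, to enable and disable per-clip TimeSensors for the
> chaining of moves.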
>
> John
>
> On Wed, Nov 22, 2023 at 7:18 PM Joe D Williams <joedwil at earthlink.net>
> wrote:
>
>> It might be that the closest X3D work on this is the Web3D X3D HAnim WG
>> facial-animation standards track.  The facial mesh, at various densities,
>> is organized into regions that are activated in certain sets to perform a
>> particular expression.  So standard keywords and parameter sets can
>> activate standard related areas of the mesh, allowing the use of
>> standardized animations as well as serving as a prime basis for the
>> personalization of the Humanoid that you really want to do.  This is not a
>> quick and simple effort, but it is very interesting and design-friendly,
>> offers much-simplified authoring, and is much more standardizable and
>> demonstrable at all levels of expression using the HAnim Displacer
>> concept.
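>>
>> As a tiny illustration of that keyword-to-region idea (the expression
>> keywords, Displacer names, and weights below are invented, not taken from
>> the standard), an expression keyword could select a set of HAnim Displacer
>> weights for a player to apply to the face mesh:
>>
>> # Hypothetical mapping from expression keywords to HAnimDisplacer weights.
>> # Displacer names and weight values are illustrative only.
>> EXPRESSIONS = {
>>     "smile":    {"lip_corner_raiser": 0.8, "cheek_raiser": 0.5},
>>     "surprise": {"brow_raiser": 0.9, "jaw_drop": 0.6},
>>     "neutral":  {},
>> }
>>
>> def displacer_weights(expression, all_displacers):
>>     """Return the weight to set on each named Displacer for one expression."""
>>     active = EXPRESSIONS.get(expression, {})
>>     # Displacers not used by the expression relax back to 0.
>>     return {name: active.get(name, 0.0) for name in all_displacers}
>>
>> displacers = ["lip_corner_raiser", "cheek_raiser", "brow_raiser", "jaw_drop"]
>> print(displacer_weights("smile", displacers))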
>>
>> Joe
>>
>> -----Original Message-----
>> From: John Carlson <yottzumm at gmail.com>
>> Sent: Nov 21, 2023 3:19 PM
>> To: Joe D Williams <joedwil at earthlink.net>, X3D Graphics public mailing
>> list <x3d-public at web3d.org>
>> Subject: Re: Story-to-Motion
>>
>>
>> Here’s a better link, I hope.
>>
>>
>> https://arxiv.org/abs/2311.07446#:~:text=A%20new%20and%20challenging%20task,level%20control%20(motion%20semantics)
>>
>> This should be a better link (added f):
>>
>> https://arxiv.org/pdf/2311.07446.pdf
>>
>> Apologies for multiple posts.
>>
>> John
>>
>> On Tue, Nov 21, 2023 at 5:08 PM John Carlson <yottzumm at gmail.com> wrote:
>>
>>> https://arxiv.org/pdf/2311.07446.pd
>>>
>>> Has this research been peer-reviewed?
>>>
>>> It’s hard to read on my cell phone; I may get my computer and attempt to
>>> read it.
>>>
>>> John
>>>
>>
>>
>