[x3d-public] Story-to-Motion

John Carlson yottzumm at gmail.com
Wed Nov 22 20:27:26 PST 2023


Doug, if you need to re-enable HAnimMotion (motions) in the exporter, go
ahead.  I couldn't get it working well, but we could use
export_interpolators.py as a starting point and redo export_bvh.py.

John

On Wed, Nov 22, 2023 at 9:36 PM GPU Group <gpugroup at gmail.com> wrote:

> Looks interesting -- go for it.
> And maybe some of the papers referenced might have some interesting
> algorithms.
> Relating to HAnim, could there be a HyperMotion node whose .motions
> [Motion] field would be the 'database of short animations' the article
> mentions, with any needed attributes added to the Motion node, for example a
> 'name' for name lookup / matching of a motion, a handy 'cycle duration',
> distance per cycle, etc.?
> HyperMotion might compute or be programmed with a queue of (name,
> location_target(x,y), duration(time)) tuples; when it gets to a target, it
> looks up the next motion 'name' from the .motions list and does transition
> blending for a fraction of a second between the old and new motion.
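> A rough Python sketch of how such a queue might be consumed (HyperMotion,
> Motion, and every name and field below are illustrative assumptions, not
> part of any HAnim specification):
>
>     class Motion:
>         """Hypothetical short-animation entry with the extra attributes above."""
>         def __init__(self, name, cycle_duration, distance_per_cycle):
>             self.name = name
>             self.cycle_duration = cycle_duration          # seconds per cycle
>             self.distance_per_cycle = distance_per_cycle  # meters per cycle
>
>     class HyperMotion:
>         """Hypothetical node that plays queued motions from a motion database."""
>         BLEND_TIME = 0.25  # fraction of a second of blending between motions
>
>         def __init__(self, motions):
>             self.motions = {m.name: m for m in motions}  # 'database of short animations'
>             self.queue = []      # (name, (x, y) target, duration) tuples
>             self.current = None
>
>         def enqueue(self, name, target_xy, duration):
>             self.queue.append((name, target_xy, duration))
>
>         def on_target_reached(self):
>             """Look up the next motion by name and blend from the old one."""
>             if not self.queue:
>                 return
>             name, target_xy, duration = self.queue.pop(0)
>             next_motion = self.motions[name]  # name lookup / matching of motion
>             if self.current is not None:
>                 self.blend(self.current, next_motion, self.BLEND_TIME)
>             self.current = next_motion
>
>         def blend(self, old, new, seconds):
>             print(f"blending {old.name} -> {new.name} over {seconds} s")
>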
> But the story-to-motion model, or whatever else is generating the queue of
> tuples - that's the part that looks either laborious to pre-program or in
> need of some sort of AI automation to generate the queue.
> And what about collision avoidance with other animated characters and
> static scene elements? What about switching to a stair-climbing animation
> when over stairs, and other scene responsiveness?
> Reminds me a bit of the Crowd Particles function map.
>
> https://freewrl.sourceforge.io/tests/40_Particle_systems/crowd/function_map.png
>
>
> https://drive.google.com/file/d/1krC0k2CKZDuOGgzlyaFosrFbjFRO2qzx/view?usp=drive_link
>
> which could be done in other ways, such as function bounding boxes /
> rectangles or, as the paper suggested, trajectory lines.
> So HyperMotion might have a children field with some sort of (zone,
> function) tuple list it could use to switch between motions along a
> trajectory, as in the sketch below.
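> A minimal sketch of that idea (the zone shape and the motion names are
> assumptions):
>
>     # Hypothetical (zone, function) list: each zone is a ground-plane
>     # rectangle (xmin, ymin, xmax, ymax); its function picks the motion name.
>     zone_functions = [
>         ((0.0, 0.0, 10.0, 2.0),  lambda pos: "walk"),
>         ((10.0, 0.0, 12.0, 2.0), lambda pos: "climb_stairs"),  # over stairs
>     ]
>
>     def motion_for_position(pos, zone_functions, default="idle"):
>         """Switch motions along the trajectory: first zone containing pos wins."""
>         x, y = pos
>         for (xmin, ymin, xmax, ymax), func in zone_functions:
>             if xmin <= x <= xmax and ymin <= y <= ymax:
>                 return func(pos)
>         return default
>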
> -Doug
>
>
> On Wed, Nov 22, 2023 at 7:36 PM John Carlson <yottzumm at gmail.com> wrote:
>
>> Hot air:
>>
>> What may be more interesting to game developers is tracking many, many
>> players in a game and then producing NPCs, based on those traces, that are
>> capable of playing like humans.  It’s similar to “predict the next word”
>> for popular AIs.  Then add reinforcement learning to improve successful
>> NPCs.  This is where synthetic motion generation (a la Story-to-Motion) may
>> come in handy instead of relying solely on humans for training data.  The
>> question is, can story-to-motion provide enough creativity?
>>
>> On Wed, Nov 22, 2023 at 7:57 PM John Carlson <yottzumm at gmail.com> wrote:
>>
>>> Please let us know of supporting authoring systems and players.
>>>
>>> From what I can see of the video and paper, Story-To-Motion is more
>>> about the position of the humanoid and the rotation of joints.  If these
>>> are “Group”ed by keyword, and smooth transitions are accomplished by
>>> chaining animations, then some degree of automation can result.  If the
>>> resulting numbers are available in a potential tool like JAminate, Blender,
>>> or a text editor, and accessible to human animators for optimization, all
>>> the better!
>>>
>>> I’m thinking more and more about parsing interpolators into JAminate,
>>> which has been a goal.  Using JAminate or a spreadsheet, one could
>>> potentially group key fractions and keyValues, then publish the resulting
>>> interpolators and ROUTEs, in addition to chaining moves with
>>> BooleanSequencers.
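>>> As a rough illustration of that grouping idea (this is not JAminate's
>>> actual API; the helper and the joint DEF name are hypothetical, while
>>> OrientationInterpolator and the ROUTE fields are standard X3D):
>>>
>>>     # Sketch: turn grouped (fraction, value) rows into an interpolator + ROUTE.
>>>     def make_orientation_interpolator(def_name, rows, target_joint):
>>>         """rows: (key_fraction, 'x y z angle') pairs, already grouped and sorted."""
>>>         keys = " ".join(str(k) for k, _ in rows)
>>>         key_values = ", ".join(v for _, v in rows)
>>>         node = (f'<OrientationInterpolator DEF="{def_name}" '
>>>                 f'key="{keys}" keyValue="{key_values}"/>')
>>>         route = (f'<ROUTE fromNode="{def_name}" fromField="value_changed" '
>>>                  f'toNode="{target_joint}" toField="set_rotation"/>')
>>>         return node, route
>>>
>>>     node, route = make_orientation_interpolator(
>>>         "RShoulderRot",
>>>         [(0.0, "0 0 1 0"), (0.5, "0 0 1 0.8"), (1.0, "0 0 1 0")],
>>>         "hanim_r_shoulder")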
>>>
>>> John
>>>
>>> On Wed, Nov 22, 2023 at 7:18 PM Joe D Williams <joedwil at earthlink.net>
>>> wrote:
>>>
>>>> It might be that the closest X3D work related to this is the facial
>>>> animation standards track in the Web3D X3D HAnim WG. Facial meshes of
>>>> various densities are organized into regions that are activated in certain
>>>> sets to perform a certain expression.  So standard keywords and parameter
>>>> sets can activate standard, related areas of the mesh, allowing the use of
>>>> standardized animations as well as serving as a prime basis for the
>>>> personalization of the Humanoid that you really want to do. This is not a
>>>> quick and simple effort. It is very interesting, design-friendly, much
>>>> simplified authoring, and much more standardizable and demonstrable at all
>>>> levels of expression using the HAnim Displacer concept.
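>>>> As a purely illustrative sketch of the keyword-to-region idea (the
>>>> expression keywords and Displacer names below are made up, not the
>>>> standard's vocabulary):
>>>>
>>>>     # A standard expression keyword activates a set of facial-region
>>>>     # Displacers by driving each HAnimDisplacer's weight field.
>>>>     expressions = {
>>>>         "smile":    [("l_mouth_corner_raise", 0.8), ("r_mouth_corner_raise", 0.8)],
>>>>         "surprise": [("brow_raise", 1.0), ("jaw_open", 0.6)],
>>>>     }
>>>>
>>>>     def apply_expression(keyword, set_displacer_weight):
>>>>         """set_displacer_weight(name, weight) would set that Displacer's weight."""
>>>>         for displacer_name, weight in expressions.get(keyword, []):
>>>>             set_displacer_weight(displacer_name, weight)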
>>>>
>>>>
>>>>
>>>> Joe
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> -----Original Message-----
>>>> From: John Carlson <yottzumm at gmail.com>
>>>> Sent: Nov 21, 2023 3:19 PM
>>>> To: Joe D Williams <joedwil at earthlink.net>, X3D Graphics public
>>>> mailing list <x3d-public at web3d.org>
>>>> Subject: Re: Story-to-Motion
>>>>
>>>>
>>>> Here’s a better link, I hope.
>>>>
>>>>
>>>> https://arxiv.org/abs/2311.07446#:~:text=A%20new%20and%20challenging%20task,level%20control%20(motion%20semantics)
>>>> .
>>>>
>>>> This should be a better link (added f):
>>>>
>>>> https://arxiv.org/pdf/2311.07446.pdf
>>>>
>>>> Apologies for multiple posts.
>>>>
>>>> John
>>>>
>>>> On Tue, Nov 21, 2023 at 5:08 PM John Carlson <yottzumm at gmail.com>
>>>> wrote:
>>>>
>>>>> https://arxiv.org/pdf/2311.07446.pd
>>>>>
>>>>> Has this research been peer-reviewed?
>>>>>
>>>>> It’s hard to read on my cell phone; I may get my computer and attempt
>>>>> to read it.
>>>>>
>>>>> John
>>>>>
>>>>
>>>>
>>> _______________________________________________
>> x3d-public mailing list
>> x3d-public at web3d.org
>> http://web3d.org/mailman/listinfo/x3d-public_web3d.org
>>
>

