[x3d-public] (emotion) prediction axis (EMOCAP in Humanoid Animation)
yottzumm at gmail.com
Sun Oct 20 04:07:22 PDT 2019
In an ongoing effort to generate X3D models with math and physics, I stumbled across "Emotional Grammar." Here is where I am going with the prediction of emotions. I am very much a noob at this, but I have been studying generators for a little over a year. First, I will identify emotion generators.
1. Changing a belief, attitude, or name
2. Musical instrument
3. Singing voice
4. Soft voice
5. Calm space
6. Busy street
7. Normal voice
8. Loud voice
9. New job
10. Being laid off
11. Care (generates feelings of safety)
12. Emotional transference
13. Expressed emotion (generates emotion in others)
14. Closeness of a pet or loved one
15. Anticipating family reunions
16. Psychological states
17. Probably a kazillion of these things. Anyone want to categorize them? I realize a flat list is not the appropriate tool.
18. A sense of incompleteness
Next, I should order these generators by predictability and identify tools for predicting the emotions they produce.
1. We can try predicting one generator from a different generator in a simulation.
2. We can predict music's emotional effect by reading the notes on paper and applying knowledge of chords' effects on emotion.
3. We can predict that a busy street, a loud voice, or noise will generate more incoherent emotions.
4. The harder a generator is to predict, the more disorderly the resulting emotions will be.
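To make point 2 concrete, here is a minimal sketch of what "applying knowledge of chords' effects on emotion" could look like as a lookup. The chord-quality names and their valence scores are purely illustrative assumptions on my part, not measured data; a real predictor would learn these weights.

```python
# Hypothetical sketch: guess an emotional "valence" (-1 = dark, +1 = bright)
# for a chord progression. The per-chord scores below are assumptions
# chosen for illustration, not empirical values.

CHORD_VALENCE = {
    "maj": 0.6,    # major chords often read as bright/happy
    "min": -0.5,   # minor chords often read as sad/serious
    "dim": -0.8,   # diminished chords often read as tense
    "dom7": 0.1,   # dominant sevenths often read as unresolved
}

def progression_valence(chords):
    """Average the per-chord valence scores; unknown chords count as neutral."""
    if not chords:
        return 0.0
    scores = [CHORD_VALENCE.get(chord, 0.0) for chord in chords]
    return sum(scores) / len(scores)
```

Even a toy like this suggests how "predictable" generators (written music) differ from unpredictable ones (a busy street): the former can be scored symbolically before anything is heard.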
Then, I need an emotion capture/recognition device, aka EMOCAP device.
After that, I need to organize the data captured.
https://www.affectiva.com/
We can capture emotions as facial expressions, body expressions, changes in heart rate, and readings from other diagnostic measurement devices. We may need to map the captured data to words. It looks like EMOCAP and deep learning may come to the rescue. Does X3Dv4 handle EMOCAP? How does an EMOCAP device limit us? Can empaths do better?
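As a sketch of the "map captured data to words" step, here is one way a lookup from detected facial Action Units (FACS numbering) to basic-emotion labels might look. The AU combinations follow commonly cited Ekman-style descriptions, but the exact signatures and the nearest-match rule are my assumptions; real affect-recognition pipelines (like Affectiva's) learn this mapping from data rather than hard-coding it.

```python
# Hypothetical mapping from FACS Action Units to emotion words.
# Signatures are illustrative; a production EMOCAP system would be trained.

EMOTION_SIGNATURES = {
    "happiness": {6, 12},        # cheek raiser + lip corner puller
    "surprise": {1, 2, 5, 26},   # brow raisers + upper lid raiser + jaw drop
    "sadness": {1, 4, 15},       # inner brow raiser + brow lowerer + lip corner depressor
}

def label_emotion(detected_aus):
    """Return the emotion word whose AU signature best overlaps the detected set."""
    best, best_score = "neutral", 0
    for emotion, signature in EMOTION_SIGNATURES.items():
        score = len(signature & detected_aus)
        if score > best_score:
            best, best_score = emotion, score
    return best
```

The resulting words could then drive an HAnim facial-animation target, which is where the X3Dv4 question above becomes concrete.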
Would this be better for the UX group? Or the Printing and Scanning group?