[x3d-public] Constrained Sound and Video Sensors for X3Dv4. [ 6th in a series ]
yottzumm at gmail.com
Tue May 14 18:02:51 PDT 2019
I found a constrained sensor in X3Dv3.3, PlaneSensor (and probably LineSensor). I was referring to a SoundSensor or VideoSensor, and to how I might distinguish ambient input from actionable input in web3d (enabling the mic and camera in a browser is an important part of this). For example, music playing in the town square versus an alarm siren going off. Or, if I move away from the computer, I may want my app to turn off the microphone and the camera (but not the speaker). Also: detecting anomalous events on the network and showing a visualization of them, for fun or for work!
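To make the ambient-versus-actionable distinction concrete, here is a minimal sketch in TypeScript. It is purely illustrative and assumes only sustained loudness as the discriminating feature (the function name judgeAudio and both thresholds are my inventions, not anything from X3D); real classification of a siren versus town-square music would need spectral analysis.

```typescript
// Hypothetical sketch: distinguish "ambient" audio (music in the town
// square) from "actionable" audio (an alarm siren) using only sustained
// loudness. Illustrative only; real classification needs spectral features.

type AudioJudgement = "ambient" | "actionable";

// levels: RMS amplitude samples in [0, 1], e.g. one per 100 ms window.
// The signal counts as actionable when it stays at or above `threshold`
// for at least `minSustainedWindows` consecutive windows.
function judgeAudio(
  levels: number[],
  threshold = 0.7,
  minSustainedWindows = 5
): AudioJudgement {
  let run = 0;
  for (const level of levels) {
    run = level >= threshold ? run + 1 : 0;
    if (run >= minSustainedWindows) return "actionable";
  }
  return "ambient";
}
```

A siren that pins the level high for a second would trip the "actionable" branch; background music that only occasionally peaks would not.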
Generalized input/output interface support
Possibly Virtual Reality Peripheral Network (VRPN), gesture recognition (such as KINECT, LEAP), etc.
Support for arbitrary sensors and user interaction devices
Has anyone tried to integrate https://www.affectiva.com/ into an X3D browser?
I have both Kinect and Leap Motion (somewhere) if someone wants me to test something.
What are other frameworks for dealing with webcams and microphones on the web? Cordova? Gstreamer? I would like to know. Can I record something into a Jupyter Lab?
How does one create a video constraint or an audio constraint?
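On the web, the standard answer is the MediaStream API: getUserMedia takes a constraints object whose audio and video fields use MediaTrackConstraints. A minimal sketch (the constraint fields below are real spec properties; the getUserMedia call itself only runs in a browser after the user grants permission, which is why it is left commented out):

```typescript
// Constraining camera/microphone capture with the standard MediaStream API.
// `ideal` asks the browser to prefer a value; `max` is a hard cap.
const captureConstraints = {
  audio: {
    echoCancellation: { ideal: true },
    noiseSuppression: { ideal: true },
    sampleRate: { ideal: 48000 },
  },
  video: {
    width: { ideal: 640 },
    height: { ideal: 480 },
    frameRate: { max: 30 },
  },
};

// In a browser context, the stream would be requested like this:
// navigator.mediaDevices.getUserMedia(captureConstraints)
//   .then(stream => { /* e.g. feed an X3D video/audio texture */ })
//   .catch(err => console.error("capture denied:", err));
```

The browser mediates the permission prompt, so "enabling mic and cam" stays under user control regardless of what the constraints request.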
I’ll be calling you, Mr. E, very soon now, if my contact works.
From: John Carlson
Sent: Tuesday, May 14, 2019 6:41 PM
To: X3D Graphics public mailing list
Subject: 5th? in a series on the rise and demise of Controller-Centered Computing, aka Model-View-Controller (from Smalltalk) Discussion
Very boring, except for passive ambient input discussion (non-command controllers).
So this is stepping up a level into systems of paradigms (paradigms being view, controller, model, and processing), so around 14-15 in the hierarchy of complexity, 16 being maxed out. https://en.wikipedia.org/wiki/Model_of_hierarchical_complexity#Stages_of_hierarchical_complexity
Review and Expand:
Controller/Input Device components
https://en.wikipedia.org/wiki/Input_device (too many to list here)
Physical Motion (yaw, roll, dive)
Medical Devices (is there a group?)
Video of Sports Games
Therapy Robot input
Unordered Map (object)
Ordered Map (array, list, function, grid, uri)
Graph (network, DAG, Hypergraph)
New category, but previously covered
Not sure where this fits
So controllers are moving away from typical to atypical inputs. Controller input *might* be converted into commands. I am trying to deal with the cases where controller input is NOT converted to commands. Can we list them? What do we call controller input that isn't commands? Is there a term for it? Ambient input? Here's a use of the term: https://blog.joshlewis.org/2007/03/22/passive-ambient-input/
How do we collect and analyze these inputs? Your feedback on these subjects (research papers welcome) is desired. Jeffrey Allen suggested that computers might move into the background, nearly invisible, and things might work off of gestures.
How might X3D create a sensor for passive ambient input that is processed in a non-event fashion, except when some condition is met? What design might we implement for passive ambient input? One example might be to lower the output from the stereo when I'm talking on the phone. "Constrained input" might be a more conventional term for it. How might constrained input be implemented with Sensors? (No, I haven't read the standard!)