[x3d-public] Constrained Sound and Video Sensors for X3Dv4. [ 6th in a series ]

John Carlson yottzumm at gmail.com
Tue May 14 18:02:51 PDT 2019


I found a constrained sensor in X3Dv3.3, PlaneSensor (and probably LineSensor).  I was referring to a SoundSensor or VideoSensor, and how I might sense ambient input versus actionable input in Web3D (enabling the mic and cam in a browser is an important part of this).  For example, music playing in the town square versus an alarm siren going off.  Or, if I move away from the computer, I may want my app to turn off the microphone and the camera (but not the speaker).  Also, detecting anomalous events on the network and showing a visualization of them, for fun or work!
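
This isn’t X3D, but as a minimal browser-JavaScript sketch of the “turn off the microphone and the camera when I move away” idea, the standard getUserMedia and Page Visibility APIs get partway there (page visibility is only a stand-in for real presence detection):

    // Sketch: release the mic and camera when the page loses visibility,
    // as a stand-in for detecting that the user has moved away.
    let stream = null;

    async function enableMicAndCam() {
      // Prompts the user for microphone and camera access.
      stream = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
    }

    document.addEventListener('visibilitychange', () => {
      if (document.hidden && stream) {
        // Stopping each track releases the devices; audio *output* is untouched,
        // so the speaker keeps working.
        stream.getTracks().forEach((track) => track.stop());
        stream = null;
      }
    });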

From: http://www.web3d.org/wiki/index.php/X3D_version_4.0_Development
Generalized input/output interface support
       Possibly Virtual Reality Peripheral Network (VRPN), gesture recognition (such as KINECT, LEAP), etc.
       Support for arbitrary sensors and user interaction devices

Would this be a good thing for a working group to work on?  A sensors working group?  Expand the networking group to cover all types of sensors?  Is support for arbitrary events being added to v4.0?   Can I program up any sensor I want in JavaScript?

Has anyone tried to integrate https://www.affectiva.com/ into an X3D browser?

I have both a Kinect and a Leap Motion (somewhere), if someone wants me to test something.

What are other frameworks for dealing with webcams and microphones on the web? Cordova?  GStreamer?  I would like to know.  Can I record something into a JupyterLab?

How does one create a video constraint or an audio constraint?
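
On the web side, at least, a constraint is just a dictionary (MediaTrackConstraints) passed to getUserMedia; here is a minimal sketch, with placeholder values:

    // Sketch: audio and video constraints as getUserMedia understands them.
    const constraints = {
      audio: {
        echoCancellation: true,          // bare values act as hints; use { exact: true } to require
        sampleRate: { ideal: 48000 }
      },
      video: {
        width:  { min: 640, ideal: 1280 },
        height: { min: 480, ideal: 720 },
        facingMode: 'user'               // front-facing camera, if available
      }
    };

    navigator.mediaDevices.getUserMedia(constraints)
      .then((stream) => {
        // Constraints can also be tightened later on a live track.
        const [videoTrack] = stream.getVideoTracks();
        return videoTrack.applyConstraints({ frameRate: { max: 30 } });
      })
      .catch((err) => console.error('getUserMedia failed:', err));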

I’ll be calling you, Mr. E, very soon now, if my contact works.

Thanks,

John
Sent from Mail for Windows 10

From: John Carlson
Sent: Tuesday, May 14, 2019 6:41 PM
To: X3D Graphics public mailing list
Subject: 5th? in a series on the rise and demise of Controller-Centered Computing

Model-View-Controller (from Smalltalk) discussion, aka Storage-Output-Input-Processing

Very boring, except for the passive ambient input discussion (non-command controllers).

So this is stepping up a level into systems of paradigms (the paradigms being view, controller, model, and processing), so around 14-15 in the hierarchy of complexity, 16 being maxed out. https://en.wikipedia.org/wiki/Model_of_hierarchical_complexity#Stages_of_hierarchical_complexity

=============================================
Review:
        View/Output components
                Pixel/Voxel/Movie
                Letter
                Shape

Review and Expand:
        Controller/Input Device components
                Typical:
                        Individual
                                https://en.wikipedia.org/wiki/Input_device (too many to list here)
                                Physical Motion (yaw, roll, dive)
                                Medical Devices (is there a group?)
                                Presence/Location
                                Leap Motion
                        Group
                                Social Forums/Chat/Voice/Video
                                Video of Sports Games
                                Corporation input
                Atypical:
                        Individual
                                Emotional Energy
                                        Fear
                                        Anxiety
                                        Love
                                Thinking
                                Therapy Robot input
                        Group
                                DynamicLand
                                Kinect

New:
        Model/Storage components
                Primitive Types
                        Boolean
                        Number
                        Letter
                        Pointer/Reference/Address
                        Frequency
                        Wavelength
                        Temperature
                Structured Types
                        Unordered Map (object)
                        Ordered Map (array, list, function, grid, uri)
                        Graph (network, DAG, Hypergraph)
                        Date/Time
                        Blood Pressure
                        Pulse

New category, but previously covered:
        Processing/Generators
                Loop
                        Procedural
                        Hyper
                        Stochastic
                        Chaotic
                        Quantum
                Not sure where this fits:
                        Meta
So controllers are moving away from typical to atypical inputs. Controller input *might* be converted into commands.  I am trying to deal with the cases where controller input is NOT converted to commands.  Can we list them?  What do we call controller input that isn’t commands?  Is there a term for it?  Ambient input?  Here’s a use of the term: https://blog.joshlewis.org/2007/03/22/passive-ambient-input/
How do we collect and analyze these inputs? Your feedback on these subjects (research papers welcome) is desired.  Jeffrey Allen suggested that computers might move into the background, nearly invisible, and things might work off of gestures.

How might X3D create a sensor for passive ambient input that is processed in a non-event fashion, except when some condition is met?  What design might we implement for passive ambient input?  One example might be to lower the output from the stereo when I’m talking on the phone.  Constrained input might be a more typical term for it.  How might constrained input be implemented with Sensors?  (No, I haven’t read the standard!)
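
For what it’s worth, here is one shape the “lower the stereo while I’m talking” example might take in browser JavaScript with the Web Audio API: the microphone is analysed continuously, but nothing acts until the level crosses a threshold.  The threshold, gain values, and polling interval are arbitrary placeholders, and outputGainNode is assumed to already sit in front of whatever the stereo is playing:

    // Sketch: passive ambient input, processed in a non-event fashion
    // until a condition (speech-level sound) is met.
    const ctx = new AudioContext();
    const THRESHOLD = 0.1; // placeholder speech-level threshold

    async function duckWhileTalking(outputGainNode) {
      const mic = await navigator.mediaDevices.getUserMedia({ audio: true });
      const source = ctx.createMediaStreamSource(mic);
      const analyser = ctx.createAnalyser();
      source.connect(analyser); // analysis only; mic audio never reaches the speakers

      const samples = new Float32Array(analyser.fftSize);
      setInterval(() => {
        analyser.getFloatTimeDomainData(samples);
        // Root-mean-square level of the current window.
        const rms = Math.sqrt(samples.reduce((s, x) => s + x * x, 0) / samples.length);
        // Duck the stereo while the mic is "hot"; restore it otherwise.
        outputGainNode.gain.value = rms > THRESHOLD ? 0.2 : 1.0;
      }, 100);
    }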

                                                



