[x3d-public] Sound v4 > ListenerPointSource QUESTIONS

GPU Group gpugroup at gmail.com
Thu Mar 2 04:59:47 PST 2023


Q5. How does LPS refer to the stream(s) it wants? There's no field saying
which SAD stream(s) it wants. Or would all LPS get all the SAD streams in
the scene?

On Thu, Mar 2, 2023 at 5:56 AM GPU Group <gpugroup at gmail.com> wrote:

> Q4. Or does it mean LPS is implemented with one (or two, for binaural)
> panner node(s), and the panner positions the SAD sound relative to its pose
> (with no influence on panner nodes under SAD)?
> -Doug
>
> On Thu, Mar 2, 2023 at 5:31 AM GPU Group <gpugroup at gmail.com> wrote:
>
>> Thanks very much, Don. Yes, it's fascinating - a great node.
>>
>> Specs: "If the *dopplerEnabled* field is TRUE, ListenerPointSource
>> *children* sources which are moving spatially in the transformation
>> hierarchy, relative to the location of the ListenerPointSource node, shall
>> apply velocity-induced frequency shifts corresponding to Doppler effect."
>> You said LPS has no inputs (children), yet this text talks about its
>> children.
>> Q1. Is it trying to say the source streams / stream sources / SAD
>> StreamAudioDestinations?
>> Q2. Is it saying the LPS pose would be used by any panner node under any
>> SAD it refers to?
>> (PS: Web Audio has deprecated the Doppler effect; see the sketch at the
>> end of this message.)
>> Q3. If multiple LPS refer to the same SAD, which LPS pose would panner
>> nodes under SAD use?
>> Thanks, Doug
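>>
>> (For reference, a sketch of the usual workaround: with the built-in
>> Doppler gone, an implementation can shift pitch manually via
>> playbackRate. The function names, velocity inputs, and speed-of-sound
>> constant below are illustrative assumptions, not from the X3D spec.)
>>
>>   // Sketch: manual Doppler shift, since Web Audio removed the built-in one.
>>   const SPEED_OF_SOUND = 343; // m/s, assumed constant
>>
>>   // Classic Doppler: f' = f * (c + vListener) / (c - vSource), velocities
>>   // in m/s, positive when moving toward the other party.
>>   function dopplerFactor(vListenerToward: number, vSourceToward: number): number {
>>     return (SPEED_OF_SOUND + vListenerToward) / (SPEED_OF_SOUND - vSourceToward);
>>   }
>>
>>   // Apply each update to a buffer-backed source:
>>   function applyDoppler(src: AudioBufferSourceNode, vL: number, vS: number): void {
>>     src.playbackRate.value = dopplerFactor(vL, vS);
>>   }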
>>
>> On Wed, Mar 1, 2023 at 11:31 PM Brutzman, Donald (Don) (CIV) <
>> brutzman at nps.edu> wrote:
>>
>>> Thanks for asking Doug.  References:
>>>
>>>
>>>
>>>    - X3D4 Architecture, Sound Component, 16.4.13 ListenerPointSource
>>>      https://www.web3d.org/specifications/X3Dv4Draft/ISO-IEC19775-1v4-DIS/Part01/components/sound.html#ListenerPointSource
>>>    - X3D Tooltips, ListenerPointSource
>>>      https://www.web3d.org/x3d/content/X3dTooltips.html#ListenerPointSource
>>>
>>>
>>>
>>> “ListenerPointSource represents the position and orientation of a person
>>> listening to virtual sound in the audio scene, and provides single or
>>> multiple sound channels as output. Multiple ListenerPointSource nodes can
>>> be active for sound processing.”
>>>
>>>
>>>
>>> Suggested analogy: a virtual microphone in the virtual environment.  The
>>> essential idea is that sound can be recorded as if a listener were at that
>>> point.  Thus the listening point's position and orientation can be set at a
>>> given location (such as a seat in a virtual auditorium) or even animated to
>>> rehearse an audio soundtrack through a virtual environment.
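>>>
>>> For illustration, one plausible Web Audio mapping (an implementation
>>> sketch, not something the specification mandates) is to drive an
>>> AudioContext's listener from the node's pose; setListenerPose is a
>>> hypothetical helper:
>>>
>>>   // Sketch: set the Web Audio AudioListener from a ListenerPointSource
>>>   // pose. Assumes a single active LPS per AudioContext; which pose wins
>>>   // when several LPS are active is left open here.
>>>   function setListenerPose(
>>>     ctx: AudioContext,
>>>     position: [number, number, number],
>>>     forward: [number, number, number],
>>>     up: [number, number, number],
>>>   ): void {
>>>     const l = ctx.listener;
>>>     [l.positionX.value, l.positionY.value, l.positionZ.value] = position;
>>>     [l.forwardX.value, l.forwardY.value, l.forwardZ.value] = forward;
>>>     [l.upX.value, l.upY.value, l.upZ.value] = up;
>>>   }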
>>>
>>>
>>>
>>> Further option: it can be user-driven at run time.
>>>
>>>
>>>
>>>    - if the *trackCurrentView* field is TRUE, then *position* and
>>>    *orientation* match the user's current view.
>>>
>>>
>>>
>>> The ListenerPointSource can be an input to other parts of an audio
>>> graph, feeding output to external speakers or other destinations.  Note the
>>> similarity to other sources such as MicrophoneSource and StreamAudioSource
>>> - they provide outputs for processing by other audio nodes, and don’t have
>>> inputs.
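>>>
>>> In Web Audio terms, such an output-only source looks like the following
>>> sketch (makeMicSource is a hypothetical helper; getUserMedia and
>>> createMediaStreamSource are the standard browser APIs):
>>>
>>>   // Sketch: an output-only source, analogous to MicrophoneSource.
>>>   // It has no audio inputs, only an output to connect onward.
>>>   async function makeMicSource(ctx: AudioContext): Promise<MediaStreamAudioSourceNode> {
>>>     const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
>>>     return ctx.createMediaStreamSource(stream);
>>>   }
>>>
>>>   // Usage: route it through processing, then on to a destination (sink).
>>>   // const mic = await makeMicSource(ctx);
>>>   // mic.connect(ctx.createGain()).connect(ctx.destination);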
>>>
>>>
>>>
>>> Our design intent was to take full advantage of sound spatialization
>>> in a virtual environment, so that any point (static, moving, or navigating)
>>> might be a potential sample point.  This seems pretty powerful, especially
>>> as our aural designs grow more sophisticated to match or augment our
>>> geometric designs.
>>>
>>>
>>>
>>> Similarly, just as *Source nodes provide audio signals, *Destination
>>> nodes consume audio signals.  In other words (in electrical-engineering
>>> parlance), they are signal sources and signal sinks.  All variation occurs
>>> between those.
>>>
>>>
>>>
>>> We followed the example (and insightful design) of the Web Audio API
>>> Recommendation and stayed relatively silent about monaural, binaural, etc.
>>> If a signal source has one or two channels, for example, the audio graph
>>> mostly looks the same.  ChannelMerger, ChannelSelector, and ChannelSplitter
>>> let an author work on individual channels if they want.
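>>>
>>> For example, attenuating just the right channel of a stereo signal (a
>>> sketch; duckRightChannel is a hypothetical helper and the 0.5 gain an
>>> arbitrary value):
>>>
>>>   // Sketch: per-channel processing with ChannelSplitter/ChannelMerger.
>>>   function duckRightChannel(ctx: AudioContext, source: AudioNode): AudioNode {
>>>     const split = ctx.createChannelSplitter(2);
>>>     const merge = ctx.createChannelMerger(2);
>>>     const rightGain = ctx.createGain();
>>>     rightGain.gain.value = 0.5; // arbitrary example attenuation
>>>
>>>     source.connect(split);
>>>     split.connect(merge, 0, 0);                       // left: pass through
>>>     split.connect(rightGain, 1).connect(merge, 0, 1); // right: attenuated
>>>     return merge;                                     // stereo again
>>>   }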
>>>
>>>
>>>
>>> No doubt you noticed that in the Support Levels we did not place any
>>> bounds on the number of nodes supported at one time, expecting that
>>> implementations and hardware support will vary quite a bit and further
>>> improve over time.
>>>
>>>
>>>
>>> Hopefully the spec prose is a bit clearer now – we will all learn a lot
>>> as we keep building more sophisticated audio graphs and sound environments.
>>>
>>>
>>>
>>> All the best, Don
>>>
>>>
>>>
>>> *From:* x3d-public <x3d-public-bounces at web3d.org> *On Behalf Of *GPU
>>> Group
>>> *Sent:* Wednesday, March 1, 2023 11:47 AM
>>> *To:* X3D Graphics public mailing list <x3d-public at web3d.org>
>>> *Subject:* [x3d-public] Sound v4 > ListenerPointSource QUESTIONS
>>>
>>>
>>>
>>> I'm struggling to understand how the LPS ListenerPointSource should
>>> work. It seems like a great idea with plenty of possibilities.
>>>
>>> Guess 1:
>>>
>>> LP ListenerPoint - would have just the pose of the listener.
>>>
>>> - that pose would be used in any Panner nodes instead of viewpoint
>>>
>>> - LP would be a singleton and apply to all contexts
>>>
>>> - would work under AD AudioDestination
>>>
>>> Guess 2:
>>>
>>> Multiple LP - like LP, except each LP could refer to / apply its pose to
>>> a specific audio context using keys / IDs.
>>>
>>> - each AD would have a key/ID (SFString or SFInt32); the ID would be a
>>> simple number like 0, 1, or 2, not the UUID of a MediaStream
>>>
>>> - each LP would have a keys/IDs field (MFString or MFInt32), and could
>>> apply its pose to panner nodes under the ADs with a key in its key list,
>>> or, if the key list is empty, to all contexts (sketched below).
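>>>
>>> A purely hypothetical sketch of that key matching (the keys/IDs field,
>>> applyLpPose, and the registry are my invention, not in the X3D4 spec):
>>>
>>>   // Sketch of Guess 2: map AD keys to audio contexts, then let each LP
>>>   // apply its pose to the contexts whose keys it lists.
>>>   const contextsById = new Map<number, AudioContext>(); // AD key -> context
>>>
>>>   function applyLpPose(lpKeys: number[], position: [number, number, number]): void {
>>>     const targets = lpKeys.length === 0
>>>       ? [...contextsById.values()]                    // empty list: all contexts
>>>       : lpKeys.flatMap((k) => contextsById.get(k) ?? []);
>>>     for (const ctx of targets) {
>>>       const l = ctx.listener;
>>>       [l.positionX.value, l.positionY.value, l.positionZ.value] = position;
>>>     }
>>>   }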
>>>
>>> Guess 3:
>>>
>>> LPS works with SAD StreamAudioDestination, with each SAD converting its
>>> output to a bufferable stream like an HTML MediaStream. The program would
>>> hold a list of streams with keys/IDs: a list of (Stream, key) pairs.
>>>
>>> Each LPS would have a keys/IDs field, and would merge all the streams
>>> it's interested in.
>>>
>>> The LPS pose would not affect Panner nodes under SADs.
>>>
>>> Rather, it would treat each stream as positionless and apply its own
>>> pose with an internal panner node.
>>>
>>> And it would have a built-in AD to deliver its output to the scene.
>>>
>>> (when I implemented Sound and SpatialSound, I included an AD node if and
>>> only if it had no audio parent; see the sketch below)
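>>>
>>> A hypothetical sketch of Guess 3's internal panner plus that
>>> built-in-AD-only-when-parentless rule (attachLps is my invention, not
>>> the spec):
>>>
>>>   // Sketch of Guess 3: take a positionless MediaStream, position it with
>>>   // an internal PannerNode at the LPS pose, and attach an implicit
>>>   // destination only when there is no audio parent.
>>>   function attachLps(
>>>     ctx: AudioContext,
>>>     stream: MediaStream,
>>>     pose: { x: number; y: number; z: number },
>>>     audioParent?: AudioNode,
>>>   ): PannerNode {
>>>     const src = ctx.createMediaStreamSource(stream);
>>>     const panner = ctx.createPanner();
>>>     panner.positionX.value = pose.x;
>>>     panner.positionY.value = pose.y;
>>>     panner.positionZ.value = pose.z;
>>>     src.connect(panner);
>>>     panner.connect(audioParent ?? ctx.destination); // built-in AD iff no parent
>>>     return panner;
>>>   }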
>>>
>>> Guess 4:
>>>
>>> Like 3, except LPS pose affects Panner nodes under its SAD list.
>>>
>>> x But multiple LPS would have conflicting pose claims; the Panner node
>>> under SAD wouldn't know which one to use.
>>>
>>>
>>>
>>> I've lost the link to a paper that explained it.
>>>
>>> Q. How is it supposed to work?
>>>
>>> Thanks, Doug Sanden
>>>
>>>
>>>
>>

