[x3d-public] Sound v4 > ListenerPointSource QUESTIONS

GPU Group gpugroup at gmail.com
Wed Mar 1 11:47:09 PST 2023


I'm struggling to understand how the ListenerPointSource (LPS) node should work.
It seems like a great idea with plenty of possibilities.

Guess 1:

ListenerPoint (LP) - would hold just the pose (position and orientation) of the listener.

- that pose would be used by any Panner nodes in place of the viewpoint pose

- LP would be a singleton and apply to all contexts

- it would work under an AudioDestination (AD) node (a minimal sketch of this mapping follows)
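
A minimal sketch of how Guess 1 might map onto Web Audio (TypeScript; the ListenerPose shape here is hypothetical, derived from the LP node's position/orientation fields): a single listener pose goes on the AudioContext's one AudioListener, which every Panner node in that context consults.

    interface ListenerPose {
      position: [number, number, number];
      forward: [number, number, number];  // derived from the SFRotation orientation
      up: [number, number, number];
    }

    function applyListenerPoint(ctx: AudioContext, pose: ListenerPose): void {
      const l = ctx.listener;
      const t = ctx.currentTime;
      // Modern AudioParam API; older browsers only have the deprecated
      // setPosition()/setOrientation() methods.
      l.positionX.setValueAtTime(pose.position[0], t);
      l.positionY.setValueAtTime(pose.position[1], t);
      l.positionZ.setValueAtTime(pose.position[2], t);
      l.forwardX.setValueAtTime(pose.forward[0], t);
      l.forwardY.setValueAtTime(pose.forward[1], t);
      l.forwardZ.setValueAtTime(pose.forward[2], t);
      l.upX.setValueAtTime(pose.up[0], t);
      l.upY.setValueAtTime(pose.up[1], t);
      l.upZ.setValueAtTime(pose.up[2], t);
    }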

Guess 2:

Multiple LPs - like Guess 1, except each LP could apply its pose to specific
audio contexts using keys/IDs.

- each AD would have a key/ID field (SFString or SFInt32); the ID would be a
simple number like 0, 1, or 2, not the UUID of a MediaStream

- each LP would have a keys/IDs field (MFString or MFInt32) and would apply
its pose to Panner nodes under any AD whose key is in its key list, or to
all contexts if the key list is empty (see the sketch after this list)
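
A minimal sketch of the key lookup in Guess 2, assuming hypothetical integer keys on AD and LP, and reusing ListenerPose/applyListenerPoint() from the Guess 1 sketch:

    // AD key -> its AudioContext; each AD registers itself once.
    const contextsByKey = new Map<number, AudioContext>();

    function registerAudioDestination(key: number, ctx: AudioContext): void {
      contextsByKey.set(key, ctx);
    }

    // Empty key list means "apply to all contexts".
    function applyListenerPointByKeys(keys: number[], pose: ListenerPose): void {
      const targets = keys.length === 0
        ? [...contextsByKey.values()]
        : keys.flatMap(k => {
            const ctx = contextsByKey.get(k);
            return ctx ? [ctx] : [];
          });
      for (const ctx of targets) applyListenerPoint(ctx, pose);
    }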

Guess 3:

LPS works with StreamAudioSource (SAD) nodes, with each SAD converting its
output to a bufferable stream such as an HTML MediaStream. The program would
hold a list of (stream, key) pairs.

Each LPS would have a keys/IDs field and would merge all the streams it's
interested in.

The LPS pose would not affect Panner nodes under SADs.

Rather, it would take each stream as positionless and add its own pose with
an internal Panner node.

And it would have a built-in AD to deliver its output to the scene (sketched below).

(When I implemented Sound and SpatialSound, I included an AD node if and
only if the node had no audio parent.)
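
A minimal sketch of the Guess 3 plumbing, assuming each SAD publishes its output as a (stream, key) pair through a MediaStreamAudioDestinationNode, and taking the guess literally that the LPS pose is applied as a source position on its internal Panner node (all names hypothetical):

    // SAD key -> its published MediaStream.
    const streamsByKey = new Map<number, MediaStream>();

    // SAD side: capture a node's output as a bufferable MediaStream.
    function publishStream(ctx: AudioContext, out: AudioNode, key: number): void {
      const dest = ctx.createMediaStreamDestination();
      out.connect(dest);
      streamsByKey.set(key, dest.stream);
    }

    // LPS side: merge the streams of interest, treat each as positionless,
    // position the mix with one internal Panner, and deliver it through a
    // built-in destination.
    function buildListenerPointSource(keys: number[],
                                      pose: [number, number, number]): AudioContext {
      const ctx = new AudioContext();   // the built-in AD for this LPS
      const panner = new PannerNode(ctx, {
        positionX: pose[0], positionY: pose[1], positionZ: pose[2],
      });
      for (const key of keys) {
        const stream = streamsByKey.get(key);
        if (stream) ctx.createMediaStreamSource(stream).connect(panner);
      }
      panner.connect(ctx.destination);  // deliver the output to the scene
      return ctx;
    }

One wrinkle with taking the guess literally: in Web Audio a Panner positions a source relative to the listener, so an LPS pose applied this way acts as a source offset rather than a true listener pose.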

Guess 4:

Like Guess 3, except the LPS pose affects Panner nodes under the SADs in its list.

x But multiple LPS nodes would make conflicting pose claims, and the Panner
node under an SAD wouldn't know which one to use (see the sketch below).
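
A short sketch of that conflict, with hypothetical names: two LPS nodes writing a pose to the same Panner node just race, and the last write silently wins.

    function claimPose(panner: PannerNode, pose: [number, number, number]): void {
      panner.positionX.value = pose[0];
      panner.positionY.value = pose[1];
      panner.positionZ.value = pose[2];
    }

    // Two LPS nodes, one shared Panner under an SAD:
    // claimPose(sharedPanner, lps1Pose);
    // claimPose(sharedPanner, lps2Pose);  // silently overwrites lps1's claim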



I've lost the link to a paper that explained it.

Q: How is it supposed to work?

Thanks, Doug Sanden