[x3d-public] X3D4 Audio and Sound progress, 17 JUN 2020
GPU Group
gpugroup at gmail.com
Tue Jun 23 08:25:26 PDT 2020
> b. Further thought on whether/how to include SoundEffects nodes, building
> on past work. Primary topic for next week.
My understanding so far about the SoundEffects issue:
A. Explicit x3d SoundEffect nodes
Disadvantages:
- there are a lot of them, and x3d browsers have some overhead for each node
- any ancillary js functions used in conjunction, such as sleep() or loops,
would need to be node-ized / converted into a declarative form
- some uncertainty in implementation method (see ** below), and the scene
authoring syntax is unfamiliar to those who know webAudio
Advantage:
- can route directly from x3d nodes to SoundEffect node fields, updating
the effect from x3d
B. js scripting approach
Disadvantages:
- different implementation effort to expose LabSound or an equivalent
webAudio API to a js engine; depends on browser and programmer capability
(no opinion, I haven't reviewed js exposure of LabSound; a minimal binding
sketch follows this list)
Advantages of js scripting approach:
- js snippets can be copied from html examples of webAudio (including js
loops and sleep()s when not long enough to stall the rendering thread? or
would the effect be run once in a spawned thread?)
- only fields of interest would be exposed on the script node (you
mentioned a url field on SpatialSound would hold the script. Will
SpatialSound have 'dynamic fields' like a Script node, fields that can be
scene-authored and routed to, so a regular scene can route to fields on
SoundEffect nodes, or more precisely to dynamic fields declared on
SpatialSound, for use inside the script by webAudio nodes?)
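
To make the exposure effort concrete, below is a minimal sketch of
registering one native callback with an embedded js engine. Duktape is
used here only as an example engine; setOscillatorFrequency and its
LabSound body are hypothetical, not part of any existing binding:

    // Hypothetical glue exposing one native audio call to scripts,
    // using the Duktape embeddable js engine as an example.
    #include "duktape.h"

    // Native callback: receives the frequency argument from the script.
    static duk_ret_t setOscillatorFrequency(duk_context *ctx) {
        double hz = duk_get_number(ctx, 0);
        // forward hz to the LabSound oscillator node here (placeholder)
        (void)hz;
        return 0;  // no return value to the script
    }

    int main() {
        duk_context *ctx = duk_create_heap_default();

        // Register the native function under a global name scripts can see.
        duk_push_c_function(ctx, setOscillatorFrequency, 1 /* nargs */);
        duk_put_global_string(ctx, "setOscillatorFrequency");

        // A script copied from a webAudio-style example could then call it.
        duk_eval_string_noresult(ctx, "setOscillatorFrequency(440);");

        duk_destroy_heap(ctx);
        return 0;
    }

Every webAudio node type and field exposed to scripts would need similar
glue, which is where the per-browser implementation effort comes in.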
-Doug Sanden
** Untested design for a native implementation method with LabSound.lib,
for the explicit-effect-nodes approach (a code sketch follows the steps
below):
1) when initializing an x3d audioContext node, launch a worker thread
(separate thread per audioContext)
- the worker thread is in C++ and holds smart pointer variables for the
lifespan of the context, i.e. the run of the program
2) the worker thread loops, and once per loop waits on a condition variable^
3) once per rendering frame, when the browser is rendering the scenegraph
and renders the audioContext node, it signals the condition variable
4) on the first wakeup, the context worker thread creates all the labsound
nodes and connects them
- to match the x3d-declared context and connected child nodes
5) in subsequent rendering frames, if any field has been changed on the
x3d audio effect nodes, the condition variable is signaled and the worker
thread updates the changed fields on the labsound nodes
6) at the end of the program run, the condition variable is signaled and
the worker thread exits
- triggering release of the smart pointer variables
^ https://en.cppreference.com/w/cpp/thread/condition_variable
shows std::condition_variable and thread usage.
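
Here is a minimal sketch of that per-context worker thread in standard
C++; the AudioContextWorker name is a placeholder and the LabSound
graph-building calls are elided as comments, so only the thread and
condition-variable pattern of steps 1-6 is shown:

    #include <condition_variable>
    #include <mutex>
    #include <thread>

    struct AudioContextWorker {
        std::mutex m;
        std::condition_variable cv;
        bool frameReady = false;   // set by the render thread once per frame
        bool fieldsDirty = false;  // set when an x3d effect field changed
        bool running = true;
        std::thread worker;

        // 1) launched when the x3d audioContext node initializes
        AudioContextWorker() : worker(&AudioContextWorker::run, this) {}

        // 3) called by the render thread when it renders the audioContext node
        void onRenderFrame(bool dirty) {
            {
                std::lock_guard<std::mutex> lk(m);
                frameReady = true;
                fieldsDirty = fieldsDirty || dirty;
            }
            cv.notify_one();
        }

        // 6) at end of program run, signal the worker and wait for it to exit
        ~AudioContextWorker() {
            {
                std::lock_guard<std::mutex> lk(m);
                running = false;
            }
            cv.notify_one();
            worker.join();  // smart pointers held by run() are released on exit
        }

    private:
        void run() {
            bool firstFrame = true;
            std::unique_lock<std::mutex> lk(m);
            while (true) {
                // 2) once per loop, wait on the condition variable
                cv.wait(lk, [this] { return frameReady || !running; });
                if (!running) break;
                frameReady = false;
                if (firstFrame) {
                    firstFrame = false;
                    // 4) first time: create the labsound nodes and connect
                    // them to match the x3d-declared context and child
                    // nodes, held in smart pointers local to this thread
                } else if (fieldsDirty) {
                    fieldsDirty = false;
                    // 5) update the changed fields on the labsound nodes
                }
            }
        }
    };

One AudioContextWorker would be constructed per x3d audioContext node;
destroying it at end of run unblocks the thread so its smart pointers
release cleanly.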
On Wed, Jun 17, 2020 at 11:21 AM Don Brutzman <brutzman at nps.edu> wrote:
> Attendees: Efi Lakka, Thanos Malamos, Dick Puk, Don Brutzman
>
> No Web3D member-only information is included in these minutes.
>
> 1. Refined structure for object model.
>
> Great progress, we are converging nicely.
>
> Efi will distribute updated document. Thank you!
>
> We discussed the Microphone node. We think it best fits as
> MicrophoneSource, an X3DSoundSourceNode, with no direct need of
> X3DSensorNode.
>
>
> 2. Discussed "sound effects" nodes. Observed that names like
> "AnalyserNode", "OscillatorNode", "ChannelSplitterNode" etc. are the
> WebAudio API names; the suffix "Node" would not be part of the X3D names
> (because "Node" indicates abstract types).
>
> Wondering if all of the sound effects could have a parent
> X3DSoundEffectsNode. Conjectured:
>
> +- X3DTimeDependentNode -+- X3DSoundEffectsNode -+- SoundEffectsGroup
>                                                  |    (containing the
>                                                  |    following as an
>                                                  |    audio graph)
>                                                  |    or SoundProcessingGroup
>                                                  |    (or something)
>                                                  +- Analyzer
>                                                  +- Oscillator
>                                                  +- ChannelSplitter
>                                                  +- etc.
>
> 3. Next steps.
>
> a. For consideration... much of the structure for sound-effects node
> composition can be "side-stepped" if we allow support for WebAudio API
> javascript source as a way to describe the audio graph. If accepted, this
> would be connected via the /url/ field of the SpatialSound node. It also
> would confirm whether all spatialized inputs/outputs are available in the
> SpatialSound node. This is an option (not a requirement) that might
> provide comparisons and clarity regarding whether node-defined audio
> graphs in X3D have the same expressive power.
>
> b. Further thought on whether/how to include SoundEffects nodes, building
> on past work. Primary topic for next week.
>
> c. Dick and Don will update X3D4 draft specification organization to match
> the current structure in the document, establishing placeholders for
> agreed-upon nodes. We will try to keep up with incremental progress.
>
> Our next goal is to have a preliminary releasable draft X3D4 Specification
> by 30 JUN 2020, anticipating public release of draft for SIGGRAPH 2020.
>
> We will try to meet Wednesday 0900 pacific (1900 EET Eastern European Time
> for Crete) in order to not collide with Web3D User Experience (Web3DUX)
> working group meeting.
>
> "What say??" ... Have fun with X3D audio and sound! 8)
>
> all the best, Don
> --
> Don Brutzman Naval Postgraduate School, Code USW/Br
> brutzman at nps.edu
> Watkins 270, MOVES Institute, Monterey CA 93943-5000 USA +1.831.656.2149
> X3D graphics, virtual worlds, navy robotics
> http://faculty.nps.edu/brutzman
>