[x3d-public] X3D Specification Editors: Audio and Sound, 15 July 2020

Michalis Kamburelis michalis.kambi at gmail.com
Wed Jul 15 12:28:40 PDT 2020


I looked at the diff, and the addition of AcousticProperties looks fine:
https://github.com/Web3DConsortium/X3D/commit/1601e441adc9ebcbaaa65d82979da97fcb229a92

Note: As you observed, the section numbers in the "Shape" component have
now changed as a result of adding AcousticProperties. This needs to be
reflected in the "nodeIndex.html" page, which contains the section number
for each node. Just a reminder so we don't forget about it :)

Regards,
Michalis

Wed, 15 Jul 2020 at 20:01 Don Brutzman <brutzman at nps.edu> wrote:

> Attendees Efi Lakka, Thanos Malamos, Dick Puk, Don Brutzman.
>
> ---
>
> 1. Updated X3D4 draft Shape component to show Acoustic Properties details.
>
> =====================================
> 12.4.1 AcousticProperties
>
> AcousticProperties : X3DAppearanceChildNode  {
>    SFFloat [in,out] absorption 0    [0,1]
>    SFFloat [in,out] diffuse    0    [0,1]
>    SFNode  [in,out] metadata   NULL [X3DMetadataObject]
>    SFFloat [in,out] refraction 0    [0,1]
>    SFFloat [in,out] specular   0    [0,1]
> }
>
> The AcousticProperties node determines acoustic effects, including surface
> reflection, by specifying the absorption, specular, diffuse and refraction
> coefficients of materials.
>
> The absorption field specifies the sound absorption coefficient of a
> surface, which is the ratio of the sound intensity absorbed (or otherwise
> not reflected) by a specific surface to the initial sound intensity. This
> characteristic depends on the nature and thickness of the material. In
> particular, sound is absorbed when it encounters fibrous or porous
> materials, panels that have some flexibility, volumes of air that resonate,
> or openings in the room boundaries (e.g. doorways). Moreover, the
> absorption of sound by a particular material or panel depends on the
> frequency and angle of incidence of the sound wave.
>
> The diffuse field describes the sound diffusion coefficient, which
> measures the degree of scattering produced on reflection. Diffuse
> reflection is produced in the same way as specular reflection, but in this
> case the sound wavelength is comparable to the corrugation dimensions of
> an irregular reflecting surface, so the incident sound wave is scattered
> in all directions. In other words, it is a measure of the surface's
> ability to scatter sound uniformly in all directions.
>
> The refraction field determines the sound refraction coefficient,
> characterizing the portion of sound that is not reflected but instead
> passes through the material (see the editors' notes below; an improved
> definition is pending).
>
> The specular field describes the sound specular reflection coefficient,
> one of the physical phenomena of sound that occurs when a sound wave
> strikes a plane surface: part of the sound energy is reflected back into
> space with the angle of reflection equal to the angle of incidence.
> =====================================
>
> Several questions:
>
> * Editors' note: which is the correct name, refraction or reflection?
> Aren't diffuse and specular coefficients for reflection?  Typically
> refraction refers to wave bending as defined by Snell's Law.
>
> We believe that refraction is for sound passing through the material.
> Good description:
>
> [1] Wikipedia: Refraction
>      https://en.wikipedia.org/wiki/Refraction
>
> * Editors' note: aren't some equations needed here?
>
> TODO:
> - Efi share document illustrating refraction
> - Improved definition for refraction in paragraph above
> - Great diagram attached, we will probably redraw for specification
> - What diagrams and prose for a new section, "16.2.x Sound Model"
>
> Sound component    16.2.2 Sound attenuation and spatialization
>
> might be renamed as 16.2.2 Sound propagation, attenuation and
> spatialization
>
> Alternatively a separate section "16.2.3 Sound propagation" can be written.
>
> Suggested path forward: review Efi's document again, then integrate prose
> and diagrams in *16.2.3 Sound propagation* as available and discussed.
>
> Having simple sound equations adds clarity to our coefficient terms,
> similar to the X3D lighting model sections.  We are indeed fortunate that
> we are taking advantage of centuries of knowledge, not trying to re-invent
> how sound works!  Distilling this immense body of knowledge for X3D4
> clarity is a straightforward task.
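>
> For illustration only (a hedged sketch, not yet agreed specification
> text): if a sound wave of intensity I strikes a surface, the coefficients
> might partition the energy as
>
>     I = absorption*I + specular*I + diffuse*I + refraction*I
>
> corresponding to energy absorbed, reflected mirror-like, scattered on
> reflection, and transmitted through the material, which would imply
>
>     absorption + specular + diffuse + refraction = 1
>
> Whether the coefficients must sum to exactly 1 (or at most 1) is the kind
> of question such a "Sound propagation" section could settle.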
>
> Likely future discussion points: what do we mean by refraction "through" a
> material?  For example, a Box is provided with a refraction value in its
> corresponding AcousticProperties.
> - Do we expect an aural renderer to compute how much of the Box is occluding
> a sound source and adjust the received signal?
> - Do we expect such computation of propagation intensity to occur external
> to objects or also internally?
> - Can Collision proxy geometry be used for simplified geometric analysis?
> - How do we indicate whether geometry is solid inside?  (The Box, Cone,
> Cylinder and Sphere geometry primitives already define the "solid" field.)
> - Perhaps we should add a "solid" field to Shape?
>
> Suggest that our near-term goal is to capture the issues in the
> soon-to-be-released X3D4 draft specification so that wide-ranging
> consideration can continue.
>
> ----
>
> 2. Accepted nodes.
>
> We confirmed consensus to add the "accepted nodes" from last week's
> minutes (listed below) to X3D4 draft.  This includes nodes, fields and
> descriptions in Efi's documentation.
>
> ----
>
> 3. Channels
>
> We discussed the notion of channels, particularly whether there is one
> waveform (for example, one superpositioned-sinusoid audio signal) per
> channel, or multiple signals (such as stereo) per channel.
>
> [2] Wikipedia: Audio signal
>      https://en.wikipedia.org/wiki/Audio_signal
>
> [3] Wikipedia: Audio signal flow
>      https://en.wikipedia.org/wiki/Audio_signal_flow
>
> [4] Wikipedia: Multiplexing
>      https://en.wikipedia.org/wiki/Multiplexing
>
> Great news!  Update published:
>
> [5] Web Audio API
>      W3C Candidate Recommendation, 11 June 2020
>      Latest published version: https://www.w3.org/TR/webaudio/
>      Editor's Draft version:   https://webaudio.github.io/web-audio-api/
>
> see
> * 1.14 The ChannelMergerNode Interface
> * 1.15 The ChannelSplitterNode Interface
> * 1.30 The StereoPannerNode Interface
>
> [6] Web Audio API 1.0 Implementation Report
>      Chris Lilley, 6 September 2018
>      https://webaudio.github.io/web-audio-api/implementation-report.html
>
> This report shows the status of Safari, Chrome/Blink, Firefox and Edge
> across 8426 Web Audio API tests.
>
> Key issue: are channels monophonic, or can they be stereo?  Answer:
> monophonic.
>
> The Web Audio API appears to define channels as corresponding to a single
> source.
>
> We are aligning with that, and are now working out how best to align the
> Web Audio API with the X3D architecture.
>
> We are also taking care not to design the X3D alignment in a way that
> blocks other audio API approaches.  To our expected advantage, the W3C
> Web Audio API design has already composed and harmonized requirements from
> multiple existing implementations.  That expectation is based on (a)
> harmonization being the nature of the W3C specification process, and (b)
> the fact that multiple implementations already exist, including sound
> libraries beyond the four browsers listed above.
>
> Of interest, and as expected, we intend to become active in the W3C Audio
> Working Group (as Web3D liaisons), and any differences or lessons learned
> from X3D4 use of this API will no doubt hold interest there.
>
> We do need prose describing channels and related concepts as a new section
> in the Sound component.  We'll start with Efi's documents for that.  We
> will add the following to the draft for review:
>
>         16.4.18 ChannelSplitter (likely, need channel design discussion,
> next session)
>         16.4.19 ChannelMerger
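>
> For discussion, a minimal Web Audio API sketch (TypeScript, browser) of
> the split/merge pattern these two nodes would map to.  ChannelSplitterNode
> and ChannelMergerNode are the real Web Audio interfaces; swapping left and
> right below is just an illustrative use:
>
>     // Split a source into mono channels, then merge them back
>     // with left and right swapped.
>     const ctx = new AudioContext();
>     const source = new OscillatorNode(ctx);        // any source (a stereo
>                                                    // source makes the swap audible)
>     const splitter = ctx.createChannelSplitter(2); // 2 mono outputs
>     const merger = ctx.createChannelMerger(2);     // 2 mono inputs
>     source.connect(splitter);
>     splitter.connect(merger, 0, 1); // output 0 (left)  -> input 1 (right)
>     splitter.connect(merger, 1, 0); // output 1 (right) -> input 0 (left)
>     merger.connect(ctx.destination);
>     source.start();
>
> Note that each splitter output carries a single monophonic channel,
> consistent with the "channels are monophonic" answer above.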
>
> ----
>
> Thanks for more great progress.  Have fun with X3D4 Sound and W3C Audio!
>
> v/r Don
>
>
> On 7/9/2020 11:10 AM, Don Brutzman wrote:
> > Minutes for 2-hour meeting today.  Attendees Efi Lakka, Thanos Malamos,
> Dick Puk, Don Brutzman.
> >
> > No member-only information is included in these minutes.
> >
> > Warmup thoughts, design question: is there a way to slave ListenerPoint
> > (a) to the current view, or (b) to each provided Viewpoint?  Also, how do
> > you return to the current view if a separate ListenerPoint was bound?
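> >
> > (One way to frame this in Web Audio terms, as a hedged sketch: the
> > AudioContext has a single global AudioListener, so "slaving" a
> > ListenerPoint to the current view amounts to copying the bound
> > Viewpoint's pose into the listener every frame.  AudioContext and
> > AudioListener are real Web Audio interfaces; the Pose type below is
> > hypothetical.)
> >
> >     const ctx = new AudioContext();
> >     interface Pose {  // hypothetical X3D viewpoint pose
> >       position: [number, number, number];
> >       forward:  [number, number, number];
> >       up:       [number, number, number];
> >     }
> >     // Call once per rendered frame with the currently bound viewpoint.
> >     function syncListenerToView(pose: Pose): void {
> >       const l = ctx.listener;  // positionX etc. are AudioParams
> >       l.positionX.value = pose.position[0];
> >       l.positionY.value = pose.position[1];
> >       l.positionZ.value = pose.position[2];
> >       l.forwardX.value = pose.forward[0];
> >       l.forwardY.value = pose.forward[1];
> >       l.forwardZ.value = pose.forward[2];
> >       l.upX.value = pose.up[0];
> >       l.upY.value = pose.up[1];
> >       l.upZ.value = pose.up[2];
> >     }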
> >
> > Details follow.  No new attachments for this week's meeting, see last
> week's notes.
> >
> > On 7/8/2020 7:31 PM, Don Brutzman wrote:
> >> 1. continuing...
> >>
> >> On 7/8/2020 10:14 AM, Richard F. Puk wrote:
> >>> [...]
> >>> I have the following comments on the minutes (Shown in red):
> >>>
> >>> In my opinion, the new X3DSoundSourceNode types should not have the
> >>> word “Source” in the names of the concrete nodes: the node names are
> >>> clear without it, the suffix is redundant, and it also differs from the
> >>> existing node names.
> >>
> >> As discussed, that is one possible solution, but other considerations
> also pertain.
> >>
> >> Relevant node names of interest are:
> >>
> >> +- X3DTimeDependentNode -+- TimeSensor
> >> | |
> >> | +- X3DSoundSourceNode -+- AudioClip         (~Web Audio API: MediaElementAudioSourceNode)
> >> |                        +- MovieTexture
> >> |                        +- AudioBufferSource (~Web Audio API: AudioBuffer + AudioBufferSourceNode)
> >> |                        +- OscillatorSource  (~Web Audio API: OscillatorNode)
> >> |                        +- StreamAudioSource (~Web Audio API: MediaStreamAudioSourceNode)
> >> |                        +- MicrophoneSource
> >> |
> >> | +- X3DSoundDestinationNode -+- AudioDestination (~Web Audio API: AudioDestinationNode)
> >
> > As elaboration: AudioDestination is specifically a hardware device ID,
> > maintained by the operating system (OS).
> >
> > FWIW, without loss of functionality, a synonym for AudioDestination
> might be HardwareAudioDestination (in contrast to StreamAudioDestination).
> >
> >> |                             +- StreamAudioDestination (~Web Audio API: MediaStreamAudioDestinationNode)
> >
> > nowadays typically WebRTC, which is a stable and approved (and usable!!)
> > specification.  8)
> >
> >> | +- X3DSoundProcessingNode -+- BiquadFilter
> >> |                            +- Convolver
> >> |                            +- Delay
> >> |                            +- DynamicsCompressor
> >> |                            +- Gain
> >> |                            +- WaveShaper
> >> |                            +- PeriodicWave
> >> |
> >> | +- X3DSoundAnalysisNode -+- Analyser
> >> |
> >> | +- X3DSoundChannelNode -+- ChannelSplitter
> >> |                         +- ChannelMerger
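> >>
> >> (For orientation, a hedged sketch of how the Web Audio counterparts
> >> above are created; all calls shown are real Web Audio API, to be run
> >> inside an async function or module:)
> >>
> >>     const ctx = new AudioContext();
> >>     // AudioClip / MovieTexture  ~ MediaElementAudioSourceNode
> >>     const el = document.querySelector("audio")!;
> >>     const clip = ctx.createMediaElementSource(el);
> >>     // AudioBufferSource         ~ AudioBufferSourceNode (+ AudioBuffer)
> >>     const buffered = ctx.createBufferSource();
> >>     // OscillatorSource          ~ OscillatorNode
> >>     const osc = ctx.createOscillator();
> >>     // StreamAudioSource / MicrophoneSource ~ MediaStreamAudioSourceNode
> >>     const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
> >>     const mic = ctx.createMediaStreamSource(stream);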
> >>
> >> AudioClip and MovieTexture are well understood legacy nodes and their
> names will not change.
> >>
> >> Want to voice caution here.  Conceptually, the base names
> >> "AudioBuffer", "Oscillator" and "StreamAudio" by themselves might refer
> >> to sources of audio (i.e. outputs from streams) or to sinks for audio
> >> (i.e. inputs to streams).  When choosing node names (i.e. the words in
> >> the X3D language) we strive for clarity and want to avoid ambiguity.
> >> So it may make sense to emphasize purpose by keeping the suffix
> >> "Source" for these nodes.
> >>
> >> Am confident that when we start putting Web Audio graphs together using
> >> this new set of nodes, implementation and evaluation results will either
> >> make good sense (like the Web Audio JavaScript) or else confusing gaps
> >> will become more evident.  Example usage is important to consider.
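> >>
> >> For example, a minimal Web Audio processing graph of the kind these
> >> X3D nodes would need to express (a hedged sketch; all interfaces shown
> >> are real Web Audio API):
> >>
> >>     const ctx = new AudioContext();
> >>     const osc = ctx.createOscillator();       // ~ OscillatorSource
> >>     osc.frequency.value = 440;                // concert A
> >>     const filter = ctx.createBiquadFilter();  // ~ BiquadFilter
> >>     filter.type = "lowpass";
> >>     filter.frequency.value = 1000;            // cutoff in Hz
> >>     const gain = ctx.createGain();            // ~ Gain
> >>     gain.gain.value = 0.5;
> >>     osc.connect(filter).connect(gain).connect(ctx.destination); // ~ AudioDestination
> >>     osc.start();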
> >>
> >> We'll discuss and reach initial consensus on good names during Thursday
> >> morning's teleconference (0900-1030 Pacific).
> >
> > We discussed this topic in exhilarating detail.
> >
> > Naming heuristic: "when the going gets tough, the tough get... verbose."
> >
> > Suggestion: for the near term we go for longer names for clarity, but
> once we have examples in hand, we reconsider whether shorter names can work.
> >
> >> 2. Dick and I made excellent progress updating the Concepts 4.4.2.3
> Interface Hierarchy to match the latest meeting notes.
> >>
> >> We are ready to move acoustic fields from Material to
> AcousticProperties node in Shape component when Michalis finishes his merge
> of glTF PBR Pull Request 8.
> >
> > confirmed
> >
> >> Several editorial issues are pending, including:
> >>
> >>> The concrete node derivations need to replace “AudioNode” with the
> appropriate abstract node type.
> >>>
> >>> Dick
> >>
> >> Our agenda for Thursday is to finalize node list and interfaces, then
> start dropping in prose from Efi's detailed documentation report.
> > a. Agreed on abstract types:
> >
> >      16.3 Abstract types
> >          16.3.1 X3DAudioListenerNode
> >          16.3.2 X3DSoundAnalysisNode
> >          16.3.3 X3DSoundChannelNode
> >          16.3.4 X3DSoundDestinationNode
> >          16.3.5 X3DSoundProcessingNode
> >          16.3.6 X3DSoundNode
> >          16.3.7 X3DSoundSourceNode
> >
> > Triage follows for nodes, in order to edit document further for draft
> release:
> >
> >      16.4 Node reference
> >
> > b. *Accepted*
> >
> >          16.4.1 AudioClip
> >          16.4.2 Sound
> >          16.4.3 SpatialSound
> >          16.4.4 ListenerPoint
> >          16.4.4 AudioBufferSource
>
> [... snip duplicate ..]
>
> >          16.4.2 OscillatorSource
> >          16.4.7 BiquadFilter
> >          16.4.8 Convolver
> >          16.4.9 Delay
> >          16.4.10 DynamicsCompressor
> >          16.4.12 WaveShaper
> >          16.4.13 PeriodicWave
> >          16.4.17 Analyser
> >          16.4.** StreamAudioSource
> >
> >          16.4.14 AudioDestination
> >          16.4.xx StreamAudioDestination
> >
> >          16.4.yy MicrophoneSource (physical hardware device)
> >
> > c. *Not Included*
> >
> >          16.4.5 MediaElementAudioSource (same as AudioClip)
> >          16.4.6 MediaStreamAudioSource  (same as StreamAudioSource)
> >
> >          16.4.15 MediaStreamAudioDestination (same as
> StreamAudioDestination)
> >          16.4.16 MediaStreamTrack
> >          16.4.1  AudioParam (merged functionality? not used)
> >          16.4.21 Panner     (merged functionality in SpatialSound)
> >
> > d. *Pending major issues, decision TBD*
> >
> >          16.4.5 BinauralListenerPoint (will attempt to merge with
> ListenerPoint)
> >                (SurroundSound might also be a variation on ListenerPoint)
> >
> >          16.4.18 ChannelSplitter (likely, need channel design
> discussion, next session)
> >          16.4.19 ChannelMerger   (likely, need channel design
> discussion, next session)
> >
> >          16.4.1 AudioContext (perhaps integrated within interface
> hierarchy, more discussion)
> >
> >          16.4.11 Gain (included as a field in many nodes, or within
> interface hierarchy)
> >
> >          Virtual Microphone Sensor (perhaps same as ListenerPoint or
> media stream?)
> >
> >      How to simply pan or rebalance left/right?
> >
> >      SFNode/MFNode fields defining parent-child node relationships,
> >      allowing tree-like construction of a W3C Audio API graph using X3D
> >      nodes.
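> >
> >      (Regarding the pan/rebalance question above: a hedged sketch using
> >      StereoPannerNode, a real Web Audio interface, which may inform
> >      whether X3D4 needs a dedicated node or just a field:)
> >
> >          const ctx = new AudioContext();
> >          const src = new OscillatorNode(ctx);
> >          const panner = ctx.createStereoPanner();
> >          panner.pan.value = -0.5; // -1 full left, 0 center, +1 full right
> >          src.connect(panner).connect(ctx.destination);
> >          src.start();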
> >
> > ==================
> >
> > 3. Next steps:
> >
> > a. Request: Efi please confirm all field definitions are correct for the
> Accepted nodes - so far so great.
> >
> > b. TODO Don and Dick will update draft specification in github,
> specifically nodes fields and prose, using Efi's two provided
> analysis/report documents.
> >
> > c. Objective: shift sharpest focus from our draft notes to actual github
> spec as our outcome document.
> >
> > d. What are our example audio graphs to represent in X3D?  Looking at
> > examples will help us confirm that the right fields and parent-child node
> > relationships are described well.  Efi has a detailed example comparing
> > earlier/evolved examples that will help.  She can (and will soon) release
> > this.  As the examples are reviewed and considered mature, we will place
> > them online in version control as part of the X3D Examples Archive.
> >
> > e. It is clear that a good amount of work remains, but at least 80% is
> well defined and sensible.  We need to ship a draft X3D4 Sound Component
> that reflects this progress, noting more work to follow.  This will allow
> review and assessment (and possibly engagement) by many others.
> >
> > f. It is not completely certain yet, but we are very close and hope to
> > resolve this sufficiently to support a Web3D 2020 Conference paper.  This
> > is important, and is primarily a restructuring of Efi's excellent document
> > updates in paper form.  We can certainly support a Web3D 2020 Conference
> > tutorial.
> >
> > * Statistics joke: "The first 90% of the work takes 90% of the time.
> The last 10% of the work takes the other 90% of the time."  (perhaps
> similar to Zeno's Paradox?!)
> >
> > * Zeno's Paradoxes
> >    https://en.wikipedia.org/wiki/Zeno%27s_paradoxes
> >
> > Summary of our near-term objective: produce a sufficiently advanced X3D4
> Sound Component that enables meaningful understanding, implementation,
> evaluation and finalization.
> >
> >> We are getting pretty close to releasing the Sound component for
> inclusion in draft X3D4 Specification.
> >>
> >> Due diligence continues.  All feedback remains helpful and welcome.
> >
> > Next meeting of group: back to Wednesday 15 July, same time.
> >
> > "What say?"  Have fun with X3D Audio and Sound!  8)
> >
> > all the best, Don
>
> all the best, Don
> --
> Don Brutzman  Naval Postgraduate School, Code USW/Br
> brutzman at nps.edu
> Watkins 270,  MOVES Institute, Monterey CA 93943-5000 USA   +1.831.656.2149
> X3D graphics, virtual worlds, navy robotics
> http://faculty.nps.edu/brutzman
> _______________________________________________
> x3d-public mailing list
> x3d-public at web3d.org
> http://web3d.org/mailman/listinfo/x3d-public_web3d.org
>