I looked at the diff and the addition of AcousticProperties looks fine: https://github.com/Web3DConsortium/X3D/commit/1601e441adc9ebcbaaa65d82979da97fcb229a92

Note: as you observed, the section numbers in the "Shape" component have now changed as a result of adding AcousticProperties. This must be reflected in the "nodeIndex.html" page, since it contains the section number for each node. Just a reminder, so we don't forget about it :)

Regards,
Michalis

On Wed, 15 Jul 2020 at 20:01, Don Brutzman <brutzman@nps.edu> wrote:

Attendees: Efi Lakka, Thanos Malamos, Dick Puk, Don Brutzman.

---

1. Updated X3D4 draft Shape component to show AcousticProperties details.

=====================================
12.4.1 AcousticProperties

AcousticProperties : X3DAppearanceChildNode {
  SFFloat [in,out] absorption 0    [0,1]
  SFFloat [in,out] diffuse    0    [0,1]
  SFNode  [in,out] metadata   NULL [X3DMetadataObject]
  SFFloat [in,out] refraction 0    [0,1]
  SFFloat [in,out] specular   0    [0,1]
}

The AcousticProperties node specifies acoustic effects of an associated surface, determining surface reflection and related physical phenomena through the absorption, specular, diffuse and refraction coefficients of materials.

The absorption field specifies the sound absorption coefficient of a surface, which is the ratio of the sound intensity absorbed (or otherwise not reflected) by a specific surface to the initial sound intensity. This characteristic depends on the nature and thickness of the material. In particular, sound is absorbed when it encounters fibrous or porous materials, panels that have some flexibility, volumes of air that resonate, and openings in the room boundaries (e.g. doorways). Moreover, the absorption of sound by a particular material/panel depends on the frequency and angle of incidence of the sound wave.
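
In equation form (standard acoustics, offered as a hedged sketch rather than settled specification prose): writing I_i for the incident sound intensity and I_r for the reflected intensity,

    \alpha = \frac{I_i - I_r}{I_i} = 1 - \frac{I_r}{I_i}, \qquad 0 \le \alpha \le 1

so absorption = 0 means all incident sound is reflected, and absorption = 1 means none is.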

The diffuse field specifies the sound diffusion coefficient, which measures the degree of scattering produced on reflection. Diffuse reflection arises in the same geometry as specular reflection, but occurs when the sound wavelength is comparable to the corrugation dimensions of an irregular reflecting surface, so that the incident sound wave is scattered in all directions. In other words, it is a measure of the surface's ability to scatter sound uniformly.

The refraction field determines the sound refraction coefficient, which describes the portion of sound that is not reflected but instead passes through the material, bending as the wave crosses the boundary between media.

The specular field describes the sound specular coefficient, quantifying the physical phenomenon that occurs when a sound wave strikes a plane surface: part of the sound energy is reflected back into space with the angle of reflection equal to the angle of incidence.
=====================================

Several questions:

* Editors note: what is the correct name, refraction or reflection? Aren't diffuse and specular coefficients for reflection? Typically refraction refers to wave bending as defined by Snell's Law.

We believe that refraction is for sound passing through the material. Good description:

[1] Wikipedia: Refraction
https://en.wikipedia.org/wiki/Refraction
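
For reference, Snell's law for sound (standard physics, included as a hedged illustration): a wave passing from a medium with sound speed c_1 into a medium with sound speed c_2 bends so that

    \frac{\sin \theta_1}{\sin \theta_2} = \frac{c_1}{c_2}

where \theta_1 and \theta_2 are the angles of incidence and refraction, measured from the surface normal.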

* Editors note: aren't some equations needed here?

TODO:
- Efi to share document illustrating refraction
- Improve the definition for refraction in the paragraph above
- Great diagram attached; we will probably redraw it for the specification
- Decide what diagrams and prose are needed for a new section, "16.2.x Sound Model"

Sound component section 16.2.2 Sound attenuation and spatialization

might be renamed as 16.2.2 Sound propagation, attenuation and spatialization.

Alternatively, a separate section "16.2.3 Sound propagation" can be written.

Suggested path forward: review Efi's document again, then integrate prose and diagrams in *16.2.3 Sound propagation* as available and discussed.

Having simple sound equations adds clarity to our coefficient terms, similar to the X3D lighting model sections. We are indeed fortunate to be taking advantage of centuries of knowledge, not trying to re-invent how sound works! Distilling this immense body of knowledge for X3D4 clarity is a straightforward task.
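
One candidate equation (a hedged sketch of standard acoustics; the exact mapping to X3D field names is still to be agreed): the incident sound energy at a surface is partitioned as

    E_{incident} = E_{specular} + E_{diffuse} + E_{absorbed} + E_{transmitted}

where each right-hand term, as a fraction of E_{incident}, corresponds to one coefficient above (specular, diffuse, absorption, refraction); if the coefficients are defined as these fractions, each lies in [0,1] and they sum to 1.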

Likely future discussion points: what do we mean by refraction "through" a material? For example, a Box is provided with a refraction value in its corresponding AcousticProperties.
- Do we expect an aural renderer to compute how much of the Box is occluding a sound source and adjust the received signal? (A simple occlusion sketch follows this list.)
- Do we expect such computation of propagation intensity to occur external to objects, or also internally?
- Can Collision proxy geometry be used for simplified geometric analysis?
- How do we indicate whether a shape is solid inside? (The Box, Cone, Cylinder and Sphere geometry primitives already define the "solid" field.)
- Perhaps we should add a "solid" field to Shape?
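
To make the first question concrete, here is a minimal TypeScript sketch of one possible occlusion computation. Everything named here (Vec3, AcousticProps, Box, occludedGain) is a hypothetical illustration, not a proposed X3D4 or Web Audio API definition; the design questions above remain open.

// Hypothetical types for illustration only -- not X3D4 or Web Audio API definitions.
interface Vec3 { x: number; y: number; z: number; }
interface AcousticProps { absorption: number; refraction: number; }
interface Box { center: Vec3; size: Vec3; acoustics: AcousticProps; }

// Standard slab test: does the segment from a to b pass through the axis-aligned box?
function segmentIntersectsBox(a: Vec3, b: Vec3, box: Box): boolean {
  let tMin = 0, tMax = 1;
  for (const k of ["x", "y", "z"] as const) {
    const d = b[k] - a[k];
    const lo = box.center[k] - box.size[k] / 2;
    const hi = box.center[k] + box.size[k] / 2;
    if (Math.abs(d) < 1e-12) {
      if (a[k] < lo || a[k] > hi) return false; // parallel to this slab and outside it
    } else {
      let t1 = (lo - a[k]) / d;
      let t2 = (hi - a[k]) / d;
      if (t1 > t2) [t1, t2] = [t2, t1];
      tMin = Math.max(tMin, t1);
      tMax = Math.min(tMax, t2);
      if (tMin > tMax) return false; // slab intervals do not overlap
    }
  }
  return true;
}

// One possible reading of refraction "through" a material: if the box occludes the
// direct source-to-listener path, only the transmitted (refraction) fraction is heard.
function occludedGain(source: Vec3, listener: Vec3, box: Box, gain: number): number {
  return segmentIntersectsBox(source, listener, box)
    ? gain * box.acoustics.refraction
    : gain;
}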

Suggest that our near-term goal is to capture the issues in the soon-to-be-released X3D4 draft specification so that wide-ranging consideration can continue.

----

2. Accepted nodes.

We confirmed consensus to add the "accepted nodes" from last week's minutes (listed below) to the X3D4 draft. This includes nodes, fields and descriptions in Efi's documentation.

----

3. Channels

We discussed the notion of channels, particularly whether there is one waveform (for example, one superpositioned-sinusoid audio signal) per channel, or multiple signals (such as stereo) per channel.
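
(For clarity: by "one superpositioned-sinusoid audio signal" we mean a single waveform of the standard Fourier form

    s(t) = \sum_k A_k \sin(2 \pi f_k t + \phi_k)

that is, one monophonic signal, however many sinusoids are superposed within it; a stereo pair is two such signals.)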

[2] Wikipedia: Audio signal
https://en.wikipedia.org/wiki/Audio_signal

[3] Wikipedia: Audio signal flow
https://en.wikipedia.org/wiki/Audio_signal_flow

[4] Wikipedia: Multiplexing
https://en.wikipedia.org/wiki/Multiplexing

Great news! Update published:

[5] Web Audio API
W3C Candidate Recommendation, 11 June 2020
Latest published version: https://www.w3.org/TR/webaudio/
Editor's Draft version: https://webaudio.github.io/web-audio-api/

see:
* 1.14 The ChannelMergerNode Interface
* 1.15 The ChannelSplitterNode Interface
* 1.30 The StereoPannerNode Interface

[6] Web Audio API 1.0 Implementation Report
Chris Lilley, 6 September 2018
https://webaudio.github.io/web-audio-api/implementation-report.html

This report shows the status of Safari, Chrome/Blink, Firefox and Edge for 8426 Web Audio API tests.

Key issue: are channels monophonic, or can they be stereo? Answer: monophonic.

The Web Audio API (appears to) define channels as corresponding to a single source.
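
For illustration, a minimal TypeScript sketch against the standard Web Audio API (the left/right swap is an arbitrary routing choice of ours; stereoBuffer is an assumed, already-decoded two-channel AudioBuffer). A ChannelSplitterNode exposes each channel of a stereo source as a separate monophonic output, and a ChannelMergerNode reassembles mono channels into one multichannel stream:

const ctx = new AudioContext();

declare const stereoBuffer: AudioBuffer;  // assumption: a decoded two-channel buffer

const source = ctx.createBufferSource();
source.buffer = stereoBuffer;

const splitter = ctx.createChannelSplitter(2);  // each output is one mono channel
const merger = ctx.createChannelMerger(2);      // each input accepts one mono channel

source.connect(splitter);
// connect(destination, outputIndex, inputIndex): swap left and right channels
// as a trivial demonstration that channels are routed individually, i.e. mono.
splitter.connect(merger, 0, 1);
splitter.connect(merger, 1, 0);
merger.connect(ctx.destination);

source.start();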

We are aligning with that. We are now working out how best to align the Web Audio API with the X3D architecture.

We are also looking to ensure that we are not designing the X3D alignment in a way that blocks other audio API approaches. To our expected advantage, the W3C Web Audio API design has already composed and harmonized requirements from multiple existing implementations. That expectation is based on (a) harmonization being the nature of the W3C specification process, and (b) the multiple implementations that already exist, including sound libraries beyond the four listed above.

Of interest, and as expected: we intend to become active in the W3C Audio Group (as Web3D liaisons), and any differences or lessons learned from X3D4 use of this API will no doubt hold interest there.

We do need prose describing channels and concepts as a new section in the Sound component. We'll start with Efi's documents for that. We will add the following to the draft for review:

16.4.18 ChannelSplitter (likely; needs channel design discussion, next session)
16.4.19 ChannelMerger

----

Thanks for more great progress. Have fun with X3D4 Sound and W3C Audio!

v/r Don


On 7/9/2020 11:10 AM, Don Brutzman wrote:
> Minutes for 2-hour meeting today. Attendees Efi Lakka, Thanos Malamos, Dick Puk, Don Brutzman.
> 
> No member-only information is included in these minutes.
> 
> Warmup thoughts, design question: is there a way to slave ListenerPoint (a) to the current view, or (b) to each provided Viewpoint? Also, how do you return to the current view if a separate ListenerPoint was bound?
> 
> Details follow. No new attachments for this week's meeting; see last week's notes.
> 
> On 7/8/2020 7:31 PM, Don Brutzman wrote:
>> 1. continuing...
>>
>> On 7/8/2020 10:14 AM, Richard F. Puk wrote:
>>> [...]
>>> I have the following comments on the minutes (shown in red):
>>>
>>> In my opinion, the new X3DSoundSourceNodes should not have the word “Source” in the names of the concrete nodes, as the node names are clear without it, it is redundant, and it also differs from the existing nodes.
>>
>> As discussed, that is one possible solution, but other considerations also pertain.
>>
>> Relevant node names of interest are:
>>
>> +- X3DTimeDependentNode -+- TimeSensor
>> |                        |
>> |                        +- X3DSoundSourceNode -+- AudioClip         (~Web Audio API: MediaElementAudioSourceNode)
>> |                                               +- MovieTexture
>> |                                               +- AudioBufferSource (~Web Audio API: AudioBuffer + AudioBufferSourceNode)
>> |                                               +- OscillatorSource  (~Web Audio API: OscillatorNode)
>> |                                               +- StreamAudioSource (~Web Audio API: MediaStreamAudioSourceNode)
>> |                                               +- MicrophoneSource
>> |
>> +- X3DSoundDestinationNode -+- AudioDestination       (~Web Audio API: AudioDestinationNode)
> 
> As elaboration: specifically, AudioDestination is a hardware device ID, maintained by the operating system (OS).
> 
> FWIW, without loss of functionality, a synonym for AudioDestination might be HardwareAudioDestination (in contrast to StreamAudioDestination).
> 
>> |                           +- StreamAudioDestination (~Web Audio API: MediaStreamAudioDestinationNode)
> 
> Nowadays typically WebRTC, which is a stable and approved (and usable!!) specification. 8)
> 
>> +- X3DSoundProcessingNode -+- BiquadFilter
>> |                          +- Convolver
>> |                          +- Delay
>> |                          +- DynamicsCompressor
>> |                          +- Gain
>> |                          +- WaveShaper
>> |                          +- PeriodicWave
>> |
>> +- X3DSoundAnalysisNode -+- Analyser
>> |
>> +- X3DSoundChannelNode -+- ChannelSplitter
>>                         +- ChannelMerger
>>
>> AudioClip and MovieTexture are well understood legacy nodes and their names will not change.
>>
>> Want to voice caution here. Conceptually, the base names "AudioBuffer", "Oscillator" and "StreamAudio" by themselves might be referring to sources of audio (i.e. outputs from streams) or sinks for audio (i.e. inputs to streams). When choosing node names (i.e. the words in the X3D language) we strive for clarity and want to avoid ambiguity. So it may make sense to emphasize purpose by keeping the suffix "Source" for these nodes.
>>
>> Am confident that when we start putting Web Audio graphs together using this new set of nodes, implementation/evaluation results will either make good sense (like the Web Audio JavaScript) or else confusing gaps will become more evident. Example usage is important to consider.
>>
>> We'll discuss and reach initial consensus on good names during Thursday morning's teleconference (0900-1030 Pacific).
> 
> We discussed this topic in exhilarating detail.
> 
> Naming heuristic: "when the going gets tough, the tough get... verbose."
> 
> Suggestion: for the near term we go with longer names for clarity; once we have examples in hand, we reconsider whether shorter names can work.
> 
>> 2. Dick and I made excellent progress updating the Concepts 4.4.2.3 Interface Hierarchy to match the latest meeting notes.
>>
>> We are ready to move acoustic fields from Material to the AcousticProperties node in the Shape component when Michalis finishes his merge of glTF PBR Pull Request 8.
> 
> confirmed
> 
>> Several editorial issues are pending, including:
>>
>>> The concrete node derivations need to replace “AudioNode” with the appropriate abstract node type.
>>>
>>> Dick
>>
>> Our agenda for Thursday is to finalize the node list and interfaces, then start dropping in prose from Efi's detailed documentation report.
> a. Agreed on abstract types:
> 
> 16.3 Abstract types
> 16.3.1 X3DAudioListenerNode
> 16.3.2 X3DSoundAnalysisNode
> 16.3.3 X3DSoundChannelNode
> 16.3.4 X3DSoundDestinationNode
> 16.3.5 X3DSoundProcessingNode
> 16.3.6 X3DSoundNode
> 16.3.7 X3DSoundSourceNode
> 
> Triage follows for nodes, in order to edit the document further for draft release:
> 
> 16.4 Node reference
> 
> b. *Accepted*
> 
> 16.4.1 AudioClip
> 16.4.2 Sound
> 16.4.3 SpatialSound
> 16.4.4 ListenerPoint
> 16.4.4 AudioBufferSource

[... snip duplicate ...]

> 16.4.2 OscillatorSource
> 16.4.7 BiquadFilter
> 16.4.8 Convolver
> 16.4.9 Delay
> 16.4.10 DynamicsCompressor
> 16.4.12 WaveShaper
> 16.4.13 PeriodicWave
> 16.4.17 Analyser
> 16.4.** StreamAudioSource
> 
> 16.4.14 AudioDestination
> 16.4.xx StreamAudioDestination
> 
> 16.4.yy MicrophoneSource (physical hardware device)
> 
> c. *Not Included*
> 
> 16.4.5 MediaElementAudioSource (same as AudioClip)
> 16.4.6 MediaStreamAudioSource (same as StreamAudioSource)
> 
> 16.4.15 MediaStreamAudioDestination (same as StreamAudioDestination)
> 16.4.16 MediaStreamTrack
> 16.4.1 AudioParam (merged functionality? not used)
> 16.4.21 Panner (merged functionality in SpatialSound)
> 
> d. *Pending major issues, decision TBD*
> 
> 16.4.5 BinauralListenerPoint (will attempt to merge with ListenerPoint)
> (SurroundSound might also be a variation on ListenerPoint)
> 
> 16.4.18 ChannelSplitter (likely, need channel design discussion, next session)
> 16.4.19 ChannelMerger (likely, need channel design discussion, next session)
> 
> 16.4.1 AudioContext (perhaps integrated within interface hierarchy, more discussion)
> 
> 16.4.11 Gain (included as a field in many nodes, or within interface hierarchy)
> 
> Virtual Microphone Sensor (perhaps same as ListenerPoint or media stream?)
> 
> How to simply pan or rebalance left/right?
> 
> SFNode/MFNode fields defining parent-child node relationships, allowing tree-like construction of a W3C Audio API graph using X3D nodes.
> 
> ==================
> 
> 3. Next steps:
> 
> a. Request: Efi please confirm all field definitions are correct for the Accepted nodes - so far so great.
> 
> b. TODO Don and Dick will update the draft specification in github, specifically node fields and prose, using Efi's two provided analysis/report documents.
> 
> c. Objective: shift sharpest focus from our draft notes to the actual github spec as our outcome document.
> 
> d. What are our example audio graphs to represent in X3D? Looking at examples will help us confirm that the right fields and parent-child node relationships are described well. Efi has a detailed example comparing earlier/evolved examples that will help, and she can (and will soon) release it. As the examples are each reviewed and considered mature, we will place them online in version control as part of the X3D Examples Archive.
> 
> e. It is clear that a good amount of work remains, but at least 80% is well defined and sensible. We need to ship a draft X3D4 Sound component that reflects this progress, noting more work to follow. This will allow review and assessment (and possibly engagement) by many others.
> 
> f. It is not completely certain yet, but we are very close and hope to resolve this sufficiently to support a Web3D 2020 Conference paper. This is important, and is primarily a restructuring of Efi's excellent document updates in paper form. We can certainly support a Web3D 2020 Conference tutorial.
> 
> * Statistics joke: "The first 90% of the work takes 90% of the time. The last 10% of the work takes the other 90% of the time." (perhaps similar to Zeno's Paradox?!)
> 
> * Zeno's Paradoxes
> https://en.wikipedia.org/wiki/Zeno%27s_paradoxes
> 
> Summary of our near-term objective: produce a sufficiently advanced X3D4 Sound component that enables meaningful understanding, implementation, evaluation and finalization.
> 
>> We are getting pretty close to releasing the Sound component for inclusion in the draft X3D4 specification.
>>
>> Due diligence continues. All feedback remains helpful and welcome.
> 
> Next meeting of the group: back to Wednesday 15 July, same time.
> 
> "What say?" Have fun with X3D Audio and Sound! 8)
> 
> all the best, Don

all the best, Don
-- 
Don Brutzman  Naval Postgraduate School, Code USW/Br  brutzman@nps.edu
Watkins 270,  MOVES Institute, Monterey CA 93943-5000 USA  +1.831.656.2149
X3D graphics, virtual worlds, navy robotics  http://faculty.nps.edu/brutzman