[x3d-public] Sound component: Sound effects processing summary, Collision proxy for computational reduction

Don Brutzman brutzman at nps.edu
Tue Jul 28 15:45:15 PDT 2020


Thanks for the response Efi.  It looks like Dick and I are oversubscribed with ISO SC24 meetings so we will continue the dialog here in email this week.

On 7/28/2020 6:10 AM, Eftychia Lakka wrote:
> Dear Don,
> 
> 
> please find attached the file with answers/update in the open issues that you have already noticed by emails. We had a discussion with Thanos yesterday and today, so you can find our clarifications (highlighted in green) in the document.
> 
> 
> Best regards,
> Efi

=============================

1. Diffraction sources

> “Diffraction sources are not explicitly represented in this component, but instead are best modeled by audio chains including ListenerPoint and SpatialSound to emulate such sophisticated sound propagation”
> 
> We do not understand what you mean by that

Well, when we look at the prose and your excellent figure in "16.2.3 Sound propagation", we see a figure and good descriptions of diffraction.

"Diffraction: the fact that a listener can hear sounds around corners and around barriers involves a diffraction model of sound. It is the spread of waves around corners, behind obstacles or around the edges of an opening as illustrated in Figure 16.2 (inset d). The amount of diffraction increases with wavelength, meaning that sound waves with lower frequencies, and thus with wavelengths greater than the dimensions of obstacles or openings, are spread over larger regions behind the openings or around the obstacles."

Some of this might be handled by sound-propagation engines, for example sound on the far side of a block.

But there is no explicit mention there, or in the fields of AcousticProperties, regarding how the modeling of complex geometry (such as holes or cracks) that permits diffraction sources might be accomplished.  Thus it seemed worth mentioning in the specification.

Such propagation is highly dependent on the geometry of holes allowing diffracted sound through a gap, and thus of course on an input sound arriving at the point of the gap.  So it seemed appropriate to mention that an author might put a ListenerPoint on the entry side of a gap, have an audio graph filter/diminish that signal, and then connect that to a virtual source on the far side of the gap.
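To make the workaround concrete, such an audio chain might look roughly like the following scene fragment.  This is a hypothetical sketch only: the node names (ListenerPoint, Gain, BiquadFilter, SpatialSound), their field names, and the child-feeds-parent containment all follow my reading of the draft component and may well change.

```xml
<!-- Hypothetical sketch: emulating diffraction through a gap in a wall.
     Node/field names and containment follow the draft Sound component. -->
<!-- 1. Capture the sound field arriving at the entry side of the gap. -->
<ListenerPoint DEF='GapEntry' position='2 1 0'/>
<!-- 2. Attenuate and low-pass the captured signal, since low frequencies
     (longer wavelengths) diffract through an opening more readily. -->
<Gain DEF='GapLoss' gain='0.3'>
  <BiquadFilter type='LOWPASS' frequency='500'>
    <ListenerPoint USE='GapEntry'/>
  </BiquadFilter>
</Gain>
<!-- 3. Re-emit the processed signal as a virtual source on the far side. -->
<SpatialSound location='2.2 1 0'>
  <Gain USE='GapLoss'/>
</SpatialSound>
```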

Hope that makes sense.  Better paragraph in specification welcome, looking forward to discussing.

=============================

2. Thanks for abstract node descriptions.  Accepted, with some editing.

16.3.1 X3DSoundAnalysisNode

Made it a bit more generic:

         This is the base node type for nodes that receive real-time generated data,
         without modifying the sound information passed from input to output.

For now, we only have the Analyser node implementing this interface, but am expecting that additional, more sophisticated nodes will emerge in the future, so it seems worth keeping.

=============================

16.3.2 X3DSoundChannelNode

This is the base node type for nodes that handle the channels in an audio stream, allowing them to be split and merged.
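As a rough illustration of what such channel handling might allow, the fragment below rebalances one channel of a stereo stream.  This is a hypothetical sketch: ChannelMerger and ChannelSelector follow the draft node list, and the channelSelection field name and containment conventions are assumptions on my part.

```xml
<!-- Hypothetical sketch: split a stereo stream, attenuate one channel, remerge.
     Field names and containment are assumptions against the draft component. -->
<ChannelMerger>
  <!-- channel 0 (left) passes through unchanged -->
  <ChannelSelector channelSelection='0'>
    <BufferAudioSource DEF='Stereo' url='"music.wav"'/>
  </ChannelSelector>
  <!-- channel 1 (right) is attenuated before remerging -->
  <Gain gain='0.5'>
    <ChannelSelector channelSelection='1'>
      <BufferAudioSource USE='Stereo'/>
    </ChannelSelector>
  </Gain>
</ChannelMerger>
```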


=============================

16.3.3 X3DSoundDestinationNode

This is the base node type for all sound destination nodes, which represent the final destination of an audio signal and are what the user ultimately hears. Such nodes are often audio output devices connected to speakers. All rendered audio that is intended to be heard gets routed to these terminal nodes.

=============================

16.3.4 X3DSoundProcessingNode

This is the base node type for all sound processing nodes, which are used to enhance audio with filtering, delaying, changing gain, etc.
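A typical chain built from such processing nodes might filter, delay, and scale a signal before output.  Again a hypothetical sketch, with node and field names (BiquadFilter, Delay, Gain, BufferAudioSource) taken from the draft node list and the nesting an assumption:

```xml
<!-- Hypothetical sketch: a simple processing chain (filter, then delay, then gain).
     Names and containment follow the draft Sound component and may change. -->
<Gain gain='0.8'>
  <Delay delayTime='0.25'>
    <BiquadFilter type='LOWPASS' frequency='1000'>
      <BufferAudioSource url='"voice.wav"'/>
    </BiquadFilter>
  </Delay>
</Gain>
```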

=============================

3. enabled/connect

> TODO: if enabled FALSE, does signal pass through unmodified or is it blocked? Perhaps an additional boolean is needed for pass-through state?
>> There is no such constraint for this issue in the W3C Web Audio API (it does not support an enabled attribute). It supports connect/disconnect as methods of the AudioNode object.
>> Thus, it is open to use either or both in the implementation of X3D audio. The same applies to the corresponding questions in other objects as well.

Since we have not defined "connect" anywhere yet, am thinking we should keep this.  Added to the TODO:

	Modeling the 'connect' attribute and defining defaults is necessary for each case.

=============================

16.4.14 PeriodicWave

> PeriodicWave defines a periodic waveform that can be used to shape the output of an Oscillator. 

> TODO confirm and describe attributes 

>> the corresponding Web Audio API “PeriodicWave” does not have any attribute. It is not clear which node is its “parent”, so it is not clear whether it inherits any attributes. So, it inherits the fields from the X3D abstract node 16.3.4 X3DSoundProcessingNode

Agreed it is rather hard to follow in the Web Audio API.  Am pretty sure it is used to modify the output of an Oscillator node.  Let's look at examples and discuss further.
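For the record: in the Web Audio API a PeriodicWave is built from arrays of Fourier coefficients (real and imag, with index 0 the DC term, which is ignored) and handed to an OscillatorNode via setPeriodicWave(); the oscillator then outputs the custom waveform those coefficients describe.  A hypothetical X3D equivalent might nest it under an OscillatorSource.  The optionsReal/optionsImag field names and the containment below are assumptions against the draft:

```xml
<!-- Hypothetical sketch: a custom waveform shaping an oscillator's output.
     optionsReal/optionsImag are Fourier coefficients (index 0 = DC, ignored).
     Containment under OscillatorSource is an assumption. -->
<OscillatorSource frequency='440'>
  <PeriodicWave type='CUSTOM' optionsReal='0 0 0' optionsImag='0 1 0.5'/>
</OscillatorSource>
```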

=============================

Cone fields: thanks for the diagrams!  I will work on those and report back later.

=============================

> [Editor's note] TODO continued design, implementation and evaluation work is needed to ensure that full coverage of capabilities is achieved.
>> We do not understand what you mean by that

That is just an editor's note for the overall document, indicating that it is still a work in progress.

=============================

Have pushed update on these items - thank you.  Onward we go.

all the best, Don
-- 
Don Brutzman  Naval Postgraduate School, Code USW/Br       brutzman at nps.edu
Watkins 270,  MOVES Institute, Monterey CA 93943-5000 USA   +1.831.656.2149
X3D graphics, virtual worlds, navy robotics http://faculty.nps.edu/brutzman


