<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html lang="en">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<title>Extensible 3D (X3D), ISO/IEC 19775-1:202x, 16 Sound component</title>
<link rel="stylesheet" href="../X3D.css" type="text/css">
</head>
<body style="background-position: top center; background-attachment: fixed; background-image:url(../WorkingDraftWatermark.png);">
<div class="CenterDiv">
<a href="../Architecture.html" title="to index">
<img class="x3dlogo" src="../../Images/x3d.png" alt="X3D logo" style="border-width: 0px; width: 176px; height: 88px"></a>
</div>
<div class="CenterDiv">
<p class="HeadingPart">
Extensible 3D (X3D)<br />
Part 1: Architecture and base components</p>
<p class="HeadingClause"><span class="proposed" title="Mantis 1267 new sound component">16 Sound component</span></p>
</div>
<img class="x3dbar" src="../../Images/x3dbar.png" alt="--- X3D separator bar ---" width="430" height="23">
<h1><a name="Introduction"></a>
<img class="cube" src="../../Images/cube.gif" alt="cube" width="20" height="19">
16.1 Introduction</h1>
<h2><a name="Name"></a>16.1.1 Name</h2>
<p>The name of this component is "Sound". This name shall be used when referring
to this component in the COMPONENT statement (see
<a href="core.html#COMPONENTStatement">7.2.5.4 Component statement</a>).</p>
<h2><a name="Overview"></a>16.1.2 Overview</h2>
<p>This clause describes the Sound component of this part of ISO/IEC 19775. This
includes how sound is delivered to an X3D world as well as how sounds are accessed.
<a href="#t-Topics">Table 16.1</a> provides links to the major topics in
this clause.</p>
<div class="CenterDiv">
<p class="TableCaption">
<a name="t-Topics"></a>
Table 16.1 — Topics</p>
<table class="topics">
<tr>
<td>
<ul>
<li><a href="#Introduction">16.1 Introduction</a>
<ul>
<li><a href="#Name">16.1.1 Name</a></li>
<li><a href="#Overview">16.1.2 Overview</a> </li>
</ul></li>
<li><a href="#Concepts">16.2 Concepts</a>
<ul>
<li><a href="#Soundpriority">16.2.1 Sound priority</a></li>
<li><a href="#Soundattenandspatial">16.2.2 Sound attenuation and spatialization</a></li>
<li><span class="proposed" title="Mantis 1267 new sound component"><a href="#SoundPropagation">16.2.3 Sound propagation</a> </span></li>
<li><span class="proposed" title="Mantis 1267 new sound component"><a href="#SoundEffectsProcessing">16.2.4 Sound effects processing</a> </span></li>
</ul></li>
<li><a href="#Abstracttypes">16.3 Abstract types</a>
<ul>
<li><span class="proposed" title="Mantis 1267 new sound component"><a href="#X3DSoundAnalysisNode" >16.3.1 <i>X3DSoundAnalysisNode</i></a></span></li>
<li><span class="proposed" title="Mantis 1267 new sound component"><a href="#X3DSoundChannelNode" >16.3.2 <i>X3DSoundChannelNode</i></a></span></li>
<li><span class="proposed" title="Mantis 1267 new sound component"><a href="#X3DSoundDestinationNode" >16.3.3 <i>X3DSoundDestinationNode</i></a></span></li>
<li><span class="proposed" title="Mantis 1267 new sound component"><a href="#X3DSoundProcessingNode" >16.3.4 <i>X3DSoundProcessingNode</i></a></span></li>
<li><a href="#X3DSoundNode">16.3.5 <i>X3DSoundNode</i></a></li>
<li><a href="#X3DSoundSourceNode">16.3.6 <i>X3DSoundSourceNode</i></a></li>
</ul></li>
<li><a href="#Nodereference">16.4 Node reference</a>
<ul>
<li><span class="proposed" title="Mantis 1267 new sound component"><a href="#Analyser">16.4.1 Analyser</a></span></li>
<li><span class="proposed" title="Mantis 1267 new sound component"><a href="#AudioBufferSource">16.4.2 AudioBufferSource</a></span></li>
<li><a href="#AudioClip">16.4.3 AudioClip</a></li>
<li><span class="proposed" title="Mantis 1267 new sound component"><a href="#AudioDestination">16.4.4 AudioDestination</a></span></li>
<li><span class="proposed" title="Mantis 1267 new sound component"><a href="#BiquadFilter">16.4.5 BiquadFilter</a></span></li>
<li><span class="proposed" title="Mantis 1267 new sound component"><a href="#ChannelMerger">16.4.6 ChannelMerger</a></span></li>
<li><span class="proposed" title="Mantis 1267 new sound component"><a href="#ChannelSplitter">16.4.7 ChannelSplitter</a></span></li>
<li><span class="proposed" title="Mantis 1267 new sound component"><a href="#Convolver">16.4.8 Convolver</a></span></li>
<li><span class="proposed" title="Mantis 1267 new sound component"><a href="#Delay">16.4.9 Delay</a></span></li>
<li><span class="proposed" title="Mantis 1267 new sound component"><a href="#DynamicsCompressor">16.4.10 DynamicsCompressor</a></span></li>
<li><span class="proposed" title="Mantis 1267 new sound component"><a href="#ListenerPoint">16.4.11 ListenerPoint</a></span></li>
<li><span class="proposed" title="Mantis 1267 new sound component"><a href="#MicrophoneSource">16.4.12 MicrophoneSource</a></span></li>
<li><span class="proposed" title="Mantis 1267 new sound component"><a href="#OscillatorSource">16.4.13 OscillatorSource</a></span></li>
<li><span class="proposed" title="Mantis 1267 new sound component"><a href="#PeriodicWave">16.4.14 PeriodicWave</a></span></li>
<li><span class="proposed" title="Mantis 1267 new sound component"><a href="#SpatialSound">16.4.15 SpatialSound</a></span></li>
<li><a href="#Sound">16.4.16 Sound</a></li>
<li><span class="proposed" title="Mantis 1267 new sound component"><a href="#StreamAudioDestination">16.4.17 StreamAudioDestination</a></span></li>
<li><span class="proposed" title="Mantis 1267 new sound component"><a href="#StreamAudioSource">16.4.18 StreamAudioSource</a></span></li>
<li><span class="proposed" title="Mantis 1267 new sound component"><a href="#WaveShaper">16.4.19 WaveShaper</a></span></li>
</ul></li>
<li><span class="proposed" title="Mantis 1267 new sound component"><a href="#SupportLevels">16.5 Support levels</a></span></li>
</ul>
<ul>
<li><a href="#f-Stereopanning">Figure 16.1 — Stereo panning</a></li>
<li><a href="#f-Soundnodegeometry">Figure 16.2 — Sound node geometry</a></li>
</ul>
<ul>
<li><a href="#t-Topics">Table 16.1 — Topics</a></li>
<li><a href="#t-SupportLevels">Table 16.2 — Sound component support levels</a></li>
</ul>
</td>
</tr>
</table>
</div>
<h1><img class="cube" src="../../Images/cube.gif" alt="cube" width="20" height="19">
<a name="Concepts"></a>
16.2 Concepts</h1>
<h2><a name="Soundpriority"></a>
16.2.1 Sound priority</h2>
<p>If the browser does not have the resources to play all of the currently active
sounds, it is recommended that the browser sort the active sounds into an ordered
list using the following sort keys in the order specified:</p>
<ol type="a">
<li>decreasing <i>priority;</i></li>
<li>for sounds with <i>priority</i> > 0.5, increasing (now-<i>startTime</i>);</li>
<li>decreasing <i>intensity</i> at viewer location (<i>intensity </i>× "intensity
attenuation");</li>
</ol>
<p>where <i>priority </i>is the <i>priority</i> field of the Sound node, now represents
the current time, <i>startTime</i> is the <i>startTime</i> field of the audio
source node specified in the <i>source</i> field, and "intensity attenuation"
refers to the intensity multiplier derived from the linear decibel attenuation
ramp between inner and outer ellipsoids.</p>
<p>It is important that sort key b be used for the high priority (event and cue)
sounds so that new cues will be heard even when the browser is "full"
of currently active high priority sounds. Sort key b should not be used for
normal priority sounds, so that selection among them is based on sort key c
(intensity at the location of the viewer).</p>
<p>The browser shall play as many sounds from the beginning of this sorted list
as it can given available resources and allowable latency between rendering.
On most systems, the resources available for MIDI streams are different from
those for playing sampled sounds, thus it may be beneficial to maintain a separate
list to handle MIDI data.</p>
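<p>The recommended ordering can be sketched as follows (non-normative; the <code>ActiveSound</code> record and its field names are illustrative stand-ins, with the attenuated intensity at the viewer assumed to be precomputed):</p>

```python
# Non-normative sketch of the recommended active-sound ordering.
from dataclasses import dataclass

@dataclass
class ActiveSound:
    priority: float      # priority field of the Sound node, [0,1]
    start_time: float    # startTime field of the audio source node
    intensity: float     # intensity x "intensity attenuation" at the viewer

def order_active_sounds(sounds, now):
    """Sort keys, in the order specified: a) decreasing priority;
    b) for priority > 0.5, increasing (now - startTime);
    c) decreasing attenuated intensity at the viewer location."""
    def key(s):
        # Newer cues (smaller now - startTime) sort earlier among
        # high-priority sounds; key b is neutral for the rest.
        age = (now - s.start_time) if s.priority > 0.5 else 0.0
        return (-s.priority, age, -s.intensity)
    return sorted(sounds, key=key)
```

<p>The browser would then play sounds from the front of the returned list until its mixing resources are exhausted.</p>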
<h2><a name="Soundattenandspatial"></a>
16.2.2 Sound attenuation and spatialization</h2>
<p>In order to create a linear decrease in loudness as the viewer moves from the
inner to the outer ellipsoid of the sound, the attenuation must be based on
a linear decibel ramp. To make the falloff consistent across browsers, the decibel
ramp is to vary from 0 dB at the minimum ellipsoid to -20 dB at the outer ellipsoid.
Sound nodes with an outer ellipsoid that is ten times larger than the minimum
will display the inverse square intensity drop-off that approximates sound attenuation
in an anechoic environment.</p>
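<p>The linear decibel ramp can be expressed numerically as follows (non-normative; the listener's position between the two ellipsoids is simplified here to a scalar ramp parameter in [0, 1], an assumption of this sketch):</p>

```python
# Non-normative sketch: intensity multiplier from the linear decibel
# attenuation ramp between the inner and outer ellipsoids.
def attenuation_gain(ramp_position):
    """ramp_position is 0.0 at the inner (minimum) ellipsoid and 1.0 at
    the outer ellipsoid; the ramp runs from 0 dB down to -20 dB."""
    if ramp_position <= 0.0:
        return 1.0          # at or inside the inner ellipsoid: full level
    if ramp_position >= 1.0:
        return 0.0          # outside the outer ellipsoid: inaudible
    decibels = -20.0 * ramp_position
    return 10.0 ** (decibels / 20.0)   # convert dB to a linear multiplier
```

<p>At the outer ellipsoid the ramp reaches −20 dB, i.e. a linear multiplier of 0.1, beyond which the sound is not played.</p>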
<p>Browsers may support spatial localization of sounds whose <i>spatialize</i>
field is <code>TRUE</code> to the extent that their underlying sound
libraries allow. Browsers shall at least support stereo panning of non-MIDI
sounds based on the angle between the viewer and the source. This angle is obtained
by projecting the <a href="#Sound">Sound</a> <i>location</i> (in global space) onto the XZ plane
of the viewer. Determine the angle between the Z-axis and the vector from the
viewer to the transformed <i>location</i>, and assign a pan value in the range
[0.0, 1.0] as depicted in <a href="#f-Stereopanning">Figure 16.1</a>. Given
this pan value, left and right channel levels can be obtained using the following
equations:</p>
<p class="Equation"> leftPanFactor = 1 - pan<sup>2</sup><br />
rightPanFactor = 1 - (1 - pan)<sup>2</sup><br />
</p>
<div class="CenterDiv">
<a name="f-Stereopanning"></a>
<img src="../../Images/Sound2.gif" alt="Stereo Panning" width="394" height="288">
<p class="FigureCaption">
Figure 16.1 — Stereo panning</p>
</div>
<p>Using this technique, the loudness of the sound is first modified by the <i> intensity</i>
field value and then by distance attenuation to obtain the unspatialized audio output.
The values in the unspatialized audio output are then scaled by leftPanFactor
and rightPanFactor to determine the final left and right output signals. The
use of more sophisticated localization techniques is encouraged, but not required
(see <a href="../bibliography.html#[SNDB]">
[SNDB]</a>).</p>
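<p>The gain staging described above can be sketched as follows (non-normative; this assumes, for illustration, a pan value of 0.0 fully left and 1.0 fully right, with the precise mapping from the viewing angle given by Figure 16.1):</p>

```python
# Non-normative sketch of the stereo panning equations above.
def pan_factors(pan):
    """Left/right channel scale factors for a pan value in [0.0, 1.0]."""
    left = 1.0 - pan ** 2
    right = 1.0 - (1.0 - pan) ** 2
    return left, right

def spatialize(sample, intensity, distance_gain, pan):
    """Apply intensity, then distance attenuation, then stereo panning,
    in the order described in the text."""
    unspatialized = sample * intensity * distance_gain
    left, right = pan_factors(pan)
    return unspatialized * left, unspatialized * right
```

<p>Note that at the center position (pan = 0.5) both factors equal 0.75, so the two equations provide a smooth crossfade with no abrupt level change as the source crosses the viewer's Z-axis.</p>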
<div class="proposed" title="Mantis 1267 new sound component">
<h2><a name="SoundPropagation"></a> 16.2.3 Sound propagation</h2>
<p>
Sound-propagation techniques can be used to simulate sound waves as they travel from each source to scene listening points
by taking into account the expected interactions with various objects in the scene. In other words, spatial sound rendering includes
the estimation of physical effects involved in sound propagation such as surface reflection (specular, diffusion)
and wave phenomena (refraction, diffraction) within a 3D scene.
<a href="#f-SoundPropagationPhenomena">Figure 16.2</a>
provides an overview of the physical models of sound propagation that are considered.
</p>
</div>
<div class="CenterDiv">
<a name="f-SoundPropagationPhenomena"></a>
<img src="../../Images/Sound_Propagation_Phenomena.png" alt="Sound Propagation Phenomena" width="90%"><!-- width="1169" height="688" -->
<p class="FigureCaption">
<span class="proposed" title="Mantis 1267 new sound component">
Figure 16.2 — Sound Propagation Phenomena
</span>
</p>
</div>
<div class="proposed" title="Mantis 1267 new sound component">
<ul>
<li>Reflection (Specular - Diffusion): During the propagation of a sound wave in an enclosed space, the wave hits objects or
room boundaries and its free propagation is disturbed. Moreover, during this process, at least a portion
of the incident wave will be thrown back, a phenomenon known as reflection. If the wavelength of the
sound wave is small enough with respect to the dimensions of the reflecting object and large compared
with possible irregularities of the reflecting surface, a specular reflection occurs. This phenomenon is
illustrated in <a href="#f-SoundPropagationPhenomena">Figure 16.2</a> (a), in which the angle of reflection is equal to the
angle of incidence. In contrast, if the sound wavelength is comparable to the corrugation dimensions of an irregular reflection surface,
the incident sound wave will be scattered in many directions. In this case, the phenomenon is called diffuse reflection
(<a href="#f-SoundPropagationPhenomena">Figure 16.2</a> (b)).
</li>
<li>Refraction: the change in the propagation direction of waves when they obliquely cross the boundary between two media in which their
speed differs (<a href="#f-SoundPropagationPhenomena">Figure 16.2</a> (c)).
For transmission of a plane sound wave from air into another medium, the refraction index in the following equation (Snell's law) is used to calculate the geometric
conditions:
<br />
<!-- TODO look at styles of all equations -->
<code> n = c'/c = sinθ'/sinθ</code>
<br />
where c′ and c are the sound speeds in the two media, θ is the angle of incidence, and θ′ is the angle of refraction.
</li>
<li>Diffraction: the fact that a listener can hear sounds around corners and behind barriers involves a diffraction model of sound. Diffraction is the spreading
of waves around corners, behind obstacles, or around the edges of an opening (<a href="#f-SoundPropagationPhenomena">Figure 16.2</a> (d)). The amount
of diffraction increases with wavelength: sound waves with lower frequencies, and thus with wavelengths greater than the dimensions of obstacles or openings,
are spread over larger regions behind the openings or around the obstacles.</li>
</ul>
</div>
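<p>Snell's law above can be evaluated directly; the following non-normative sketch computes the refraction angle and reports total reflection when no refracted wave exists (the function name and sound-speed values in the usage are illustrative):</p>

```python
import math

def refraction_angle(theta_incident, c1, c2):
    """Angle of refraction for a sound wave crossing from a medium with
    speed c1 into one with speed c2, from Snell's law:
        sin(theta') / sin(theta) = c2 / c1
    Returns None when sin(theta') would exceed 1, i.e. the wave is
    totally reflected rather than transmitted."""
    s = math.sin(theta_incident) * (c2 / c1)
    if abs(s) > 1.0:
        return None
    return math.asin(s)
```

<p>For example, a wave passing from air (roughly 343 m/s) into water (roughly 1480 m/s) at 30° incidence yields sin θ′ &gt; 1, so the wave is totally reflected.</p>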
<div class="proposed" title="Mantis 1267 new sound component">
<h2><a name="SoundEffectsProcessing"></a> 16.2.4 Sound effects processing</h2>
<p>
Sound streams can be manipulated by a variety of sound effects...
</p>
</div>
<h1><img src="../../Images/cube.gif" width="20" height="19" alt="cube"><a name="Abstracttypes"></a>16.3 Abstract types</h1>
<div class="proposed" title="Mantis 1267 new sound component">
<div class="proposed" title="Mantis 1267 new sound component">
<h2><a name="X3DSoundAnalysisNode"></a>
16.3.1 <i>X3DSoundAnalysisNode</i></h2>
<pre class="node">X3DSoundAnalysisNode : X3DTimeDependentNode {
SFString [in,out] description ""
SFBool [in,out] enabled TRUE
SFBool [in,out] loop FALSE
SFNode [in,out] metadata NULL [X3DMetadataObject]
SFTime [in,out] pauseTime 0 (-∞,∞)
SFTime [in,out] resumeTime 0 (-∞,∞)
SFTime [in,out] startTime 0 (-∞,∞)
SFTime [in,out] stopTime 0 (-∞,∞)
SFTime [out] elapsedTime
SFBool [out] isActive
SFBool [out] isPaused
}
</pre>
<p>
TODO: description of this node type.
</p>
<p>
TODO: if enabled FALSE, does signal pass through or is it blocked? Perhaps an additional boolean is needed for pass-through state?
</p>
</div>
<div class="proposed" title="Mantis 1267 new sound component">
<h2><a name="X3DSoundChannelNode"></a>
16.3.2 <i>X3DSoundChannelNode</i></h2>
<pre class="node">X3DSoundChannelNode : X3DTimeDependentNode {
SFString [in,out] description ""
SFBool [in,out] enabled TRUE
SFBool [in,out] loop FALSE
SFNode [in,out] metadata NULL [X3DMetadataObject]
SFTime [in,out] pauseTime 0 (-∞,∞)
SFTime [in,out] resumeTime 0 (-∞,∞)
SFTime [in,out] startTime 0 (-∞,∞)
SFTime [in,out] stopTime 0 (-∞,∞)
SFTime [out] elapsedTime
SFBool [out] isActive
SFBool [out] isPaused
}
</pre>
<p>
TODO: description of this node type.
</p>
<p>
TODO: if enabled FALSE, does signal pass through or is it blocked? Perhaps an additional boolean is needed for pass-through state?
</p>
</div>
<div class="proposed" title="Mantis 1267 new sound component">
<h2><a name="X3DSoundDestinationNode"></a>
16.3.3 <i>X3DSoundDestinationNode</i></h2>
<pre class="node">X3DSoundDestinationNode : X3DTimeDependentNode {
SFString [in,out] description ""
SFBool [in,out] enabled TRUE
SFBool [in,out] loop FALSE
SFNode [in,out] metadata NULL [X3DMetadataObject]
SFTime [in,out] pauseTime 0 (-∞,∞)
SFTime [in,out] resumeTime 0 (-∞,∞)
SFTime [in,out] startTime 0 (-∞,∞)
SFTime [in,out] stopTime 0 (-∞,∞)
SFTime [out] elapsedTime
SFBool [out] isActive
SFBool [out] isPaused
}
</pre>
<p>
TODO: description of this node type.
</p>
<p>
TODO: if enabled FALSE, does signal pass through or is it blocked? Perhaps an additional boolean is needed for pass-through state?
</p>
</div>
<div class="proposed" title="Mantis 1267 new sound component">
<h2><a name="X3DSoundProcessingNode"></a>
16.3.4 <i>X3DSoundProcessingNode</i></h2>
<pre class="node">X3DSoundProcessingNode : X3DTimeDependentNode {
SFString [in,out] description ""
SFBool [in,out] enabled TRUE
SFBool [in,out] loop FALSE
SFNode [in,out] metadata NULL [X3DMetadataObject]
SFTime [in,out] pauseTime 0 (-∞,∞)
SFTime [in,out] resumeTime 0 (-∞,∞)
SFTime [in,out] startTime 0 (-∞,∞)
SFTime [in,out] stopTime 0 (-∞,∞)
SFTime [out] elapsedTime
SFBool [out] isActive
SFBool [out] isPaused
}
</pre>
<p>
TODO: description of this node type.
</p>
<p>
TODO: if enabled FALSE, does signal pass through or is it blocked? Perhaps an additional boolean is needed for pass-through state?
</p>
</div>
</div>
<h2><a name="X3DSoundNode"></a>16.3.5 <i>X3DSoundNode</i></h2>
<pre class="node">X3DSoundNode : X3DChildNode {
SFString [in,out] description ""
SFNode [in,out] metadata NULL [X3DMetadataObject]
}
</pre>
<p>This abstract node type is the base for all sound nodes.</p>
<h2><a name="X3DSoundSourceNode"></a>16.3.6 <i>X3DSoundSourceNode</i></h2>
<pre class="node">X3DSoundSourceNode : X3DTimeDependentNode {
SFString [in,out] description ""
SFBool [in,out] loop FALSE
SFNode [in,out] metadata NULL [X3DMetadataObject]
SFTime [in,out] pauseTime 0 (-∞,∞)
SFFloat [in,out] pitch 1.0 (0,∞)
SFTime [in,out] resumeTime 0 (-∞,∞)
SFTime [in,out] startTime 0 (-∞,∞)
SFTime [in,out] stopTime 0 (-∞,∞)
SFTime [out] duration_changed
SFTime [out] elapsedTime
SFBool [out] isActive
SFBool [out] isPaused
}
</pre>
<p>This abstract node type is used to derive node types that can emit audio data.</p>
<h1><img src="../../Images/cube.gif" width="20" height="19" alt="cube">
<a name="Nodereference"></a>16.4 Node reference</h1>
<div class="proposed" title="Mantis 1267 new sound component">
<!-- ====================================================== -->
<h2><a name="Analyser"></a>16.4.1 Analyser</h2>
<pre class="node">Analyser : X3DSoundAnalysisNode {
SFString [in,out] description ""
SFInt32 [in,out] fftSize 2048 [0,∞)
SFInt32 [in,out] frequencyBinCount 1024 [0,∞)
SFFloat [in,out] minDecibels -100 (-∞,∞)
SFFloat [in,out] maxDecibels -30 (-∞,∞)
SFFloat [in,out] smoothingTimeConstant 0.8 [0,∞)
<!-- Related to W3C Audio API AudioNode -->
SFInt32 [in,out] numberOfInputs 0 [0,∞)
SFInt32 [in,out] numberOfOutputs 0 [0,∞)
SFInt32 [in,out] channelCount 0 [0,∞)
SFString [in,out] channelCountMode "max"
SFString [in,out] channelInterpretation "speakers"
}
</pre>
<p> The Analyser node provides real-time frequency and time-domain analysis information, without any change to the input.</p>
<p>The <i>fftSize</i> field is an unsigned long value representing the size of the FFT (<a href="https://en.wikipedia.org/wiki/Fast_Fourier_transform">Fast Fourier Transform</a>)
to be used to determine the frequency domain.</p>
<p>The <i>frequencyBinCount</i> field is an unsigned long value half that of the FFT size. This generally equates
to the number of frequency-domain data values available for analysis or visualization.</p>
<p>The <i>minDecibels</i> field is a value representing the minimum power value in the scaling range for the FFT analysis data, for conversion to unsigned byte values.</p>
<p>The <i>maxDecibels</i> field is a value representing the maximum power value in the scaling range for the FFT analysis data, for conversion to unsigned byte values.</p>
<p>The <i>smoothingTimeConstant</i> field is a value representing the averaging constant with the last analysis frame.</p>
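<p>The relationships among these fields can be sketched as follows (non-normative; the bin-frequency relation is a standard property of FFT analysis and assumes a sample rate that is not itself an Analyser field):</p>

```python
# Non-normative sketch of the Analyser field relationships.
def bin_count(fft_size):
    """frequencyBinCount is half the FFT size."""
    return fft_size // 2

def bin_frequency(i, fft_size, sample_rate):
    """Center frequency in Hz of frequency bin i (standard FFT
    analysis; sample_rate is an assumption, not an Analyser field)."""
    return i * sample_rate / fft_size

def to_unsigned_byte(power_db, min_db=-100.0, max_db=-30.0):
    """Scale an FFT power value in dB into [0, 255] using the
    minDecibels/maxDecibels range, clamping at both ends."""
    t = (power_db - min_db) / (max_db - min_db)
    return round(255 * min(1.0, max(0.0, t)))
```

<p>With the default <i>fftSize</i> of 2048 and a 44.1 kHz stream, bin 1024 corresponds to the Nyquist frequency of 22 050 Hz.</p>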
<!-- ====================================================== -->
<h2><a name="AudioBufferSource"></a>16.4.2 AudioBufferSource</h2>
<pre class="node">AudioBufferSource : X3DSoundSourceNode {
SFString [in,out] description ""
SFFloat [in,out] sampleRate 0 [0,∞)
SFInt32 [in,out] length 0 [0,∞)
SFFloat [in,out] duration 0 [0,∞)
SFInt32 [in,out] numberOfChannels 0 [0,∞)
SFFloat [in,out] detune 0 [0,∞)
SFBool [in,out] loop FALSE
SFFloat [in,out] loopStart 0 [0,∞)
SFFloat [in,out] loopEnd 0 [0,∞)
SFFloat [in,out] playbackRate 0 [0,∞)
MFFloat [in,out] buffer [] [−1,1]
<!-- Related to W3C Audio API AudioNode -->
SFInt32 [in,out] numberOfInputs 0 [0,∞)
SFInt32 [in,out] numberOfOutputs 0 [0,∞)
SFInt32 [in,out] channelCount 0 [0,∞)
SFString [in,out] channelCountMode "max"
SFString [in,out] channelInterpretation "speakers"
}
</pre>
<p>
AudioBufferSource represents an audio asset residing in memory.
</p>
</div>
<!-- ====================================================== -->
<h2><a name="AudioClip"></a>16.4.3 AudioClip</h2>
<pre class="node">AudioClip : X3DSoundSourceNode, X3DUrlObject {
SFString [in,out] description ""
SFBool [in,out] loop FALSE
SFNode [in,out] metadata NULL [X3DMetadataObject]
SFTime [in,out] pauseTime 0 (-∞,∞)
SFFloat [in,out] pitch 1.0 (0,∞)
SFTime [in,out] resumeTime 0 (-∞,∞)
SFTime [in,out] startTime 0 (-∞,∞)
SFTime [in,out] stopTime 0 (-∞,∞)
MFString [in,out] url [] [URI]
SFTime [out] duration_changed
SFTime [out] elapsedTime
SFBool [out] isActive
SFBool [out] isPaused
}
</pre>
<p>An AudioClip node specifies audio data that can be referenced
by <a href="#Sound">Sound</a> nodes.</p>
<p>The <i>description</i> field specifies a textual description
of the audio source. A browser is not required to display the <i>description</i>
field but may choose to do so in addition to playing the sound.</p>
<p>The <i>url</i> field specifies the URL from which the
sound is loaded. Browsers shall support at least the <i>wavefile</i>
format in uncompressed PCM format (see
<a href="../bibliography.html#[WAV]">[WAV]</a>). It is recommended that
browsers also support the MIDI file type 1 sound format
(see <a href="../references.html#[MIDI]">2.[MIDI]</a>) and the MP3 compressed
format (see <a href="../references.html#[I11172_1]">2.[I11172-1]</a>). MIDI files are
presumed to use the General
MIDI patch set. <a href="networking.html#URLs">9.2.1 URLs</a> contains
details on the <i>url</i> field.</p>
<p>The <i>loop, pauseTime, resumeTime, startTime,</i> and<i> stopTime</i> inputOutput fields
and the <i>elapsedTime, isActive, </i>and<i> isPaused</i> outputOnly fields, and their effects on the AudioClip node,
are discussed in detail in <a href="time.html">8 Time component</a>.
The "<i>cycle</i>" of an AudioClip is the length of time in
seconds for one playing of the audio at the specified <i>pitch</i>.</p>
<p>The <i>pitch</i> field specifies a multiplier for the
rate at which sampled sound is played. Values for the <i>pitch</i> field
shall be greater than zero<i>.</i> Changing the <i>pitch</i> field affects
both the pitch and playback speed of a sound. A <i>set_pitch</i> event
to an active AudioClip is ignored and no <i>pitch_changed</i> field
is generated. If <i>pitch</i> is set to 2.0, the sound shall be played
one octave higher than normal and played twice as fast. For a sampled
sound, the <i>pitch</i> field alters the sampling rate at which the
sound is played. The proper implementation of pitch control for MIDI
(or other note sequence sound clips) is to multiply the tempo of the
playback by the <i>pitch</i> value and adjust the MIDI Coarse Tune and
Fine Tune controls to achieve the proper pitch change.</p>
<p>A <i>duration_changed</i> event is sent whenever there
is a new value for the "normal" duration of the clip. Typically,
this will only occur when the current <i>url</i> in use changes and
the sound data has been loaded, indicating that the clip is playing
a different sound source. The duration is the length of time in seconds
for one cycle of the audio for a <i>pitch</i> set to 1.0. Changing the
<i>pitch</i> field will not trigger a <i>duration_changed</i> event.
A duration value of "−1" implies that the sound data has not
yet loaded or the value is unavailable for some reason. A <i>duration_changed</i>
event shall be generated if the AudioClip node is loaded when the X3D
file is read or the AudioClip node is added to the scene graph.</p>
<p>The <i>isActive</i> field may be used by other nodes to
determine if the clip is currently active. If an AudioClip is active,
it shall be playing the sound corresponding to the sound time (<i>i.e.</i>, in
the sound's local time system with sample 0 at time 0):</p>
<pre class="listing"> t = (now − startTime) modulo (<tt>duration</tt> / pitch)
</pre>
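<p>The sound-time formula above can be evaluated as follows (non-normative sketch; the function name is illustrative):</p>

```python
# Non-normative sketch of the AudioClip local sound time:
#     t = (now - startTime) modulo (duration / pitch)
def audioclip_sound_time(now, start_time, duration, pitch):
    """Position in the clip's local time system (sample 0 at time 0)
    for an active AudioClip; one cycle lasts duration / pitch seconds."""
    cycle = duration / pitch
    return (now - start_time) % cycle
```

<p>For instance, a 2-second clip started at time 10 and played at <i>pitch</i> 2.0 completes a cycle every second, so at time 12.5 it is half a second into its third cycle.</p>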
<div class="proposed" title="Mantis 1267 new sound component">
<!-- ====================================================== -->
<h2><a name="AudioDestination"></a>16.4.4 AudioDestination</h2>
<pre class="node">AudioDestination : X3DSoundDestinationNode {
SFString [in,out] description ""
SFInt32 [in,out] maxChannelCount 2 [0,∞)
<!-- Related to W3C Audio API AudioNode -->
SFInt32 [in,out] numberOfInputs 0 [0,∞)
SFInt32 [in,out] numberOfOutputs 0 [0,∞)
SFInt32 [in,out] channelCount 0 [0,∞)
SFString [in,out] channelCountMode "max"
SFString [in,out] channelInterpretation "speakers"
}
</pre>
<p>
AudioDestination represents the final audio destination and is what the user ultimately hears,
typically from the speakers of the user's device.
</p>
<!-- ====================================================== -->
<h2><a name="BiquadFilter"></a>16.4.5 BiquadFilter</h2>
<pre class="node">BiquadFilter : X3DSoundProcessingNode {
SFString [in,out] description ""
SFInt32 [in,out] frequency 0 [0,∞)
SFFloat [in,out] detune 0 [0,∞)
SFFloat [in,out] Q 0 [0,∞)
SFFloat [in,out] gain 0 [0,∞)
SFString [in,out] type "lowpass"
<!-- Related to W3C Audio API AudioNode -->
SFInt32 [in,out] numberOfInputs 0 [0,∞)
SFInt32 [in,out] numberOfOutputs 0 [0,∞)
SFInt32 [in,out] channelCount 0 [0,∞)
SFString [in,out] channelCountMode "max"
SFString [in,out] channelInterpretation "speakers"
}
</pre>
<p>
BiquadFilter represents different kinds of filters, tone control devices, and graphic equalizers.
</p>
<!-- ====================================================== -->
<h2><a name="ChannelMerger"></a>16.4.6 ChannelMerger</h2>
<pre class="node">ChannelMerger : X3DSoundChannelNode {
SFString [in,out] description ""
<!-- Related to W3C Audio API AudioNode -->
SFInt32 [in,out] numberOfInputs 0 [0,∞)
SFInt32 [in,out] numberOfOutputs 0 [0,∞)
SFInt32 [in,out] channelCount 0 [0,∞)
SFString [in,out] channelCountMode "max"
SFString [in,out] channelInterpretation "speakers"
}
</pre>
<p>
ChannelMerger unites different monophonic input channels into a single multichannel output.
</p>
<!-- ====================================================== -->
<h2><a name="ChannelSplitter"></a>16.4.7 ChannelSplitter</h2>
<pre class="node">ChannelSplitter : X3DSoundChannelNode {
SFString [in,out] description ""
<!-- Related to W3C Audio API AudioNode -->
SFInt32 [in,out] numberOfInputs 0 [0,∞)
SFInt32 [in,out] numberOfOutputs 0 [0,∞)
SFInt32 [in,out] channelCount 0 [0,∞)
SFString [in,out] channelCountMode "max"
SFString [in,out] channelInterpretation "speakers"
}
</pre>
<p>
ChannelSplitter separates the different channels of an audio source into a set of monophonic output channels.
</p>
<!-- ====================================================== -->
<h2><a name="Convolver"></a>16.4.8 Convolver</h2>
<pre class="node">Convolver : X3DSoundProcessingNode {
SFString [in,out] description ""
MFFloat [in,out] buffer [] [−1,1]
SFBool [in,out] normalize FALSE
<!-- Related to W3C Audio API AudioNode -->
SFInt32 [in,out] numberOfInputs 0 [0,∞)
SFInt32 [in,out] numberOfOutputs 0 [0,∞)
SFInt32 [in,out] channelCount 0 [0,∞)
SFString [in,out] channelCountMode "max"
SFString [in,out] channelInterpretation "speakers"
}
</pre>
<p>
Convolver performs a linear convolution on a given AudioBuffer, often used to achieve a reverberation effect.
</p>
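<p>The underlying operation can be sketched as a direct-form linear convolution (non-normative; practical implementations typically use FFT-based partitioned convolution for efficiency):</p>

```python
# Non-normative sketch of the linear convolution the Convolver performs.
def convolve(signal, impulse_response):
    """Convolve an input signal with an impulse response; for a room
    impulse response this produces a reverberation effect."""
    n, m = len(signal), len(impulse_response)
    out = [0.0] * (n + m - 1)
    for i, x in enumerate(signal):
        for j, h in enumerate(impulse_response):
            out[i + j] += x * h   # each input sample excites a scaled,
    return out                    # delayed copy of the impulse response
```
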
<!-- ====================================================== -->
<h2><a name="Delay"></a>16.4.9 Delay</h2>
<pre class="node">Delay : X3DSoundProcessingNode {
SFString [in,out] description ""
SFInt32 [in,out] delayTime 0 [0,∞)
<!-- Related to W3C Audio API AudioNode -->
SFInt32 [in,out] numberOfInputs 0 [0,∞)
SFInt32 [in,out] numberOfOutputs 0 [0,∞)
SFInt32 [in,out] channelCount 0 [0,∞)
SFString [in,out] channelCountMode "max"
SFString [in,out] channelInterpretation "speakers"
}
</pre>
<p>
Delay causes a time delay between the arrival of input data and subsequent propagation to the output.
</p>
<!-- ====================================================== -->
<h2><a name="DynamicsCompressor"></a>16.4.10 DynamicsCompressor</h2>
<pre class="node">DynamicsCompressor : X3DSoundProcessingNode {
SFString [in,out] description ""
SFFloat [in,out] threshold -24 (-∞,∞)
SFInt32 [in,out] knee 30 [0,∞)
SFInt32 [in,out] ratio 12 [0,∞)
SFFloat [in,out] reduction 0 [0,∞)
SFFloat [in,out] attack 0.003 [0,∞)
SFFloat [in,out] release 0.25 [0,∞)
<!-- Related to W3C Audio API AudioNode -->
SFInt32 [in,out] numberOfInputs 0 [0,∞)
SFInt32 [in,out] numberOfOutputs 0 [0,∞)
SFInt32 [in,out] channelCount 0 [0,∞)
SFString [in,out] channelCountMode "max"
SFString [in,out] channelInterpretation "speakers"
}
</pre>
<p>
DynamicsCompressor implements a dynamics compression effect,
lowering the volume of the loudest parts of the signal and raising the volume of the softest parts.
</p>
<!-- ====================================================== -->
<h2><a name="ListenerPoint"></a>16.4.11 ListenerPoint</h2>
<pre class="node">ListenerPoint : X3DAudioListenerNode {
SFBool [in] set_bind
SFString [in,out] description ""
SFBool [in,out] enabled TRUE
SFFloat [in,out] gain 1 [0,∞)
SFFloat [in,out] interauralDistance 0 [0,∞)
SFNode [in,out] metadata NULL [X3DMetadataObject]
SFRotation [in,out] orientation 0 0 1 0 [-1,1],(-∞,∞)
SFVec3f [in,out] position 0 0 10 (-∞,∞)
SFBool [in,out] trackCurrentView FALSE
SFTime [out] bindTime
SFBool [out] isBound
}
</pre>
<p>
ListenerPoint represents the position and orientation of the person listening to the audio scene.
It provides single or multiple sound channels as output.
Multiple ListenerPoint nodes can be active for sound processing, but only one can be bound as the active listening point for the user.
</p>
<!-- ====================================================== -->
<h2><a name="MicrophoneSource"></a>16.4.12 MicrophoneSource</h2>
<pre class="node">MicrophoneSource : X3DSoundSourceNode {
SFString [in,out] description ""
SFBool [in,out] enabled TRUE
SFBool [in,out] isActive FALSE
SFString [in,out] mediaDevicesid ""
MFNode [in,out] audioGraph [] [X3DChildNode]
}
</pre>
<p>
MicrophoneSource captures input from a physical microphone.
</p>
<!-- ====================================================== -->
<h2><a name="OscillatorSource"></a>16.4.13 OscillatorSource</h2>
<pre class="node">OscillatorSource : X3DSoundProcessingNode {
SFString [in,out] description ""
SFInt32 [in,out] frequency 0 [0,∞)
SFString [in,out] type "square"
SFFloat [in,out] detune 0 [0,∞)
}
</pre>
<p>
The OscillatorSource node represents an audio source generating a periodic waveform, providing a constant tone.
</p>
<p>The <i>frequency</i> field specifies the frequency of oscillation in hertz.</p>
<p>The <i>type</i> field is a string which specifies the shape of the waveform to play. It can be one of several
standard values, or "custom", in which case a PeriodicWave describes the waveform. Different waves produce different tones.
Standard values are "sine", "square", "sawtooth", "triangle" and "custom".</p>
<p>The <i>detune</i> field specifies the detuning of the oscillation in cents.</p>
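<p>The effect of <i>detune</i> can be sketched as follows (non-normative; the cents-to-ratio relation is the standard musical one, where 100 cents is one semitone and 1200 cents is one octave):</p>

```python
import math

def oscillator_frequency(frequency, detune):
    """Effective oscillation frequency in Hz: detune shifts the base
    frequency by the given number of cents (1200 cents = one octave)."""
    return frequency * 2.0 ** (detune / 1200.0)

def sine_sample(frequency, detune, t):
    """One sample of a "sine"-type oscillator at time t seconds."""
    return math.sin(2.0 * math.pi * oscillator_frequency(frequency, detune) * t)
```

<p>For example, a 440 Hz oscillator detuned by +1200 cents produces an 880 Hz tone, one octave higher.</p>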
<!-- ====================================================== -->
<h2><a name="PeriodicWave"></a>16.4.14 PeriodicWave</h2>
<pre class="node">PeriodicWave : X3DSoundProcessingNode {
SFString [in,out] description ""
SFInt32 [in,out] frequency 0 [0,∞)
SFString [in,out] type "square"
SFFloat [in,out] detune 0 [0,∞)
}
</pre>
<p>
PeriodicWave defines a periodic waveform that can be used to shape the output of an OscillatorSource.
</p>
<!-- ====================================================== -->
<h2><a name="SpatialSound"></a>16.4.15 <i>SpatialSound</i></h2>
<pre class="node">SpatialSound : X3DSoundNode {
SFFloat [in,out] coneInnerAngle 2π [0,2π]
SFFloat [in,out] coneOuterAngle 2π [0,2π]
SFFloat [in,out] coneOuterGain 0 (-∞,∞)
SFString [in,out] description ""
SFVec3f [in,out] direction 0 0 1 (-∞,∞)
SFString [in,out] distanceModel "INVERSE" ["LINEAR" "INVERSE" "EXPONENTIAL"]
SFBool [in,out] enableHRTF FALSE
SFFloat [in,out] intensity 1 [0,1]
SFVec3f [in,out] location 0 0 0 (-∞,∞)
SFFloat [in,out] maxDistance 10000 [0,∞)
SFNode [in,out] metadata NULL [X3DMetadataObject]
SFFloat [in,out] priority 0 [0,1]
SFFloat [in,out] referenceDistance 1 [0,∞)
SFFloat [in,out] rolloffFactor 1 [0,∞)
SFNode [in,out] source NULL [X3DSoundSourceNode] # and other types
SFBool [] spatialize TRUE
}
</pre>
<p>
SpatialSound represents a processing node which positions, emits and spatializes an audio stream in three-dimensional space.
</p>
<p>
The <i>direction</i>, <i>intensity</i>, <i>location</i>, <i>priority</i>, <i>source</i> and <i>spatialize</i> fields
match the corresponding field definitions of the Sound node.
</p>
<p>
The <i>referenceDistance</i> field is the reference distance used when reducing volume as the source moves farther from the listener.
</p>
<p>
The <i>rolloffFactor</i> field indicates how quickly volume is reduced as the source moves farther from the listener.
</p>
<p>
The <i>distanceModel</i> field specifies which algorithm is used to attenuate the sound
according to the distance between the audio source and the listener.
</p>
<ol type="a">
<li>
LINEAR gain model determined by
<br />
<code>1 - rolloffFactor * (distance - referenceDistance) / (maxDistance - referenceDistance)</code>
</li>
<li>
INVERSE gain model determined by
<br />
<code>referenceDistance / (referenceDistance + rolloffFactor * (max(distance, referenceDistance) - referenceDistance))</code>
</li>
<li>
EXPONENTIAL gain model determined by
<br />
<code>pow(max(distance, referenceDistance) / referenceDistance, -rolloffFactor)</code>
</li>
</ol>
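<p>The three gain models can be sketched directly from these formulas; the clamping of distance to the [<i>referenceDistance</i>, <i>maxDistance</i>] interval in the LINEAR case follows the Web Audio API PannerNode behaviour, and the function name is illustrative:</p>

```javascript
// Gain models for SpatialSound distance attenuation, per the formulas above.
// Distance is never taken smaller than referenceDistance; the LINEAR model
// additionally clamps it to maxDistance (as in the Web Audio API PannerNode).
function distanceGain(model, distance, referenceDistance, maxDistance, rolloffFactor) {
  const d = Math.max(distance, referenceDistance);
  switch (model) {
    case "LINEAR": {
      const dc = Math.min(d, maxDistance);
      return 1 - rolloffFactor * (dc - referenceDistance) / (maxDistance - referenceDistance);
    }
    case "INVERSE":
      return referenceDistance /
             (referenceDistance + rolloffFactor * (d - referenceDistance));
    case "EXPONENTIAL":
      return Math.pow(d / referenceDistance, -rolloffFactor);
    default:
      throw new Error("unknown distanceModel: " + model);
  }
}

// At the reference distance, every model yields unity gain.
console.log(distanceGain("INVERSE", 1, 10000, 10000, 1)); // 1
```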
<p>
The <i>enableHRTF</i> field specifies whether to enable Head Related Transfer Function (HRTF) auralization, if available.
</p>
<p>
The <i>maxDistance</i> field is the maximum distance between source and listener, beyond which the volume is not reduced any further.
</p>
</div>
<div class="proposed" title="Mantis 1267 new sound component">
<p>Spatial sound plays an important role in Web3D environments because of the high degree of realism it can add to a scene.
Since the Web Audio API is the most widely adopted sound engine on the Web, this component takes the steps necessary to
make X3D fully compatible with that library. X3D is thus enriched with spatial sound features that follow
the structure and the functionality of the <a href="https://www.w3.org/TR/webaudio/">Web Audio API</a>.
</p>
<p>In particular, the Web Audio API performs audio operations inside an audio context and is designed for modular routing:
an audio context represents a graph of sound nodes, and audio streams flow along the directed connections between those nodes.
</p>
<p><span class="editorsNote">
TODO better description of how all interfaces/nodes of Web Audio API are supported in X3D.
</span></p>
</div>
<!-- ====================================================== -->
<h2><a name="Sound"></a>16.4.16 Sound</h2>
<pre class="node">Sound : X3DSoundNode {
SFVec3f [in,out] direction 0 0 1 (-∞,∞)
SFFloat [in,out] intensity 1 [0,1]
SFVec3f [in,out] location 0 0 0 (-∞,∞)
SFFloat [in,out] maxBack 10 [0,∞)
SFFloat [in,out] maxFront 10 [0,∞)
SFNode [in,out] metadata NULL [X3DMetadataObject]
SFFloat [in,out] minBack 1 [0,∞)
SFFloat [in,out] minFront 1 [0,∞)
SFFloat [in,out] priority 0 [0,1]
SFNode [in,out] source NULL [X3DSoundSourceNode]
SFBool [] spatialize TRUE
}
</pre>
<p>The Sound node specifies the spatial presentation of a
sound in an X3D scene. The sound is located at a point in the local coordinate
system and emits sound in an elliptical pattern (defined by two ellipsoids).
The ellipsoids are oriented in a direction specified by the <i>direction</i>
field. The shape of the ellipsoids may be modified to provide more or
less directional focus from the location of the sound.</p>
<p>The <i>source</i> field specifies the sound source for
the Sound node. If the <i>source</i> field is not specified, the Sound
node will not emit audio. The <i>source</i> field shall specify either
an <a href="#AudioClip">AudioClip</a> node or a
<a href="texturing.html#MovieTexture">MovieTexture</a> node. If a MovieTexture node is
specified as the sound source, the MovieTexture shall refer to a movie
format that supports sound (<span class="example">EXAMPLE</span> MPEG-1 Systems, see
<a href="../references.html#[I11172_1]">ISO/IEC 11172-1</a>).</p>
<p>The <i>intensity </i>field adjusts the loudness (decibels)
of the sound emitted by the Sound node. The <i>intensity</i>
field has a value that ranges from 0.0 to 1.0 and specifies a factor
which shall be used to scale the normalized sample data of the sound
source during playback. A Sound node with an intensity of 1.0 shall
emit audio at its maximum loudness (before attenuation), and a Sound
node with an intensity of 0.0 shall emit no audio. Between these values,
the loudness should increase linearly from a -20 dB change approaching
an <i>intensity</i> of 0.0 to a 0 dB change at an <i>intensity</i> of
1.0.</p>
<p class="Example">NOTE This is different
from the traditional definition of intensity with respect to sound;
see <a href="../bibliography.html#[SNDA]">[SNDA]</a>.</p>
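<p>The intensity-to-loudness mapping described above can be sketched as follows; the conversion between decibels and a linear amplitude factor (amplitude = 10<sup>dB/20</sup>) is standard audio practice rather than part of this clause, and the function names are illustrative:</p>

```javascript
// Loudness change (in dB) for a given intensity, interpolating linearly
// from -20 dB as intensity approaches 0.0 up to 0 dB at intensity 1.0.
// Note: at intensity exactly 0.0 the Sound node emits no audio at all.
function intensityToDb(intensity) {
  return -20 * (1 - intensity);
}

// Linear amplitude scale factor for a dB change (standard conversion).
function dbToAmplitude(db) {
  return Math.pow(10, db / 20);
}

console.log(intensityToDb(1.0));  // 0   (maximum loudness, no change)
console.log(intensityToDb(0.5));  // -10
console.log(dbToAmplitude(-20));  // 0.1
```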
<p>The <i>priority</i> field provides a hint for the browser
to choose which sounds to play when there are more active Sound nodes
than can be played at once due to either limited system resources or
system load. <a href="#Concepts">16.2 Concepts</a> describes a
recommended algorithm for determining which sounds to play under such
circumstances. The <i>priority</i> field ranges from 0.0 to 1.0, with
1.0 being the highest priority and 0.0 the lowest priority.</p>
<p>The <i>location</i> field determines the location of the
sound emitter in the local coordinate system. A Sound node's output
is audible only if it is part of the traversed scene. Sound nodes that
are descended from <a href="navigation.html#LOD">LOD</a>,
<a href="grouping.html#Switch">Switch</a>, or any grouping or prototype node that
disables traversal (<i>i.e.</i>,<i> </i>drawing) of its children are not
audible unless they are traversed. If a Sound node is disabled by a
Switch or LOD node, and later it becomes part of the traversal again,
the sound shall resume where it would have been had it been playing
continuously.</p>
<p>The Sound node has an inner ellipsoid that defines a volume
of space in which the maximum level of the sound is audible. Within
this ellipsoid, the normalized sample data is scaled by the <i>intensity</i>
field and there is no attenuation. The inner ellipsoid is defined by
extending the <i>direction</i> vector through the <i>location</i>. The
<i>minBack</i> and <i>minFront</i> fields specify distances behind and
in front of the <i>location</i> along the <i>direction</i> vector respectively.
The inner ellipsoid has one of its foci at <i>location</i> (the second
focus is implicit) and intersects the <i>direction</i> vector at <i>minBack</i>
and <i>minFront</i>.</p>
<p>The Sound node has an outer ellipsoid that defines a volume
of space that bounds the audibility of the sound. No sound can be heard
outside of this outer ellipsoid. The outer ellipsoid is defined by extending
the <i>direction</i> vector through the <i>location</i>. The <i>maxBack</i>
and <i>maxFront </i>fields specify distances behind and in front of
the <i>location</i> along the <i>direction</i> vector respectively.
The outer ellipsoid has one of its foci at <i>location</i> (the second
focus is implicit) and intersects the <i>direction</i> vector at <i>maxBack</i>
and <i>maxFront</i>.</p>
<p>The <i>minFront</i>, <i>maxFront</i>, <i>minBack</i>,
and <i>maxBack</i> fields are defined in local coordinates, and shall
be greater than or equal to zero. The <i>minBack</i> field shall be
less than or equal to <i>maxBack</i>, and <i>minFront</i> shall
be less than or equal to <i>maxFront.</i> The ellipsoid parameters
are specified in the local coordinate system but the ellipsoids' geometry
is affected by ancestors' transformations.</p>
<p>Between the two ellipsoids, there shall be a linear attenuation
ramp in loudness, from 0 dB at the minimum ellipsoid to -20 dB at the
maximum ellipsoid:</p>
<pre class="listing"> attenuation = -20 × (d' / d")
</pre>
<p>where d' is the distance along the location-to-viewer
vector, measured from the transformed minimum ellipsoid boundary to
the viewer, and d" is the distance along the location-to-viewer
vector from the transformed minimum ellipsoid boundary to the transformed
maximum ellipsoid boundary (see
<a href="#f-Soundnodegeometry">Figure 16.2</a>).</p>
<div class="CenterDiv">
<a name="f-Soundnodegeometry"></a>
<img src="../../Images/Sound.gif" alt="Sound Node Geometry" width="408" height="289">
<p class="FigureCaption">Figure 16.2 — Sound Node Geometry</p>
</div>
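<p>The attenuation ramp between the two ellipsoids can be sketched as follows; d′ and d″ are as defined above, and the conversion from decibels to a linear gain factor is standard audio practice rather than part of this clause (function names are illustrative):</p>

```javascript
// Linear attenuation ramp between the inner and outer ellipsoids:
//   attenuation = -20 * (dPrime / dDoublePrime)   (in dB)
// where dPrime is the distance from the transformed inner-ellipsoid boundary
// to the viewer, and dDoublePrime is the distance from the inner boundary to
// the outer boundary, both along the location-to-viewer vector.
function attenuationDb(dPrime, dDoublePrime) {
  return -20 * (dPrime / dDoublePrime);
}

// Equivalent linear gain factor (standard dB-to-amplitude conversion).
function attenuationGain(dPrime, dDoublePrime) {
  return Math.pow(10, attenuationDb(dPrime, dDoublePrime) / 20);
}

console.log(attenuationDb(0, 10));    // 0   (at the inner ellipsoid)
console.log(attenuationDb(10, 10));   // -20 (at the outer ellipsoid)
console.log(attenuationGain(10, 10)); // 0.1
```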
<p>The <i>spatialize</i> field specifies if the sound is
perceived as being directionally located relative to the viewer. If
the <i>spatialize </i>field is <code>TRUE</code>
and the viewer is located between the transformed inner and outer ellipsoids,
the viewer's direction and the relative location of the Sound node should
be taken into account during playback. Details outlining the minimum
required spatialization functionality can be found in
<a href="#Soundattenandspatial">16.2.2 Sound attenuation and spatialization</a>.
If the <i>spatialize</i> field is <code>FALSE</code>,
directional effects are ignored, but the ellipsoid dimensions and
<i>intensity</i> will still affect the loudness of the sound. If the sound
source is multi-channel (<span class="example">EXAMPLE</span> stereo), the source
shall retain its channel separation during playback.</p>
<div class="proposed" title="Mantis 1267 new sound component">
<!-- ====================================================== -->
<h2><a name="StreamAudioDestination"></a>16.4.17 StreamAudioDestination</h2>
<pre class="node">StreamAudioDestination : X3DSoundDestinationNode {
SFString [in,out] description ""
MFFloat [in,out] stream [] [−1,1]
<!-- Related to W3C Audio API AudioNode -->
SFInt32 [in,out] numberOfInputs 0 [0,∞)
SFInt32 [in,out] numberOfOutputs 0 [0,∞)
SFInt32 [in,out] channelCount 0 [0,∞)
SFString [in,out] channelCountMode "max"
SFString [in,out] channelInterpretation "speakers"
}
</pre>
<p>
StreamAudioDestination is an audio destination representing a MediaStream with a single MediaStreamTrack whose kind is "audio".
</p>
<!-- ====================================================== -->
<h2><a name="StreamAudioSource"></a>16.4.18 StreamAudioSource</h2>
<pre class="node">StreamAudioSource : X3DSoundSourceNode {
SFString [in,out] description ""
MFFloat [in,out] mediaStream [] [−1,1]
<!-- Related to W3C Audio API AudioNode -->
SFInt32 [in,out] numberOfInputs 0 [0,∞)
SFInt32 [in,out] numberOfOutputs 0 [0,∞)
SFInt32 [in,out] channelCount 0 [0,∞)
SFString [in,out] channelCountMode "max"
SFString [in,out] channelInterpretation "speakers"
}
</pre>
<p>
StreamAudioSource operates as an audio source whose media is received from a MediaStream
obtained using the WebRTC or Media Capture and Streams APIs.
This media source might originate from a local microphone or from a sound-processing channel provided by a remote peer on a WebRTC call.
</p>
<!-- ====================================================== -->
<h2><a name="WaveShaper"></a>16.4.19 WaveShaper</h2>
<pre class="node">WaveShaper : X3DSoundProcessingNode {
SFString [in,out] description ""
MFFloat [in,out] curve [] [−1,1]
SFString [in,out] oversample "none" ["none" "2x" "4x"]
<!-- Related to W3C Audio API AudioNode -->
SFInt32 [in,out] numberOfInputs 0 [0,∞)
SFInt32 [in,out] numberOfOutputs 0 [0,∞)
SFInt32 [in,out] channelCount 0 [0,∞)
SFString [in,out] channelCountMode "max"
SFString [in,out] channelInterpretation "speakers"
}
</pre>
<p>
WaveShaper represents a nonlinear distorter that applies a wave-shaping distortion curve to the signal.
The <i>curve</i> field holds the sample values of the shaping curve, and the <i>oversample</i> field specifies
what type of oversampling (if any) is applied when shaping the signal.
</p>
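<p>A wave-shaper maps each input sample through the curve by treating the curve as a lookup table spanning the input range [−1, 1], with linear interpolation between adjacent curve points. The sketch below follows the Web Audio API WaveShaperNode behaviour; the function name is illustrative:</p>

```javascript
// Apply a wave-shaping curve to one input sample in [-1, 1].
// The curve spans the input range; values between adjacent curve points
// are linearly interpolated (mirroring the Web Audio API WaveShaperNode).
function shapeSample(curve, x) {
  const n = curve.length;
  if (n === 0) return x;          // no curve: pass the signal through
  if (n === 1) return curve[0];
  // Map x in [-1, 1] onto the index range [0, n - 1], clamping at the ends.
  const pos = ((x + 1) / 2) * (n - 1);
  const i = Math.max(0, Math.min(n - 2, Math.floor(pos)));
  const frac = Math.min(1, Math.max(0, pos - i));
  return curve[i] + frac * (curve[i + 1] - curve[i]);
}

// Identity curve: output equals input.
console.log(shapeSample([-1, 0, 1], 0.5)); // 0.5
// Hard-clipping curve: the input maps to its sign.
console.log(shapeSample([-1, -1, 1, 1], 0.9)); // 1
```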
</div>
<h1><img class="cube" src="../../Images/cube.gif" alt="cube" width="20" height="19">
<a name="SupportLevels"></a>16.5 Support levels</h1>
<p>The Sound component provides one level of support as
specified in <a href="#t-SupportLevels">Table 16.2</a>.</p>
<div class="CenterDiv">
<p class="TableCaption">
<a name="t-SupportLevels"></a>Table 16.2 — Sound component support levels</p>
<table>
<tr>
<th><b>Level</b></th>
<th>Prerequisites</th>
<th>Nodes/Features</th>
<th>Support</th>
</tr>
<tr>
<td>
<p align="center"><b>1</b></td>
<td>Core 1<br />
Time 1</td>
<td> </td>
<td> </td>
</tr>
<tr>
<td> </td>
<td> </td>
<td><i>X3DSoundSourceNode </i>(abstract)</td>
<td>n/a</td>
</tr>
<tr>
<td> </td>
<td> </td>
<td><i>X3DSoundNode </i>(abstract)</td>
<td>n/a</td>
</tr>
<tr>
<td> </td>
<td> </td>
<td>AudioClip</td>
<td>All fields fully supported.</td>
</tr>
<tr>
<td> </td>
<td> </td>
<td>Sound</td>
<td>All fields fully supported.</td>
</tr>
<tr class="proposed" title="Mantis 1267 new sound component">
<td>
<p align="center"><b>2</b></td>
<td>Core 1<br />
Time 1</td>
<td> </td>
<td> </td>
</tr>
<tr class="proposed" title="Mantis 1267 new sound component">
<td> </td>
<td> </td>
<td>All level 1 Sound nodes</td>
<td>All fields fully supported.</td>
</tr>
<tr class="proposed" title="Mantis 1267 new sound component">
<td> </td>
<td> </td>
<td><i>X3DSoundAnalysisNode</i>, <i>X3DSoundChannelNode</i>, <i>X3DSoundDestinationNode</i>, <i>X3DSoundProcessingNode</i> </td>
<td>All fields fully supported.</td>
</tr>
<tr class="proposed" title="Mantis 1267 new sound component">
<td> </td>
<td> </td>
<td>
Analyser,
AudioBufferSource,
AudioDestination,
BiquadFilter,
ChannelMerger,
ChannelSplitter,
Convolver,
Delay,
DynamicsCompressor,
ListenerPoint,
MicrophoneSource,
OscillatorSource,
PeriodicWave,
SpatialSound,
StreamAudioDestination,
StreamAudioSource,
WaveShaper
</td>
<td>All fields fully supported.</td>
</tr>
</table>
</div>
<p>
<img class="x3dbar" src="../../Images/x3dbar.png" alt="--- X3D separator bar ---" width="430" height="23"></p>
</body>
</html>