<div dir="ltr">webAudio API shows AudioNode being constructed with <strike>AudioNode</strike> AudioContext as<br>parameter, and methods show it can do an outputOnly on <strike>AudioNode</strike>.AudioContext</div><br><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Jul 22, 2020 at 7:16 AM GPU Group <<a href="mailto:gpugroup@gmail.com">gpugroup@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">> 5. Am completely lost by multiple entries of "Heritage from AudioNode" - did we miss an abstract node type?<br>
Hypothesis: heritage == inheritance == derived from
<a href="https://developer.mozilla.org/en-US/docs/Web/API/Web_Audio_API" rel="noreferrer" target="_blank">https://developer.mozilla.org/en-US/docs/Web/API/Web_Audio_API</a><br>
<a href="https://developer.mozilla.org/en-US/docs/Web/API/AudioNode" rel="noreferrer" target="_blank">https://developer.mozilla.org/en-US/docs/Web/API/AudioNode</a><br>
webAudio API shows AudioNode being constructed with AudioNode as
parameter, and methods show it can do an outputOnly on AudioNode.
Hypothesis: this is a containment pattern, and the readOnly /
outputOnly mode is so the AudioContext can't be changed on an
AudioNode after it's constructed.
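
A quick sketch of that pattern in browser JavaScript, using the
standard Web Audio constructors (variable names are mine):

    // the context is supplied once, at construction time...
    const ctx = new AudioContext();
    const gain = new GainNode(ctx, { gain: 0.5 });

    // ...and afterwards is exposed only as a read-only attribute
    console.log(gain.context === ctx);  // true
    // gain.context = otherCtx;  // ignored (TypeError in strict mode)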
<br>
If so, then what's an equivalent design pattern in declarative X3D?<br>
<br>
Hypothesis: Parent-child<br>
AudioContext (or node standing-in-for it )<br>
children (or .connect chain) [<br>
AudioNode1<br>
AudioNode2<br>
AudioNode3<br>
]<br>
In browser code, when the browser traverses the scenegraph and hits an
AudioContext it would push that context onto a stack, and when it hits
an AudioNode for the first time, it would use the AudioContext on the
stack to create and connect the AudioNode.
If someone DEF/USEs an AudioNode in 2 AudioContexts, it would have the
same field parameters on startup but a different context internally.
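
A rough sketch of that traversal in JavaScript, assuming each
scenegraph node exposes a children array and an internal Map;
createAndConnect is a hypothetical helper, not from any draft:

    const contextStack = [];

    function traverse(node) {
        if (node.isAudioContext) {
            contextStack.push(node);          // entering an audio scope
            node.children.forEach(traverse);
            contextStack.pop();               // leaving it
        } else if (node.isAudioNode) {
            const ctx = contextStack[contextStack.length - 1];
            // DEF/USE: same fields, but one internal node per enclosing context
            if (!node.internal.has(ctx)) {
                node.internal.set(ctx, createAndConnect(ctx, node));
            }
            node.children.forEach(traverse);
        } else {
            node.children.forEach(traverse);
        }
    }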
-Doug Sanden

On Tue, Jul 21, 2020 at 11:56 PM Don Brutzman <brutzman@nps.edu> wrote:
>
> Some more comments for group resolution. Most nodes have definitions and interfaces in github.
>
> We still have a number of gaps. If we get the biggest sorted out, and have up-to-date field definitions, then am hoping we have a sufficiently mature "work in progress" (backed up by Web Audio API CR) for draft publication.
>
> Look below for 4..8, kept in order for simplest review and update. See you Wednesday.
>
> -----
>
> On 7/21/2020 10:58 AM, Don Brutzman wrote:
> > Efi, am continuing to update specification. Am using
> >
> > AnalysisNodes01_07_2020.pdf
> > Report01_07_2020updated.pdf
> >
> > Some gaps on my end, please help:
> >
> > ----
> >
> > 1. Not finding description but am finding interfaces for
> >
> > X3DAudioListenerNode
> > X3DSoundAnalysisNode
> > X3DSoundChannelNode
> > X3DSoundDestinationNode
> > X3DSoundProcessingNode
> >
> > Not finding description or interfaces for
> >
> > AudioContext
> > BinauralListenerPoint
> > MicrophoneSource
> > VirtualMicrophoneSource
> >
> > ----
> >
> > 2. Need resolution of comments in
> >
> > 16.3.7 X3DSoundSourceNode
> >
> > TODO do these fields go here or elsewhere in hierarchy?
> > SFNode [in,out] transform NULL [Transform]
> > SFNode [in,out] panner NULL [Panner]
> > SFNode [in,out] filter NULL [BiquadFilter]
> > SFNode [in,out] delay NULL [Delay]
> >
> > ----
> >
> > 3. Inheritance questions
> >
> > In several cases you have inheritance such as
> >
> > BiquadFilter : SoundProcessingGroup
> >
> > What does SoundProcessingGroup correspond to?
> >
> > ----
> >
> > have most interfaces added in github, please review.
>
> -----
>
> 4. ListenerPoint and BinauralListenerPoint.
>
> a. The following fields should be SFVec3f or SFRotation for type safety. Think about animation: we want to be able to use PositionInterpolator and OrientationInterpolator (for example) to animate these points.
>
> SFFloat [in,out] positionX 0 (-∞,∞)
> SFFloat [in,out] positionY 0 (-∞,∞)
> SFFloat [in,out] positionZ 0 (-∞,∞)
> SFFloat [in,out] forwardX 0 (-∞,∞)
> SFFloat [in,out] forwardY 0 (-∞,∞)
> SFFloat [in,out] forwardZ -1 (-∞,∞)
> SFFloat [in,out] upX 0 (-∞,∞)
> SFFloat [in,out] upY 1 (-∞,∞)
> SFFloat [in,out] upZ 0 (-∞,∞)
>
> Also note that if we are treating ListenerPoint similarly to Viewpoint, we do not need to specify the upDirection vector. Viewpoint navigation already knows "up" since that is the +Y axis for the overall scene, as used by NavigationInfo.
>
> Suggested interface, matching X3DViewpointNode:
>
> SFRotation [in,out] orientation 0 0 1 0 [-1,1],(-∞,∞)
> SFVec3f [in,out] position 0 0 10 (-∞,∞)
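>
> For reference, a browser would decompose those compound fields back onto the Web Audio AudioListener's per-axis AudioParams, something like this sketch (the SFRotation-to-vector math is elided into a hypothetical rotate helper):
>
>     const listener = audioCtx.listener;
>     // SFVec3f position maps directly onto three AudioParams
>     listener.positionX.value = position.x;
>     listener.positionY.value = position.y;
>     listener.positionZ.value = position.z;
>     // SFRotation orientation yields the forward and up vectors
>     const forward = rotate(orientation, [0, 0, -1]);  // hypothetical helper
>     const up      = rotate(orientation, [0, 1, 0]);
>     listener.forwardX.value = forward[0];
>     listener.forwardY.value = forward[1];
>     listener.forwardZ.value = forward[2];
>     listener.upX.value = up[0];
>     listener.upY.value = up[1];
>     listener.upZ.value = up[2];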
>
> b. Next. Looking at interfaces,
>
> ==============================================
> BinauralListenerPoint : X3DAudioListenerNode {
> or
> ListenerPoint : X3DAudioListenerNode {
> SFBool [in] set_bind
> SFString [in,out] description ""
> SFBool [in,out] enabled TRUE
> SFFloat [in,out] gain 1 [0,∞)
> SFNode [in,out] metadata NULL [X3DMetadataObject]
> SFRotation [in,out] orientation 0 0 1 0 [-1,1],(-∞,∞)
> SFVec3f [in,out] position 0 0 10 (-∞,∞)
> # SFBool [in,out] isViewpoint TRUE # TODO needed? rename?
> SFTime [out] bindTime
> SFBool [out] isBound
> }
>
> ListenerPoint represents the position and orientation of the person listening to the audio scene.
> It provides single or multiple sound channels as output.
> or
> BinauralListenerPoint represents the position and orientation of the person listening to the audio scene, providing binaural output.
> ==============================================
>
> Can BinauralListenerPoint be handled equivalently by ListenerPoint? The output from this node is implicit and so no separate typing of output stream is needed. The main difference is the separation distance of the two ears:
>
> [1] Wikipedia: Binaural
> https://en.wikipedia.org/wiki/Binaural
>
> [2] Wikipedia: Sound localization
> https://en.wikipedia.org/wiki/Sound_localization
>
> To keep a separate node, we would need to define an interauralDistance value. For specification context, I think that will be a necessary parameter for WebXR headsets.
>
> Let's discuss. If an interauralDistance field seems sensible, we might simply add it to ListenerPoint with default value of 0. Does that sound OK?
>
> SFFloat [in,out] interauralDistance 0 [0,∞)
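>
> For scale: with an interauralDistance of 0.17 m (a typical adult head) and speed of sound ~343 m/s, the largest interaural time difference a renderer could derive from this field is roughly 0.17 / 343 ≈ 0.0005 s, i.e. about half a millisecond.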
>
> I think we can safely omit BinauralListenerPoint as an unnecessary node.
>
> c. If we do not need BinauralListenerPoint then we might not need intermediate abstract interface X3DAudioListenerNode... though I recall we had some discussion of other potential future listeners. Opinions please.
>
> d. isViewpoint deserves discussion. Draft prose says
>
> "isViewpoint specifies if the listener position is the viewpoint of camera. If the isViewpoint field is FALSE, the user uses the other fields to determine the listener position."
>
> Let's list use cases, and discuss please:
>
> d.1. If the base functionality desired is to listen at a specific location in the scene, stationary or animated, then we're done.
>
> d.2. If the functionality desired is simply to follow a user's view with no audio processing involved, then there is no need for ListenerPoint since Sound or SpatialSound can be used directly.
>
> d.3. If the audio from a specific Viewpoint is needed, then the ListenerPoint might be a child field of X3DViewpointNode, just as we are now connecting NavigationInfo to viewpoints. Such a ListenerPoint might also be animated simultaneously with a given Viewpoint, simple to do.
>
> d.4. If the audio from the current viewing position is desired, then we might improve clarity of this field's semantics by renaming it as "trackCurrentView". This permits author creation of multiple ListenerPoint nodes for generating audio chains of different effects on sound received at the current user location.
>
> Let's go with "trackCurrentView" for now - improved name candidates welcome. Let's definitely avoid "isViewpoint" because that is confusing and seems to conflate purpose of different nodes.
>
> SFBool [in,out] trackCurrentView FALSE
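>
> One way to read those semantics in a browser implementation (a sketch; the helper names are hypothetical):
>
>     function updateListenerPose(listenerPoint, audioCtx, currentView) {
>         // trackCurrentView TRUE: follow the user's camera each frame;
>         // FALSE: use the node's own position/orientation fields
>         const pose = listenerPoint.trackCurrentView
>             ? currentView.pose
>             : poseFrom(listenerPoint.position, listenerPoint.orientation);
>         applyToAudioListener(audioCtx.listener, pose);
>     }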
>
> ----
>
> 5. Am completely lost by multiple entries of "Heritage from AudioNode" - did we miss an abstract node type?
>
> ----
>
> 6. If VirtualMicrophoneSource is a virtual microphone source, then isn't this the same as ListenerPoint?
>
> Is there a different definition? Can we omit this node?
>
> ----
>
> 7. What are the fields for MicrophoneSource?
>
> ----
>
> 8. SpatialSound
>
> Similarly changed:
> SFFloat [in,out] positionX 0 (-∞,∞)
> SFFloat [in,out] positionY 0 (-∞,∞)
> SFFloat [in,out] positionZ 0 (-∞,∞)
> SFFloat [in,out] orientationX 1 (-∞,∞)
> SFFloat [in,out] orientationY 0 (-∞,∞)
> SFFloat [in,out] orientationZ 0 (-∞,∞)
>
> to
>
> SFVec3f [in,out] direction 0 0 1 (-∞,∞)
> SFFloat [in,out] intensity 1 [0,1]
> SFVec3f [in,out] location 0 0 0 (-∞,∞)
>
> matching Sound node.
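>
> For comparison, those compound fields map straight onto the Web Audio PannerNode's per-axis AudioParams (sketch):
>
>     const panner = new PannerNode(audioCtx);
>     panner.positionX.value = location.x;      // SFVec3f location
>     panner.positionY.value = location.y;
>     panner.positionZ.value = location.z;
>     panner.orientationX.value = direction.x;  // SFVec3f direction
>     panner.orientationY.value = direction.y;
>     panner.orientationZ.value = direction.z;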
>
> Potential problem: direction vector is hard to animate... typically if changed orientation is needed, then it is placed in a parent Transform, so we probably can leave it alone.
>
> For SpatialSound, the "gain" field should be "intensity" in order to match Sound node.
>
> Am avoiding abbreviations. Precise purpose of referenceDistance isn't yet clear to me.
>
> I'm not clear about cone parameters... for now, changed degrees to radians. Please explain further, got diagram?
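>
> For reference, in the Web Audio API the cone is defined by PannerNode's coneInnerAngle and coneOuterAngle (both in degrees there) plus coneOuterGain: full gain inside the inner cone, coneOuterGain outside the outer cone, and a linear ramp between the two. A sketch:
>
>     panner.coneInnerAngle = 40;   // degrees in Web Audio; the X3D draft now uses radians
>     panner.coneOuterAngle = 90;
>     panner.coneOuterGain  = 0.2;  // gain applied outside the outer cone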
>
> If EQUAL_POWER is simple gain, is that the same as Sound node spatialize field?
>
> Is there a way to specify HRTF, or (presumably) is that part of browser configuration? Those might be considered Personal Identifying Information (PII) and so am in no hurry to support that; might be part of WebXR.
>
> Perhaps HRTF should be a simple boolean, in combination with the spatialize field. Seems simpler and sufficient.
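>
> In the Web Audio API that choice is already a two-value enumeration on PannerNode, which supports the boolean reading (sketch):
>
>     // a spatialize-style boolean selects between the two panning models
>     panner.panningModel = spatialize ? 'HRTF' : 'equalpower';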
>
> ----
>
> 9. Still needed, perhaps distilled from paper or Web Audio API?
>
> 16.2.3 Sound effects processing
> Sound streams can be manipulated by a variety of sound effects...
>
> ----
>
> all the best, Don
> --
> Don Brutzman Naval Postgraduate School, Code USW/Br brutzman@nps.edu
> Watkins 270, MOVES Institute, Monterey CA 93943-5000 USA +1.831.656.2149
> X3D graphics, virtual worlds, navy robotics http://faculty.nps.edu/brutzman
>
> _______________________________________________
> x3d-public mailing list
> x3d-public@web3d.org
> http://web3d.org/mailman/listinfo/x3d-public_web3d.org
</blockquote></div>