[x3d-public] X3D agenda 2 OCT 2020: shadows, audio concepts

Don Brutzman brutzman at nps.edu
Fri Oct 2 07:05:20 PDT 2020


Review meeting today.

[1] Web3D Teleconference Information
      https://www.web3d.org/member/teleconference-information

> Please use the following link for all Web3D Consortium Meetings.
> 
> Join URL: https://us02web.zoom.us/j/81634670698?pwd=a1VPeU5tN01rc21Oa3hScUlHK0Rxdz09

---

1. Web3D Outreach and Web3D 2020 Conference

a. X3D in a Box, Anita

[1.0] ref

b. Conference registration is FREE and open; the schedule goes online over the next week

[b] Web3D 2020 Conference registration
     https://web3d.siggraph.org

With free registration, and with all conference content going online, we may keep registration open... indefinitely.  Interesting opportunities to think about.

---

2. X3D Shadows

For the new Shape component and X3D4 support of glTF, we think that
- all the node signatures and field definitions work,
- prose definitions need some polishing by Dick and Don,
- good progress has been achieved on validation and examples.

Question: do we now support shadows satisfactorily?  Are any more fields needed for scoping (turning on/off) shadows in a scene?

---

3. X3D4 Audio and Sound

Excellent progress towards finalizing the Sound component.

[3.1] [x3d-public] X3D4 Sound meeting 30 SEP 2020: Web3D 2020 preparations, Gain and ChannelSelector nodes,
       avoiding channel indices via parent-child MFNode field relationships
       https://www.web3d.org/mailman/private/x3d-public_web3d.org/2020-October/013721.html

[3.2] [x3d-public] X3D4 Sound meeting 30 SEP 2020: (part 2)
       https://www.web3d.org/mailman/private/x3d-public_web3d.org/2020-October/013722.html

Definition for review:

> We need an X3D definition for the term "audio graph".  Suggested draft:
> 
> * An /audio graph/ is a collection of nodes structured to process audio inputs and outputs
>    in a manner that is constrained to match the structure allowed by the Web Audio API.
> 
> We have defined all of the new nodes (beyond Sound, SpatialSound and AudioClip) to match the terms and capabilities of the Web Audio API.
> 
> This means that a collection of the new nodes, which together create and process sound, produces a result that serves as input to our Sound and SpatialSound nodes.  In combination, the output is similar to a computational version of a simple AudioClip node: it is a computationally created source, whereas AudioClip is prerecorded.
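
A minimal sketch of such an audio graph, written directly against the Web Audio API that the new nodes are defined to match (illustrative only, not spec text; the X3D node names are not used here, and the correspondence of the gain stage to the proposed Gain node is an assumption):

// Audio graph: source -> processing -> destination, structured as the
// Web Audio API allows.
const context = new AudioContext();

// Source of sound: an oscillator stands in for an audio file or microphone.
const source = context.createOscillator();

// Processing stage: simple gain control (assumed here to be similar in role
// to the proposed X3D Gain node).
const gain = context.createGain();
gain.gain.value = 0.5;

// Connect the graph and start the computationally created source.
source.connect(gain);
gain.connect(context.destination);
source.start();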

Audio and sound concept summary:

> ========================================
> Basic stages for flow of sound, from source to destination:
> 
> a. Sources of sound (perhaps an audio file or MicrophoneSource, perhaps signal processing of channels in an audio graph),
> 
> b. X3D Sound or SpatialSound node (defining the location, direction, and characteristics of expected sound production in virtual 3D space),
> 
> c. Propagation (attenuation model, possibly modified by AcousticProperties on surrounding geometry),
> 
> d. Reception point (avatar "ears" or a recordable listening point at some location and direction that "hears" the result, with left-right pan and spatialization).
> ========================================
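
For comparison, here is a hedged Web Audio API sketch of stages a-d above (illustrative only; SpatialSound carries further fields, and the AcousticProperties interaction with surrounding geometry is not modeled here):

const ctx = new AudioContext();

// a. Source of sound: an oscillator stands in for an audio file or microphone stream.
const src = ctx.createOscillator();

// b. Location, direction and characteristics of sound production in 3D space
//    (the role played by X3D Sound / SpatialSound).
const panner = ctx.createPanner();
panner.panningModel = 'HRTF';
panner.positionX.value = 2;      // source placed 2 units to the listener's right
panner.orientationZ.value = -1;  // facing back toward the origin

// c. Propagation: a distance-based attenuation model.
panner.distanceModel = 'inverse';
panner.refDistance = 1;

// d. Reception point: the listener ("avatar ears") at the origin; the API
//    applies left-right pan and spatialization relative to this point.
ctx.listener.positionX.value = 0;

src.connect(panner);
panner.connect(ctx.destination);
src.start();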

---

Be there or be square!  8)

all the best, Don
-- 
Don Brutzman  Naval Postgraduate School, Code USW/Br       brutzman at nps.edu
Watkins 270,  MOVES Institute, Monterey CA 93943-5000 USA   +1.831.656.2149
X3D graphics, virtual worlds, navy robotics http://faculty.nps.edu/brutzman


