Best Regards,
Anita Havele

Begin forwarded message:

From: "Bergstrom, Aaron via AI" <ai@web3d.org>
Date: December 15, 2025 at 2:47:08 PM PST
To: ai@web3d.org
Cc: "Bergstrom, Aaron" <aaron.bergstrom@und.edu>
Subject: [AI] Monthly X3D-AI Meeting
Reply-To: Interaction of AI with Web3D Technologies <ai@web3d.org>

All,

Just a reminder that we have an AI Working Group meeting this week.

It will probably be the last one we hold at 6am Pacific.

I will put Gaussian splats on the agenda again, along with generative AI, and we can discuss the link to the Meta YouTube presentation that was mentioned in the MSF AI group today.

Monthly AI Working Group Meeting
Wed, Dec 17th
6am Pacific / 9am Eastern
https://us02web.zoom.us/j/86225731225?pwd=aUh3twfhNUaacXLyJolEof1tsZXSyA.1

Aaron

From: x3d-public <x3d-public-bounces@web3d.org> on behalf of GPU Group via x3d-public
Sent: Sunday, November 30, 2025 11:50 AM
To: x3d-ai@web3d.org
Cc: GPU Group <gpugroup@gmail.com>; Extensible 3D (X3D) Graphics public discussion <x3d-public@web3d.org>
Subject: Re: [x3d-public] Gaussian splats

Hypothesis: splat rendering could be done with a Proto whose first node is a Shape; a PointSet geometry node whose attrib field contains a FloatVertexAttribute node (31.4.2) carrying one float per vertex for scale; a programmable shader to render the points as individually scaled 2D rectangles; a Script node to parse the .ply file into the PointSet; and a new node type, FileLoader, to load the .ply file for parsing. A sketch of such a Proto follows below.
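
Here is a minimal sketch of what that Proto might look like in the X3D XML encoding. FloatVertexAttribute and ComposedShader are standard X3D nodes; FileLoader is the proposed new node, and its fields, the Script interface, and the splat.vs / splat.fs shader files are illustrative assumptions, not working code.

<!-- Sketch only: FileLoader is a proposed node, not part of X3D 4.0;
     its data_changed field, the Script fields, and the shader file names
     are hypothetical placeholders. -->
<ProtoDeclare name='SplatCloud'>
  <ProtoInterface>
    <field accessType='initializeOnly' type='MFString' name='url'/>
  </ProtoInterface>
  <ProtoBody>
    <Shape>
      <Appearance>
        <ComposedShader language='GLSL'>
          <ShaderPart type='VERTEX' url='"splat.vs"'/>
          <ShaderPart type='FRAGMENT' url='"splat.fs"'/>
        </ComposedShader>
      </Appearance>
      <PointSet>
        <Coordinate DEF='Centers'/>
        <Color DEF='Colors'/>
        <!-- one float per vertex: the per-splat scale read by splat.vs -->
        <FloatVertexAttribute DEF='Scales' name='splatScale' numComponents='1'/>
      </PointSet>
    </Shape>
    <!-- hypothetical loader node: fetches the raw .ply payload -->
    <FileLoader DEF='Loader'>
      <IS><connect nodeField='url' protoField='url'/></IS>
    </FileLoader>
    <!-- parses the .ply payload and emits per-vertex arrays -->
    <Script DEF='Parser' url='"PlyParser.js"'>
      <field accessType='inputOnly' type='SFString' name='set_data'/>
      <field accessType='outputOnly' type='MFVec3f' name='points_changed'/>
      <field accessType='outputOnly' type='MFColor' name='colors_changed'/>
      <field accessType='outputOnly' type='MFFloat' name='scales_changed'/>
    </Script>
    <ROUTE fromNode='Loader' fromField='data_changed' toNode='Parser' toField='set_data'/>
    <ROUTE fromNode='Parser' fromField='points_changed' toNode='Centers' toField='set_point'/>
    <ROUTE fromNode='Parser' fromField='colors_changed' toNode='Colors' toField='set_color'/>
    <ROUTE fromNode='Parser' fromField='scales_changed' toNode='Scales' toField='set_value'/>
  </ProtoBody>
</ProtoDeclare>

Note that splats also need per-splat opacity (another FloatVertexAttribute, or ColorRGBA instead of Color) and view-dependent depth sorting for correct alpha blending, so this skeleton covers only the easy part.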

-Doug

https://www.useblurry.com/blog/anatomy-of-a-ply-file
- shows a variant of .ply used for Gaussian splats (see the header sketch below)
- John noted that Aaron's .ply samples don't have spherical-harmonic parameters (so a simpler shader suffices)
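
For orientation, the vertex header of a splat-variant .ply in the layout that post describes (the INRIA reference exporter) looks roughly like the following; the element count is made up, the exact property set varies by exporter, and simpler samples like Aaron's omit the f_rest_* properties:

ply
format binary_little_endian 1.0
element vertex 100000
comment x y z: splat center; f_dc_*: base color as 0th-order SH coefficients
property float x
property float y
property float z
property float f_dc_0
property float f_dc_1
property float f_dc_2
comment f_rest_0 through f_rest_44 (higher-order SH) appear here in full captures
property float opacity
comment scale_*: per-axis sizes; rot_*: an orientation quaternion
property float scale_0
property float scale_1
property float scale_2
property float rot_0
property float rot_1
property float rot_2
property float rot_3
end_header

In the reference exporter the scale_* values are stored in log space and opacity as a logit, so a parser or shader has to apply exp/sigmoid; treat that as my reading of the reference code rather than part of any .ply spec.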

On Fri, Nov 28, 2025 at 10:24 PM Don Brutzman <don.brutzman@gmail.com> wrote:

Just found several more interesting references relating to Gaussian splats.

- NY Times, Spatial Journalism: A Field Guide To Gaussian Splatting
- by A.J. Chavar, Oscar Durand, Mint Boonyapanachoti
- December 16, 2024
- Gaussian splatting holds a lot of promise for 3D recreation and spatial storytelling. It's faster and more photorealistic than photogrammetry, and much easier to process and interact with than neural radiance fields, giving journalists and readers the best of both worlds.
- These advantages are due to the novel way that splats reconstruct 3D scenes. In a splat, people, places, and objects are made up of a point cloud defined by gaussian functions. Each gaussian function is essentially a 2D disc assigned to a point in 3D space, with attributes for orientation, color, transparency, and size. When viewed in aggregate, these coalesce into a 3D scene that can very accurately represent certain things that other volumetric captures cannot: reflection, transparency, fine detail, and the qualities of the light in a scene. (A gloss on the math behind this description follows after this list.)
- In this guide, we give an overview of the practical takeaways we learned exploring gaussian splatting for spatial journalism. We tested a variety of capture and processing techniques using a variety of hardware and software, and found solutions on desktop and mobile devices. We also assessed splatting against the benchmarks we typically associate with photogrammetry, the current standard for 3D recreation.
- https://rd.nytimes.com/projects/gaussian-splatting-guide
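
A gloss on the "2D disc" description above, using the standard formulation from the splatting literature (the notation is mine, not the article's): each splat i has a center \mu_i, covariance \Sigma_i, opacity o_i, and color c_i. After projection to the image plane, splat i contributes to the pixel at x with weight

    \alpha_i(x) = o_i \exp\left( -\tfrac{1}{2} (x - \mu_i)^\top \Sigma_i^{-1} (x - \mu_i) \right)

and the pixel color is accumulated front to back over depth-sorted splats:

    C(x) = \sum_i c_i \, \alpha_i(x) \prod_{j<i} \left( 1 - \alpha_j(x) \right)

The covariance carries the "orientation" and "size" attributes, while o_i and c_i carry the rest; \mu_i and \Sigma_i here denote the projected 2D quantities.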

- NY Times, Spatial Journalism: Pushing the Limits of Gaussian Splatting for Spatial Storytelling
- by A.J. Chavar, Mint Boonyapanachoti, Juan Cabrera, Oscar Durand, Yasmin Elayat, Claudia Miranda, Kojo Opuni
- December 16, 2024
- Abstract. Advancements in capturing and rendering 3D scenes have the potential to create spatial stories faster and more easily than has previously been possible. Building on our earlier experiments with photogrammetry and neural radiance fields (NeRF), R&D, with support from the Graphics Department, set out to understand gaussian splatting and continue to explore the future of photography and 3D media. Modern gaussian splatting caught our eye after "3D Gaussian Splatting for Real-Time Radiance Field Rendering" was named one of the best papers of SIGGRAPH 2023. While photogrammetry is a tried-and-tested method, it is extremely labor- and compute-intensive, and it reliably fails to reconstruct certain textures and settings. NeRF technology addresses some of these concerns, but it is extremely difficult to deliver radiance fields to audiences. We wanted to see how splatting fared under similar circumstances.
- https://rd.nytimes.com/projects/gaussian-splatting

- 3D Gaussian Splatting for Real-Time Radiance Field Rendering
- SIGGRAPH 2023 (ACM Transactions on Graphics)
- Bernhard Kerbl, Georgios Kopanas, Thomas Leimkühler, George Drettakis
- Abstract. Radiance Field methods have recently revolutionized novel-view synthesis of scenes captured with multiple photos or videos. However, achieving high visual quality still requires neural networks that are costly to train and render, while recent faster methods inevitably trade off speed for quality. For unbounded and complete scenes (rather than isolated objects) and 1080p resolution rendering, no current method can achieve real-time display rates.
- We introduce three key elements that allow us to achieve state-of-the-art visual quality while maintaining competitive training times and, importantly, allow high-quality real-time (≥ 100 fps) novel-view synthesis at 1080p resolution.
- First, starting from sparse points produced during camera calibration, we represent the scene with 3D Gaussians that preserve desirable properties of continuous volumetric radiance fields for scene optimization while avoiding unnecessary computation in empty space. Second, we perform interleaved optimization/density control of the 3D Gaussians, notably optimizing anisotropic covariance to achieve an accurate representation of the scene (see the note after this list). Third, we develop a fast visibility-aware rendering algorithm that supports anisotropic splatting and both accelerates training and allows real-time rendering. We demonstrate state-of-the-art visual quality and real-time rendering on several established datasets.
- https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting
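
On the "anisotropic covariance" mentioned in the abstract: the paper keeps each Gaussian's 3D covariance positive semi-definite during optimization by factoring it into a rotation R (stored as a quaternion) and a diagonal per-axis scaling S:

    \Sigma = R \, S \, S^\top R^\top

These factors are what the rot_* and scale_* properties in the splat .ply header sketched earlier in this thread store per vertex.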