[x3d-public] metaverse

GPU Group gpugroup at gmail.com
Fri Sep 23 07:44:13 PDT 2022


I thought I'd look into the metaverse. It sounds like it's a way of talking and
coordinating between a lot of players in a lot of domains - maybe a way to
describe how our world economy is becoming more digitally integrated and
entwined.

Nvidia GTC
I listened to some sessions put on this week by Nvidia at their GTC - an
online conference with 'sessions' - and Nvidia itself, while loaded with
possibilities, seemed to be emphasizing a few aspects of the metaverse in
particular that it's good at:

USD ('Universal Scene Description') - a file format open-sourced by Pixar in
2016, and adopted by Nvidia in its Omniverse suite of tools, much of which
can be downloaded free and welcomes extensions. Trick: Nvidia Omniverse
display tools need RTX real-time-raytracing-capable video boards, which
Nvidia just by coincidence happens to sell GPUs for. I bought one - the
cheapest I could find locally - and the Omniverse Create scene authoring
tool could then show an area light reflecting off a box as I moved the box
in real time. USD will take weeks of study to understand, but it is basically
a 3D scene authoring format that can consume other formats, and it has an
extensible 'connector' API for plugging other authoring and display tools
into; a small authoring sketch follows.
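To give a flavor of the format, here is a minimal sketch using Pixar's Python
bindings (the pxr module that ships with USD); the file name, prim paths, and
values are all made up for illustration, and attribute names can shift a bit
between USD versions:

# Minimal USD authoring sketch with Pixar's pxr Python bindings.
# File/prim names are arbitrary; assumes a USD build with Python support.
from pxr import Usd, UsdGeom, UsdLux

stage = Usd.Stage.CreateNew("box_and_light.usda")   # new .usda text file

# A transformable root, a cube, and a rectangular area light.
world = UsdGeom.Xform.Define(stage, "/World")
box = UsdGeom.Cube.Define(stage, "/World/Box")
box.GetSizeAttr().Set(2.0)
light = UsdLux.RectLight.Define(stage, "/World/AreaLight")
light.CreateIntensityAttr(500.0)

# Move the box; Omniverse Create would re-raytrace the reflection live.
UsdGeom.XformCommonAPI(box.GetPrim()).SetTranslate((1.0, 0.0, 0.5))

stage.GetRootLayer().Save()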
Sessions at GTC explained that edge devices like smart phones and XR/AR/VR
headsets might not raytrace in real time, or not the whole scene; instead,
high-speed low-latency wireless networking, including 5G, would allow
untethered edge XR devices to receive low-latency streaming frames rendered
on nearby or on-property datacenter/cluster GPUs, with predictive pose
estimation to reduce pose-to-frame-latency sickness. The wireless network
needs low latency in both directions, as the XR device sends pose and camera
data back to the server in real time; a toy version of that pose prediction
is sketched below.
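The sessions didn't spell out the prediction math, so hedge accordingly: a
common baseline is constant-velocity extrapolation of the head pose over the
measured round-trip time. A toy sketch, with all names and numbers
hypothetical:

import numpy as np

def predict_pose(p_prev, p_curr, dt_sample, rtt):
    # Constant-velocity extrapolation: estimate where the headset will
    # be when the rendered frame arrives, one round trip from now.
    velocity = (p_curr - p_prev) / dt_sample
    return p_curr + velocity * rtt

# Hypothetical numbers: 90 Hz pose sampling, 20 ms network round trip.
prev = np.array([0.00, 1.60, 0.00])   # head position one sample ago (meters)
curr = np.array([0.01, 1.60, 0.00])   # head position now
print(predict_pose(prev, curr, dt_sample=1/90, rtt=0.020))

Real systems also extrapolate orientation and reproject the streamed frame on
the device ('time warp') to hide whatever prediction error remains.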
USD has, perhaps surprisingly, been picked up by industrial and commercial
companies as a way to streamline design and operations tasks, and those
companies are the major constituency for USD tools.

AI - the tools and techniques are getting more convenient, and with large
datasets already being collected there's lots of data to train on. In 3D AI
such as driverless cars, AI tools can help with initial data capture from
photos and lidar, process it into 3D elements, help humans annotate/label
the objects, and also help generate randomized training scenes from those
scene elements, rendering views that the AI under training takes to be
cameras mounted on a car hood but that are actually produced in a simulator.
This involvement of AI in generating datasets for AI training can increase
training confidence, and the trained AI can then be tested against yet more
AI-generated random scenes to see whether the training is robust.

Trick: Nvidia just happens to sell chips with AI-specific processing steps
as well as the GPU parallelization needed for fast training (cuDNN is
Nvidia's basic API for running deep-neural-network training on the GPU, and
a choice of a few tools run on top of it - TensorFlow and PyTorch; a tiny
training sketch is below). USD holds the 3D training scenes. AI is
everywhere else in business these days - natural language processing,
finance risk, digital security, chemistry and biology, online recommendation
systems, etc. - anywhere there's lots of data. And since data flows are
continuous and terabyte-level these days, AI has to keep up in real time;
more specialized processing chips are here, and more are coming. Expect to
live in an AI-dominated economy.
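To make the layering concrete, here is a minimal sketch of a GPU training
loop in PyTorch (whose convolutions dispatch to cuDNN on Nvidia GPUs); the
network, data, and sizes are all invented for illustration:

import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

# Made-up toy model: a tiny conv net classifying random "camera" images.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),  # cuDNN-backed on GPU
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 32 * 32, 2),                  # 2 classes: car / not-car
).to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Random stand-in batch; a real pipeline would feed rendered scene views.
images = torch.randn(16, 3, 32, 32, device=device)
labels = torch.randint(0, 2, (16,), device=device)

for step in range(100):                 # basic training loop
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()                     # gradients via autograd
    optimizer.step()
print("final loss:", loss.item())

On a real project the random tensors would be replaced by batches rendered
from the USD training scenes described above.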

-Doug