[X3D-Public] Fwd: Re: [X3D] X3D HTML5 meeting discussions:Declarative 3D interest group at W3C
Philipp Slusallek
slusallek at cs.uni-saarland.de
Tue Jan 4 21:03:43 PST 2011
Hi Chris,
On 05.01.2011 00:29, Chris Marrin wrote:
> On Jan 3, 2011, at 8:44 PM, Philipp Slusallek wrote:
> Speaking as a WebKit developer, it would be exceedingly difficult to
> allow arbitrary properties in CSS. The properties are parsed into
> fixed structures for efficiency. Resolving style can be an expensive
> proposition, so keeping it efficient is crucial. But I'd like to
> understand your issues with keeping a fixed property CSS system and
> perhaps adding a small number of properties for the needs of the
> declarative 3D nodes.
I will ask Kristian Sons to respond with all the details to you directly
when he comes back.
>> I see CSS mainly as an orthogonal way to associate data with content
>> objects. This works well for "style" but does not need to be limited to
>> it. E.g. we used it to provide physics parameter (mass density, friction
>> coefficients, etc.) to objects.
>
> Hmmm. That gets into some murky areas of what is style and what is
> content. And I'm not sure if physics is a good example right now. I
> don't see a fully fleshed out physics engine being added to browsers
> in the short term. That may be a great addition in the future and
> would in fact have uses outside 3D. But I think it would be a mistake
> to include it in the first pass. I'm all about baby steps these days.
> It not only aids adoption, but it lets you see more clearly where the
> next steps should be.
I fully agree with baby steps here. Our development is twofold. On one
side we are trying out new ideas (and this is one we have not even tried
yet); on the other side we are looking into pushing those ideas that are
mature, and that we are reasonably comfortable with, toward the browser
so they become generally useful.
We want to use the entire infrastructure (likely without the browser
interface) for standalone VR applications with display walls and the
like as well. Having a common, portable, and widely available basis for
such work will be important. Most of this will be in-house or in
collaborative projects with industry, solving specific problems (a la
XULRunner).
DFKI (the German Research Center for Artificial Intelligence, where a
lot of the applied work is done), with its ~800 employees, is one of the
largest applied research institutes in its field, with a mission to do
tech transfer. At the Intel Visual Computing Institute (Intel VCI, which
I am also co-heading) we focus more on the basic research aspects (such
as XFlow, server-based rendering based on XML3D, AnySL and its successor
AnyDSL, and many others), but we work closely with my group at DFKI and
the Max Planck Institutes here on campus as well. BTW, we would be happy
to work more closely in joint projects with you at Apple or with other
partners.
>> Regarding speed for geometry processing: Chris, at the moment we are
>> talking about dozens of wide SIMD cores with optimized data paths for
>> streaming operations on the GPU itself versus a far away CPU based JS
>> implementation (that today at least still needs to transfer the results
>> via a rather slow PCIe bus to the GPU for rendering). There are orders
>> of magnitude between the two that no compiler optimization for JS (which
>> have done amazing things, no questions) can address. I did talk about
>> some proposed data-parallel extensions to JS, but they are still deep
>> within research labs and it's unclear where this will go.
>
> Baby steps. I don't think it's a showstopper if a web app can only push 100,000 particles rather than 10 million.
I somewhat agree, which is why we have made it a separate module.
However, realistically, avatars will become important quickly, which
means a lot of morphing, skinning, subdivision, and animation. Realistic
environments will need lots of geometric detail as well. And once we
have gone down the "expose OpenGL" route, the question becomes "why not
expose the other HW APIs as well?". Which is what XFlow is all about
(besides the low-level API, which we are not that interested in anyway).
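To make the workload concrete: linear-blend skinning is exactly the kind of per-vertex streaming computation at issue. A minimal JS sketch (all names and the two-influences-per-vertex setup are illustrative, not XFlow API) shows the inner loop a CPU-side JS implementation has to run for every vertex of every avatar, every frame, before the result can even be sent over PCIe:

```javascript
// Minimal sketch of CPU-side linear-blend skinning. Bone matrices are
// 3x4, row-major: [r00 r01 r02 tx, r10 r11 r12 ty, r20 r21 r22 tz].

// Apply a 3x4 matrix to a point.
function transformPoint(m, x, y, z) {
  return [
    m[0] * x + m[1] * y + m[2]  * z + m[3],
    m[4] * x + m[5] * y + m[6]  * z + m[7],
    m[8] * x + m[9] * y + m[10] * z + m[11],
  ];
}

// positions: flat [x,y,z,...]; boneIndex/boneWeight: two influences
// per vertex; boneMatrices: array of 3x4 matrices.
function skin(positions, boneIndex, boneWeight, boneMatrices) {
  const out = new Float32Array(positions.length);
  for (let v = 0; v < positions.length / 3; v++) {
    const x = positions[3 * v], y = positions[3 * v + 1], z = positions[3 * v + 2];
    let px = 0, py = 0, pz = 0;
    for (let i = 0; i < 2; i++) {          // blend the two bone influences
      const b = boneIndex[2 * v + i];
      const w = boneWeight[2 * v + i];
      const [tx, ty, tz] = transformPoint(boneMatrices[b], x, y, z);
      px += w * tx; py += w * ty; pz += w * tz;
    }
    out[3 * v] = px; out[3 * v + 1] = py; out[3 * v + 2] = pz;
  }
  return out;
}
```

Every operation here is independent per vertex, i.e. exactly the streaming pattern the GPU's SIMD cores are built for; a declarative layer like XFlow could map it there without the author writing GPU code.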
>> So for the time being, I simply see no alternative to something like
>> XFlow to tackle the problem. I appreciate and fully agree with your
>> approach of "let's start small and add only when we find out it's
>> really needed", but this is an area where it's clear from the start
>> that JS alone will be insufficient. For these JS extensions we could
>> simply expose CUDA/OpenCL/DirectCompute/whatever, but this would bring
>> us fully back to procedural land again and be counterproductive to our
>> main goals.
>
> I was not able to get many details about XFlow from a web search. But
> it seems clear that JS alone IS sufficient for some compelling 3D on
> the web. The handful of WebGL examples proves that. Moving the
> rendering, event handling and (some) animations to native code can
> only be better.
Attached is an older presentation on XFlow (I will track down and post a
later version). Wherever you see "script" in there, think of a CUDA or
similar implementation, even though we plan to drive this via AnySL (you
can find a paper on our portable shading system AnySL on our publication
page, http://graphics.cs.uni-saarland.de/publications/, where the XML3D
paper is as well).
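The core idea can be sketched as plain data: the author declares *what* to compute as a dataflow graph over named fields, with no execution order or target device specified, leaving the engine free to run the operators in JS, via AnySL, or as CUDA kernels. All field, operator, and property names below are illustrative, not actual XFlow syntax:

```javascript
// Hypothetical sketch of a declarative dataflow graph in the spirit of
// XFlow. The author only wires operators to named fields; scheduling
// and the choice of backend (JS fallback, SIMD, GPU) are left to the
// engine.
const graph = {
  inputs: ["restPosition", "morphTarget", "weight", "boneMatrices"],
  nodes: [
    // Each node names an operator and its input/output fields.
    { op: "morph", in: ["restPosition", "morphTarget", "weight"], out: "morphed" },
    { op: "skin",  in: ["morphed", "boneMatrices"],               out: "position" },
  ],
  output: "position",
};
```

Because the graph is declarative, the engine can fuse the morph and skin steps into one GPU pass, something no JS-level compiler optimization can recover from imperative per-vertex loops.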
Philipp
> -----
> ~Chris
> chris at marrin.com
>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: xflow_rtt_presentation.pdf
Type: application/pdf
Size: 2383099 bytes
Desc: not available
URL: <http://web3d.org/pipermail/x3d-public_web3d.org/attachments/20110105/eff45a3f/attachment-0001.pdf>