[X3D-Public] Fwd: Re: [X3D] X3D HTML5 meeting discussions:Declarative 3D interest group at W3C

Philipp Slusallek slusallek at cs.uni-saarland.de
Thu Jan 6 21:08:00 PST 2011


Hi,

I agree with a lot of what you say here and would advocate for some
middle ground between you and Chris. Taking small, realistic steps is
important because people will be more comfortable with them than with
one big step that is harder to control. On the other hand, we need to
be reasonably confident that the baby steps we take do not eventually
lead us into a dead end. So we should evaluate as many options as
reasonably possible before committing to those baby steps.

That is why we are developing new shader technology, experimenting with
XFlow, applying real-time ray tracing, and exploring many other things.
We will hopefully be able to point out promising directions that we may
want to pursue, which can help us take the right steps now and
eventually get there without having to retreat or pay too high a cost
for mistakes along the way. So even if we as researchers sometimes go
off on a tangent here and there, this can be extremely useful input, so
please bear with us :-).

In this context, for example, we should indeed take into account that
ray tracing is now in a position to play a major role for graphics on
the Web going forward (for full disclosure: I am somewhat biased here,
having done a lot of the initial work in this area). It boils down to
the fact that most people on the Web are not experts, and setting up
rasterization to correctly display even seemingly simple scenes can be
challenging as soon as effects like refraction, reflection, or indirect
lighting are expected (as they easily are for "realistic" scenes). There
is a saying: "Rasterization is fast, but getting it right is really
hard. Ray tracing is right by default, but getting it fast is really
hard." On the Web, I would argue, the second part is actually more
important -- once we are above some threshold of ray tracing
performance. And as Nvidia and others are showing, we seem to be at
least pretty close to that point on PCs (though not yet on mobile
devices).
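To make the "right by default" point concrete, here is a minimal sketch (not from this thread; all names and values invented): in a ray tracer, a mirror reflection is just one recursive ray, with no extra render passes or environment-map setup.

```python
# Minimal ray tracer sketch: reflection falls out of a single recursive ray.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def add(a, b):
    return tuple(x + y for x, y in zip(a, b))

def scale(v, s):
    return tuple(x * s for x in v)

def hit_sphere(origin, direction, center, radius):
    """Smallest t > 0 with |origin + t*direction - center| = radius, or None."""
    oc = sub(origin, center)
    a = dot(direction, direction)
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return None
    t = (-b - disc ** 0.5) / (2.0 * a)
    return t if t > 1e-6 else None

def trace(origin, direction, spheres, depth=2):
    """Return a gray value for this ray; recursion adds the mirror term."""
    nearest = None
    for center, radius, albedo in spheres:
        t = hit_sphere(origin, direction, center, radius)
        if t is not None and (nearest is None or t < nearest[0]):
            nearest = (t, center, albedo)
    if nearest is None:
        return 0.0  # black background
    t, center, albedo = nearest
    point = add(origin, scale(direction, t))
    n = sub(point, center)
    normal = scale(n, 1.0 / dot(n, n) ** 0.5)
    color = albedo
    if depth > 0:
        # The "right by default" part: reflection is one mirrored, recursed ray.
        refl = sub(direction, scale(normal, 2.0 * dot(direction, normal)))
        color += 0.5 * trace(add(point, scale(normal, 1e-4)), refl,
                             spheres, depth - 1)
    return color

scene = [((0.0, 0.0, -3.0), 1.0, 0.4)]
print(trace((0.0, 0.0, 0.0), (0.0, 0.0, -1.0), scene))  # -> 0.4 (hit, no bounce hit)
```

Getting the same reflection right in a rasterizer requires extra machinery (render-to-texture, environment maps, or screen-space tricks), which is exactly the setup burden the saying refers to.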


A highly important point that Chris addresses is related but different.
The Web community is much broader than the general graphics community
in many dimensions: there are many more people, and they use a much
broader range of hardware. While a four-year-old graphics card may seem
old to graphics people and get dismissed, it is perfectly fine for
working on today's Web (except perhaps for HD video). We have to
address this.

One incentive of 3D on the Web for the hardware manufacturers among us
is certainly the chance to sell more powerful graphics systems at a
higher price into huge markets that have not needed them so far. The
business market is notorious for having really low-end requirements,
and up-selling into this huge market would be highly interesting to all
of them. At the same time, we have to make sure that the screen does
not go blank for those on the long tail of the hardware scale (a tail
that is much longer for graphics than usual).

Essentially this means that we need to build our solution for
scalability. Most of this scalability can only happen on the content
side, so we must give content producers the right dials to control it.
That way we are not making policy decisions at the system level but
leave it to content producers whether they still want to address a
certain low-end market or not. This is pretty much what Web developers
do today; we just have to make sure that they have the right hooks and
tools to control things reliably. For example, geometric LOD is a
well-known (but still thorny) technique, but it is certainly not the
only one, and not enough by itself. The key issue here, however, is to
provide some high-level control for these options (CSS seems like an
interesting candidate, similar to what can already be done via media
types).
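As a purely hypothetical sketch of those "dials" (all names and thresholds invented): the content author declares LOD variants with minimum device-capability requirements, and the runtime picks one per device, much like CSS media queries select among stylesheets.

```python
# Content-side scalability sketch: the author declares the policy, the
# runtime only evaluates it against the device's capability score.

LOD_VARIANTS = [
    # (minimum device capability score, mesh asset, rough triangle count)
    (0.0, "statue_low.mesh", 5_000),
    (0.5, "statue_medium.mesh", 50_000),
    (0.9, "statue_high.mesh", 500_000),
]

def pick_variant(capability, variants=LOD_VARIANTS):
    """Return the densest mesh whose threshold this device still meets."""
    chosen = None
    for min_score, asset, _tris in variants:
        if capability >= min_score:
            chosen = asset
    return chosen  # None means the author opted not to serve this device

print(pick_variant(0.6))  # -> statue_medium.mesh
```

The point of the sketch is where the decision lives: by raising the lowest threshold above zero, the author (not the system) decides whether the low-end market gets a reduced experience or none at all.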

Which brings me to a related but different topic that has received
little attention so far: control over appearance. Recently, the Web
finally gained the ability to control which fonts are used to render
text. While the Web worked pretty well without that capability for a
long time, some important applications need tight control over
appearance. The same is true (even more so, I believe) in 3D. So while
content developers need to be able to control the performance of their
content, we must also make sure they can specify how accurately its
appearance should be reproduced.

For obvious reasons, a low-level library like OpenGL must fully control
exactly how its output appears on the screen, so its users can depend
on what it does. For a declarative format, however, the situation is
quite different. Most applications do not care exactly how an object
will appear on the screen as long as they have a good understanding of
what is happening (much as it is often enough to control the font size,
color, and weight without caring exactly which font is used). However,
we must also cater to the cases where more exact control is really
required.
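One way to picture the two regimes is this hedged sketch (all names invented): by default the runtime may substitute resources, like font fallback on the Web today, while a "strict" hint turns a missing resource into a loud failure instead of a silent approximation.

```python
# Appearance-control sketch: loose mode approximates, strict mode refuses.

AVAILABLE_MATERIALS = {"default_metal", "default_plastic"}

def resolve_material(requested, strict=False, fallback="default_plastic"):
    """Pick a material the way a browser picks a font."""
    if requested in AVAILABLE_MATERIALS:
        return requested  # exact appearance honored
    if strict:
        # The author demanded exact appearance; fail loudly rather than guess.
        raise LookupError(f"material {requested!r} unavailable in strict mode")
    return fallback  # loose mode: approximate, as with font substitution

print(resolve_material("brushed_gold"))  # -> default_plastic
```

Whether the right surface for such hints is per-object, per-scene, or stylesheet-like is exactly the open design question.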

Building the right tools and controls for this is, I would argue, not
yet fully understood. But it is important to get at least a grip on
these issues and address them in our work. It will most likely be a
staged approach, and we have to see what we can bite off for the first
few baby steps, keeping in mind where they will eventually lead us.

	Philipp


Am 06.01.2011 23:42, schrieb GLG:
>> So you're saying that my iPhone can do real-time
>> raytracing? Or my 4 year old ATI graphics card? I've seen
>> nice demos of ray-tracing in a fragment shader, and I know
>> new Nvidia (and presumably ATI) is very powerful and have
>> the potential to do ray-tracing much faster. But that's not
>> mainstream.
> 
> Hello Chris,
> 
> I am very surprised to hear this from you. IMO, a 4 year old
> graphics card is not mainstream, it is either obsolete, or
> about to be, and has little relevance in this discussion. If
> I can voice my personal point-of-view, I believe we should
> focus solely on the future without too much regard to past
> and present hardware/software. Taking example from the
> gaming industry, the latest and greatest games and hardware
> are always at the absolute leading edge. What will be is
> what we need to meet, otherwise the product of the effort
> will already be somewhat dated by the time it comes out. I
> think we need to exhibit extreme foresight, and attempt to
> provide the foundation for as much of all things 3D as we
> can, because if we don't, then later we'll start hearing
> things like -why didn't they do it? Although we might not
> implement things like shaders and physics, and use ray
> tracing in the short term, making sure that there will be a
> way to support these in later versions is paramount. Heck,
> if I had my way I would even seriously look into the
> possibilities of a DAG implementation, and start laying the
> groundwork for that. I know, baby steps, but the general
> directions should not lead to dead ends anywhere. Babies
> tend to walk in circles. We should attempt to envision clear
> paths of possibilities that are as wide as we can make them,
> even if not implemented today, or failing that, it will have
> to be reworked again when the lifecycle of the
> implementation is complete. By that time, Web content will
> be much more complex, and, once more, it will be necessary
> to break it when that happens (as in the incompatibility
> with existing 3D worlds). I am just saying, let's try to do
> it in a way so this never happens again; in a way so that
> no big-money company comes along again and says, look, we can
> do this better, as has happened so many times with VRML/X3D. I am
> sure you at least partially agree with the above, and I
> realize this may be far more difficult, but if it were easy,
> it probably would not be worth doing. That may sound a
> little idealistic, but after all, we are building for
> 'virtual' reality. :)
> 
> Having said the above, I do not understand why you often say
> "it will fail" if we try to do more. I'd rather have
> something fail early if it is to fail, because failing late
> breaks content and that is not a better alternative. Again,
> my opinion, but I would like to hear you speak more on this
> topic.
> 
> Cheers,
> Lauren  
> 
> 
>> -----Original Message-----
>> From: x3d-public-bounces at web3d.org [mailto:x3d-public-
>> bounces at web3d.org] On Behalf Of Chris Marrin
>> Sent: Thursday, January 06, 2011 2:56 PM
>> To: Dmitri Rubinstein
>> Cc: X3D mailing list Graphics public
>> Subject: Re: [X3D-Public] Fwd: Re: [X3D] X3D HTML5 meeting
>> discussions:Declarative 3D interest group at W3C
>>
>>
>> On Jan 6, 2011, at 11:42 AM, Dmitri Rubinstein wrote:
>>
>>> Chris Marrin schrieb:
>>>> On Jan 5, 2011, at 4:32 PM, Dmitri Rubinstein wrote:
>>>>> Chris Marrin schrieb:
>>>>>> On Jan 5, 2011, at 11:15 AM, Joe D Williams wrote:
>>>>>>>> (such as XFlow, server-based rendering based on
>> XML3D, AnySL and its successor AnyDSL
>>>>>>> Is there anything public on AnyDSL?
>>>>>>> What is the best link for XFlow?
>>>>>>> What is Khronos association with ASL?
>>>>>> I don't think this group should be trying to blaze the
>> trail with new shader languages. We actually considered
>> that in WebGL and decided instead to support a very strict
>> version of GLSL ES 1.0. To make all this work with HLSL
>> (via the translators in ANGLE) we tightened up a few
>> requirements of the underlying system. As GLSL ES
>> progresses with more of the desktop functionality, we can
>> rev the WebGL spec. So I think you should consider the
>> shading language a solved problem.
>>>>> Is it really solved ? How do you want to combine GLSL
>> with ray tracers like NVIDIA's OptiX or our RTfact ? Ray
>> tracers are real time now, as the native XML3D browser
>> clearly demonstrates. And making GLSL the only shading
>> language for 3D on the Web couples it tightly with
>> rasterization rendering techniques. This was already a big
>> problem with VRML and X3D, when we wanted to connect it
>> with our real time ray tracer.
>>>> Experiments and science are wonderful things. But this
>> is engineering, and our work has to work on a wide variety
>> of hardware and OS configurations. One day non-
>> rasterization rendering techniques might be mainstream. But
>> that day is not today.
>>>
>>> I don't think that real time ray tracing is just a
>> science, since everybody can run it on today's hardware
>> (which I understand as mainstream). You no longer need a
>> cluster of PCs or dedicated hardware. Also, the hardware
>> power of GPUs and CPUs grows every year, so it will get
>> faster and faster. So when I am a customer who already has
>> 3D in a browser, and I also have an app that can render
>> this 3D in real time without a browser, then I will ask:
>> Why doesn't it work within the browser?
>>
>> So you're saying that my iPhone can do real-time
>> raytracing? Or my 4 year old ATI graphics card? I've seen
>> nice demos of ray-tracing in a fragment shader, and I know
>> new Nvidia (and presumably ATI) is very powerful and have
>> the potential to do ray-tracing much faster. But that's not
>> mainstream.
>>
>>>
>>> If this is somehow an engineering problem, and we are also
>> discussing engineering details here (are we?), then let me
>> ask you an engineering question:
>>> Why not add to the browser a more sophisticated plugin
>> API that allows a plugin to provide a native implementation
>> for elements of an arbitrary XML namespace and propagates
>> all events like DOM changes and CSS information into the
>> plugin? Combined with direct access to OpenGL's 3D context
>> (I think this is already available via NPAPI:Pepper), this
>> should be enough to implement a scene graph engine without
>> modifying the browser. Then the decision of what to support
>> and how the 3D DOM should look is up to the plugin
>> author.
>>
>> The single most important aspect of a web browser isn't
>> power or performance or even feature set. It is security.
>> Every browser vendor rails against plugins because they open
>> the browser to intentional and unintentional security
>> breaches. When discovered, the only recourse the browser
>> has is to send out an update which blacklists the guilty
>> plugin. The user sees his or her web application suddenly
>> stop working because of this and so they blame the browser,
>> not the plugin vendor.
>>
>> Adding a feature like WebGL to the browser is done to add
>> features and performance to be sure. But more importantly,
>> it allows the browser to control the security of that
>> component.
>>
>> Allowing plugin access to a bare OpenGL context is another
>> huge security issue. OpenGL was not designed to be hardened
>> against attacks. We are just now working with vendors to
>> add these capabilities. So I don't see any possibility of
>> opening the internal workings of the browser to any 3rd
>> party plugins.
>>
>> -----
>> ~Chris
>> chris at marrin.com
>>
>>
>> _______________________________________________
>> X3D-Public mailing list
>> X3D-Public at web3d.org
>> http://web3d.org/mailman/listinfo/x3d-public_web3d.org
> 



