[GeoWeb3DContest] SSR + Rotterdam data

Information and updates on Web3D's Rotterdam city data competition geoweb3dcontest at web3d.org
Sun Mar 22 11:05:00 PDT 2015


I found some jpg decompression methods were taking a long time, so the virtual reality engine was slow to start up; pre-converting the images to uncompressed .bmp sped up loading from 4 minutes to 4 seconds.
Afrikaanderwijk takes up 337MB RAM when loaded in the freewrl virtual reality engine -without terrain, just textured buildings.
For 80 areas in Rotterdam that would be about 26GB - so a city might fit in 32GB RAM.
-Doug
more..
For SSR -server-side rendering- using the virtual reality engine freewrl, it was taking 4 minutes to load 1/80th of Rotterdam -the Afrikaanderwijk area- with the images in .jpg format, on win32. Freewrl uses the MS Windows Imaging Component (WIC) to decompress jpeg images, and I found this was the slow part. If I pre-convert to .bmp image format -a native windows uncompressed RGB 888 format- then the image files on disk are 23x bigger -3MB vs 133kB- but it took only 4 seconds (versus 4 minutes) to load the Afrikaanderwijk scene.

There are much faster decompressors -for example, imagemagick took only a few seconds to convert all 163 .jpg files to .bmp.
In the case of SSR -which runs on the same computer as the image files- the files could be pre-decompressed to .bmp on disk for quick loading.
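To illustrate why the .bmp path loads so fast: an uncompressed 24-bit BMP is just a 54-byte header followed by raw BGR rows padded to 4-byte boundaries, so reading it back needs no decode step at all. A minimal stdlib-only writer, as a sketch (the function name and pixel layout are my own, not freewrl's):

```python
# Sketch: minimal uncompressed 24-bit BMP writer, standard library only.
# A BMP is a 14-byte file header + 40-byte info header + raw BGR888 rows
# (bottom-up, each row padded to a 4-byte boundary) - no compression.
import struct

def write_bmp(path, width, height, pixels):
    """pixels: list of rows, each a list of (r, g, b) tuples, top row first."""
    row_size = (width * 3 + 3) & ~3          # rows pad to 4-byte multiples
    image_size = row_size * height
    with open(path, "wb") as f:
        # BITMAPFILEHEADER: magic, file size, 2 reserved, pixel data offset
        f.write(struct.pack("<2sIHHI", b"BM", 54 + image_size, 0, 0, 54))
        # BITMAPINFOHEADER: size, w, h, planes, 24bpp, no compression, ...
        f.write(struct.pack("<IiiHHIIiiII", 40, width, height, 1, 24,
                            0, image_size, 2835, 2835, 0, 0))
        for row in reversed(pixels):          # BMP stores rows bottom-up
            data = bytearray()
            for r, g, b in row:
                data += bytes((b, g, r))      # BMP pixel order is BGR
            data += b"\x00" * (row_size - len(data))
            f.write(data)
```

A loader can read such a file in one pass straight into texture memory, which is why the 23x larger files still load two orders of magnitude faster.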


>
> I found I could render 1/80th of Rotterdam -the Afrikaanderwijk zone- at 9 FPS in virtual reality, as is, on an intel i5 computer.
> http://dug9.users.sourceforge.net/web3d/temp/freewrl_afrikuunderwuk.png
> - it takes minutes to load (it mipmaps the textures as it would for all vrml scenes)
> - no ground/orthophoto - must have been lost in conversion from .gml to .wrl?
> - no picking
> - takes up 100s of MB RAM
> This might be OK for SSR: with a high performance server, it can take time to prepare a dataset for rendering snapshot requests. Then, as each viewpoint pose is received, it would take 1/9th of a second to move the camera, render a frame and return the screenshot.
> -Doug
>
>
>>
>> A reason for wanting per-building texture atlases (versus jumping randomly around several textures to complete a building): openGL draws sequentially through faces made from 3D vertices. If a building is coherent - all the faces for a building are together - then it's faster for opengl if there's only one texture to transmit from CPU memory to the GPU to complete all those faces. If it has to jump from texture to texture -binding each one to draw a face- to complete a building, it renders much slower.
>> -Doug
>>
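The bind cost described in the quote above can be made concrete with a toy counter; this is a sketch of the principle, not freewrl code, and the names are illustrative:

```python
# Sketch: count texture "binds" (CPU-to-GPU texture switches) for a
# given draw order. A bind happens whenever the next face uses a
# different texture than the previous one.
def count_texture_binds(faces):
    """faces: sequence of texture ids in draw order."""
    binds, current = 0, None
    for tex in faces:
        if tex != current:
            binds, current = binds + 1, tex
    return binds
```

For example, the incoherent draw order `[1, 2, 1, 3, 1]` costs 5 binds, while drawing the same faces grouped by texture, `[1, 1, 1, 2, 3]`, costs 3 - and a per-building atlas reduces it to 1 bind per building.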
>>> Correction to the vrml editor screenshot:
>>> http://dug9.users.sourceforge.net/web3d/temp/dune_screenshot_vrml_rotterdam_afrikaanderwuk.jpg
>>>>
>>>> The Rotterdam texture atlases appear to be completely random.
>>>> When I open a vrml version of the Afrikaanderwijk data in a vrml editor:
>>>> the long building in the lower left - I can pick chunks/sections of it, and just in the little stretch we see, there are 3 different texture files involved. So to draw that building, you'd need to load at least three 1024x1024 image textures. That's sub-optimal.
>>>> I don't see a pattern to how they did the textures.
>>>> -Doug
>>>> more..
>>>> ROTTERDAM > AFRIKAANDERWIJK TEXTURES
>>>> (8 x 19) + 5 = 157, call it ~160 atlas images
>>>> 160 x 1024x1024x4 bytes = 160 x 4MB = 640MB uncompressed
>>>> a full mipmap chain adds about 1/3 more, so roughly 850MB
>>>> Yes, that's texture heavy.
>>>>
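The quoted estimate, redone in Python; the 4/3 factor is the standard overhead of a full mipmap chain (the geometric series 1 + 1/4 + 1/16 + ... = 4/3):

```python
# Back-of-envelope texture memory for the Afrikaanderwijk atlases,
# using the figures quoted above.
textures = 8 * 19 + 5                    # 157 atlas images, call it ~160
bytes_per_texture = 1024 * 1024 * 4      # one 1024x1024 RGBA8 texture = 4MB
base = textures * bytes_per_texture      # raw texel memory
with_mipmaps = base * 4 // 3             # full mip chain adds ~1/3 extra
print(base // 2**20, "MB base,", with_mipmaps // 2**20, "MB with mipmaps")
```

That works out to about 628MB of base texels and roughly 850MB with mipmaps - heavy for one 1/80th zone.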
>>>> Hypotheses for seeming randomness to the texture atlases:
>>>> H0: military requirements: they had to scramble the textures in little bits so an enemy could not reconstruct a coherent base map from the texture images
>>>> H1: the software they used for authoring accidentally randomized the atlas layout
>>>> H2: there's a pattern / a method to the madness and I just can't see it
>>>>
>>>> A first job of fixing might be to reconstruct a coherent base/orthophoto texture
>>>> Method 1: use googleEarth snapshots
>>>> Method 2: get a vrml browser to render (however slowly) the buildings and ground as given, as seen from above. Then do a snapshot in the vrml browser to get a vertical texture. Tile those textures to get an orthophoto
>>>> Method 3: ask Rotterdam for their original ortho images
>>>>
>>>> Once there are coherent orthophoto textures for the ground, then the building texture coordinates would need to be re-done.
>>>> Method A: manually
>>>> Method B: use image correlation and search algorithms to find roof texture in orthophoto, then shift the texture coordinates
>>>> Because there are textures for the sides of the buildings too, and if we desire per-building atlases, then
>>>> Method C: iterate over buildings, and for each building:
>>>> - iterate over shape faces, and for each face
>>>> -- from texture coordinates and original atlas image, extract the pixels for that face, and place in a new per-building atlas
>>>> and in this way automatically build per-building atlases including sides and roof
>>>>
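The Method C loop quoted above can be sketched end-to-end. This is a toy version under heavy assumptions: each face is reduced to an axis-aligned pixel rectangle in its source atlas, and the new per-building atlas is filled with a naive shelf packer. Real CityGML faces are polygons with arbitrary texture coordinates, and every name here is illustrative:

```python
# Sketch of Method C: extract each face's pixels from its source atlas
# and repack them into one per-building atlas, shelf by shelf.

def pack_building_atlas(faces, atlas_size=1024):
    """faces: list of (source_atlas, (x, y, w, h)); each source_atlas is a
    2D list of pixels. Returns (new_atlas, new_rects) where new_rects[i]
    is the (x, y, w, h) of face i inside the new per-building atlas."""
    atlas = [[(0, 0, 0)] * atlas_size for _ in range(atlas_size)]
    new_rects = []
    pen_x, pen_y, shelf_h = 0, 0, 0       # shelf-packing cursor
    for src, (x, y, w, h) in faces:
        if pen_x + w > atlas_size:        # row full: start a new shelf
            pen_x, pen_y, shelf_h = 0, pen_y + shelf_h, 0
        if pen_y + h > atlas_size:
            raise ValueError("atlas full - would need a second page")
        for row in range(h):              # blit the face's pixels across
            for col in range(w):
                atlas[pen_y + row][pen_x + col] = src[y + row][x + col]
        new_rects.append((pen_x, pen_y, w, h))
        pen_x, shelf_h = pen_x + w, max(shelf_h, h)
    return atlas, new_rects
```

After packing, each face's texture coordinates would be rewritten from its old rectangle in the shared atlas to its entry in new_rects, giving one atlas per building for sides and roof alike.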



More information about the GeoWeb3DContest mailing list