[GeoWeb3DContest] SSR + Rotterdam data

Tue Mar 31 11:16:53 PDT 2015


A ZONE SERVER would fall into a category called "application level load balancers".
-Doug
more..
- it would sniff the request body and, in our case, based on the geographic location of the viewpoint, either:
a) send an HTTP 303 redirect response pointing the client at the specific SSR server (this exposes your inner network), or
b) create an HTTP client, send the request to the specific SSR server, wait for the response, and forward that response to the final client
http://www.javaworld.com/article/2077922/architecture-scalability/server-load-balancing-architectures-part-2-application-level-load-balanci.html
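
A minimal sketch of options a) and b) in Python, assuming a hypothetical request format where the client POSTs its viewpoint as "lon,lat" in the body; the SSR host names and the zone lookup are placeholders:

import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

ZONE_TO_SSR = {0: "http://ssr0.internal:8080", 1: "http://ssr1.internal:8080"}  # zone id -> SSR server

def zone_for(lon, lat):
    # placeholder: real code would look the viewpoint up in the zone/scene/sub-area map
    return 0 if lon < 4.5 else 1

class ZoneHandler(BaseHTTPRequestHandler):
    REDIRECT = True  # True = option a) 303 redirect, False = option b) forward and relay

    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        lon, lat = (float(v) for v in body.decode().split(","))
        target = ZONE_TO_SSR[zone_for(lon, lat)] + self.path
        if self.REDIRECT:           # a) redirect, exposing the SSR server to the client
            self.send_response(303)
            self.send_header("Location", target)
            self.end_headers()
        else:                       # b) act as an http client, wait, forward the response
            reply = urllib.request.urlopen(urllib.request.Request(target, data=body)).read()
            self.send_response(200)
            self.end_headers()
            self.wfile.write(reply)

HTTPServer(("", 8000), ZoneHandler).serve_forever()

Option b) keeps the inner network hidden but makes the ZONE SERVER relay every rendered image; option a) costs the client an extra round trip but keeps the image traffic off the ZONE SERVER.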

>
> SSR> RAM needed for Rotterdam textured building datasets:
> 128GB RAM on one server, or 4 servers with 32GB RAM each plus a ZONE SERVER, would be sufficient to hold and render all the Rotterdam textured buildings instantly, with textures reduced to 512x512.
>
> -Doug
> more..
>
> After downloading all the Rotterdam textured building .zip files, I calculated I would need about 1GB RAM per sub-area if I reduce the textures from 1024x1024 to 512x512. About half the RAM is for the vector geometry and half for the textures. I can cut the texture RAM in half again by changing from a 32-bit to a 16-bit OpenGL texture format.
>
> There are 88 scenes x 1GB per scene = 88GB RAM needed (+ 4GB for the OS) = 92GB.
> Most desktop computers take 32GB max (4 x 8GB DIMMs). But Gigabyte has a few motherboards that will take 8 x 16GB = 128GB, using SDRAM modules usually used in servers. That still adds up to about $2000 for RAM.
> Another alternative is to use a layered approach, with a ZONE SERVER directing traffic:
> client m:1 ZONE SERVER 1:m SSRserver
> The ZoneServer would peek at the viewpoint coordinates, look in a map to see which zone/scene/sub-area it's in, then look in a table to see which SSRserver is rendering that zone, and forward the request to that SSR server.
> Using the ZoneServer approach, with each SSR server having a normal 32GB RAM, 4 x (32 - 4) = 4 x 28 = 112GB, which would be sufficient to hold Rotterdam. At $1000 per computer, that would be $4000. It has two advantages over the 1-computer-128GB scenario: a) it's more incrementally scalable to cover larger geographic areas and more layers of data, and b) there are more CPUs and GPUs to render with, allowing it to serve more clients simultaneously.
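>
> A sketch of that lookup in Python; the grid origin, cell size and server names are made-up placeholders -real code would use the actual sub-area boundaries:
>
> def subarea_for(x, y, x0=0.0, y0=0.0, cell=1000.0, cols=11):
>     # map viewpoint coordinates to a grid cell; an 11 x 8 grid covers 88 sub-areas
>     col = int((x - x0) // cell)
>     row = int((y - y0) // cell)
>     return row * cols + col
>
> SUBAREA_TO_SERVER = {i: "ssr%d.internal" % (i % 4) for i in range(88)}  # 88 scenes spread over 4 SSR servers
>
> def route(x, y):
>     return SUBAREA_TO_SERVER[subarea_for(x, y)]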
>
> WHY NOT DISK INSTEAD OF RAM
> On an i5 computer, with the freewrl VRML browser, it takes about 8 seconds to load, parse and mipmap an average Rotterdam scene from HDD. Most of that is parsing VRML into nodes. For interactive purposes 8 seconds is generally too slow, although a friendly interface could let the user know and show a countdown. And if there were many users in different areas, the server would be kept busy loading scenes; it would fall behind, and 8 seconds would blow up into minutes.
> An SSD is much faster than an HDD for reading data; an SSD can read at about 0.5GB/second. However, reading the 10 or 20MB of an average scene is only a small fraction of the 8-second load time (about 0.04 seconds at SSD speeds, versus roughly 0.2 seconds from an HDD), so speeding up that part won't make much difference.
> more..
> It would take about 16 minutes to load, parse the VRML into nodes and mipmap the image textures for all of Rotterdam on one i5 computer, from HDD (hard disk drive).
> Using the ZONE SERVER approach, the more computers working, the less chance of a server bottleneck when multiple clients are requesting simultaneously. And if there are enough computers, each with enough RAM, the whole dataset can sit in RAM anyway.
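>
> A back-of-envelope sketch of that bottleneck, using the ~8 second scene load time measured above; the request counts are just illustrative:
>
> def worst_case_wait(cold_requests, load_s=8.0, servers=1):
>     # if this many uncached scenes are requested at once, the last client waits:
>     return -(-cold_requests // servers) * load_s   # ceiling division x load time
>
> print(worst_case_wait(8, servers=1))   # 64 s with one server
> print(worst_case_wait(8, servers=4))   # 16 s spread over four servers (0 s once everything is in RAM)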
>
> WHY NOT PREDICTIVE ZONES
> With only one client, predicting which zones the client is moving toward, and loading those in anticipation (and dropping the scenes it is moving away from), would make sense. But with multiple clients accessing different areas at the same time, managing that scene anticipation is more complicated.
>
>
>
>>
>> For the FCL (Fastest Client Load) use case, VRML could load all of Rotterdam into 32GB RAM in about 10 minutes, and use LOD nodes to adjust detail so each incoming viewpoint pose snapshot request renders in < 1 second.
>> -Doug
>> more..
>> For the SSR (server-side rendering) use case of Fastest Client Loading (Q. is there a Volker award for Fastest Client Load (FCL)?): if it takes 8 seconds to load an area of Rotterdam like Afrikuunderwuk, and there are 80 areas, it will take 8 x 80 = 640 seconds (about 10.6 minutes) to load the whole city. That's too long to wait.
>> SSR gets one viewpoint pose per request, positions the camera/viewpoint in the scene, renders a frame, takes a snapshot and returns the snapshot image to the client. To reduce the geometry rendered in each frame, it could apply LOD (level of detail) based on distance, so frames render faster. VRML has in-memory LOD nodes for that. So if all of Rotterdam is in RAM, adjusting LOD for a given viewpoint pose should be nearly instantaneous.
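>>
>> A small Python sketch of the distance-based selection an LOD node performs; the level names and range values in the comment are made up:
>>
>> import math
>>
>> def pick_level(viewpoint, center, ranges, levels):
>>     d = math.dist(viewpoint, center)     # distance from the viewpoint pose to the object
>>     for i, r in enumerate(ranges):       # ranges are ascending switch distances
>>         if d < r:
>>             return levels[i]             # nearer -> more detailed level
>>     return levels[-1]                    # beyond the last range -> coarsest level
>>
>> # e.g. levels = ["full_mesh", "simplified_mesh", "box"], ranges = [200.0, 1000.0]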
>> VRML also has GeoLOD nodes -for tiled quads, suitable for terrain- which load and unload from memory on demand. But if an area still has to be loaded, and that takes 8 seconds or spans multiple frames, that may be too long to win any FCL award.
>>
>> If loading GeoLOD from disk, SSD drives (Solid State Drive) may be faster than HDD (Hard Disk Drive)
>> (PC magazine review, HDD vs SSD. Warning: commercialism):
>> http://www.pcmag.com/article2/0,2817,2404258,00.asp
>>
>> more..
>> I did speed up freewrl's loading of the Afrikuunderwuk .jpg textures from 4 minutes to 8 seconds by changing the image-loading software component from MS WIC to gdiplus.
>>
>>
>>>
>>> I found some jpg decompression methods were taking a long time, so the virtual reality scene was slow to start up; preparing the images as uncompressed .bmp sped up loading from 4 minutes to 4 seconds.
>>> Afrikuunderwuk takes up 337MB RAM when loaded in freewrl virtual reality engine -without terrain, just textured buildings.
>>> For 80 areas in Rotterdam that would be 26GB - so a city might fit in 32GB RAM.
>>> -Doug
>>> more..
>>> For SSR -server-side rendering- using the virtual reality engine freewrl, it was taking 4 minutes to load 1/80th of Rotterdam -the Afrikuunderwuk area- with the images in .jpg format, on Win32. Freewrl uses MS Windows Imaging Component (WIC) to decompress jpeg images, and I found this was the slow part. If I pre-convert to the .bmp image format -a native Windows uncompressed RGB 888 format- the image files on disk are 23x bigger -3MB vs 133kB- but the Afrikuunderwuk scene then loads in only 4 seconds (versus 4 minutes).
>>>
>>> There are much faster decompressors -for example, ImageMagick took only a few seconds to convert the 163 .jpg files to .bmp.
>>> In the case of SSR -which runs on the same computer as the image files- the files could be pre-decompressed to .bmp on disk for quick loading.
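>>>
>>> A minimal sketch of that pre-conversion, here using Python and Pillow purely for illustration (the conversions above used ImageMagick / gdiplus); the textures/ path is an assumption:
>>>
>>> import glob
>>> from PIL import Image
>>>
>>> for jpg in glob.glob("textures/*.jpg"):
>>>     # ~23x larger on disk, but no jpeg decode cost at scene load time
>>>     Image.open(jpg).convert("RGB").save(jpg[:-4] + ".bmp")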
>>>
>>>
>>>>
>>>> I found I could render 1/80th of Rotterdam -the AFRICKUUNDERWUK zone- at 9 FPS in virtual reality, as is, on an Intel i5 computer.
>>>> http://dug9.users.sourceforge.net/web3d/temp/freewrl_afrikuunderwuk.png
>>>> - it takes minutes to load (it mipmaps the textures as it would for all vrml scenes)
>>>> - no ground/orthophoto - must have been lost in conversion from .gml to .wrl?
>>>> - no picking
>>>> - takes up 100s of MB RAM
>>>> This might be OK for SSR: with a high-performance server, it can take some time to prepare a dataset for rendering snapshot requests. Then, as each viewpoint pose is received, it would take 1/9th of a second to move the camera, render a frame and return the screenshot.
>>>> -Doug
>>>>
>>>>
>>>>>
>>>>> A reason for wanting per-building texture atlases (versus jumping randomly around several textures to complete a building): OpenGL draws sequentially through the faces made from 3D vertices. If a building is coherent -the faces for the building are all together- then it's faster for OpenGL if there is only one texture to transmit from CPU memory to the GPU to complete all those faces. If it has to jump from texture to texture -loading each one to do a face- to complete a building, it renders much slower.
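>>>>>
>>>>> A Python sketch of the grouping idea; faces, bind_texture and draw_face are hypothetical stand-ins for the real scene data and GL calls:
>>>>>
>>>>> from collections import defaultdict
>>>>>
>>>>> def draw_grouped(faces, bind_texture, draw_face):
>>>>>     by_texture = defaultdict(list)
>>>>>     for tex, verts in faces:             # faces: list of (texture_id, vertex_data)
>>>>>         by_texture[tex].append(verts)
>>>>>     for tex, group in by_texture.items():
>>>>>         bind_texture(tex)                # one CPU -> GPU texture bind...
>>>>>         for verts in group:
>>>>>             draw_face(verts)             # ...then every face that uses it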
>>>>> -Doug
>>>>>
>>>>>> Correction to the vrml editor screenshot:
>>>>>> http://dug9.users.sourceforge.net/web3d/temp/dune_screenshot_vrml_rotterdam_afrikaanderwuk.jpg
>>>>>>>
>>>>>>> The Rotterdam texture atlases appear to be completely random.
>>>>>>> When I open a vrml version of the afrikuunderwuk data in a vrml editor,
>>>>>>> I can pick chunks/sections of the long building in the lower left, and just in the little stretch we see there are 3 different texture files involved. So to draw that one building you'd need to load at least three 1024x1024 image textures. That's sub-optimal.
>>>>>>> I don't see a pattern to how they did the textures.
>>>>>>> -Doug
>>>>>>> more..
>>>>>>> ROTTERDAM> AFRIKUNNDERWUK TEXTURES
>>>>>>> (8 x 19) + 5 ~= 160 texture files
>>>>>>> 160 x 1024x1024x4 bytes = 160 x 4MB = 640MB
>>>>>>> with mipmap levels and related overhead on top, call it roughly 1.3GB
>>>>>>> Yes, that's texture heavy.
>>>>>>>
>>>>>>> Hypotheses for seeming randomness to the texture atlases:
>>>>>>> H0: military requirements: they had to scramble the textures into little bits so an enemy could not reconstruct a coherent base map from the texture images
>>>>>>> H1: the software they used for authoring accidentally randomized them
>>>>>>> H2: there's a pattern / a method to the madness and I just can't see it
>>>>>>>
>>>>>>> A first fixing job might be to reconstruct a coherent base/orthophoto texture:
>>>>>>> Method 1: use googleEarth snapshots
>>>>>>> Method 2: get a vrml browser to render (however slowly) the buildings and ground as given, as seen from above, take snapshots in the vrml browser to get vertical textures, then tile those textures into an orthophoto (a rough sketch of the tiling step follows this list)
>>>>>>> Method 3: ask Rotterdam for their original ortho images
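>>>>>>>
>>>>>>> A rough Python/Pillow sketch of the tiling step in Method 2, assuming square top-down snapshots saved as tile_<col>_<row>.png with equal ground coverage (filenames and grid size are made up):
>>>>>>>
>>>>>>> from PIL import Image
>>>>>>>
>>>>>>> def mosaic(cols, rows, tile_px, name="tile_%d_%d.png"):
>>>>>>>     ortho = Image.new("RGB", (cols * tile_px, rows * tile_px))
>>>>>>>     for c in range(cols):
>>>>>>>         for r in range(rows):
>>>>>>>             ortho.paste(Image.open(name % (c, r)), (c * tile_px, r * tile_px))
>>>>>>>     return ortho
>>>>>>>
>>>>>>> # mosaic(8, 11, 1024).save("rotterdam_ortho.png")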
>>>>>>>
>>>>>>> Once there are coherent orthophoto textures for the ground, then the building texture coordinates would need to be re-done.
>>>>>>> Method A: manually
>>>>>>> Method B: use image correlation and search algorithms to find roof texture in orthophoto, then shift the texture coordinates
>>>>>>> Because there are textures for the sides of the buildings too, and if we desire per-building atlases, then
>>>>>>> Method C: iterate over buildings, and for each building:
>>>>>>> - iterate over shape faces, and for each face
>>>>>>> -- from texture coordinates and original atlas image, extract the pixels for that face, and place in a new per-building atlas
>>>>>>> and in this way automatically build per-building atlases including sides and roof
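>>>>>>>
>>>>>>> A rough Python/Pillow sketch of Method C, assuming each face carries its source atlas filename and [0,1] texture coordinates; the naive strip packing and the ignored v-axis flip are simplifications:
>>>>>>>
>>>>>>> from PIL import Image
>>>>>>>
>>>>>>> def build_atlas(faces, atlas_size=1024):
>>>>>>>     # faces: list of (atlas_filename, [(u, v), ...]) for one building
>>>>>>>     new_atlas = Image.new("RGB", (atlas_size, atlas_size))
>>>>>>>     x_cursor, new_coords = 0, []
>>>>>>>     for filename, uvs in faces:
>>>>>>>         src = Image.open(filename)
>>>>>>>         us = [u * src.width for u, v in uvs]
>>>>>>>         vs = [v * src.height for u, v in uvs]
>>>>>>>         box = (int(min(us)), int(min(vs)), int(max(us)) + 1, int(max(vs)) + 1)
>>>>>>>         patch = src.crop(box)                 # pixels this face actually uses
>>>>>>>         new_atlas.paste(patch, (x_cursor, 0)) # a real packer would wrap rows / handle overflow
>>>>>>>         # re-map this face's texture coordinates into the new per-building atlas
>>>>>>>         new_coords.append([((x_cursor + u * src.width - box[0]) / atlas_size,
>>>>>>>                             (v * src.height - box[1]) / atlas_size) for u, v in uvs])
>>>>>>>         x_cursor += patch.width
>>>>>>>     return new_atlas, new_coords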
>>>>>>>
>>>
>>>
>
>

