From geoweb3dcontest at web3d.org Wed Apr 1 06:57:53 2015
From: geoweb3dcontest at web3d.org (Information and updates on Web3D's Rotterdam city data competition)
Date: Wed, 1 Apr 2015 07:57:53 -0600
Subject: [GeoWeb3DContest] SSR + Rotterdam data
Message-ID:

ZONE SERVER .. Another term: reverse proxy. It means the gateway server turns itself into a client when facing the cluster/grid of internal servers.
-Doug
more..
But it's not necessarily complicated or expensive. In the gateway/load-balancer/reverse-proxy server's request handler you would do code like this:

if (static html) {
    serve directly
} else if (working as SSR server) {
    render snapshot and return response
} else if (working as zone balancer) {
    1. sniff the request for geographic coords
    2. do point-in-poly against a list of (IP:port, poly) entries to determine which SSR to relay to
    3. open a TCP connection using sockets, and forward the request
    4. wait for the response
    5. when it arrives, copy the response received from the SSR into the zone balancer's response, and return the response to the client
}

more...
https://gist.github.com/nolim1t/126991 - dead-simple GET in C via sockets. The zone balancer wouldn't need to form a GET, just use sockets code like this to relay the request to the appropriate SSR.

> > A ZONE SERVER would fall into a category called "application level load balancers".
> -Doug
> more..
> - it would sniff the request body and, in our case, based on geographic location either:
> a) send an HTTP 303 (See Other) response redirecting the client to the specific SSR server (this exposes your inner network), or
> b) create an HTTP client, send a request to the specific SSR server, wait for a response, and forward the response to the final client
> http://www.javaworld.com/article/2077922/architecture-scalability/server-load-balancing-architectures-part-2-application-level-load-balanci.html
>
>> SSR> RAM needed for Rotterdam textured building datasets:
>> 128GB RAM on one server, or 4 servers with 32GB RAM each plus a ZONE SERVER, would be sufficient to hold and render all the Rotterdam textured buildings instantly, with textures reduced to 512x512.
>>
>> -Doug
>> more..
>>
>> After downloading all the Rotterdam textured building .zip files, I calculated I would need about 1GB RAM per sub-area if I reduce the textures to 512x512 from 1024x1024. About half the RAM is for the vectors and half for the textures. I can halve the textures again by changing from 32-bit to 16-bit OpenGL texture format.
>>
>> There are 88 scenes x 1GB per scene = 88GB RAM needed (+ 4 for the OS) = 92GB.
>> Most desktop computers take 32GB max (4 x 8GB DIMMs). But Gigabyte has a few motherboards that will take 8 x 16GB = 128GB, using SDRAM modules usually used in servers. It still adds up to about $2000 for RAM.
>> Another alternative is a layered approach, with a ZONE SERVER directing traffic:
>> client m:1 ZONE SERVER 1:m SSR server
>> The ZoneServer would peek at the viewpoint coordinates, look in a map to see which zone/scene/subarea it's in, then look in a table to see which SSR server is rendering that zone, and send the request to that SSR server.
>> Using the ZoneServer approach, with each SSR server using normal 32GB RAM, 4 x (32-4) = 4x28 = 112GB, which would be sufficient to hold Rotterdam. At $1000 per computer, that would be $4000.
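The zone-dispatch step described above (sniff the request for geographic coordinates, run point-in-polygon against a zone table, relay to the matching SSR server) can be sketched as follows. The zone polygons and backend addresses are invented for illustration, not the actual Rotterdam tiling.

```python
# Sketch of the ZONE SERVER dispatch step: given a viewpoint's ground
# coordinates, find which zone polygon contains it and return the
# IP:port of the SSR server responsible for that zone.
# Zones and addresses below are hypothetical examples.

def point_in_poly(x, y, poly):
    """Ray-casting point-in-polygon test; poly is a list of (x, y) vertices."""
    inside = False
    j = len(poly) - 1
    for i in range(len(poly)):
        xi, yi = poly[i]
        xj, yj = poly[j]
        if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside

# Hypothetical zone table: polygon -> backend SSR server address.
ZONES = [
    ([(0, 0), (100, 0), (100, 100), (0, 100)], "10.0.0.1:8080"),
    ([(100, 0), (200, 0), (200, 100), (100, 100)], "10.0.0.2:8080"),
]

def pick_ssr(x, y):
    """Return the SSR backend for viewpoint (x, y), or None if outside all zones."""
    for poly, addr in ZONES:
        if point_in_poly(x, y, poly):
            return addr
    return None
```

Once `pick_ssr` resolves a backend, the balancer would open a TCP socket to it, forward the request bytes, and copy the response back to the client, as in the handler pseudocode above.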
>> It would have advantages over the 1-computer-128GB scenario: a) it's more incrementally scalable to cover larger geographic areas and more layers of data, and b) there would be more CPUs and GPUs rendering, allowing it to serve more clients simultaneously.
>>
>> WHY NOT DISK INSTEAD OF RAM
>> On an i5 computer, with the freewrl VRML browser, it takes about 8 seconds to load, parse and mipmap an average Rotterdam scene from HDD. Most of that is VRML parsing into nodes. For interactive purposes, 8 seconds is generally too slow, although a friendly interface could let the user know and do a countdown. But if there were a lot of users in different areas, the server would be very busy loading scenes; it would fall behind, and 8 seconds would blow up into minutes with several users.
>> An SSD is much faster than an HDD at reading data -an SSD can load at 0.5GB/second. However, loading an average 10 or 20MB per scene is just a fraction of the 8-second load time, and speeding up that part won't make much difference.
>> more..
>> It would take 16 minutes to load, parse VRML into nodes, and mipmap image textures for all of Rotterdam on one i5 computer, from HDD (hard disk drive).
>> Using the ZONE SERVER approach, the more computers working, the less chance of a server bottleneck when multiple clients are requesting simultaneously. If there are enough computers, and they all have RAM, then the dataset could be in RAM anyway.
>>
>> WHY NOT PREDICTIVE ZONES
>> With only one client, predicting which zones the client is moving toward, and loading those in anticipation (and dropping scenes it is moving away from), would make sense. But if there are multiple clients accessing different areas at the same time, managing scene anticipation is more complicated.
>>
>>> For the FCL (Fastest Client Load) use case, VRML could load all of Rotterdam into 32GB RAM in 10 minutes, and use LOD nodes to adjust detail to render each incoming viewpoint pose snapshot request in < 1 second.
>>> -Doug
>>> more..
>>> For the SSR (server-side rendering) use case of Fastest Client Loading (Q. is there a Volker award for Fastest Client Load (FCL)?): if it takes 8 seconds to load an area of Rotterdam like Afrikuunderwuk, and there are 80 areas, it will take 8 x 80 = 640 seconds (/60s = 10.6 minutes) to load the whole city. That's too long to wait.
>>> SSR gets one viewpoint pose per request, positions the camera/viewpoint in the scene, renders a frame, takes a snapshot and returns the snapshot image to the client. To reduce the geometry rendered per frame, it could do LOD (level of detail) based on distance; this reduces the geometry being rendered, so a frame renders faster. VRML has in-memory LOD nodes for that. So if all of Rotterdam is in RAM, it should be fairly instantaneous to adjust LOD for a given viewpoint pose.
>>> VRML also has GeoLOD nodes -for tiled quads, suitable for terrain- which load and unload from memory on demand. But if an area needs to be loaded, and takes 8 seconds or multiple frames to load, that may be too long to win any FCL award.
>>>
>>> If loading GeoLOD from disk, SSD drives (solid state drives) may be faster than HDD (hard disk drive).
>>> (PC Magazine review, HDD vs SSD. Warning: commercialism):
>>> http://www.pcmag.com/article2/0,2817,2404258,00.asp
>>>
>>> more..
>>> I did speed up freewrl's loading of the Afrikuunderwuk .jpg files from 4 minutes to 8 seconds by changing from MS WIC to the gdiplus image-loading software component.
>>>
>>>> I found some jpg decompression methods were taking a long time, so virtual reality was slow to start up, and preparing the images as uncompressed .bmp sped up loading from 4 minutes to 4 seconds.
>>>> Afrikuunderwuk takes up 337MB RAM when loaded in the freewrl virtual reality engine -without terrain, just textured buildings.
>>>> For 80 areas in Rotterdam that would be 26GB -so a city might fit in 32GB RAM.
>>>> -Doug
>>>> more..
>>>> For SSR -server-side rendering- using the virtual reality engine freewrl, it was taking 4 minutes to load 1/80th of Rotterdam -the Afrikuunderwuk area- with the images in .jpg format, on Win32. Freewrl uses MS Windows Imaging Component (WIC) to decompress JPEG images. I found this was the slow part. If I pre-convert to .bmp image format -a native Windows uncompressed RGB 888 format- the image files on disk are 23x bigger -3MB vs 133kB- but it took only 4 seconds (versus 4 minutes) to load the Afrikuunderwuk scene.
>>>>
>>>> There are much faster decompressors -for example, ImageMagick took only a few seconds to convert 163 .jpg files to .bmp.
>>>> In the case of SSR -which runs on the same computer as the image files- the files could be decompressed to .bmp on disk for quick loading.
>>>>
>>>>> I found I could render 1/80th of Rotterdam -the Afrikuunderwuk zone- at 9 FPS in virtual reality, as is, on an Intel i5 computer.
>>>>> http://dug9.users.sourceforge.net/web3d/temp/freewrl_afrikuunderwuk.png
>>>>> - it takes minutes to load (it mipmaps the textures as it would for all VRML scenes)
>>>>> - no ground/orthophoto -must have been lost in conversion from .gml to .wrl?
>>>>> - no picking
>>>>> - takes up 100s of MB RAM
>>>>> This might be OK for SSR: with a high-performance server, it can take time to prepare a dataset for rendering snapshot requests. Then as a viewpoint pose is received, it would take 1/9th of a second to move the camera, render a frame and return the screenshot.
>>>>> -Doug
>>>>>
>>>>>> A reason for wanting per-building texture atlases (versus jumping randomly around several textures to complete a building): OpenGL draws sequentially through faces made from 3D vertices. If a building is coherent -the faces for a building are all together- then it's faster for OpenGL if there's only one texture to transmit from CPU memory to the GPU to complete all those faces.
If it has to jump from texture to texture -loading each one to do a face- to complete a building, it renders much slower.
>>>>>> -Doug
>>>>>>
>>>>>>> Correction to the VRML editor screenshot:
>>>>>>> http://dug9.users.sourceforge.net/web3d/temp/dune_screenshot_vrml_rotterdam_afrikaanderwuk.jpg
>>>>>>>>
>>>>>>>> The Rotterdam texture atlases appear to be completely random.
>>>>>>>> When I open a VRML version of the Afrikuunderwuk data in a VRML editor: for the long building in the lower left, I can pick chunks/sections of it, and just in the little stretch we see, there are 3 different texture files involved. So to draw that building, you'd need to load at least three 1024x1024 image textures. That's sub-optimal.
>>>>>>>> I don't see a pattern to how they did the textures.
>>>>>>>> -Doug
>>>>>>>> more..
>>>>>>>> ROTTERDAM> AFRIKUUNDERWUK TEXTURES
>>>>>>>> (8 x 19) + 5 ~= 160
>>>>>>>> 160 x 1024x1024x4 = 160 x 4MB = 640MB
>>>>>>>> with mipmapping, let's say 1.3GB
>>>>>>>> Yes, that's texture heavy.
>>>>>>>>
>>>>>>>> Hypotheses for the seeming randomness of the texture atlases:
>>>>>>>> H0: military requirements -they had to scramble the textures into little bits so an enemy could not reconstruct a coherent base map from the texture images
>>>>>>>> H1: the software they used for authoring accidentally randomized them
>>>>>>>> H2: there's a pattern / a method to the madness and I just can't see it
>>>>>>>>
>>>>>>>> A first fixing job might be to reconstruct a coherent base/orthophoto texture:
>>>>>>>> Method 1: use Google Earth snapshots
>>>>>>>> Method 2: get a VRML browser to render (however slowly) the buildings and ground as given, as seen from above. Then do a snapshot in the VRML browser to get a vertical texture. Tile those textures to get an orthophoto.
>>>>>>>> Method 3: ask Rotterdam for their original ortho images
>>>>>>>>
>>>>>>>> Once there are coherent orthophoto textures for the ground, the building texture coordinates would need to be re-done.
>>>>>>>> Method A: manually
>>>>>>>> Method B: use image correlation and search algorithms to find the roof texture in the orthophoto, then shift the texture coordinates
>>>>>>>> Because there are textures for the sides of the buildings too, and if we want per-building atlases:
>>>>>>>> Method C: iterate over buildings, and for each building:
>>>>>>>> - iterate over shape faces, and for each face:
>>>>>>>> -- from the texture coordinates and the original atlas image, extract the pixels for that face, and place them in a new per-building atlas
>>>>>>>> In this way, automatically build per-building atlases including sides and roof.
>>
>> _______________________________________________
>> GeoWeb3DContest mailing list
>> GeoWeb3DContest at web3d.org
>> http://web3d.org/mailman/listinfo/geoweb3dcontest_web3d.org

From geoweb3dcontest at web3d.org Wed Apr 1 22:49:24 2015
From: geoweb3dcontest at web3d.org (Information and updates on Web3D's Rotterdam city data competition)
Date: Thu, 2 Apr 2015 05:49:24 +0000
Subject: [GeoWeb3DContest] How to submit a contribution to the Web3D City Modelling Competition
Message-ID:

How to submit a contribution to the Web3D City Modelling Competition
http://web3d.org/mailman/listinfo/geoweb3dcontest_web3d.org

In order to get comparable contributions, a short description of your contribution should be submitted. It should contain:

Submitting Authors / Developers
Name, email, affiliation
Student (PhD, Master/Bachelor) at a German University (yes/no)
Young German Startup Company (yes/no)

Please highlight your contribution (max. 500 words):
- How did you implement the client? WebGL, using a JavaScript lib such as x3dom.js, XML3D.js, three.js, etc., or just images due to server-side rendering?
- Do you support a tiling scheme? What is the size of the tiles?
Do you support hierarchical tiling?
- How do you transfer the data from the server to the client? Please give a short description of the streaming protocol (if applicable).
- Which browser (version) did you use to test the application?
- Did you include some special data sources / special features you would like to highlight?

Link to the website of your contribution
- You can also submit a short video

Submission: Deadline 15.4.2015 11:59 PM
Submit document to volker.coors at hft-stuttgart.de

-----------------------------------------------------------------------------
Prof. Dr. Volker Coors
Studiendekan Informationslogistik
Hochschule für Technik Stuttgart
University of Applied Sciences
Fakultät Vermessung, Informatik, Mathematik
Schellingstr. 24
D-70174 Stuttgart
fon: +49 (0)711 8926 2708
+49 (0)711 8926 2606 (Sekretariat)
email: Volker.Coors at hft-stuttgart.de
www.coors-online.de

________________________________________
From: GeoWeb3DContest [geoweb3dcontest-bounces at web3d.org] on behalf of "Information and updates on Web3D's Rotterdam city data competition" [geoweb3dcontest at web3d.org]
Sent: Friday, 20 March 2015 14:32
To: geoweb3dcontest at web3d.org
Subject: Re: [GeoWeb3DContest] GML + SSR

> We expect a short paper / extended abstract to highlight the approach and a link to the implementation. I will create a small guideline on what we expect in the next days.
>
> Deadline is extended to 15 April.

OK -that may help Federico. I'm too far from anything myself, except SSR as an idea. Federico: if you're 'going for it', let me know if there's something I can do to help you -I'll have a few days. How about if I rework some textures somehow?

INSIGHT: I think applications like Google Earth and GML renderers have an advantage over VRML renderers: VRML has to re-draw everything on every frame because time-based animations can be moving things. Google Earth and GML renderers have no time-based animations -only navigation and picking.
So when navigating they redraw all the geometry on every frame. But when picking (i.e. mouse-over) they can just render the highlighted geometry over a transparent backbuffer, and composite it over the last-rendered navigation frame. So while VRML does both geometry rendering and picking on every frame, Google Earth and GML renderers do either navigation+geometry or picking+highlight. That allows them to render faster.

> The financial support depends on the flight costs for obvious reasons.

Flight costs usually depend on distance. OK, makes sense.

> The main reason for the competition is that we would like to see people using 3D city models as a benchmark (not only the Stanford bunny), to be able to compare web-based visualization of geospatial data. Unfortunately, 3D city models are usually not freely available. But the city of Rotterdam made their model open data, so everyone can make use of it.
>
>> Also, btw, how do we actually apply to the contest? Do we have to
>> prepare a presentation document?
>
> We expect a short paper / extended abstract to highlight the approach and a link to the implementation. I will create a small guideline on what we expect in the next days.
>
> Deadline is extended to 15 April.
>
> Regards
> Volker
>
> -----------------------------------------------------------------------------
> Prof. Dr. Volker Coors
> Studiendekan Informationslogistik
> Hochschule für Technik Stuttgart
> University of Applied Sciences
> Fakultät Vermessung, Informatik, Mathematik
> Schellingstr. 24
> D-70174 Stuttgart
> fon: +49 (0)711 8926 2708
> +49 (0)711 8926 2606 (Sekretariat)
> fax: +49 (0)711 8926 2556
> email: Volker.Coors at hft-stuttgart.de
> www.coors-online.de
>
> -----Ursprüngliche Nachricht-----
> Von: GeoWeb3DContest [mailto:geoweb3dcontest-bounces at web3d.org] Im Auftrag von Information and updates on Web3D's Rotterdam city data competition
> Gesendet: Mittwoch, 18.
März 2015 16:09
> An: geoweb3dcontest at web3d.org
> Betreff: Re: [GeoWeb3DContest] GML + SSR
>
>> Doug,
>>
>> Our current renderer works at interactive frame rates, but without textures.
>> It uses a Java applet and JOGL, and a custom format for streaming geometry data from a servlet parsing the CityGML.
>
> OK, I know little of Java. Thinking of SSR:
> Q1. Is there a way to run the applet as a standalone app -no web browser?
> Q2. Can it talk on ports?
> Q3. Is there a way to do a screenshot of the app and save it to disk?
> If so, then one idea is to run it on your data server -where the data is all handy- have it pre-load all the data, and run as a scene. But once per frame it would check for messages on a port. If there's a message, it would see if it's a camera pose, and if so it would navigate to that pose, do a snapshot, and send the snapshot back on the port. There would be glue code for the server that would manage sessions -sending the snapshot back to the correct requester.
> If you get 60FPS on the server, then you could serve ~O(60 requests/s) with one server machine.
>
>> http://iscope.graphitech-projects.com/indexTest.html
>> To load Rotterdam press ctrl+0 -> testing -> Load Custom Pilot -> Rotterdam, and navigate to the area.
>
> On the Testing tab, Rotterdam always 'whitescreens' (applet crashes?) for me (Windows 8.1, i5 with Intel graphics, Chrome, not sure what Java version), and Chicago and Trento never seem to load -the world is all I see.
> On the Pilots tab, I visited CLES and TRENTO -they look great. Nice work on the software and datasets. I can get the orthophoto and 3D terrain. I see a color, or color index, when picking -I gather that's what you would send to the database as an ID, or use to look up an ID, to get info on an element.
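The color-index picking mentioned above is usually done by encoding each feature's integer ID into an RGB value, rendering the scene flat-colored into an offscreen buffer, and reading back the pixel under the cursor. A minimal sketch of the ID-to-RGB mapping (the 24-bit packing is a common convention, not necessarily what the iScope client uses):

```python
# Encode a feature ID into an RGB triple for GPU color picking, and
# decode the pixel read back from the picking buffer into the ID.
# 24 bits allows ~16.7 million distinct features per frame.

def id_to_rgb(feature_id):
    if not 0 <= feature_id < (1 << 24):
        raise ValueError("feature ID must fit in 24 bits")
    return ((feature_id >> 16) & 0xFF, (feature_id >> 8) & 0xFF, feature_id & 0xFF)

def rgb_to_id(r, g, b):
    return (r << 16) | (g << 8) | b
```

The decoded ID is then what gets sent to the database (or looked up in a table) to fetch the GML attributes for the picked element.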
>> Also there are other datasets:
>> http://iscope.graphitech-projects.com
>>
>> We do support some sort of texturing, but it is very specific to this project and it consists in applying an orthophoto on top of building roofs.
>>
>> Also, btw, how do we actually apply to the contest? Do we have to prepare a presentation document?

> I have no idea. It's very vague. The prize is 'financial assistance' to go to Crete. But $1/EUR 1 could be considered assistance. So between vague rules, a vague prize and vague entry procedures, I suspect there's no real/sincere contest. Hypotheses:
> H0: it's an attempt to breathe life into a dying 'sport' by making it seem somehow alive with exciting contests.
> H1: it's 'rigged' and the winner has already been decided.
> H2: the prize is vague out of necessity because the financial viability of the Crete conference is in question.
>
> I'm not here to enter or win myself -no time for Crete- just to learn a bit. I did 15 years of photogrammetry including DEM/DTM, orthophoto, satellite, cadastral, GIS software and projects, then 'threw in the towel' and switched over to game programming and virtual reality. This topic seems to recombine the two a bit -some GIS, some orthophoto, some virtual reality.
>
>> Federico Devigili
>> Analyst and Developer
>>
>> Fondazione Graphitech
>> Via Alla Cascata, 56/C - 38123 Trento - Italy
>> Office: +39 0461 283397
>> Fax: +39 0461 283398
>> Skype: fededeviwork
>> Email: federico.devigili at graphitech.it
>> Website: http://www.graphitech.it
>>
>> On Tue, Mar 17, 2015 at 7:54 PM, Information and updates on Web3D's Rotterdam city data competition wrote:
>>
>> FD,
>>
>> Thanks for your comments.
>> Likewise, my impression of the Rotterdam data: texture heavy.
>> When I look today in more detail at the Rotterdam AFRIKUUNDERWUK /appearances images, it looks like they may be already mipmapped.
Most VRML viewers do their own mipmapping of textures at runtime, so these would end up double-mipmapped.
>>
>> Perhaps whoever created the data could have converted the textures into equivalent vector data, perhaps at a loss of some detail at some LODs, and a gain at others.
>>
>> If you have a client that's performing well on a high-performance computer, one idea is to convert it into an SSR (server-side renderer), like I did with freewrl (a VRML client). Let me know -I can give you some hints/tips.
>>
>> -Doug
>> more..
>>
>> Here's my VRML server-side renderer URL; I can't tell if it works from inside my firewall, and I will shut it down in a few hours.
>> http://ssrserver.ddns.net:8080/SSRClient.html
>> The idea is to have all the GPU and CPU power on a server. Then mobile gets something very light.
>> x there's no click/picking implemented yet, just a walk navigation
>> - but it would take a small additional effort to get a picking click
>> The client and server-glue code are checked into the develop branch of freewrl:
>> http://sourceforge.net/p/freewrl/git/ci/develop/tree/freex3d/src/SSR/
>> and it uses libmicrohttpd -a C web server.
>>
>>> Our biggest problem in the development of a client capable of rendering the whole city comes from the textures.
>>> Rendering the whole city at LOD 2 on an average PC is doable (mobile? not so much), and once you load the 3D model into GPU memory it renders fast (we get 30+ FPS with a mid-range GPU rendering the whole city). The trick here is to render big chunks of geometry at once, thus lowering the number of draw calls; it still needs a considerable amount of GPU memory.
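The "big chunks of geometry at once" approach amounts to merging many per-building meshes into one vertex/index buffer, so the renderer issues a single draw call instead of one per building. A schematic sketch, with plain Python lists standing in for GPU buffers (real code would use typed arrays and an actual graphics API):

```python
# Merge several small meshes into one batch: concatenate the vertex
# arrays and re-offset each mesh's indices so they address the combined
# buffer. One merged batch = one draw call instead of one per building.

def merge_meshes(meshes):
    """meshes: list of (vertices, indices); returns (all_vertices, all_indices)."""
    all_vertices, all_indices = [], []
    for vertices, indices in meshes:
        base = len(all_vertices)  # index offset for this mesh in the combined buffer
        all_vertices.extend(vertices)
        all_indices.extend(i + base for i in indices)
    return all_vertices, all_indices

# Two hypothetical triangles standing in for two separate "buildings":
tri_a = ([(0, 0, 0), (1, 0, 0), (0, 1, 0)], [0, 1, 2])
tri_b = ([(2, 0, 0), (3, 0, 0), (2, 1, 0)], [0, 1, 2])
verts, idx = merge_meshes([tri_a, tri_b])
```

The trade-off is the one Federico notes: the merged buffer must fit in GPU memory all at once, and per-building culling or picking then needs extra bookkeeping (e.g. index ranges per building).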
>>> Making it fully interactive requires some tricks: you either need to hold the whole geometry hierarchy CPU-side to do ray-casting/picking, or use color picking on the GPU at feature level -the whole geometry is rendered into an additional surface buffer where each surface has a unique color, from which you can get the feature ID and access the GML data (client- or server-side).
>>>
>>> The problem is the textures. First, having a huge number of textures is extremely inefficient for the GPU, but even if you merge everything into big texture atlases you still have just too much data. The solution here is to implement something like id Software's MegaTexture http://en.wikipedia.org/wiki/MegaTexture, and some sort of aggressive LOD technique. So you only load into the GPU the textures that are actually used in rendering the image, AND you do not load the full-resolution texture but only a scaled-down optimized version of it. When a geometry is close to the camera, the full-resolution texture is loaded and swapped with the old one.
>>>
>>> FD
>>>
>>> Federico Devigili
>>> Analyst and Developer
>>>
>>> Fondazione Graphitech
>>> Via Alla Cascata, 56/C - 38123 Trento - Italy
>>> Office: +39 0461 283397
>>> Fax: +39 0461 283398
>>> Skype: fededeviwork
>>> Email: federico.devigili at graphitech.it
>>> Website: http://www.graphitech.it
>>>
>>> On Mon, Mar 16, 2015 at 9:32 PM, Information and updates on Web3D's Rotterdam city data competition wrote:
>>>
>>> One idea for GML is to use it as a format for transmitting data that might normally be in a GIS database -the so-called attribute tables. Once transmitted, it would be put back into geometry files and database attribute tables.
>>>
>>> If trying to use Web3D -VRML or X3D- to render, one idea is to put the ID into the metadata for the Web3D node.
Then, when searching for relationships, the ID for a picked item would be used to search GIS-like database attribute tables. This might allow skipping some or all of the CityGML-specific format -instead using X3D+database to implement/realize the geometry and relationships.
>>>
>>> Another option is a 2-, 3- or 4-layer system: database, GML server, Web3D server, and client, with a chain of communications between the layers.
>>> -Doug
>>>
>>>> The thing to work out, and make public, would be the web-service 'dialog' between client and server, such as how to ask for an overview LOD0 map of the area to get started, then how to tell the server the client's new viewpoint pose -3 rotations and 3 translations- and ask for a screenshot rendering. Or pose + xy click point, and ask for information on the item clicked.
>>>> - then any compatible client could work with any compatible server
>>>> -Doug
>>>> more..
>>>> I worked out an SSR for virtual reality recently, for freewrl, and the client and C web server glue code are checked into freewrl as 'MIT or equivalent' license, meaning 'have fun'.
>>>>
>>>> I was able to display a bit of Rotterdam in VRML, using libcitygml's citygml2vrml.exe converter, followed by Chisel's CONDENSE > Create DEF/USE (otherwise Vivaty/freewrl run out of memory re-mallocing textures), then wrapping the result in a Transform with the negative UTM/3TM coordinates of a point, so as to get the view frustum within near/far range of the geometry (vs being at the Earth's equator with the default viewpoint, and not being able to see the Netherlands).
>>>>
>>>> Then freewrl renders slowly -1.2 FPS- and that's just for one subdivision of Rotterdam, mostly due to what I would call 'heavy textures': 1024x1024x4 textures. But I do see 3D buildings.
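The Transform trick described above -wrapping the scene with the negative UTM coordinates of a local point- is the standard fix for large projected coordinates: subtract a local origin so vertex values stay small and the view frustum sits near the data. A sketch, with made-up coordinates (not actual Rotterdam data):

```python
# Re-center geometry given in large projected (e.g. UTM) coordinates by
# subtracting a chosen local origin, so 32-bit float vertices and the
# near/far planes behave near the data instead of near the equator.

def recenter(vertices, origin):
    """Shift (x, y, z) vertices so that `origin` becomes (0, 0, 0)."""
    ox, oy, oz = origin
    return [(x - ox, y - oy, z - oz) for (x, y, z) in vertices]

# Hypothetical building corners in metres:
utm_vertices = [(92650.0, 436890.0, 3.5), (92655.0, 436895.0, 12.0)]
local = recenter(utm_vertices, (92650.0, 436890.0, 0.0))
```

In VRML terms this is equivalent to a Transform whose translation is the negative of the chosen origin, with the geometry left untouched inside it.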
>>>> I got Aristoteles to run a bit, but it showed no textures, rendered 'zingers' -lines from each piece of geometry to a few arbitrary points- and was hard to navigate and 'enjoy'.
>>>>
>>>> I tried building an SVN copy of Aristoteles but it doesn't build right out of the gate. Many of these GML software projects were abandoned about 5 years ago. I suspect GML was pushed by geography professors and a few cities trying to break the ESRI stranglehold. That's hard, because most cities don't mind ESRI, have healthy budgets for ESRI licenses, and have no interest in anything unconventional/free. The general public doesn't need/want detailed GIS relationships. So who needs GML is the remaining question.
>>>> ESRI black-box server-side rendering of Rotterdam:
>>>> http://www.esri.com/software/cityengine/industries/rotterdam
>>>>
>>>> When I look at Google Maps, and remember back to MS Virtual Earth, they used server-side rendering. What's changed: with WebGL now readily available, it should be possible to adjust the viewpoint pose more freely in 3D, including horizontal street views.
>>>>
>>>> In freewrl's Web3D SSR, I do a 2-step dialog with the server: first I send where the client navigated to. The server does collision and gravity adjustments to that pose, and sends it back. When the client gets the adjusted pose, it asks for a snapshot in the second step. The server renders the snapshot and sends it back, and the client pastes it over the navigation grid.
>>>>
>>>>> One idea is to use server-side rendering (SSR) in the following way:
>>>>> - render GML LOD0 or 1 on the client side in real time, during navigation
>>>>> - once the viewpoint is pointed where you want it, lift the mouse button/touch. The client then sends the viewpoint pose to the server. The server renders that viewpoint using LOD3 or 4, and sends back an overlay image.
>>>>> - you can then click on the overlay image. The client sends the viewpoint, and the point of clicking, back to the server.
>>>>> - the server constructs a pick ray and picks the LOD3/4 geometry, gets the IDs, looks up info in a database and sends it back to the client.
>>>>>
>>>>> Benefit: the client stays 'light', and the server does all the heavy work.
>>>>> - the client needs a bit of WebGL for rendering LOD0 and/or 1, and for navigating
>>>>>
>>>>> -Doug

_______________________________________________
GeoWeb3DContest mailing list
GeoWeb3DContest at web3d.org
http://web3d.org/mailman/listinfo/geoweb3dcontest_web3d.org
-------------- next part --------------
A non-text attachment was scrubbed...
Name: submission.docx
Type: application/vnd.openxmlformats-officedocument.wordprocessingml.document
Size: 20946 bytes
Desc: submission.docx
URL: