[Korea-chapter] [X3D] minutes from Web3D Korea chapter meeting (revised)

Johannes Behr johannes.behr at igd.fraunhofer.de
Mon Dec 21 09:36:52 PST 2009


Hi,

I still don't see the need for specific LiveCamera and LiveViewpoint
nodes or a specific AR background.

For basic AR you just need the following (a rough sketch of how these
pieces combine follows the list):

- A generic method to stream data (e.g. images) in and out
(http://www.instantreality.org/documentation/nodetype/IOSensor/)

- A simple Viewpoint extension (principalPoint)
(http://www.instantreality.org/documentation/nodetype/PerspectiveViewpoint/)

- A generic Background node that allows you to place and distort an image
(http://www.instantreality.org/documentation/nodetype/PolygonBackground/)
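
A minimal sketch of how these pieces could fit together; the node type
value, field names and principalPoint usage below are only illustrative,
please check the linked documentation pages for the exact interfaces:

  <!-- stream camera frames into the scene; 'VideoSource' and the
       'frame' output slot are placeholders for the real IOSensor setup -->
  <IOSensor DEF='Camera' type='VideoSource'>
    <field accessType='outputOnly' name='frame' type='SFImage'/>
  </IOSensor>

  <!-- draw the camera image, placed and distorted as needed,
       behind the 3D content -->
  <PolygonBackground>
    <PixelTexture DEF='CameraImage' containerField='texture'/>
  </PolygonBackground>

  <!-- calibrated view; principalPoint (assumed field name) would
       come from camera calibration -->
  <Viewpoint position='0 0 0' principalPoint='0 0'/>

  <!-- push each new frame into the background texture -->
  <ROUTE fromNode='Camera' fromField='frame'
         toNode='CameraImage' toField='image'/>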

best regards
johannes


> 
> 4.  Dr. Gun Lee presented an update on his work in Mixed Reality (MR)
> which spans the spectrum from
> - Real environments (real world)
> - Augmented Reality (AR)
> - Augmented Virtuality (AV)
> - Virtual Environments (VE)
> 
> Many AR applications are now being distributed over the Internet with
> commercial and marketing tie-ins.  There are effectively no standards
> in this area.  There are some user groups for popular software (for
> example AR Toolkit) but there do not appear to be any standardization
> efforts.  X3D and VRML are used by AR loaders, as are Collada and other
> formats.  Device setup is often closely tied to hardware drivers and
> specific operating systems or specific codebases, making portability
> and interoperability difficult.  Together these factors make a good
> rationale for using X3D to implement AR capabilities.
> 
> He has done much work in this area, summarized in the slideset.
> Specific X3D extensions include
> - LiveCamera node
> - Background and/or BackgroundTexture node modifications
> - LiveViewpoint
> 
> We talked about different types of cameras (planar/cylindrical/spherical
> images) that might be used.  The options probably deserve further analysis
> as new products and image-processing techniques become widely available.
> 
> Wondering whether a combination of Background/TextureBackground plus a
> simple Billboard holding a 6-sided IndexedFaceSet might handle most
> combinations of interest?  I suspect that there is sufficient generality
> in these already-existing nodes if we map and preprocess LiveCamera imagery
> as an X3DTexture node correctly.  In other words, we do not want to create
> new nodes or different solutions for X3D if a good adaptation of existing
> functionality can be found.  If there are many variations in the X3D nodeset,
> then it becomes very difficult for browsers to implement everything correctly.
> A tradeoff for a new node might be considered acceptable if it greatly
> simplifies work for authors, without hurting browser builders.  "Less is more."
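> 
> A minimal sketch of that idea using only existing nodes; the MovieTexture
> here is just a stand-in for whatever node ends up delivering the live
> camera imagery, and the quad sizing is illustrative:
> 
>   <Billboard axisOfRotation='0 0 0'>
>     <Shape>
>       <Appearance>
>         <!-- placeholder for live camera imagery mapped as an X3DTexture -->
>         <MovieTexture url='"cameraFeed.mpg"' loop='true'/>
>       </Appearance>
>       <!-- single screen-aligned quad; a 6-sided version is the same
>            pattern with additional faces -->
>       <IndexedFaceSet coordIndex='0 1 2 3 -1'>
>         <Coordinate point='-4 -3 0  4 -3 0  4 3 0  -4 3 0'/>
>         <TextureCoordinate point='0 0  1 0  1 1  0 1'/>
>       </IndexedFaceSet>
>     </Shape>
>   </Billboard>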
> 
> Of course these are not design decisions being written here, rather they
> are design guidelines that help new technology eventually get approved as
> X3D additions.
> 
> Gun Lee also pointed out that Layers might be of use, particularly
> for display in an immersive multi-sided environment like a CAVE.  Also
> worth exploring.  (Unfortunately, Layer/Layout support among X3D players
> is not yet strong; this is a prerequisite area for implementation and
> evaluation before approving X3D v3.3.)  Currently implemented support is
> maintained at
> http://www.web3d.org/x3d/wiki/index.php/Player_support_for_X3D_components
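> 
> A minimal Layering sketch of that idea; layer numbering, ordering and the
> per-layer content here are illustrative, see the proposed Layering
> component for the exact semantics:
> 
>   <LayerSet order='1 2'>
>     <Layer>
>       <!-- layer 1: the shared immersive scene -->
>       <Inline url='"arScene.x3d"'/>
>     </Layer>
>     <Layer>
>       <!-- layer 2: overlay content such as annotations or a HUD -->
>       <Inline url='"overlay.x3d"'/>
>     </Layer>
>   </LayerSet>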
> 
> We talked for some time about whether a LiveViewpoint node might need
> both a projection matrix and a rotation.  Perhaps the projection matrix
> is not needed?  Projection matrices are usually handled internally by
> browsers, and a camera input is likely to provide only orientation (not
> a projection matrix); an author might then display the moving texture
> onto a properly sized, oriented and registered quadrilateral within the
> scene.  Something to consider.
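> 
> A small sketch of the orientation-only variant; 'TrackerSensor' and its
> 'orientation' output are hypothetical stand-ins for whatever node delivers
> the camera pose:
> 
>   <Viewpoint DEF='LiveView' position='0 0 0'/>
>   <ROUTE fromNode='TrackerSensor' fromField='orientation'
>          toNode='LiveView' toField='set_orientation'/>
> 
> The moving texture itself could then go onto a registered quad like the
> Billboard sketch above.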
> 
> Going forward, some suggestions:
> 
> Wondering about consistency of this work with other approaches proposed
> by Fraunhofer (and perhaps others) in past Web3D Symposium proceedings?
> 
> Wondering if there are any overlapping goals or capabilities defined
> in the Projective Texture Mapping (PTM) work?  There are some similar
> themes, so a small amount of co-design might be productive.  Whether
> the result is yes or no, the answer is interesting because it helps us
> to convince others that the most distilled and effective recommendations
> have been achieved.
> 
> Another effort that needs to be better documented is how various browsers
> handle different devices.  There is a tradeoff in how much detail the
> specification should capture, since device details are usually handled
> by browsers.
> 
> We should try to compare your nodes to the Camera nodes proposed by NPS
> at the Web3D 2009 Symposium.
> http://www.web3d.org/x3d/content/examples/Basic/development/
> 	(then follow the Camera Examples link; the accompanying
> 	paper is also provided there)
> 
> Wondering whether you have looked at the relative difficulty for others
> to implement these.  One approach is to build a ProtoDeclare that includes
> a Script (and perhaps some native hardware-driver code) to simplify the
> construction of alternative implementations.
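> 
> A sketch of that packaging approach; the node name, field names and
> script url below are illustrative only:
> 
>   <ProtoDeclare name='LiveCamera'>
>     <ProtoInterface>
>       <field accessType='outputOnly' name='image' type='SFImage'/>
>       <field accessType='outputOnly' name='orientation' type='SFRotation'/>
>     </ProtoInterface>
>     <ProtoBody>
>       <!-- the Script wraps whatever browser- or driver-specific
>            capture code is available -->
>       <Script directOutput='true' url='"LiveCameraDriver.js"'>
>         <field accessType='outputOnly' name='image' type='SFImage'/>
>         <field accessType='outputOnly' name='orientation' type='SFRotation'/>
>         <IS>
>           <connect nodeField='image' protoField='image'/>
>           <connect nodeField='orientation' protoField='orientation'/>
>         </IS>
>       </Script>
>     </ProtoBody>
>   </ProtoDeclare>
> 
>   <!-- authors then simply instantiate it and route from its outputs -->
>   <ProtoInstance name='LiveCamera' DEF='Camera'/>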
> 
> We discussed how the path to resolve all of these various combinations
> might be to create use cases, with corresponding X3D examples, that show
> how to use these nodes correctly.  If any credible use case cannot be
> shown with the X3D extensions, then the challenges are not yet properly
> solved.  Once we are close, then codebase variations to refine the final
> result are usually not too complex.
> 
> And so it appears that some X3D example scenes, with corresponding
> snapshots or videos, might also be of use here.
> 
> Finally, once again, the X3D group is being presented with some powerful
> new capabilities that are worth refining and considering for inclusion in
> X3D v3.3.
> 
> Of related interest:  "Web3D to Showcase X3D at the AR DevCamp" article at
> http://www.web3d.org/news/permalink/web3d-to-showcase-x3d-at-the-ar-devcamp
> 
>> “The first Augmented Reality Development Camp (AR DevCamp) will be held
>> on Saturday December 5, 2009 at the Hacker Dojo in Mountain View CA and
>> simultaneously in New York City and around the world!
>> After nearly 20 years in the research labs, Augmented Reality is taking
>> shape as one of the next major waves of Internet innovation, overlaying
>> and infusing the physical world with digital media, information and
>> experiences. AR must be fundamentally open, interoperable, extensible,
>> and accessible to all, so that it can create the kinds of opportunities
>> for expressiveness, communication, business and social good on the web
>> and Internet today. As one step towards this goal of an Open AR web,
>> the AR DevCamp 1.0, will have a full day of technical sessions and
>> hacking opportunities in an open format, BarCamp style. For more
>> information about this camp please visit www.ardevcamp.org.
>> 
>> Web3D Consortium will be showcasing their open standards X3D
>> implementations for Augmented Reality at this camp.  Join us and see
>> how you can use X3D today for your Augmented Reality needs.
> 
> ==================================
> 
> 5.  Dr. Byounghyun Yoo presented updates on use of X3D Earth to provide
> a digital globe infrastructure.  This work explains and continues to
> extend many achievements he accomplished as a postdoctoral researcher
> and then Web3D Fellow at NPS during 2007-2008.
> 
> 	http://web3d.org/about/fellowship
> 
> Particular highlights of this talk showed how the X3D Earth effort meets the
> larger requirements of a Digital Earth infrastructure proposed by Al Gore
> a decade ago, overcoming limitations on multiplicity, interoperability,
> openness and equity that constrain other geobrowsers.
> 
> The visual-debugging video that showed artificial tiling algorithms is
> very powerful and has influenced us to add author-visualization assists
> for various nodes wherever possible in the X3D-Edit tool.
> 
> ==================================
> 
> 6.  Dr. Don Brutzman presented 3 talks on the following topics
> 
> - X3D progress and prospects 2009
> - HTML5 and X3D:  presented at World Wide Web Consortium (W3C)
> 	Technical Plenary and Advisory Committee (TPAC) November 2009
> - X3D-Edit Update:  continued improvements in authoring X3D
> 
> ==================================
> 
> 7.  Pranveer Singh of KAIST presented his progress in Parametric Macro
> translation of CAD files into a neutral XML file format which captures
> the CAD authoring history of model creation.  The results then go to
> KAIST's Transcad (which licenses ACIS and HOOPS) to create facet data,
> and then through various open-source polygon reduction tools (TETGEN
> and MeshLab) to produce X3D.
> 
> Are there open-source alternatives to ACIS or HOOPS?  Apparently not.
> However, if the X3D boundary representation (B-Rep) specification were
> released, the neutral XML file might directly produce X3D, which can
> then be decimated and cleaned up.
> 
> Of further interest is that if the final results are in X3D B-rep form,
> a reverse translation might be possible back into the neutral XML
> parametric-macro format, which could be further run in reverse to
> regenerate CAD files.
> 
> The polygon refactoring and reduction steps using TETGEN and MeshLab
> are fairly automatic using Pranveer's tool, which he demonstrated.
> This can be released and is likely of broader interest.
> 
> This work looks increasingly important with each successive
> improvement.  I think that the X3D CAD working group would be
> well advised to consider this as an extension to the X3D CAD
> Component, which could unlock interchange between X3D and a variety
> of CAD formats, perhaps even bidirectionally.
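> 
> For reference, the faceted output would presumably land in the existing
> X3D CADGeometry component structure, roughly as follows (the assembly,
> part and face names are illustrative):
> 
>   <CADAssembly name='ExampleAssembly'>
>     <CADPart name='Bracket'>
>       <CADFace name='TopFace'>
>         <Shape containerField='shape'>
>           <IndexedFaceSet coordIndex='0 1 2 -1'>
>             <Coordinate point='0 0 0  1 0 0  0 1 0'/>
>           </IndexedFaceSet>
>         </Shape>
>       </CADFace>
>     </CADPart>
>   </CADAssembly>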
> 
> ==================================
> 
> 8.  Dr. Kwan-hee Yoo presented conceptual ideas on medical visualization
> that focused on 2D and 3D visualizations, particularly with respect to
> how X3D might work well with the DICOM standard.
> 
> We looked at the Medical Working Group proposed specification changes:
> 
> http://www.web3d.org/membership/login/memberwiki/index.php/X3D_v3.3_Specification_Changes#Medical_working_group
> - MedicalInterchange component
> - Texturing3D component additions
> - Volume rendering (a rough sketch follows this list)
> - Annotation component
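> 
> A rough sketch of the volume-rendering piece as currently proposed; the
> DICOM-derived volume file name is only a placeholder:
> 
>   <VolumeData dimensions='0.25 0.25 0.25'>
>     <ImageTexture3D containerField='voxels' url='"ctScan.nrrd"'/>
>     <OpacityMapVolumeStyle containerField='renderStyle'/>
>   </VolumeData>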
> 
> Kwan-Hee's slides might help make an excellent justification for the
> MedicalInterchange component.
> 
> He will look at these changes to see if they can express his exemplars.
> 
> We hope to schedule a future teleconference with Medical Working Group
> members on these topics.
> 
> ==================================
> 
> 9.  Dr. Kwan-hee Yoo presented further conceptual ideas on the potential
> benefits of digital textbooks utilizing X3D.  He gave a digital textbook
> demo example with his slides, based on the Microsoft WPF format.  He also
> listed a set of functions that could be used in this application area.
> 
> He then gave an X3D demo that illustrated the use of many of these
> functional areas:  page turning, book rotation, zooming.  The source
> code was provided.  Other examples included X3D mixing of Korean text,
> imagery, 3D TouchSensors and video.  BS Contact did an excellent job
> on all functions, though Korean text characters appeared to be rendered
> as individual images (rather than geometry) and so were not always clear.
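> 
> A minimal page-turning sketch using standard nodes (geometry, texture url
> and timing values are illustrative): touching the page rotates it about
> its spine.
> 
>   <Transform DEF='Page' center='-1 0 0'>
>     <Shape>
>       <Appearance>
>         <ImageTexture url='"page1.png"'/>
>       </Appearance>
>       <IndexedFaceSet solid='false' coordIndex='0 1 2 3 -1'>
>         <Coordinate point='-1 -1.4 0  1 -1.4 0  1 1.4 0  -1 1.4 0'/>
>       </IndexedFaceSet>
>     </Shape>
>     <TouchSensor DEF='Touch'/>
>   </Transform>
>   <TimeSensor DEF='Clock' cycleInterval='1'/>
>   <OrientationInterpolator DEF='Turn' key='0 1'
>       keyValue='0 1 0 0  0 1 0 -3.14159'/>
>   <ROUTE fromNode='Touch' fromField='touchTime'
>          toNode='Clock' toField='set_startTime'/>
>   <ROUTE fromNode='Clock' fromField='fraction_changed'
>          toNode='Turn' toField='set_fraction'/>
>   <ROUTE fromNode='Turn' fromField='value_changed'
>          toNode='Page' toField='set_rotation'/>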
> 
> He is also looking at SVG possibilities, which have also been proposed:
> http://www.web3d.org/membership/login/memberwiki/index.php/X3D_v3.3_Specification_Changes#SvgTexture_node
> 
> Some other e-book info is online at
> http://www.free-ebooks-download.org/free-ebook/dotnet/Framework/
> http://publishing.ebookmall.com/information/formats.htm
> 
> We also talked about how further interoperation with HTML5, and thus
> SVG and MathML, particularly for cross-linking via event passing,
> will greatly facilitate creation of these new multimedia applications
> as deployable Web content/applications.
> 
> Perhaps the Web Fonts effort in W3C may also be of common interest,
> since it might help browser companies improve their font support,
> sometimes "for free" if they are operating as a plugin with a browser
> that provides such fonts.
> http://www.w3.org/TR/css3-fonts
> 
> ==================================
> 
> 10.  Dr. Yong-Sang Cho provided an introductory presentation
> on the IMS Global Learning Consortium, online at
> http://www.imsglobal.org
> 
> Lots of excellent detail in the slides.
> 
> Learning systems of interest include open-source Sakai and Moodle.
> 	http://sakaiproject.org
> 	http://moodle.org
> 
> They are interested in how X3D might fit into these standards and
> best practices relating to learning management systems.  Another
> common interest is the use of X3D with respect to accessibility
> (for access-limited systems).
> 
> Since the _X3D for Web Authors_ course is online with slides and
> examples and video, perhaps the course material might be moved
> into Sakai (now in use at NPS) for creating a full-capability
> course that is not only about X3D multimedia, but also uses
> X3D multimedia as part of the learning management system (LMS).
> 
> This is an extremely important shared imperative for W3C WAI (mentioned
> earlier), IMS and Web3D.  We will discuss it further in tomorrow's
> meeting.
> 
> ==================================
> 
> 11.  Dr. Chung-weon Oh presented on Standardization of Web3D GIS.
> He discussed a wide variety of initiatives going on in many different
> standards-related organizations.
> 
> There was some interest in SEDRIS standards.  Notably, there is a
> further proposed extension to X3D to match SEDRIS capabilities.
> http://www.igraphics.com/Standards
> http://www.igraphics.com/Standards/EnhancedGeospatialComponent_2007_10_30
> 
> Interestingly, members of our X3D Earth Working Group are meeting
> with members of the Open Geospatial Consortium (OGC) tomorrow in
> San Francisco.  We recently renewed our Liaison Agreement.  Our
> primary representative is Mike McCann of MBARI who is also cochair
> of the X3D Earth Working Group.
> 
> ==================================
> 
> 12.  It is clear that the Korea Group is able to proceed as fast as or
> faster than many of the other players in the Web3D Consortium.  It would
> be great if more partnerships were established with (existing or new)
> members to take
> advantage of their excellent progress, to mutual benefit.  Suggestions
> and direct discussions are most welcome.
> 
> A good approach for getting the Korean Chapter proposals into X3D v3.3:
> - continue using the wiki to get to clarity on each technical approach
> - once mature, turn it into formal specification prose (like the Medical
> 	example)
> 
> Links will be announced soon for the presentations given today.
> Further comments on these minutes, including audience questions, are welcome.
> 
> There was a tremendous and amazing amount of progress today.  Looking
> forward to further dialog and process.
> 
> all the best, Don
> -- 
> Don Brutzman  Naval Postgraduate School, Code USW/Br           brutzman at nps.edu
> Watkins 270   MOVES Institute, Monterey CA 93943-5000 USA  work +1.831.656.2149
> X3D, virtual worlds, underwater robots, XMSF  http://web.nps.navy.mil/~brutzman
> 

--
Dr. Johannes Behr
Head of the VR Projects Division

Fraunhofer-Institut für Graphische Datenverarbeitung IGD
Fraunhoferstr. 5  |  64283 Darmstadt  |  Germany
Tel +49 6151 155-510  |  Fax +49 6151 155-196
johannes.behr at igd.fraunhofer.de  |  www.igd.fraunhofer.de
