[Korea-chapter] minutes from Web3D Korea chapter meeting
brutzman at nps.edu
Sun Dec 6 19:20:33 PST 2009
1. We had a very productive meeting of the Web3D Korea chapter
in Seoul, Korea, on Monday 7 December
Dr. Myeong Won Lee, Suwon University, Chapter organizer
Dr. Kwanhee Yoo, Chungbuk National University
Dr. Hae-Jimn Kim, Korea Standards Association (KSA) and
Dr. Gun Lee, Electronics and Telecommunications Research Institute (ETRI)
Dr. Byounghyun Yoo, MIT Singapore Alliance
Pranveer Singh, Korea Advanced Institute of Science and Technology (KAIST)
___, Chapter secretary
Dr. Don Brutzman, Naval Postgraduate School (NPS) and
member Web3D Consortium Board of Directors (BoD)
Minutes were recorded by me, with periodic detailed review by the group.
2. Units proposal, Myeong Won Lee
Dr. Lee showed her classic example with a bacterium and a larger
cell, demonstrated in her Units browser. This showed that the
concepts can work satisfactorily. The video of the demo will be
placed online in the Korea Chapter document directory.
She has also made an X3D schema extension to validate the new construct,
tested in Eclipse and X3D-Edit.
There are two approaches to Unit grammar checks; details are shown in
the slides. The second approach, which explicitly declares each type of
unit (rather than using multiple name and value pairs), seems preferable
since it will enable X3D DTD or Schema checking of valid content. The
first approach can also be checked, but only by applications or external
rules (such as X3D Schematron).
Note of explanation:
X3D Schematron is an additional form of
XML validation used to detect problems and help assure
the quality and correctness of X3D scenes.
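For concreteness, a minimal sketch of the two grammars (element and
attribute names here are illustrative only, not final spec syntax):

```xml
<!-- Approach 1: generic name/value pairs.  A DTD or Schema cannot
     constrain which names are legal, so this form is only checkable
     by applications or by external Schematron rules. -->
<unit name='length' value='0.001'/>

<!-- Approach 2: an explicit category attribute plus a conversion
     factor to the base SI unit.  A DTD or Schema can enumerate the
     allowed categories and validate content directly. -->
<unit category='length' name='millimetres' conversionFactor='0.001'/>
```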
Possible schema change variations:
- put the Unit statement in the <head> section of a scene, since each
kind of unit can only be defined once per file/document
- However, there is a problem we may not have considered:
  head/component/meta tags are not retrievable once loaded.
  Alternatively, make the Unit statement immediately follow the
  X3D <Scene> root in order to make it more available at run time.
- Also, how would an SAI script change units for a scene graph
  already created or loaded in memory?
- Essentially this is a Scene Access Interface (SAI) question:
how will SAI handle Unit definitions in a scene?
This small problem needs to be considered by the working group.
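To frame the run-time question concretely, a hypothetical Script-node
sketch follows; the getUnit/setUnit browser calls do not exist in the
current SAI and are shown only to illustrate the open design issue:

```xml
<Scene>
  <Script DEF='UnitInspector'>
    <field name='report' type='SFString' accessType='outputOnly'/>
    <![CDATA[ecmascript:
      function initialize() {
        // Hypothetical SAI calls -- not in the current specification.
        // If unit statements live only in <head>, can a script even
        // read them after loading?  And what should happen to geometry
        // already in memory if a script changes the length unit?
        // var scale = Browser.getUnit('length');   // read current unit?
        // Browser.setUnit('length', 0.001);        // rescale scene graph?
      }
    ]]>
  </Script>
</Scene>
```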
Units need to be added to an X3D v3.3 schema
- Don will work on creating a new schema that includes Myeong-Won's work
- Dr. Lee's schema extensions will be very helpful and will be considered
- Don may need to vary some upper/lower case conventions and avoid
abbreviations to match other definitions in X3D DOCTYPE and Schema
- Don will add X3D v3.3 draft support in X3D-Edit
- Dr. Lee and Don will then put final versions of the Unit scene examples
in the X3D Basic archives, which will be placed online at
3. Dr. Kwanhee Yoo presented detailed work on medical visualization
that took advantage of projective texture mapping (PTM) techniques.
Slides, images and a published paper were made available. These
were also presented at the previous SC24 meeting last June in London.
These are definitely of interest to the Medical Working Group.
A demo showed a camera (shown as a ball) projecting an image onto various
objects in a scene. The code utilizes GPU capabilities and has also been
shown previously. He has been considering how to integrate this work into
an X3D player codebase. The code is written in C++ using OpenGL in open source.
Can the images being used to wrap textures around polygonal human-organ
cylinders be captured and adapted from orthoscopic cameras? Answer: yes.
PTM is mappable as another type of X3DTextureNode and X3DTexture3DNode.
It is applied within Shape(s) to corresponding geometry, and so PTM is
both well scoped (i.e. not like a virtual light) and scalable (i.e.
through DEF/USE copies for each corresponding geometry).
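A hypothetical scene fragment can illustrate that scoping; the node name
and fields below are invented for discussion, since no X3D player yet
implements the PTM proposal:

```xml
<!-- Hypothetical ProjectiveTexture node: behaves like any other
     X3DTextureNode, so it is scoped to the enclosing Shape rather
     than acting scene-wide like a virtual light -->
<Shape>
  <Appearance>
    <ProjectiveTexture DEF='OrganProjector' url='"organSlice.png"'
        location='0 0 2' direction='0 0 -1' fieldOfView='0.785'/>
  </Appearance>
  <Sphere radius='0.05'/>
</Shape>
<!-- Scalable via DEF/USE: the same projector reapplied to other geometry -->
<Shape>
  <Appearance>
    <ProjectiveTexture USE='OrganProjector'/>
  </Appearance>
  <Cylinder radius='0.02' height='0.1'/>
</Shape>
```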
Even though there is not yet an X3D player that supports the PTM proposal,
it is not too soon to create examples in X3D with corresponding screen
snapshots and videos of what the results should look like. Kwanhee Yoo
will work with Don to create those examples. Don will also add them to
the draft X3D v3.3 DOCTYPE/schema and provide support in X3D-Edit for
scene authoring and tool launching.
Extension of the PTM techniques to medical applications is an important
advance that requires further attention of the X3D and Medical working
groups. PTM is on the list of planned additions to the X3D v3.3 spec.
Hopefully seeing more examples, especially medical examples, will help
to advance this important work.
Anita, we should also reach out to other companies that have tried to
apply these techniques separately from Web3D/X3D. Is there a market
survey of such capabilities? Is there a conference that supports it?
Hopefully the Medical Working Group can answer these questions.
5. Dr. Gun Lee presented an update on his work in Mixed Reality (MR)
which spans the spectrum from
- Real environments (real world)
- Augmented Reality (AR)
- Augmented Virtuality (AV)
- Virtual Environments (VE)
Many AR applications are now being distributed over the Internet with
commercial and marketing tie-ins. There are effectively no standards
in this area. There are some user groups for popular software (for
example AR Toolkit) but there do not appear to be any standardization
efforts. X3D and VRML are used by AR loaders, as are Collada and other
formats. Device setup is often closely tied to hardware drivers and
specific operating systems or specific codebases, making portability
and interoperability difficult. Together this makes a good rationale
for use of X3D to implement AR capabilities.
He has done much work in this area, summarized in the slideset.
Specific X3D extensions include
- LiveCamera node
- Background and/or BackgroundTexture node modifications
We talked about different types of cameras (planar/cylindrical/spherical
images) that might be used. The options probably deserve further analysis
as new products and image-processing techniques become widely available.
Wondering whether a combination of Background/TextureBackground plus a
simple Billboard holding a 6-sided IndexedFaceSet might handle most
combinations of interest? I suspect that there is sufficient generality
in these already-existing nodes if we map and preprocess LiveCamera imagery
as an X3DTexture node correctly. In other words, we do not want to create
new nodes or different solutions for X3D if a good adaptation of existing
functionality can be found. If there are many variations in the X3D nodeset,
then it becomes very difficult for browsers to implement everything correctly.
A tradeoff for a new node might be considered acceptable if it greatly
simplifies work for authors, without hurting browser builders. "Less is more."
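A sketch of the existing-nodes adaptation discussed above: LiveCamera is
the proposed node, and treating it as an ordinary texture source is the
assumption under discussion, not an agreed design.

```xml
<!-- AR video surface built from existing nodes plus the proposed
     LiveCamera node, used here as an ordinary texture source -->
<Billboard axisOfRotation='0 0 0'>
  <Shape>
    <Appearance>
      <!-- Hypothetical: LiveCamera streams device video as a texture -->
      <LiveCamera DEF='Webcam'/>
    </Appearance>
    <!-- Simple quadrilateral; a 6-sided IndexedFaceSet could instead
         cover cylindrical or spherical camera-image geometries -->
    <IndexedFaceSet coordIndex='0 1 2 3 -1'>
      <Coordinate point='-1 -0.75 0, 1 -0.75 0, 1 0.75 0, -1 0.75 0'/>
      <TextureCoordinate point='0 0, 1 0, 1 1, 0 1'/>
    </IndexedFaceSet>
  </Shape>
</Billboard>
```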
Of course these are not design decisions being written here; rather they
are design guidelines that help new technology eventually get approved as
part of the X3D specification.
Gun Lee also pointed out that Layers might be of use, particularly
for display in an immersive multi-sided environment like a CAVE. Also
worth exploring. (Unfortunately, Layer/Layout support by X3D players
is not yet strong; this is a prerequisite area for implementation and
evaluation before approving X3D v3.3.) Currently implemented support
is maintained at
We talked for some time about whether a LiveViewpoint node might need
both a projection matrix and rotation. Perhaps the projection matrix is
not needed? Usually only browsers handle matrices; a camera input likely
provides only orientation (not a projection matrix), and an author might
then display the moving texture onto a properly sized, oriented and
registered quadrilateral within the scene. Something to consider.
Going forward, some suggestions:
Wondering about consistency of this work with other approaches proposed
by Fraunhofer (and perhaps others) in past Web3D Symposium proceedings?
Wondering if there are any overlapping goals or capabilities defined
in the Projective Texture Mapping (PTM) work? There are some similar
themes, maybe a small amount of co-design might be productive. Whether
the result is yes or no, the answer is interesting because it helps us
to convince others that the most distilled and effective recommendations
have been achieved.
Another effort that needs better documentation is how various browsers
handle different devices. There is a tradeoff in how much detail the
specification should require, since device details are usually handled
by browsers.
We should try to compare your nodes to the Camera nodes proposed by NPS
at the Web3D 2009 Symposium; that paper was also provided.
Wondering if you have looked at the relative difficulty for others of
implementing these nodes. One approach is to build a ProtoDeclare that
includes a Script (and perhaps some native hardware-driver code) to
simplify the construction of alternative implementations.
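That ProtoDeclare idea might look like the following (all names are
hypothetical): authors see one stable interface, while implementers are
free to swap the Script or native driver code inside the body.

```xml
<ProtoDeclare name='LiveCameraTexture'>
  <ProtoInterface>
    <field name='deviceName' type='SFString' accessType='initializeOnly'
           value='default'/>
  </ProtoInterface>
  <ProtoBody>
    <!-- Placeholder texture; a Script (possibly backed by native
         driver code) would update it with live video frames -->
    <PixelTexture DEF='Frame' image='1 1 3 0xFFFFFF'/>
    <Script DEF='FrameGrabber'>
      <field name='deviceName' type='SFString' accessType='initializeOnly'/>
      <IS>
        <connect nodeField='deviceName' protoField='deviceName'/>
      </IS>
      <!-- url would point at platform-specific capture code -->
    </Script>
  </ProtoBody>
</ProtoDeclare>
```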
We discussed how the path to resolve all of these various combinations
might be to create use cases, with corresponding X3D examples, that show
how to use these nodes correctly. If any credible use case cannot be
shown with the X3D extensions, then the challenges are not yet properly
solved. Once we are close, then codebase variations to refine the final
result are usually not too complex.
And so, it also appears as if some X3D example scenes, with corresponding
snapshots or videos, might also be of use here.
Finally, once again, the X3D group is being presented with some powerful
new capabilities that are worth refining and considering for inclusion in
the X3D specification.
99. It is clear that the Korea Group is able to proceed as fast or faster
than many of the other players in Web3D Consortium. It would be great if
more partnerships were established with (existing or new) members to take
advantage of their excellent progress.
all the best, Don
Don Brutzman Naval Postgraduate School, Code USW/Br brutzman at nps.edu
Watkins 270 MOVES Institute, Monterey CA 93943-5000 USA work +1.831.656.2149
X3D, virtual worlds, underwater robots, XMSF http://web.nps.navy.mil/~brutzman