[Korea-chapter] minutes from Web3D Korea chapter meeting (revised)

Don Brutzman brutzman at nps.edu
Mon Dec 7 00:34:49 PST 2009

1.  We had a very productive meeting of the Web3D Korea chapter
in Seoul, Korea, on Monday 7 December.  Attendees:


	Dr. Myeong Won Lee, Suwon University, Chapter organizer
	Dr. Kwan-hee Yoo, Chungbuk National University
	Dr. Hae-Jimn Kim, Korea Standards Association (KSA) and
		Hallym University
	Dr. Gun Lee, Electronics and Telecommunication Research
		Institute (ETRI)
	Dr. Byounghyun Yoo, MIT Singapore Alliance
	Pranveer Singh, Korea Advanced Institute of Science and
		Technology (KAIST)
	JungSeop Jung, Chapter secretary, Korea Standards
		Association (KSA)
	JinSang Hwang, PartDB Co., Ltd.
	Dr. Yong-Sang Cho, Korea Education and Research Information
		Service (KERIS)
	Dr. Cheong-weon Oh, Namseoul University
	Dr. Don Brutzman, Naval Postgraduate School (NPS) and
		member Web3D Consortium Board of Directors (BoD)

Minutes were recorded by me, with periodic detailed review by the group.


2.  Units proposal, Myeong Won Lee

Dr. Lee showed her classic example with a bacterium and a larger
cell, demonstrated in her Units browser.  This showed that the
concepts can work satisfactorily.  The video of the demo will be
placed online in the Korea Chapter document directory.

She has also made an X3D Schema extension to validate the new construct,
tested in Eclipse and X3D-Edit.

There are two approaches to Unit grammar checks, detailed in the slides.
The second approach, which explicitly declares each type of unit (rather
than multiple name/value pairs), seems preferable since it will enable
X3D DTD or Schema checking of valid content.  The first approach can also
be checked, but only by applications or external rules (such as X3D
Schematron).

Note of explanation:
	X3D Schematron is an additional form of
	XML validation used to detect problems and help assure
	the quality and correctness of X3D scenes.
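
As a sketch of the second approach, each unit category might be declared
once in the <head> section of a scene.  The element and attribute names
below (unit, category, name, conversionFactor) follow the working proposal
but are illustrative only, not final syntax:

```xml
<!-- Hedged sketch: per-category unit statements in the head section.
     Element and attribute names are illustrative, not final syntax. -->
<X3D profile='Immersive' version='3.3'>
  <head>
    <!-- At most one unit statement per category (e.g. length, angle) -->
    <unit category='length' name='NanometersForCellModels'
          conversionFactor='0.000000001'/>
    <unit category='angle' name='Degrees' conversionFactor='0.0174532925'/>
  </head>
  <Scene>
    <!-- All length values in this file are now interpreted in nanometers -->
    <Shape>
      <Sphere radius='500'/>
    </Shape>
  </Scene>
</X3D>
```

Because each category appears at most once and is explicitly named, a DTD
or Schema can enforce the constraint directly, which is the advantage
claimed for the second approach.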

Possible schema change variations:
- Put the Unit statement in the <head> section of a scene, since each
	kind of unit can only be defined once per file/document.
- However, there is a run-time problem that we might not have considered:
	head/component/meta tags are not retrievable once a scene is loaded.
	Alternatively, the Unit statement might immediately follow the
	X3D <Scene> root in order to remain available at run time.
- Alternatively, how would an SAI script change units for a scene graph
	already created or loaded in memory?
- Essentially this is a Scene Access Interface (SAI) question:
	how will SAI handle Unit definitions in a scene?

This small problem needs to be considered by the working group.

Units need to be added to an X3D v3.3 schema
- Don will work on creating a new schema that includes Myeong-Won's work
- Dr. Lee's schema extensions will be very helpful and will be considered
- Don may need to vary some upper/lower case conventions and avoid
	abbreviations to match other definitions in X3D DOCTYPE and Schema
- Don will add X3D v3.3 draft support in X3D-Edit
- Dr. Lee and Don will then put final versions of the Unit scene examples
	in the X3D Basic archives, which will be placed online at


3.  Dr. Kwan-hee Yoo presented detailed work on medical visualization
that took advantage of projective texture mapping (PTM) techniques.
Slides, images and a published paper were made available.  These
were also presented at the previous SC24 meeting last June in London.
These are definitely of interest to the Medical Working Group.

A demo showed a camera (shown as a ball) projecting an image onto various
objects in a scene.  The code utilizes GPU capabilities and has also been
shown previously.  He has been considering how to integrate this work into
an X3D player codebase.  The code is written in C++ using OpenGL, in
open-source form.

Can the images being used to wrap textures around polygonal human-organ
cylinders be captured and adapted from orthoscopic cameras?  Answer:  yes.

PTM is mappable as another type of X3DTextureNode and X3DTexture3DNode.
It is applied within Shape(s) to corresponding geometry, and so PTM is
both well scoped (i.e. not like a virtual light) and scalable (i.e.
through DEF/USE copies for each corresponding geometry).
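
A hedged sketch of that scoping and reuse pattern, assuming a hypothetical
ProjectiveTexture node (the actual node name and fields remain under
discussion by the working group):

```xml
<!-- Hedged sketch: PTM scoped inside Shape like any other X3DTextureNode.
     ProjectiveTexture and its fields are hypothetical placeholder names. -->
<Scene>
  <Shape>
    <Appearance>
      <!-- Projector position and direction would parameterize the projection -->
      <ProjectiveTexture DEF='OrganScan' url='"scan.png"'
                         location='0 2 5' direction='0 -0.4 -1'/>
    </Appearance>
    <Cylinder height='4' radius='1'/>
  </Shape>
  <Shape>
    <Appearance>
      <!-- USE copy applies the same projection to further geometry,
           keeping the effect well scoped rather than light-like -->
      <ProjectiveTexture USE='OrganScan'/>
    </Appearance>
    <Sphere radius='1.5'/>
  </Shape>
</Scene>
```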

Even though there is not yet an X3D player that supports the PTM proposal,
it is not too soon to create examples in X3D with corresponding screen
snapshots and videos of what the results should look like.  Kwan-hee Yoo
will work with Don to create those examples.  Don will also add them to
the draft X3D v3.3 DOCTYPE/schema and provide support in X3D-Edit for
scene authoring and tool launching.

Extension of the PTM techniques to medical applications is an important
advance that requires further attention of the X3D and Medical working
groups.  PTM is on the list of planned additions to the X3D v3.3 spec.
Hopefully seeing more examples, especially medical examples, will help
to advance this important work.

Anita, we should also reach out to other companies that have tried to
apply these techniques separately from Web3D/X3D.  Is there a market
survey of such capabilities?  Is there a conference that supports it?
Hopefully the Medical Working Group can answer these questions.


4.  Dr. Gun Lee presented an update on his work in Mixed Reality (MR)
which spans the spectrum from
- Real environments (real world)
- Augmented Reality (AR)
- Augmented Virtuality (AV)
- Virtual Environments (VE)

Many AR applications are now being distributed over the Internet with
commercial and marketing tie-ins.  There are effectively no standards
in this area.  There are some user groups for popular software (for
example AR Toolkit) but there do not appear to be any standardization
efforts.  X3D and VRML are used by AR loaders, as are Collada and other
formats.  Device setup is often closely tied to hardware drivers and
specific operating systems or specific codebases, making portability
and interoperability difficult.  Together this makes a good rationale
for use of X3D to implement AR capabilities.

He has done much work in this area, summarized in the slideset.
Specific X3D extensions include
- LiveCamera node
- Background and/or BackgroundTexture node modifications
- LiveViewpoint
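
A minimal sketch of how these proposed nodes might appear together in a
scene.  The node names come from the slides, but all field names shown are
illustrative assumptions, since the extensions are still being defined:

```xml
<!-- Hedged sketch of the proposed AR extension nodes in use.
     All field names shown are illustrative assumptions. -->
<Scene>
  <!-- LiveCamera: captured video stream made available as a texture source -->
  <LiveCamera DEF='WebCam' source='default'/>
  <!-- Modified background displays the live video behind the scene -->
  <TextureBackground>
    <LiveCamera USE='WebCam' containerField='backTexture'/>
  </TextureBackground>
  <!-- LiveViewpoint: tracks the physical camera pose so that virtual
       objects stay registered with the real-world view -->
  <LiveViewpoint DEF='TrackedView'/>
  <Shape>
    <Appearance><Material diffuseColor='0 0.6 0.2'/></Appearance>
    <Box size='0.1 0.1 0.1'/>
  </Shape>
</Scene>
```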

We talked about different types of cameras (planar/cylindrical/spherical
images) that might be used.  The options probably deserve further analysis
as new products and image-processing techniques become widely available.

Wondering whether a combination of Background/TextureBackground plus a
simple Billboard holding a 6-sided IndexedFaceSet might handle most
combinations of interest?  I suspect that there is sufficient generality
in these already-existing nodes if we map and preprocess LiveCamera imagery
correctly as an X3DTexture node.  In other words, we do not want to create
new nodes or different solutions for X3D if a good adaptation of existing
functionality can be found.  If there are many variations in the X3D nodeset,
then it becomes very difficult for browsers to implement everything correctly.
The tradeoff of a new node might be considered acceptable if it greatly
simplifies work for authors, without burdening browser builders.  "Less is more."
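
As a sketch of the existing-nodes alternative, live imagery (stood in for
here by an ordinary MovieTexture, an assumption) could be applied to a
billboarded face; one face of the suggested 6-sided IndexedFaceSet is shown
for brevity:

```xml
<!-- Hedged sketch: adapting existing nodes instead of defining new ones.
     MovieTexture stands in for preprocessed live-camera imagery. -->
<Scene>
  <Billboard axisOfRotation='0 0 0'>
    <Shape>
      <Appearance>
        <MovieTexture url='"camera-feed.mpg"' loop='true'/>
      </Appearance>
      <!-- A single registered quadrilateral sized to the camera image -->
      <IndexedFaceSet coordIndex='0 1 2 3 -1'>
        <Coordinate point='-0.8 -0.6 0, 0.8 -0.6 0, 0.8 0.6 0, -0.8 0.6 0'/>
      </IndexedFaceSet>
    </Shape>
  </Billboard>
</Scene>
```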

Of course these are not design decisions being written here, rather they
are design guidelines that help new technology eventually get approved as
X3D additions.

Gun Lee also pointed out that Layers might also be of use, particularly
for display in an immersive multi-sided environment like a CAVE.  Also
worth exploring.  (Unfortunately, Layer/Layout support is not yet strong
in X3D players; this is a prerequisite area for implementation/evaluation
before approving X3D v3.3.)  Currently implemented support is maintained at

We talked for some time about whether a LiveViewpoint node might need
both a projection matrix and a rotation.  Perhaps the projection matrix
is not needed?  Usually only browsers handle matrices, and a camera input
likely provides only orientation (not a projection matrix); an author might
then display the moving texture onto a properly sized, oriented and
registered quadrilateral within the scene.  Something to consider.

Going forward, some suggestions:

Wondering about consistency of this work with other approaches proposed
by Fraunhofer (and perhaps others) in past Web3D Symposium proceedings?

Wondering if there are any overlapping goals or capabilities defined
in the Projective Texture Mapping (PTM) work?  There are some similar
themes, maybe a small amount of co-design might be productive.  Whether
the result is yes or no, the answer is interesting because it helps us
to convince others that the most distilled and effective recommendations
have been achieved.

Another effort that needs to be better documented is how various browsers
handle different devices.  There is a tradeoff in how much device detail
the specification needs to expose, with such details usually handled by
browsers.

We should try to compare your nodes to the Camera nodes proposed by NPS
at the Web3D 2009 Symposium.
	Camera Examples
	paper also provided

Wondering if you have looked at the relative difficulty for others of
implementing these nodes.  One approach is to build a ProtoDeclare that
includes a Script (and perhaps some native hardware-driver code) to
simplify the construction of alternative implementations.
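
A hedged sketch of such a prototype wrapper.  The prototype name, field
names and script filename are placeholders for illustration only:

```xml
<!-- Hedged sketch: wrapping the extension behind a ProtoDeclare so that
     browsers without native support can substitute a Script fallback.
     All names and fields here are illustrative placeholders. -->
<ProtoDeclare name='LiveCameraProto'>
  <ProtoInterface>
    <field accessType='inputOutput' type='SFString' name='source' value='default'/>
    <field accessType='outputOnly'  type='SFImage'  name='frame'/>
  </ProtoInterface>
  <ProtoBody>
    <!-- A Script (possibly calling native driver code) produces frames;
         a native LiveCamera implementation could replace this body -->
    <Script DEF='CameraDriver' url='"CameraDriver.js"'>
      <field accessType='inputOutput' type='SFString' name='source'/>
      <field accessType='outputOnly'  type='SFImage'  name='frame'/>
      <IS>
        <connect nodeField='source' protoField='source'/>
        <connect nodeField='frame'  protoField='frame'/>
      </IS>
    </Script>
  </ProtoBody>
</ProtoDeclare>
```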

We discussed how the path to resolve all of these various combinations
might be to create use cases, with corresponding X3D examples, that show
how to use these nodes correctly.  If any credible use case cannot be
shown with the X3D extensions, then the challenges are not yet properly
solved.  Once we are close, then codebase variations to refine the final
result are usually not too complex.

And so, it also appears as if some X3D example scenes, with corresponding
snapshots or videos, might also be of use here.

Finally, once again, the X3D group is being presented with some powerful
new capabilities that are worth refining and considering for inclusion in
X3D v3.3.

Of related interest:  "Web3D to Showcase X3D at the AR DevCamp" article at

> “The first Augmented Reality Development Camp (AR DevCamp) will be held
> on Saturday December 5, 2009 at the Hacker Dojo in Mountain View CA and
> simultaneously in New York City and around the world!
> After nearly 20 years in the research labs, Augmented Reality is taking
> shape as one of the next major waves of Internet innovation, overlaying
> and infusing the physical world with digital media, information and
> experiences. AR must be fundamentally open, interoperable, extensible,
> and accessible to all, so that it can create the kinds of opportunities
> for expressiveness, communication, business and social good on the web
> and Internet today. As one step towards this goal of an Open AR web,
> the AR DevCamp 1.0, will have a full day of technical sessions and
> hacking opportunities in an open format, BarCamp style. For more
> information about this camp please visit www.ardevcamp.org.
> Web3D Consortium will be showcasing their open standards X3D
> implementations for Augmented Reality at this camp.  Join us and see
> how you can use X3D today for your Augmented Reality needs.


5.  Dr. Byounghyun Yoo presented updates on use of X3D Earth to provide
a digital globe infrastructure.  This work explains and continues to
extend many achievements he accomplished while working as a postdoctoral
researcher and then Web3D Fellow while at NPS 2007-2008.


Particular highlights of this talk showed how the X3D Earth effort meets the
larger requirements of a Digital Earth infrastructure proposed by Al Gore
a decade ago, overcoming limitations on multiplicity, interoperability,
openness and equity that constrain other geobrowsers.

The visual-debugging video that showed artificial tiling algorithms is
very powerful and has influenced us to add author-visualization assists
for various nodes wherever possible in the X3D-Edit tool.


6.  Dr. Don Brutzman presented 3 talks on the following topics

- X3D progress and prospects 2009
- HTML5 and X3D:  presented at the World Wide Web Consortium (W3C)
	Technical Plenary and Advisory Committee (TPAC) November 2009
- X3D-Edit Update:  continued improvements in authoring X3D


7.  Pranveer Singh (KAIST) presented his progress in Parametric Macro
translation of CAD files into a neutral XML file format which captures
the CAD authoring history of model creation.  The results then go to
KAIST's Transcad (which licenses ACIS and HOOPS) to create facet data,
and then through various open-source polygon reduction tools (TETGEN
and MeshLab) to produce X3D.

Are there open-source alternatives to ACIS or HOOPS?  Apparently not.
However, if the X3D boundary representation (B-Rep) specification were
released, the neutral XML file might directly produce X3D, which can
then be decimated and cleaned up.

Of further interest is that if the final results are in X3D B-rep form,
a reverse translation might be possible back into the neutral XML
parametric-macro format, which could be further run in reverse to
regenerate CAD files.

The polygon refactoring and reduction steps using TETGEN and MeshLab
are fairly automatic using Pranveer's tool, which he demonstrated.
This can be released and is likely of broader interest.

This work looks increasingly important with each successful improvement.
I think that the X3D CAD Working Group would be well advised to consider
this as an extension to the X3D CAD Component, which could unlock
interchange between X3D and a variety of CAD formats, perhaps even
bidirectionally.


8.  Dr. Kwan-hee Yoo presented conceptual ideas on medical visualization
that focused on 2D and 3D visualizations, particularly with respect to
how X3D might work well with the DICOM standard.

We looked at the Medical Working Group proposed specification changes:

- MedicalInterchange component
- Texturing3D component additions
- Volume rendering
- Annotation component

Kwan-Hee's slides might help make an excellent justification for the
MedicalInterchange component.

He will look at these changes to see if they can express his exemplars.

We hope to schedule a future teleconference with Medical Working Group
members on these topics.


9.  Dr. Kwan-hee Yoo presented further conceptual ideas on the potential
benefits of digital textbooks utilizing X3D.  He gave a digital-textbook
demo example with his slides, based on the Microsoft WPF format.  He also
listed a set of functions that could be used in this application area.

He then gave an X3D demo that illustrated the use of many of these
functional areas:  page turning, book rotation, zooming.  The source
code was provided.  Other examples included X3D mixing of Korean text,
imagery, 3D TouchSensors and video.  BS Contact did an excellent job
on all functions, though Korean text characters appeared to be rendered
as individual images (rather than geometry) and so were not always clear.
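
A minimal sketch of one such functional area, page turning driven by a
TouchSensor.  The DEF names and interpolator values are illustrative
assumptions, not Dr. Yoo's provided source code:

```xml
<!-- Hedged sketch: page turning via TouchSensor, TimeSensor and
     OrientationInterpolator.  Names and values are illustrative only. -->
<Scene>
  <!-- center offset places the rotation axis along the page spine -->
  <Transform DEF='Page' center='-0.5 0 0'>
    <Shape>
      <Appearance>
        <ImageTexture url='"page1.png"'/>
      </Appearance>
      <Box size='1 1.4 0.01'/>
    </Shape>
    <TouchSensor DEF='PageTouch' description='turn page'/>
  </Transform>
  <TimeSensor DEF='TurnClock' cycleInterval='1'/>
  <!-- Rotate the page 180 degrees about the spine (y axis) -->
  <OrientationInterpolator DEF='TurnPath' key='0 1'
      keyValue='0 1 0 0, 0 1 0 3.14159'/>
  <ROUTE fromNode='PageTouch' fromField='touchTime'
         toNode='TurnClock' toField='startTime'/>
  <ROUTE fromNode='TurnClock' fromField='fraction_changed'
         toNode='TurnPath' toField='set_fraction'/>
  <ROUTE fromNode='TurnPath' fromField='value_changed'
         toNode='Page' toField='set_rotation'/>
</Scene>
```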

He is also looking at SVG possibilities, which have also been proposed:

Some other e-book info is online at

We also talked about how further interoperation with HTML5, and thus
SVG and MathML, particularly for cross-linking via event passing,
will greatly facilitate creation of these new multimedia applications
as deployable Web content/applications.

Perhaps the Web Fonts effort in W3C is also becoming of common interest
since that might help browser companies improve their font support,
sometimes "for free" if they are operating as a plugin with a browser
that provides such fonts.


10.  Dr. Yong-Sang Cho provided an introductory presentation
for the IMS Global Learning Consortium, online at

Lots of excellent detail in the slides.

Learning systems of interest include open-source Sakai and Moodle.

They are interested in how X3D might fit into these standards and
best practices relating to learning management systems.  Another
common interest is the use of X3D with respect to accessibility
(for access-limited systems).

Since the _X3D for Web Authors_ course is online with slides and
examples and video, perhaps the course material might be moved
into Sakai (now in use at NPS) for creating a full-capability
course that is not only about X3D multimedia, but also using
X3D multimedia as part of the learning management system (LMS).

This is an extremely important imperative with W3C WAI (mentioned
earlier), IMS and Web3D.  We will discuss it further in tomorrow's meeting.


11.  Dr. Chung-weon Oh presented on Standardization of Web3D GIS.
He discussed a wide variety of initiatives going on in many different
standards-related organizations.

There was some interest in SEDRIS standards.  Of interest is that there
is a further proposed extension to X3D to match SEDRIS capabilities.

Interestingly, members of our X3D Earth Working Group are meeting
with members of the Open Geospatial Consortium (OGC) tomorrow in
San Francisco.  We recently renewed our Liaison Agreement.  Our
primary representative is Mike McCann of MBARI who is also cochair
of the X3D Earth Working Group.


12.  It is clear that the Korea Group is able to proceed as fast as or
faster than many of the other players in the Web3D Consortium.  It would
be great if more partnerships were established with (existing or new)
members to take advantage of their excellent progress, to mutual benefit.
Suggestions and direct discussions are most welcome.

A good approach for getting the Korean Chapter proposals into X3D v3.3:
- continue using the wiki to get to clarity on each technical approach
- once mature, turn it into formal specification prose (like the Medical
	Working Group has done with its proposed components)
Links will be announced soon for the presentations given today.
Further comments on these minutes, and audience questions, are welcome.

There was a tremendous and amazing amount of progress today.  Looking
forward to further dialog and progress.

all the best, Don
Don Brutzman  Naval Postgraduate School, Code USW/Br           brutzman at nps.edu
Watkins 270   MOVES Institute, Monterey CA 93943-5000 USA  work +1.831.656.2149
X3D, virtual worlds, underwater robots, XMSF  http://web.nps.navy.mil/~brutzman
