Some interesting papers on associating semantic metadata with X3D scene graphs were published at last year's Web3D Conference, and more are coming at this year's (I am a reviewer, but cannot comment beyond that). So it is doable and has been done.
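As a minimal sketch of what such an association can look like, here is a fragment in X3D classic (VRML-style) encoding; the node layout and the concept URI are illustrative only, not taken from any of those papers:

  # Illustrative fragment, X3D classic (VRML-style) encoding.
  # Every X3D node has an optional metadata field; a MetadataString
  # can carry a machine-readable concept identifier for the subtree.
  DEF Tree Transform {
    metadata MetadataString {
      name "concept"
      value [ "http://example.org/ontology#Tree" ]   # illustrative URI
    }
    children [
      Shape { geometry Cylinder { height 2 radius 0.2 } }   # trunk
      Transform {
        translation 0 2.5 0
        children Shape { geometry Sphere { radius 1 } }     # crown
      }
    ]
  }

An agent that knows the ontology can match on the URI instead of guessing from the DEF name.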
For VRML, it came down to requirements. At the time of the specification, it was already too heavy for the web infrastructure. Now, it is about right. Adding more metadata then was not technically smart; now it is inconvenient. What you want is an object-oriented programming language, not a declarative scene graph with scripts. I understand that. Had the VRML2 designers chosen that design, the Microsoft submission would have won. Possibly that would have been better for what you want.

len

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Karsten Otto
Sent: Friday, January 26, 2007 4:16 AM
To: VOS Discussion
Subject: Re: [vos-d] VOS requirements

On 25.01.2007, at 16:27, Len Bullard wrote:

> X3D has a physics specification underway. One is already being integrated
> into Contact. X3D already has shaders and scripting plus a metadata node
> for indicating semantics. Since the objects you mention below can be
> notated as, say, DEF Tree and referenced by that name, I'm not sure what
> you want for semantics past that which won't create a badly layered design.
>
> Collada was designed as a transfer format for games. It is compatible with
> X3D.
>

OK, I see this is a difficult concept to get across when people are used to scene graphs. Let me try to explain it another way.

Yes, you can group primitives via DEF into a complex shape, and even give it a name that suggests to the human reader what is meant by this group. The problem is that DEF in VRML is designed as a syntactic tool, not as a signifier for complex objects. Remember that you could just as easily group part of a world for editing convenience.

  world --> transform --> DEF tree = (transform1 --> sphere, transform2 --> cylinder) --> meta: "tree"

and

  world --> transform1 --> sphere   --> meta: "treetop"
  world --> transform2 --> cylinder --> meta: "treetrunk"

will both be perceived by a human as a tree, but for an agent both cases are still gibberish (both are written out as concrete fragments below). The label "tree" does not help either; it could just as well have been "Baum" or "Arbol". These are words that require understanding of a human language to process, and agents generally are not capable of that. You need a controlled vocabulary, a semantic URI, or a similar concept identifier that is machine understandable.

Finally, meta nodes may help a bit, but for an agent to understand the scene it would first have to extract them all, pick the semantically relevant parts of the scene graph, and correlate them somehow. In the first case this is easier than in the second; sadly, 3D modelling tools tend towards the second case, especially when naive users toy with them. After all, it "looks" great, right? Unfortunately it's a bad situation for an agent interested primarily in the meta information.

I think this boils down to intended use. If you want a scene that looks good to a human, use a scene graph. If you want a machine-understandable scene, use a description language. If you want *both*, well... pick a focus, and either emphasize the scene with attached meta nodes, or emphasize the semantic description and attach the geometry. Scene emphasis is mainstream; I happen to prefer semantic emphasis. Think outward and inner beauty :-)

Regards,
Karsten Otto (kao)

_______________________________________________
vos-d mailing list
vos-d@interreality.org
http://www.interreality.org/cgi-bin/mailman/listinfo/vos-d
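For reference, the two structurings Karsten describes, written out as minimal fragments in X3D classic (VRML-style) encoding; the node layout and the "label" metadata vocabulary are illustrative assumptions, not anything the spec mandates:

  # Case 1: one DEF'd group with a single label on the whole "tree"
  DEF Tree Transform {
    metadata MetadataString { name "label" value [ "tree" ] }   # human word, not a machine concept
    children [
      Transform {                                   # transform1
        translation 0 2.5 0
        children Shape { geometry Sphere { radius 1 } }
      }
      Transform {                                   # transform2
        children Shape { geometry Cylinder { height 2 radius 0.2 } }
      }
    ]
  }

  # Case 2: the parts are labelled separately; the "tree" exists only in the viewer's head
  Transform {                                       # transform1
    metadata MetadataString { name "label" value [ "treetop" ] }
    translation 0 2.5 0
    children Shape { geometry Sphere { radius 1 } }
  }
  Transform {                                       # transform2
    metadata MetadataString { name "label" value [ "treetrunk" ] }
    children Shape { geometry Cylinder { height 2 radius 0.2 } }
  }

Either way, an agent would still need a controlled vocabulary, or a URI as in the earlier sketch, behind those labels before the scene means anything to it.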