Hi Jeff,
The thing is that we are tying non-VRML
functionality into the shared worlds.
That avatar walking around could be me, and
I could talk to you.
You can walk into an area, see who is there, and
walk up to a group of people, much like you would
at a party or something.
Take the approach of multi-user conferencing, and
add the visual placement that VNet's shared virtual
worlds give you.
We take the coordinates of every member of the shared
virtual world space, and tell the other tool(s) on
your computer where these avatars are placed.
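To make that concrete, here is a rough Python sketch of the idea.
It is only my illustration, not VNet's actual protocol or API; the
names (Avatar, AudioTool, place_speaker, etc.) are made up.

from dataclasses import dataclass

@dataclass
class Avatar:
    user_id: str   # which participant this avatar represents
    x: float       # position in the shared world (metres)
    y: float
    z: float

class AudioTool:
    """Stand-in for a local conferencing tool that can pan or
    attenuate each speaker's audio stream by position."""
    def place_speaker(self, user_id: str, x: float, y: float, z: float) -> None:
        # A real tool would spatialize that participant's audio here.
        print(f"placing {user_id} at ({x:.1f}, {y:.1f}, {z:.1f})")

def push_positions(avatars: list[Avatar], tools: list[AudioTool]) -> None:
    """Forward every avatar's coordinates to every local tool."""
    for avatar in avatars:
        for tool in tools:
            tool.place_speaker(avatar.user_id, avatar.x, avatar.y, avatar.z)

if __name__ == "__main__":
    world = [Avatar("miriam", 2.0, 0.0, -1.5), Avatar("jeff", -3.0, 0.0, 4.0)]
    push_positions(world, [AudioTool()])

The point is only that the shared world owns the coordinates and the
other tools just get told where everyone is.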
We are trying to tie VRML into real-world interactions,
if that makes any sense! So, look at it backwards:
we are putting VRML on top of other computer tools
that interact with your environment in a real sense.
I come to this from a group of E.C.-funded research
projects on multicast conferencing, and one of the
big problems I have experienced is that there is
no way to have people go off into corners and
talk amongst themselves to come to some resolution.
Does this help? I use two scenarios to help describe
this stuff:
1) walking into an evening party. You see groups of
people; maybe you walk up to the group that Miriam
is in, and you join in the ongoing conversation.
2) virtual conference. A real speaker is making an
audio/video presentation, and listeners can line up
at "microphone stations" to ask questions. The speaker
can see movement in his/her audience, and direct
individuals to speak.
Basically mimicking real life.
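For the party scenario, the grouping could be as simple as clustering
avatars that stand within a few metres of each other. Again, this is
just an illustrative sketch, not something VNet actually implements;
the radius and all names are made up.

import math

Position = tuple[float, float, float]

def conversation_groups(positions: dict[str, Position],
                        radius: float = 3.0) -> list[set[str]]:
    """Connected clusters: anyone within `radius` of a group member
    belongs to that group, like joining a huddle at a party."""
    unseen = set(positions)
    groups: list[set[str]] = []
    while unseen:
        seed = unseen.pop()
        group, frontier = {seed}, [seed]
        while frontier:
            current = frontier.pop()
            nearby = [u for u in unseen
                      if math.dist(positions[current], positions[u]) <= radius]
            for u in nearby:
                unseen.discard(u)
                group.add(u)
                frontier.append(u)
        groups.append(group)
    return groups

if __name__ == "__main__":
    people = {"miriam": (0.0, 0.0, 0.0), "john": (1.5, 0.0, 0.5),
              "jeff": (20.0, 0.0, 20.0)}
    print(conversation_groups(people))   # e.g. [{'miriam', 'john'}, {'jeff'}]

Each group could then get its own audio mix, which is exactly the
"go off into corners and talk amongst themselves" behaviour that
plain multicast conferencing is missing.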
Hope this helps!
John.
> so how is this different than what CosmoPlayer
> [actually what *any* compliant plugin] currently does
> to support audio spatialization
> within for example the box that walks around the vrmLab scene
> and makes grinding noises??