hello John...

I'm not trying to be a pain in the #$%
I'm just trying to figure out what I am missing

> The thing is that we are tying in non-VRML 
> functionality to the shared worlds. 
> 
> That Avatar that walks around could be me, and
> I could talk to you.
> 
> You can walk into an area, see who is there, and
> walk up to a group of people much like you would
> do in a party or something.

so let us say I am in a VNet scene,
and my avatar includes a Sound node containing an AudioClip node,
something like the following:

      Sound {
        # audible within the 0.1 m inner ellipsoid at full volume,
        # attenuating to silence beyond 25 m
        minFront 0.1
        minBack 0.1
        maxFront 25
        maxBack 25
        source DEF footSteps AudioClip {
          description "Avatar Footsteps"
          loop TRUE
          url [ "Warehouse/sounds/WAV/camera.wav" ]
        }
      }

how is what you are working on effectively different from
the spatialization already offered by the Sound node?
if my avatar in my VNet scene offered gesture buttons,
each of which triggered/modified shared-playback Sound/AudioClip nodes
[currently they usually trigger shared-visibility gestures],
would this be similar to what you are doing?
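
to make the question concrete, here is a rough sketch of what I mean
by a gesture button driving a shared-playback clip -- just standard
VRML 97 nodes and a ROUTE. the node names and the .wav path are
placeholders of mine, and whether the click actually gets reflected
to the other users is entirely up to the VNet layer, not these nodes:

      DEF GestureButton Transform {
        children [
          # clicking the box geometry fires the sensor
          DEF ButtonSensor TouchSensor { }
          Shape {
            appearance Appearance {
              material Material { diffuseColor 0.8 0.2 0.2 }
            }
            geometry Box { size 0.2 0.2 0.05 }
          }
        ]
      }

      Sound {
        minFront 0.1
        minBack 0.1
        maxFront 25
        maxBack 25
        source DEF gestureClip AudioClip {
          description "Avatar Gesture Sound"
          loop FALSE
          url [ "Warehouse/sounds/WAV/hello.wav" ]
        }
      }

      # clicking the button (re)starts the clip locally
      ROUTE ButtonSensor.touchTime TO gestureClip.set_startTime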

I can see how
the ability to embed an AudioClip node
(or something like it)
in an avatar,
and to have it relay to the other users
what is being said at your desk,
would be a *very* interesting capability...
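
if a browser were willing to treat a live stream as an AudioClip
source, I imagine it might look something like this -- the url is
pure wishful thinking on my part, since VRML 97 does not require
AudioClip to handle live audio at all:

      Sound {
        minFront 0.1
        minBack 0.1
        maxFront 25
        maxBack 25
        # hypothetical live feed from the microphone at my desk;
        # loop TRUE just keeps the source active indefinitely
        source DEF deskVoice AudioClip {
          description "Live audio from my desk"
          loop TRUE
          url [ "http://example.com/live/desk-audio.wav" ]
        }
      }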

jeffs

--
Jeff Sonstein, M.A.     http://ariadne.iz.net/
        http://ariadne.iz.net/~jeffs/jeffs.asc
==============================================
there are no bugs
there are just undocumented features
