>> QUESTION: Does ANY application writer see a need for SPATIAL rendering of
>> a MIDI file, and feels it critical to have support for "PointSound" or
>> "ConeSound" using a MediaContainer with MIDI data???

< I didn't mean anything by using more than a single question mark in my
  mail message...!!!... that's just my typing/writing style... :>

Vladimir

I fully understand the performance/footprint advantages of using MIDI
versus sampled files for music.  Thank you for making it clear that what
you really need is a way to ATTENUATE the volume based on distance.
However, volume attenuation is only one portion of spatialization.  As you
probably know, spatialization also includes calculating and rendering
Interaural Time and Intensity Differences (ITD and IID) for the left and
right output signals, as well as performing separate filtering of these
two signals to simulate the head-shoulder-pinna shadowing of specific
frequencies based on the sound position.  Typically, this processing is
done on a MONO sound source (though a stereo signal could simply be mixed
down to mono before being spatialized).
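
For a feel of the numbers on the ITD side, the delay can be approximated
in closed form with the standard spherical-head (Woodworth) model.  This
is generic acoustics rather than anything specific to Java 3D, and the
constants below are just the usual textbook values:

public class ItdApproximation {

    private static final double HEAD_RADIUS = 0.0875;    // meters, average head
    private static final double SPEED_OF_SOUND = 343.0;  // meters/second in air

    /** ITD for a source at the given azimuth (radians off the median
        plane), using ITD = (r/c) * (azimuth + sin(azimuth)). */
    public static double itdSeconds(double azimuth) {
        return (HEAD_RADIUS / SPEED_OF_SOUND) * (azimuth + Math.sin(azimuth));
    }
}

At 90 degrees off-center that comes out to roughly 0.65 milliseconds, which
is the size of delay the left and right signals have to be offset by.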

Currently there is no way for a Java 3D sound to be attenuated
automatically based on the distance from the sound source to the listener
without also being spatialized (delaying the left or right signal, changing
the left and right gains, adding reverberation, ...).  In the short term,
if all you want is volume attenuation, I suggest that you calculate the
distance from the listener to the sound source (this can be done with
Java 3D queries and a little linear algebra ---> sounds like a good
candidate for a Java 3D utility...:) and then change the gain of the MIDI
playback yourself.
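
Just to make that suggestion concrete, here is a rough sketch of what the
distance query and gain change could look like.  It assumes the Java Sound
API (javax.sound.midi) is handling the MIDI playback; the class and method
names and the 1/distance rolloff are placeholders of my own, not part of
any shipping Java 3D utility:

import javax.media.j3d.Node;
import javax.media.j3d.Transform3D;
import javax.sound.midi.MidiChannel;
import javax.sound.midi.Synthesizer;
import javax.vecmath.Vector3d;

public class MidiDistanceAttenuation {

    /** Distance (in world units) inside which no attenuation is applied. */
    private static final double REFERENCE_DISTANCE = 1.0;

    /** Listener-to-source distance in virtual-world coordinates. */
    public static double distance(Node soundNode, Node viewPlatformNode) {
        // Both nodes need the ALLOW_LOCAL_TO_VWORLD_READ capability set.
        Transform3D soundToVworld = new Transform3D();
        Transform3D viewToVworld = new Transform3D();
        soundNode.getLocalToVworld(soundToVworld);
        viewPlatformNode.getLocalToVworld(viewToVworld);

        Vector3d soundPos = new Vector3d();
        Vector3d viewPos = new Vector3d();
        soundToVworld.get(soundPos);   // translational component = source position
        viewToVworld.get(viewPos);     // translational component = listener position

        soundPos.sub(viewPos);
        return soundPos.length();
    }

    /** Scale every channel's volume (CC #7) with a simple 1/distance rolloff. */
    public static void attenuate(Synthesizer synth, double dist) {
        double gain = REFERENCE_DISTANCE / Math.max(dist, REFERENCE_DISTANCE);
        int volume = (int) Math.round(127.0 * gain);
        MidiChannel[] channels = synth.getChannels();
        for (int i = 0; i < channels.length; i++) {
            if (channels[i] != null) {
                channels[i].controlChange(7, volume);  // 7 = channel volume
            }
        }
    }
}

The 1/distance rolloff is just one choice, of course; any monotonic
falloff could be substituted.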

Although the currently released Java 3D AudioDevices are not implemented
to support MIDI sound data, close-to-full spatialization of MIDI data is
"possible" to achieve, as I alluded to in my earlier message, without any
changes to the Java 3D API.  For example, a particular Java 3D AudioDevice
implementation could be created that spatializes a MIDI file by duplicating
the MIDI data so that there are separate MIDI streams used for outputting
dry, hard-panned left and hard-panned right signals (to simulate IID), plus
a center channel for reverberation.  Each would have to be slightly delayed
to simulate ITD.  The spatialization algorithm would have to (1) use three
times as many channels and notes as the original stream and (2) pre-screen
the MIDI data for Continuous Controller #10 (pan) and #7 (volume) commands
so that panning and volume didn't get explicitly changed by the MIDI
stream.  Since the number of notes and channels used by the MIDI stream
might already be near the wave-table synth's (WTS) limits, this tripling of
required resources could be a problem.  Furthermore, filtering the output
signal after a WTS engine creates it may not be possible with all
implementations (especially if hardware WTS is being used).
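
For what it's worth, the pre-screening in step (2) is the easy part if the
MIDI data is available as a javax.sound.midi Sequence.  A rough sketch
(the stream duplication, hard panning, and delay steps are the hard part
and are not shown):

import java.util.ArrayList;
import java.util.List;
import javax.sound.midi.MidiEvent;
import javax.sound.midi.Sequence;
import javax.sound.midi.ShortMessage;
import javax.sound.midi.Track;

public class MidiPreScreen {

    /** Strip CC #7 (volume) and CC #10 (pan) events from every track so
        the stream cannot override the spatializer's own gain/pan settings. */
    public static void stripVolumeAndPan(Sequence seq) {
        Track[] tracks = seq.getTracks();
        for (int t = 0; t < tracks.length; t++) {
            Track track = tracks[t];
            List<MidiEvent> doomed = new ArrayList<MidiEvent>();
            for (int i = 0; i < track.size(); i++) {
                MidiEvent event = track.get(i);
                if (event.getMessage() instanceof ShortMessage) {
                    ShortMessage sm = (ShortMessage) event.getMessage();
                    int controller = sm.getData1();
                    if (sm.getCommand() == ShortMessage.CONTROL_CHANGE
                            && (controller == 7 || controller == 10)) {
                        doomed.add(event);
                    }
                }
            }
            // Remove after the scan so track indices stay valid while iterating.
            for (int i = 0; i < doomed.size(); i++) {
                track.remove(doomed.get(i));
            }
        }
    }
}

The stripped volume values could instead be remembered and folded into the
AudioDevice's own gain computation rather than discarded outright.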

Sun has considered, and will continue to consider, implementing an
AudioDevice that DOES support MIDI input via MediaContainers, but it is
not a trivial task.  I broadcast the above question to test the waters -
to determine how important this is to Java 3D application developers in
general...;)

Thanks for your input.

Warren Dale
Sun Microsystems

