Regarding the previous post...

It seems this is done in the function fluid_rvoice_buffers_mix() in the file "fluid_rvoice.c".
The output buffer buf[] is also of type fluid_real_t.
So I'm assuming the rendering engine can accept floating point values.
I'm still trying to see how the linear interpolation is able to remove most of the low-frequency artifacts. None of my simulations and experiments reproduce the output I'm seeing from FluidSynth.
Perhaps the audio h/w has additional filtering functions.
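Just to make sure I'm simulating the right thing, here is the stripped-down model I've been testing against. The names and simplifications are my own, not the actual FluidSynth code: linear interpolation fills a per-voice float buffer, and the mix step just scales and accumulates it, still in floating point, into buf[].

    /* My own sketch, not FluidSynth internals: two-point linear
     * interpolation into a float dsp buffer, then a gain-scaled
     * accumulate into the shared float mix buffer. */
    #include <stddef.h>

    typedef float fluid_real_t;   /* float in the default build, as far as I can tell */

    /* Resample 'count' output samples from 'src' at fractional rate 'phase_incr'.
     * Caller must ensure src has at least one extra sample past the last index read. */
    void interp_linear(const fluid_real_t *src, fluid_real_t *dsp_buf,
                       size_t count, double phase_incr)
    {
        double phase = 0.0;
        for (size_t i = 0; i < count; i++) {
            size_t idx = (size_t)phase;
            fluid_real_t frac = (fluid_real_t)(phase - (double)idx);
            dsp_buf[i] = src[idx] + frac * (src[idx + 1] - src[idx]);
            phase += phase_incr;
        }
    }

    /* Accumulate the voice's dsp buffer into the float mix buffer buf[]. */
    void mix_into(const fluid_real_t *dsp_buf, fluid_real_t *buf,
                  size_t count, fluid_real_t amp)
    {
        for (size_t i = 0; i < count; i++)
            buf[i] += amp * dsp_buf[i];
    }

If that's roughly what the real code does, I'd expect some residual imaging from the linear interpolation, which is why the clean output I'm measuring surprises me.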

Is there a way I can save the output waveform to a file? I want to look at the audio data before it's rendered.
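If there isn't a built-in way, my fallback plan is to temporarily add a few lines near the mix code and dump the float buffer to a raw file that I can pull into my analysis scripts. Something like this; the function name, file path, and hook point are just my guess at a convenient spot:

    /* Quick-and-dirty debug dump: append the float buffer to a raw file.
     * The result can be imported as 32-bit float, mono, at the synth
     * sample rate by most audio analysis tools. */
    #include <stdio.h>

    void debug_dump_buffer(const float *buf, size_t count)
    {
        FILE *fp = fopen("/tmp/fluid_dump.f32", "ab");   /* arbitrary path */
        if (fp != NULL) {
            fwrite(buf, sizeof(float), count, fp);
            fclose(fp);
        }
    }

But if there's already a supported way to capture this, I'd rather use that.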

Thanks,
Brad




On 01/03/2016 01:27 PM, Brad Stewart wrote:
The interpolation routines place the processed data into dsp_buf[] of type fluid_real_t.
I'm assuming you eventually convert this to int16 for rendering.
Can you point me to the section of the code that does the conversion?
Are you doing anything else, such as a moving-average filter?
Thanks,
Brad

_______________________________________________
fluid-dev mailing list
fluid-dev@nongnu.org
https://lists.nongnu.org/mailman/listinfo/fluid-dev
