Re: PortAudio driver (was Re: [fluid-dev] New development)

2009-02-04 Thread O. P. Martin

O. P. Martin wrote:


Hi, Josh,
hi, Pedro,

How are you?

I was hoping to use PortAudio in my own project, on Windows and
cross-platform.  The current state of affairs does sound quite
discouraging.

Does Audacity use PortAudio? How do they do it?

Thank you for your work.  Keep up the good work!

May the Lord bless you,
Philip



Josh Green wrote:

Hello Pedro,

On Sun, 2009-02-01 at 02:05 +0100, Pedro Lopez-Cabanillas wrote:
  

I've checked in some changes to the PortAudio driver.

- PortAudio enumerates devices having input-only ports. As we need audio
output devices, I've changed the enumeration to ignore devices with fewer
than 2 output channels available.


- For the default device name, I've defined the string "PortAudio Default" 
trying to solve the clash with ALSA's "default" device name. The function 
Pa_GetDefaultOutputDevice() provides the default device index.


- Added an assignment of the device index when a device matches the requested
device name.




Sounds like some good stuff.  Thanks for completing those!


  
About the Windows tests. The current status of PortAudio is somewhat sad,
and that's being optimistic. Their autohell build system allows only one
backend at a time. To compile the WDMKS backend, the documentation says that
it needs the DirectX SDK, but the needed headers actually come from the
Windows Driver Kit instead. It compiles, but the initialization is
deactivated, requiring you to uncomment some lines in the file
"pa_win_hostapis.c". After some googling you realize that there is an active
ticket about this: http://www.portaudio.com/trac/ticket/47


In order to build FluidSynth, portaudio.pc needs to be modified by hand.
Only then do you realize that there is no sound at all. Using different
devices doesn't help. The PortAudio test programs don't produce any sound
either. It is a problem with the backend code, which only invokes the
callback when there is an input stream in addition to the output one. There
are some search results talking about this. Finally, after commenting out
the offending condition, there is sound at last! Buffer size: 64, buffer
count: 2, a latency of less than 3 msecs at 48000 Hz (64 x 2 / 48000 ~= 2.7
ms). The sample rate depends on the device: there is no automatic
resampling, only the rates supported by the device. The bad news: the first
underrun degrades the audio quality badly, forever. There is no automatic
recovery, nor any solution other than restarting.




That sounds like a pretty sad state of affairs.  Hopefully it will
improve.  The latency under 3 msecs sounds pretty good, though.  Having
the sound get out of sync on an underrun doesn't sound very nice.

  

Regards,
Pedro


Cheers!
Josh





Re: PortAudio driver (was Re: [fluid-dev] New development)

2009-01-31 Thread Josh Green
Hello Pedro,

On Sun, 2009-02-01 at 02:05 +0100, Pedro Lopez-Cabanillas wrote:
> I've checked in some changes to the PortAudio driver.
> 
> - PortAudio enumerates devices having input-only ports. As we need audio
> output devices, I've changed the enumeration to ignore devices with fewer
> than 2 output channels available.
> 
> - For the default device name, I've defined the string "PortAudio Default" 
> trying to solve the clash with ALSA's "default" device name. The function 
> Pa_GetDefaultOutputDevice() provides the default device index.
> 
> - Added an assignment of the device index when a device matches the requested
> device name.
> 


Sounds like some good stuff.  Thanks for completing those!


> About the Windows tests. The current status of PortAudio is somewhat sad,
> and that's being optimistic. Their autohell build system allows only one
> backend at a time. To compile the WDMKS backend, the documentation says that
> it needs the DirectX SDK, but the needed headers actually come from the
> Windows Driver Kit instead. It compiles, but the initialization is
> deactivated, requiring you to uncomment some lines in the file
> "pa_win_hostapis.c". After some googling you realize that there is an active
> ticket about this: http://www.portaudio.com/trac/ticket/47
> 
> In order to build FluidSynth, portaudio.pc needs to be modified by hand.
> Only then do you realize that there is no sound at all. Using different
> devices doesn't help. The PortAudio test programs don't produce any sound
> either. It is a problem with the backend code, which only invokes the
> callback when there is an input stream in addition to the output one. There
> are some search results talking about this. Finally, after commenting out
> the offending condition, there is sound at last! Buffer size: 64, buffer
> count: 2, a latency of less than 3 msecs at 48000 Hz (64 x 2 / 48000 ~= 2.7
> ms). The sample rate depends on the device: there is no automatic
> resampling, only the rates supported by the device. The bad news: the first
> underrun degrades the audio quality badly, forever. There is no automatic
> recovery, nor any solution other than restarting.
> 

That sounds like a pretty sad state of affairs.  Hopefully it will
improve.  The latency under 3 msecs sounds pretty good, though.  Having
the sound get out of sync on an underrun doesn't sound very nice.

> Regards,
> Pedro

Cheers!
Josh






PortAudio driver (was Re: [fluid-dev] New development)

2009-01-31 Thread Pedro Lopez-Cabanillas
Josh Green wrote:
> After looking at the fluid_portaudio.c and seeing how small it was, I
> decided to take a crack at it.  Checked in is the new PortAudio driver
> using PortAudio API 19.  I added device enumeration, but it still needs
> some improvement.  It is using the device names for the setting
> audio.portaudio.device, but these look like they can be rather long and
> I'm not sure if there is any guarantee that they will be unique.  I also
> am using the string "default" to try and select the default device,
> which I was assuming would be device 0, but I'm not sure about that and
> this would also conflict with the ALSA "default" device name.
>
> PortAudio selects its devices based on index.  Perhaps specifying the
> device numerically would be better, though it would be nice to be able
> to see the names in a drop down list in applications as well.
>
> Other improvements would be the addition of a new2 driver.
>
> Anyone willing to try this with ASIO on Windows?
>
> Best regards,
>   Josh

I've checked in some changes to the PortAudio driver.

- PortAudio enumerates devices having input-only ports. As we need audio
output devices, I've changed the enumeration to ignore devices with fewer
than 2 output channels available.

- For the default device name, I've defined the string "PortAudio Default" 
trying to solve the clash with ALSA's "default" device name. The function 
Pa_GetDefaultOutputDevice() provides the default device index.

- Added an assignment of the device index when a device matches the requested
device name.
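
A minimal sketch of that enumeration logic, assuming only the public
PortAudio V19 calls named above (Pa_GetDeviceCount(), Pa_GetDeviceInfo(),
Pa_GetDefaultOutputDevice()); the actual driver code may differ in detail:

#include <string.h>
#include <portaudio.h>

/* Resolve an audio.portaudio.device name to a PaDeviceIndex (call after
 * Pa_Initialize()).  Input-only devices (fewer than 2 output channels)
 * are skipped, and "PortAudio Default" maps to the default output device
 * to avoid the clash with ALSA's "default" device name. */
static PaDeviceIndex find_output_device(const char *name)
{
    PaDeviceIndex i, count = Pa_GetDeviceCount();

    if (strcmp(name, "PortAudio Default") == 0)
        return Pa_GetDefaultOutputDevice();

    for (i = 0; i < count; i++) {
        const PaDeviceInfo *info = Pa_GetDeviceInfo(i);

        if (info->maxOutputChannels < 2)   /* ignore input-only devices */
            continue;
        if (strcmp(info->name, name) == 0)
            return i;                      /* index of the matching name */
    }
    return paNoDevice;
}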

About the Windows tests. The current status of PortAudio is somewhat sad,
and that's being optimistic. Their autohell build system allows only one
backend at a time. To compile the WDMKS backend, the documentation says that
it needs the DirectX SDK, but the needed headers actually come from the
Windows Driver Kit instead. It compiles, but the initialization is
deactivated, requiring you to uncomment some lines in the file
"pa_win_hostapis.c". After some googling you realize that there is an active
ticket about this: http://www.portaudio.com/trac/ticket/47
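
A quick way to see whether the WDMKS backend actually got initialized is to
list the host APIs at runtime; a minimal diagnostic using only public V19
calls (a hypothetical test program, not part of FluidSynth):

#include <stdio.h>
#include <portaudio.h>

/* If WDMKS is compiled in but its entry is still commented out in
 * pa_win_hostapis.c, it will not show up in this list. */
int main(void)
{
    PaHostApiIndex i, n;

    if (Pa_Initialize() != paNoError)
        return 1;

    n = Pa_GetHostApiCount();
    for (i = 0; i < n; i++) {
        const PaHostApiInfo *api = Pa_GetHostApiInfo(i);
        printf("%d: %s (%d devices)\n", (int) i, api->name, api->deviceCount);
    }

    Pa_Terminate();
    return 0;
}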

In order to build FluidSynth, portaudio.pc needs to be modified by hand.
Only then do you realize that there is no sound at all. Using different
devices doesn't help. The PortAudio test programs don't produce any sound
either. It is a problem with the backend code, which only invokes the
callback when there is an input stream in addition to the output one. There
are some search results talking about this. Finally, after commenting out
the offending condition, there is sound at last! Buffer size: 64, buffer
count: 2, a latency of less than 3 msecs at 48000 Hz (64 x 2 / 48000 ~= 2.7
ms). The sample rate depends on the device: there is no automatic
resampling, only the rates supported by the device. The bad news: the first
underrun degrades the audio quality badly, forever. There is no automatic
recovery, nor any solution other than restarting.
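
For context, an output-only PortAudio stream is opened by passing NULL input
parameters, and the callback is supposed to fire regardless of any input
stream; that is exactly the condition the broken backend code gets wrong. A
minimal sketch under that assumption (public V19 API; the silence-writing
callback is only illustrative):

#include <string.h>
#include <portaudio.h>

#define BUFFER_SIZE 64   /* frames per buffer, as in the test above */

/* Output-only callback: with the WDMKS bug, this was never invoked
 * unless an input stream was also open. */
static int callback(const void *in, void *out, unsigned long frames,
                    const PaStreamCallbackTimeInfo *time,
                    PaStreamCallbackFlags flags, void *user)
{
    memset(out, 0, frames * 2 * sizeof(float));  /* stereo silence */
    return paContinue;
}

static PaError open_output(PaStream **stream, PaDeviceIndex dev)
{
    PaStreamParameters p;

    memset(&p, 0, sizeof(p));
    p.device = dev;
    p.channelCount = 2;
    p.sampleFormat = paFloat32;
    p.suggestedLatency = Pa_GetDeviceInfo(dev)->defaultLowOutputLatency;

    /* NULL input parameters => pure output stream */
    return Pa_OpenStream(stream, NULL, &p, 48000.0, BUFFER_SIZE,
                         paNoFlag, callback, NULL);
}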

Regards,
Pedro




Re: [fluid-dev] New development

2009-01-30 Thread ggoode.sa
Hi Pedro,
Given that the generic ASIO4ALL driver is really just an ASIO overlay
of WDM/KS, I think that a WDM/KS PortAudio driver would produce the
lower latency that Windows users are hoping for. Do you know
what kind of configuration options would be available - buffers,
sample rate, etc.?
GrahamG




Re: [fluid-dev] New development

2009-01-30 Thread Pedro Lopez-Cabanillas
On Thu, Jan 29, 2009 at 11:41 PM, Josh Green wrote:
>
> On Thu, 2009-01-29 at 22:46 +0100, Pedro Lopez-Cabanillas wrote:
> > I have a problem with ASIO, though. First, I don't like the license terms
> > from Steinberg: they don't allow redistributing their sources (which are
> > available free of charge to registered developers). Second, they ask for an
> > unfair amount of personal data before allowing you to download the SDKs.
> >
> > I've created binary setup packages for Windows bundling QSynth and
> > FluidSynth in the past (available on SourceForge). I fear that I'm not
> > going to include ASIO support in the future ones.
> >
> > Regards,
> > Pedro
>
> Now that I read things right...  So your objection to ASIO extends to
> PortAudio as well?  I mean, distributing a PortAudio-enabled FluidSynth,
> which happens to be able to use ASIO as a side effect, seems relatively
> harmless.  Especially if you aren't bundling PortAudio as well.

I agree. PortAudio's license is OK for me.

My plan is to add PortAudio support to the next QSynth Windows
binary package, after the release of FluidSynth 1.0.9, also bundling a
PortAudio library built without ASIO. It would be possible for
somebody to replace the provided PortAudio DLL with one compiled with a
different backend.

Building PortAudio for Windows requires choosing a single backend
among those provided in the sources: DSound, WinMM, ASIO and WDMKS. As
FluidSynth already has a DSound driver, and I dislike ASIO, I would
like to try the other two.

> I can understand your objections to semi-closed or completely closed
> standards, I dislike them myself.  I had a rather lengthy discussion
> with someone over the licensing terms of the DLS instrument standard.  I
> think to this day, their clutching on to the specification is the reason
> why it isn't as popular as SoundFont, despite some of its improvements
> over it.  It seems like it's the open standards that usually end up
> getting adopted, even if they aren't necessarily the best.
>
> Regards,
> Josh




Re: [fluid-dev] New development

2009-01-29 Thread Josh Green
On Thu, 2009-01-29 at 22:46 +0100, Pedro Lopez-Cabanillas wrote:
> I have a problem with ASIO, though. First, I don't like the license terms
> from Steinberg: they don't allow redistributing their sources (which are
> available free of charge to registered developers). Second, they ask for an
> unfair amount of personal data before allowing you to download the SDKs.
> 
> I've created binary setup packages for Windows bundling QSynth and FluidSynth
> in the past (available on SourceForge). I fear that I'm not going to include
> ASIO support in the future ones.
> 
> Regards,
> Pedro

Now that I read things right...  So your objection to ASIO extends to
PortAudio as well?  I mean, distributing a PortAudio-enabled FluidSynth,
which happens to be able to use ASIO as a side effect, seems relatively
harmless.  Especially if you aren't bundling PortAudio as well.

I can understand your objections to semi-closed or completely closed
standards, I dislike them myself.  I had a rather lengthy discussion
with someone over the licensing terms of the DLS instrument standard.  I
think to this day, their clutching on to the specification is the reason
why it isn't as popular as SoundFont, despite some of its improvements
over it.  It seems like it's the open standards that usually end up
getting adopted, even if they aren't necessarily the best.

Regards,
Josh






Re: [fluid-dev] New development

2009-01-29 Thread Josh Green
On Thu, 2009-01-29 at 23:29 +0100, Pedro Lopez-Cabanillas wrote:
> Josh Green wrote:
> > On Thu, 2009-01-29 at 22:46 +0100, Pedro Lopez-Cabanillas wrote:
> > > Thanks, Josh!
> > >
> > > I will try to find some time this weekend to play in Windows. I've
> > > already successfully tested it on Linux.
> > >
> > > I have a problem with ASIO, though. First, I don't like the license
> > > terms from Steinberg: they don't allow redistributing their sources
> > > (which are available free of charge to registered developers). Second,
> > > they ask for an unfair amount of personal data before allowing you to
> > > download the SDKs.
> > >
> > > I've created binary setup packages for Windows bundling QSynth and
> > > FluidSynth in the past (available on SourceForge). I fear that I'm not
> > > going to include ASIO support in the future ones.
> > >
> > > Regards,
> > > Pedro
> >
> > I guess an alternative to all that would be to just write ASIO drivers
> > for FluidSynth.  Any ideas where one can find the API documentation?
> > Josh
> 
> I don't understand how it would be an alternative.
> 
> I suppose that the API documentation would be included in the ASIO SDK. 
> 
> Regards,
> Pedro

Oops, I misunderstood that.  I thought you were talking about the
license terms of PortAudio.  I should read slower :)

Regards,
Josh






Re: [fluid-dev] New development

2009-01-29 Thread Pedro Lopez-Cabanillas
Josh Green wrote:
> On Thu, 2009-01-29 at 22:46 +0100, Pedro Lopez-Cabanillas wrote:
> > Thanks, Josh!
> >
> > I will try to find some time this weekend to play in Windows. I've
> > already successfully tested it on Linux.
> >
> > I have a problem with ASIO, though. First, I don't like the license terms
> > from Steinberg: they don't allow redistributing their sources (which are
> > available free of charge to registered developers). Second, they ask for
> > an unfair amount of personal data before allowing you to download the
> > SDKs.
> >
> > I've created binary setup packages for Windows bundling QSynth and
> > FluidSynth in the past (available on SourceForge). I fear that I'm not
> > going to include ASIO support in the future ones.
> >
> > Regards,
> > Pedro
>
> I guess an alternative to all that would be to just write ASIO drivers
> for FluidSynth.  Any ideas where one can find the API documentation?
>   Josh

I don't understand how it would be an alternative.

I suppose that the API documentation would be included in the ASIO SDK. 

Regards,
Pedro




Re: [fluid-dev] New development

2009-01-29 Thread Pedro Lopez-Cabanillas
Hi,

> Hi Pedro,
>
> I can understand your dislike of the Steinberg license terms! ASIO is,
> however, the best general low-latency audio option for Windows users.
>
> Would it be possible for you to create/update a
> How-To-Build-FluidSynth / QSynth in Windows using MinGW with the ASIO
> information (once it is working) so that others are able to build it
> for themselves? I'm very willing to be a tester for the How-To.
>
> Thanks for all your work!
>
> GrahamG
> Johannesburg, South Africa

First, you need to compile PortAudio. It is already documented here:
http://www.portaudio.com/trac/wiki/TutorialDir/Compile/WindowsASIOMSVC

That is the only place you need to include ASIO (or any other backend 
supported by PortAudio). 

Once you have built PortAudio, you can build FluidSynth with PortAudio 
support. FS will not use PA's backends directly. 

Regards,
Pedro




Re: [fluid-dev] New development

2009-01-29 Thread Josh Green
On Thu, 2009-01-29 at 22:46 +0100, Pedro Lopez-Cabanillas wrote:
> Thanks, Josh!
> 
> I will try to find some time this weekend to play in Windows. I've already
> successfully tested it on Linux.
> 
> I have a problem with ASIO, though. First, I don't like the license terms
> from Steinberg: they don't allow redistributing their sources (which are
> available free of charge to registered developers). Second, they ask for an
> unfair amount of personal data before allowing you to download the SDKs.
> 
> I've created binary setup packages for Windows bundling QSynth and FluidSynth
> in the past (available on SourceForge). I fear that I'm not going to include
> ASIO support in the future ones.
> 
> Regards,
> Pedro

I guess an alternative to all that would be to just write ASIO drivers
for FluidSynth.  Any ideas where one can find the API documentation?  
Josh






Re: [fluid-dev] New development

2009-01-29 Thread Pedro Lopez-Cabanillas
Josh Green wrote:
> On Thu, 2009-01-29 at 00:12 +0100, Pedro Lopez-Cabanillas wrote:
> > > Seems to me like it is definitely worth improving the existing
> > > PortAudio driver.  Any idea what this would entail?
> > >   Josh
> >
> > Briefly:
> > * Detect and define PORTAUDIO_* in the build system. I've done that using
> > pkg-config portaudio-2.0 >= 19. Allow disabling compilation of this
> > driver; maybe disable it by default?
>
> I think if portaudio is found, it should just be included as a driver.  If
> it's installed, chances are the user wants to use it.
>
> > * Implement new2() and call new2() from new(), as is usual in most
> > other audio drivers. Or leave that for a later version, or never?
>
> The new2 drivers are used by QSynth and perhaps some other applications
> to intercept the audio.
>
> > * Register the setting "audio.portaudio.device" to list/select detected
> > devices and backends as reported by Pa_GetDeviceCount() and
> > Pa_GetDeviceInfo().
> > * Maybe more settings? Some backends would require special ones?
> > * PortAudio API functions changed: Pa_OpenStream(), PaStreamCallback, ...
> > * Testing: ASIO in Windows, but it should also work for other
> > platforms/backends...
> >
> > Regards,
> > Pedro
>
> After looking at the fluid_portaudio.c and seeing how small it was, I
> decided to take a crack at it.  Checked in is the new PortAudio driver
> using PortAudio API 19.  I added device enumeration, but it still needs
> some improvement.  It is using the device names for the setting
> audio.portaudio.device, but these look like they can be rather long and
> I'm not sure if there is any guarantee that they will be unique.  I also
> am using the string "default" to try and select the default device,
> which I was assuming would be device 0, but I'm not sure about that and
> this would also conflict with the ALSA "default" device name.
>
> PortAudio selects its devices based on index.  Perhaps specifying the
> device numerically would be better, though it would be nice to be able
> to see the names in a drop down list in applications as well.
>
> Other improvements would be the addition of a new2 driver.
>
> Anyone willing to try this with ASIO on Windows?
>
> Best regards,
>   Josh

Thanks, Josh!

I will try to find some time this weekend to play in Windows. I've already
successfully tested it on Linux.

I have a problem with ASIO, though. First, I don't like the license terms
from Steinberg: they don't allow redistributing their sources (which are
available free of charge to registered developers). Second, they ask for an
unfair amount of personal data before allowing you to download the SDKs.

I've created binary setup packages for Windows bundling QSynth and FluidSynth
in the past (available on SourceForge). I fear that I'm not going to include
ASIO support in the future ones.

Regards,
Pedro




Re: [fluid-dev] New development

2009-01-29 Thread Josh Green
Hello,

Good point about WineASIO, I didn't actually know such a thing existed.

Best regards,
Josh

On Thu, 2009-01-29 at 11:28 +0200, ggoode.sa wrote:
> Hi Josh,
> 
> I don't have a build environment in Windows at the moment and probably
> won't for a few more weeks (in the middle of a move), but if you email
> me your build I'm willing to try it in WinXP. Alternatively one could
> test this with the WineASIO driver and WINE (which then patches into
> JACK).
> 
> GrahamG





Re: [fluid-dev] New development

2009-01-29 Thread ggoode.sa
Hi Josh,

I don't have a build environment in Windows at the moment and probably
won't for a few more weeks (in the middle of a move), but if you email
me your build I'm willing to try it in WinXP. Alternatively one could
test this with the WineASIO driver and WINE (which then patches into
JACK).

GrahamG




Re: [fluid-dev] New development

2009-01-29 Thread Josh Green
On Thu, 2009-01-29 at 00:12 +0100, Pedro Lopez-Cabanillas wrote:
> > Seems to me like it is definitely worth improving the existing PortAudio
> > driver.  Any idea what this would entail?
> > Josh
> 
> Briefly:
> * Detect and define PORTAUDIO_* in the build system. I've done that using
> pkg-config portaudio-2.0 >= 19. Allow disabling compilation of this driver;
> maybe disable it by default?

I think if portaudio is found, it should just be included as a driver.  If
it's installed, chances are the user wants to use it.

> * Implement new2() and call new2() from new(), as is usual in most other
> audio drivers. Or leave that for a later version, or never?

The new2 drivers are used by QSynth and perhaps some other applications
to intercept the audio.
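
For reference, the new2 variant hands rendering to an application-supplied
audio function; a minimal sketch of how a host like QSynth could hook in,
assuming the public fluid_audio_func_t signature and new_fluid_audio_driver2()
(error handling omitted; the process() helper is illustrative):

#include <fluidsynth.h>

/* Application-side audio function: the driver passes its buffers here,
 * letting the host render and post-process (intercept) the audio. */
static int process(void *data, int len, int nin, float **in,
                   int nout, float **out)
{
    fluid_synth_t *synth = (fluid_synth_t *) data;

    /* Render the synth into the driver's buffers; a host could apply
     * effects to out[0]/out[1] here before returning. */
    return fluid_synth_process(synth, len, nin, in, nout, out);
}

/* ... after creating settings and synth:
 *   fluid_audio_driver_t *driver =
 *       new_fluid_audio_driver2(settings, process, synth);
 */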

> * Register the setting "audio.portaudio.device" to list/select detected 
> devices and backends as reported by Pa_GetDeviceCount() and 
> Pa_GetDeviceInfo().   
> * Maybe more settings? Some backends would require special ones?
> * PortAudio API functions changed: Pa_OpenStream(), PaStreamCallback, ...
> * Testing: ASIO in Windows, but it should also work for other 
> platforms/backends...
> 
> Regards,
> Pedro

After looking at the fluid_portaudio.c and seeing how small it was, I
decided to take a crack at it.  Checked in is the new PortAudio driver
using PortAudio API 19.  I added device enumeration, but it still needs
some improvement.  It is using the device names for the setting
audio.portaudio.device, but these look like they can be rather long and
I'm not sure if there is any guarantee that they will be unique.  I also
am using the string "default" to try and select the default device,
which I was assuming would be device 0, but I'm not sure about that and
this would also conflict with the ALSA "default" device name.

PortAudio selects its devices based on index.  Perhaps specifying the
device numerically would be better, though it would be nice to be able
to see the names in a drop down list in applications as well.

Other improvements would be the addition of a new2 driver.

Anyone willing to try this with ASIO on Windows?

Best regards,
Josh






Re: [fluid-dev] New development

2009-01-28 Thread Pedro Lopez-Cabanillas
Josh Green wrote:
> On Tue, 2009-01-27 at 21:59 +0100, Pedro Lopez-Cabanillas wrote:
> > I don't think so. Seems that the PortAudio driver would be useful only to
> > Windows users, where FluidSynth has only a DirectSound driver with huge
> > latency problems. For this platform PortAudio provides ASIO and WinMM
> > audio drivers. There was also a WDM KS driver, but I don't know about the
> > current status of it. I don't know if this business would be a success or
> > a failure at all, before testing it. Even if PortAudio is not near our
> > (high) expectations, I would like to give it an opportunity.
> >
> > Regards,
> > Pedro
>
> Seems to me like it is definitely worth improving the existing PortAudio
> driver.  Any idea what this would entail?
>   Josh

Briefly:
* Detect and define PORTAUDIO_* in the build system. I've done that using
pkg-config portaudio-2.0 >= 19. Allow disabling compilation of this driver;
maybe disable it by default?
* Implement new2() and call new2() from new(), as is usual in most other
audio drivers. Or leave that for a later version, or never?
* Register the setting "audio.portaudio.device" to list/select detected
devices and backends as reported by Pa_GetDeviceCount() and
Pa_GetDeviceInfo() (see the sketch after this list).
* Maybe more settings? Some backends would require special ones?
* PortAudio API functions changed: Pa_OpenStream(), PaStreamCallback, ...
* Testing: ASIO in Windows, but it should also work for other
platforms/backends...
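
A minimal sketch of that settings registration, assuming FluidSynth's
settings API (fluid_settings_register_str() and fluid_settings_add_option())
plus the output-device filter discussed elsewhere in this thread; the final
driver code may differ:

#include <portaudio.h>
#include <fluidsynth.h>

/* Register "audio.portaudio.device" and add each detected output-capable
 * device as an option, so GUIs can offer a drop-down list of names. */
static void register_portaudio_settings(fluid_settings_t *settings)
{
    PaDeviceIndex i, count = Pa_GetDeviceCount();

    fluid_settings_register_str(settings, "audio.portaudio.device",
                                "PortAudio Default", 0, NULL, NULL);
    fluid_settings_add_option(settings, "audio.portaudio.device",
                              "PortAudio Default");

    for (i = 0; i < count; i++) {
        const PaDeviceInfo *info = Pa_GetDeviceInfo(i);

        if (info->maxOutputChannels >= 2)  /* output devices only */
            fluid_settings_add_option(settings, "audio.portaudio.device",
                                      (char *) info->name);
    }
}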

Regards,
Pedro




Re: [fluid-dev] New development : system clock vs. audio clock

2009-01-28 Thread Josh Green
On Wed, 2009-01-28 at 11:34 +0100, Antoine Schmitt wrote:
> Hello,
> I'm not sure that the problem can be ignored for Midi file playback,  
> because a large driver buffer will miss some midi events, which will  
> then happen late in the audio stream.
> 
> About my patch, the fact is that I haven't found a clean way to link
> the synth to the sequencer. I verified that my fix worked, but did not
> manage to clean up the API nicely. The synth and the seq currently
> just point to each other. I did not know how to manage this API
> change, and still don't. There is this seqbind.h file whose purpose
> was to isolate the synth from the seq. If we link them directly in
> some way, this file is useless. I need advice...
> 

Having some source code to sink our teeth into would help a lot.  Did
you already make it available and I missed that memo?  If not, would it
be possible to place it somewhere?  I could make a branch for it in SVN.
Josh






Re: [fluid-dev] New development : system clock vs. audio clock

2009-01-28 Thread Antoine Schmitt

Right! ;-)

"2. FS doesn't work well with big audio output sizes (or high  
latencies) since MIDI events get quantized to it."



But then problem #2 is a problem for midifile playback, right?


On 28 Jan 2009 at 18:03, Bernat Arlandis i Mañó wrote:


Antoine Schmitt wrote:

Hello,
I'm not sure that the problem can be ignored for Midi file  
playback, because a large driver buffer will miss some midi events,  
which will then happen late in the audio stream.



Hello Antoine.
Notice how I've split your issue into two problems. Problem #1 can
be ignored; as a matter of fact, you're ignoring it and trying to
address just problem #2.


++ as






Re: [fluid-dev] New development : system clock vs. audio clock

2009-01-28 Thread Bernat Arlandis i Mañó

Antoine Schmitt wrote:

Hello,
I'm not sure that the problem can be ignored for Midi file playback, 
because a large driver buffer will miss some midi events, which will 
then happen late in the audio stream.



Hello Antoine.
Notice how I've split your issue into two problems. Problem #1 can be
ignored; as a matter of fact, you're ignoring it and trying to address
just problem #2.


--
Bernat Arlandis i Mañó





Re: [fluid-dev] New development : system clock vs. audio clock

2009-01-28 Thread Antoine Schmitt

Hello,
I'm not sure that the problem can be ignored for Midi file playback,  
because a large driver buffer will miss some midi events, which will  
then happen late in the audio stream.


About my patch, the fact is that I haven't found a clean way to link  
the synth to the sequencer. I verified that my fix worked, but did not  
manage to clean up the API nicely. The synth and the seq currently  
just point to each other. I did not know how to manage this API  
change, and still don't. There is this seqbind.h file whose purpose
was to isolate the synth from the seq. If we link them directly in  
some way, this file is useless. I need advice...



On 27 Jan 2009 at 17:49, Bernat Arlandis i Mañó wrote:


Antoine Schmitt wrote:

Hi Josh and Bernat,

The issue I fixed was for real time rendering, when using the  
sequencer. And it was related, not only to standard and simpler  
latency caused by the size of the driver buffer, but because of  
unexpected behavior from the DSound driver, which, depending on the  
target hardware and other unknown reasons, would actually request  
buffers in bulk : it would request 16 buffers in a row, thus  
multipiying the latency by 16. And this would not be consistent  
(sometimes 1 buffer would be asked, sometimes 16). I have logs on  
this. This means that audio was in a way running much ahead of real  
time.


The result was that the "Sub audio buffer MIDI event processing"  
issue that Josh mentions was multiplied by 16, resulting in audible  
irregularities in rhythms. IIRC, midi playback is also attached to
the system clock, with a timer. So this problem will also happen  
for midi file playback, not only for sequencer playback. [as a side  
note, there is a redundancy in code, again, IIRC, between the  
sequencer and the midifile playback. This could be factored by  
having for example the midifile playback use the sequencer to  
insert midi events in the audio stream - end of side note]


I fixed this by branching the sequencer on the audio time (how many  
samples have elapsed), _and_ by calling the sequencer routine just  
before filling each audio buffer.


-> I guess that I did not fix this same issue with midifile  
playback then.
-> and also, I reduced the precision to a single buffer length. I  
did not address sub-buffer precision.

=> I guess this could really benefit an overall cleanup.

As for the question of where to do the processing of the scheduled  
(whether through the sequencer or through the midifile playback)  
midi events, I think that the only way to have consistent and  
reliable rendering is indeed to do it inside the callback from the  
audio driver, especially if the audio runs ahead of real time.



Thank you very much for taking the time to explain it, now I  
understand much better what you have done, and yes, it's related to  
Josh's proposal.


Before trying to solve this problem in the best way we have to  
understand it well, and it's somewhat complex. I see two problems  
here:
1. A soundcard/driver with an unusually high minimal buffer size
(associated with an unusually high latency).
2. FS doesn't work well with big audio output sizes (or high  
latencies) since MIDI events get quantized to it.


Problem #1 is the real problem and it's related to the
soundcard/system/driver, not FS, but it can be mostly ignored when you're
only playing back MIDI files. You should see whether this is the  
problem solved by ASIO drivers on Windows, someone else will know  
better than me, poor Linux user. :)


Problem #1 could be almost fixed by solving problem #2, but not
really.  Implementing Josh's proposal would complicate the code a
lot and it would hurt performance very badly on systems with already
good latency, that is, any modern computer with appropriate audio
drivers and configuration.


There might be a field to explore there, maybe Antoine's patch is  
good to implement a workaround to the latency problem in DSound  
drivers, this could be good.



On 27 Jan 2009 at 03:32, Josh Green wrote:
It seems to me like using a system timer for MIDI file event timing
(something that has different resolutions depending on the system) is
going to be a lot less reliable than using the sound card time.  Again
though, I agree that this probably only benefits MIDI file
playback/rendering.


It depends on what you're looking for. If you see FS output only as
numeric series, then we should sacrifice everything for exact sample
resolution. But this is sound, so latency and performance reliability
are a lot more important than sample accuracy. Don't get me wrong, I
would love to achieve sample accuracy with good performance,
reliability and latency, but that's not realistic, especially since
we're aiming at personal computers.


Still, there's good news: we can get sample accuracy with non-RT (or
offline) rendering, but this doesn't need any timers, and I'd like
to do it for 2.0. This would be good for testing and also for master
track rendering of pre-recorded midi tracks.

Re: [fluid-dev] New development

2009-01-27 Thread Josh Green
On Tue, 2009-01-27 at 21:59 +0100, Pedro Lopez-Cabanillas wrote:
> I don't think so. Seems that the PortAudio driver would be useful only to  
> Windows users, where FluidSynth has only a DirectSound driver with huge 
> latency problems. For this platform PortAudio provides ASIO and WinMM audio 
> drivers. There was also a WDM KS driver, but I don't know about the current  
> status of it. I don't know if this business would be a success or a failure 
> at all, before testing it. Even if PortAudio is not near our (high) 
> expectations, I would like to give it an opportunity.
> 
> Regards,
> Pedro
> 

Seems to me like it is definitely worth improving the existing PortAudio
driver.  Any idea what this would entail?
Josh






Re: [fluid-dev] New development

2009-01-27 Thread Pedro Lopez-Cabanillas
Bernat Arlandis i Mañó wrote:
> Josh Green wrote:
> > On Mon, 2009-01-26 at 22:52 +0100, Pedro Lopez-Cabanillas wrote:
> >> I would like to find more time to work on the PortAudio driver, as it
> >> was my plan for ticket #19. I will try, but don't hold your breath.
> >>
> >> You are right: there is already a PortAudio driver, but it doesn't
> >> compile with PortAudio V19. This version includes ASIO support (Mac OSX
> >> and Windows), so we don't need to write another driver for it. Do you
> >> agree?
>
> Pedro, I think you didn't like my comment in ticket #19. I thought this
> ticket was just a heads up so we would know about this broken driver. I
> didn't know you were interested on this, so please don't take offense
> because I took for granted that it wasn't important for anybody.

Don't worry, no problem. No offense at all.

> Since you might be working soon on this and have better knowledge, I
> hope you could answer some questions. Do you think this driver could
> replace the supported ones without loosing any functionality? If the
> answer is positive, do you think this could work well?

I don't think so. Seems that the PortAudio driver would be useful only to  
Windows users, where FluidSynth has only a DirectSound driver with huge 
latency problems. For this platform PortAudio provides ASIO and WinMM audio 
drivers. There was also a WDM KS driver, but I don't know about the current  
status of it. I don't know if this business would be a success or a failure 
at all, before testing it. Even if PortAudio is not near our (high) 
expectations, I would like to give it an opportunity.

Regards,
Pedro




Re: [fluid-dev] New development : system clock vs. audio clock

2009-01-27 Thread Bernat Arlandis i Mañó

Antoine Schmitt wrote:

Hi Josh and Bernat,

The issue I fixed was for real time rendering, when using the 
sequencer. And it was related, not only to standard and simpler 
latency caused by the size of the driver buffer, but because of 
unexpected behavior from the DSound driver, which, depending on the 
target hardware and other unknown reasons, would actually request 
buffers in bulk: it would request 16 buffers in a row, thus
multiplying the latency by 16. And this would not be consistent
(sometimes 1 buffer would be asked, sometimes 16). I have logs on 
this. This means that audio was in a way running much ahead of real time.


The result was that the "Sub audio buffer MIDI event processing" issue 
that Josh mentions was multiplied by 16, resulting in audible 
irregularities in rhythms. IIRC, midi playback is also attached to the
system clock, with a timer. So this problem will also happen for midi 
file playback, not only for sequencer playback. [as a side note, there 
is a redundancy in code, again, IIRC, between the sequencer and the 
midifile playback. This could be factored by having for example the 
midifile playback use the sequencer to insert midi events in the audio 
stream - end of side note]


I fixed this by branching the sequencer on the audio time (how many 
samples have elapsed), _and_ by calling the sequencer routine just 
before filling each audio buffer.


-> I guess that I did not fix this same issue with midifile playback 
then.
-> and also, I reduced the precision to a single buffer length. I did 
not address sub-buffer precision.

=> I guess this could really benefit an overall cleanup.

As for the question of where to do the processing of the scheduled 
(whether through the sequencer or through the midifile playback) midi 
events, I think that the only way to have consistent and reliable 
rendering is indeed to do it inside the callback from the audio 
driver, especially if the audio runs ahead of real time.



Thank you very much for taking the time to explain it, now I understand 
much better what you have done, and yes, it's related to Josh's proposal.


Before trying to solve this problem in the best way we have to 
understand it well, and it's somewhat complex. I see two problems here:
1. A soundcard/driver with an unusually high minimal buffer size
(associated with an unusually high latency).
2. FS doesn't work well with big audio output sizes (or high latencies) 
since MIDI events get quantized to it.


Problem #1 is the real problem and it's related to the
soundcard/system/driver, not FS, but it can be mostly ignored when
you're only playing back MIDI files. You should see whether this is the 
problem solved by ASIO drivers on Windows, someone else will know better 
than me, poor Linux user. :)


Problem #1 could be almost fixed by solving problem #2, but not really.
Implementing Josh's proposal would complicate the code a lot and it
would hurt performance very badly on systems with already good latency,
that is, any modern computer with appropriate audio drivers and configuration.


There might be a field to explore there, maybe Antoine's patch is good 
to implement a workaround to the latency problem in DSound drivers, this 
could be good.



On 27 Jan 2009 at 03:32, Josh Green wrote:
It seems to me like using a system timer for MIDI file event timing
(something that has different resolutions depending on the system) is
going to be a lot less reliable than using the sound card time.  Again
though, I agree that this probably only benefits MIDI file
playback/rendering.

  
It depends on what you're looking for. If you see FS output only as
numeric series, then we should sacrifice everything for exact sample
resolution. But this is sound, so latency and performance reliability are
a lot more important than sample accuracy. Don't get me wrong, I would
love to achieve sample accuracy with good performance, reliability and
latency, but that's not realistic, especially since we're aiming at
personal computers.


Still, there's good news, we can get sample accuracy with non-RT (or 
offline) rendering, but this doesn't need any timers, and I'd like to do 
it for 2.0. This would be good for testing and also for master track 
rendering of pre-recorded midi tracks.

What about just using it as a timing source?  I still haven't thought it
all through, but I could see how this could have its advantages.

  
Using the audio driver as a timing source could be an option for 2.0; in
fact, I'd like to be able to use anything as a timing source. But
there's a difference: there would be separate threads for MIDI and core
processing with different priorities, same as now, but sharing the same
timing source.


We're getting to very complex issues that I think shouldn't be the most
important thing now, unless someone wants to experiment with them, but
this kind of experiment should be done in its own experimental branch.
The 2.x branch should not be experimental.


Cheers.

Re: [fluid-dev] New development : system clock vs. audio clock

2009-01-27 Thread Antoine Schmitt

Hi Josh and Bernat,

The issue I fixed was for real time rendering, when using the  
sequencer. And it was related, not only to standard and simpler  
latency caused by the size of the driver buffer, but because of  
unexpected behavior from the DSound driver, which, depending on the  
target hardware and other unknown reasons, would actually request  
buffers in bulk: it would request 16 buffers in a row, thus
multiplying the latency by 16. And this would not be consistent
(sometimes 1 buffer would be asked, sometimes 16). I have logs on  
this. This means that audio was in a way running much ahead of real  
time.


The result was that the "Sub audio buffer MIDI event processing" issue  
that Josh mentions was multiplied by 16, resulting in audible  
irregularities in rhythms. IIRC, midi playback is also attached to the
system clock, with a timer. So this problem will also happen for midi  
file playback, not only for sequencer playback. [as a side note, there  
is a redundancy in code, again, IIRC, between the sequencer and the  
midifile playback. This could be factored by having for example the  
midifile playback use the sequencer to insert midi events in the audio  
stream - end of side note]


I fixed this by branching the sequencer on the audio time (how many  
samples have elapsed), _and_ by calling the sequencer routine just  
before filling each audio buffer.
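
A minimal sketch of that arrangement, assuming FluidSynth's public sequencer
API (fluid_sequencer_process(), advancing a sequencer that is not bound to
the system timer) called from a driver render callback; the struct and glue
code are illustrative, not the actual patch:

#include <fluidsynth.h>

/* Illustrative state shared with the audio callback. */
typedef struct {
    fluid_synth_t     *synth;
    fluid_sequencer_t *seq;
    double             samples;      /* audio frames rendered so far */
    double             sample_rate;  /* e.g. 48000.0 */
} audio_clock_t;

/* Called for each buffer: advance the sequencer to the audio time
 * (samples elapsed) *before* rendering, so events land at the right
 * place in the stream even when buffers are requested in bulk, ahead
 * of real time. */
static int render(void *data, int len, int nin, float **in,
                  int nout, float **out)
{
    audio_clock_t *c = (audio_clock_t *) data;
    unsigned int now_ms =
        (unsigned int) (c->samples * 1000.0 / c->sample_rate);

    fluid_sequencer_process(c->seq, now_ms);  /* fire due events */
    c->samples += len;

    return fluid_synth_process(c->synth, len, nin, in, nout, out);
}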


-> I guess that I did not fix this same issue with midifile playback  
then.
-> and also, I reduced the precision to a single buffer length. I did  
not address sub-buffer precision.

=> I guess this could really benefit an overall cleanup.

As for the question of where to do the processing of the scheduled  
(whether through the sequencer or through the midifile playback) midi  
events, I think that the only way to have consistent and reliable  
rendering is indeed to do it inside the callback from the audio  
driver, especially if the audio runs ahead of real time.



On 27 Jan 2009 at 03:32, Josh Green wrote:


> I probably shouldn't say too much, until I see what Antoine's solution
> is..  But..
>
> On Tue, 2009-01-27 at 03:04 +0100, Bernat Arlandis i Mañó wrote:
> > > It makes sense to me to
> > > process the audio based on the audio playback.  This would lead to
> > > identical playback between successive renders of a MIDI file, which is
> > > what we want.
> > This could be the only advantage I can think of, but it would be only
> > reproducible in the same hardware, driver and audio buffer size setup.
> > If you're thinking on case testing then the only solution is non-RT
> > rendering.
>
> Indeed, it seems like it is the most useful for non-RT rendering.  I
> think the issue that Antoine was originally trying to fix was related to
> the Windows DSound driver implementation processing a lot more data than
> just an audio buffer, which really seems like a driver issue to me.
>
> > > I don't see a problem with this change and I think it
> > > would vastly improve things.  There might be a little more overhead as
> > > far as MIDI event processing, but it would lead to more accurate timing
> > > as well.
> > This would worsen latency since the core thread would have to do more
> > work at the critical point where the sound card is waiting for data.
>
> Hmmm.  Not if you are simply using the number of samples played out of
> the sound card as a timing source.  Or am I still overlooking something.
> It seems to me like using a system timer for MIDI file event timing
> (something that has different resolutions depending on the system) is
> going to be a lot less reliable than using the sound card time.  Again
> though, I agree that this probably only benefits MIDI file
> playback/rendering.
>
> > Besides, I don't think having the MIDI file player depending on the
> > audio driver is good.
>
> What about just using it as a timing source?  I still haven't thought it
> all through, but I could see how this could have its advantages.
>
> > And, please, this shouldn't be taken as disrespect to Antoine's work,
> > I'd still have a look at it to see what he has really accomplished.
> >
> > I think it's cool having this discussion now, since you're the
> > maintainer and you'll want to have some control in the future
> > development, it's logical. I'd like to know how good we work it out when
> > we don't agree. :)
> >
> > Cheers.
>
> Well I'm not particularly attached to how things go, just as long as we
> do the "right thing" (TM) and KIFS (Keep It F.. Simple) ;)
>
> Cheers.
> Josh







++ as






Re: [fluid-dev] New development

2009-01-26 Thread Josh Green
I probably shouldn't say too much, until I see what Antoine's solution
is..  But..

On Tue, 2009-01-27 at 03:04 +0100, Bernat Arlandis i Mañó wrote:
> >  It makes sense to me to
> > process the audio based on the audio playback.  This would lead to
> > identical playback between successive renders of a MIDI file, which is
> > what we want.
> This could be the only advantage I can think of, but it would be only 
> reproducible in the same hardware, driver and audio buffer size setup. 
> If you're thinking on case testing then the only solution is non-RT 
> rendering.

Indeed, it seems like it is the most useful for non-RT rendering.  I
think the issue that Antoine was originally trying to fix was related to
the Windows DSound driver implementation processing a lot more data than
just an audio buffer, which really seems like a driver issue to me.

> >   I don't see a problem with this change and I think it
> > would vastly improve things.  There might be a little more overhead as
> > far as MIDI event processing, but it would lead to more accurate timing
> > as well.
> >
> >   
> This would worsen latency since the core thread would have to do more 
> work at the critical point where the sound card is waiting for data. 

Hmmm.  Not if you are simply using the number of samples played out of
the sound card as a timing source.  Or am I still overlooking something.
It seems to me like using a system timer for MIDI file event timing
(something that has different resolutions depending on the system) is
going to be a lot less reliable than using the sound card time.  Again
though, I agree that this probably only benefits MIDI file
playback/rendering.

> Besides, I don't think having the MIDI file player depending on the 
> audio driver is good.
> 

What about just using it as a timing source?  I still haven't thought it
all through, but I could see how this could have its advantages.

> And, please, this shouldn't be taken as disrespect to Antoine's work, 
> I'd still have a look at it to see what he has really accomplished.
> 
> I think it's cool having this discussion now, since you're the 
> maintainer and you'll want to have some control in the future 
> development, it's logical. I'd like to know how good we work it out when 
> we don't agree. :)
> 
> Cheers.

Well I'm not particularly attached to how things go, just as long as we
do the "right thing" (TM) and KIFS (Keep It F.. Simple) ;)

Cheers.
Josh






Re: [fluid-dev] New development

2009-01-26 Thread Bernat Arlandis i Mañó

Josh Green wrote:

I think the difference is between clocking MIDI events in MIDI files
based on the system timer versus using the sound card as the timing
source (how much audio has been played back).

It seems so.

 It makes sense to me to
process the audio based on the audio playback.  This would lead to
identical playback between successive renders of a MIDI file, which is
what we want.
This could be the only advantage I can think of, but it would be only 
reproducible in the same hardware, driver and audio buffer size setup. 
If you're thinking on case testing then the only solution is non-RT 
rendering.

  I don't see a problem with this change and I think it
would vastly improve things.  There might be a little more overhead as
far as MIDI event processing, but it would lead to more accurate timing
as well.

  
This would worsen latency since the core thread would have to do more 
work at the critical point where the sound card is waiting for data. 
Besides, I don't think having the MIDI file player depending on the 
audio driver is good.


And, please, this shouldn't be taken as disrespect to Antoine's work, 
I'd still have a look at it to see what he has really accomplished.


I think it's cool having this discussion now, since you're the 
maintainer and you'll want to have some control in the future 
development, it's logical. I'd like to know how good we work it out when 
we don't agree. :)


Cheers.

--
Bernat Arlandis i Mañó





Re: [fluid-dev] New development

2009-01-26 Thread Josh Green
On Tue, 2009-01-27 at 01:12 +0100, Bernat Arlandis i Mañó wrote:
> > About new development, there is an improvement to fluid that I had  
> > worked on two years ago in a private branch that I think should be  
> > integrated inside the main code base. I don't know if it should be  
> > integrated into 1.x or 2.x, because it changes the API a bit.
> > 
> > The bug is with the sequencer: in short, the sequencer uses the
> > computer clock to trigger its sequenced events, whereas the audio
> > buffers are requested and created by the soundcard when it needs them,
> > which may be ahead of realtime, resulting in events not being
> > triggered at the right moment in the audio stream. The fact is that in
> > real life, audio cards do request audio buffers ahead of time,
> > sometimes a lot ahead, in bulk (like 16 buffers at once), thus not
> > leaving time for the sequencer to trigger its events at the right
> > moment in the audio stream.
> > I have fixed this by:
> > - making the sequencer use the audio stream as a clock
> > - calling the sequencer from the synth audio callback so that
> > sequenced events are inserted in the audio buffer right before the
> > audio is rendered
> > This implied a small change in the API because now the sequencer
> > depends on the synth to get its clock (which was not the case before).
> >
> > How should I proceed to include it in the main code base?
> > 
> > ++ as
> > 
> 
> Maybe I'm not understanding what he's done, but it sounds to me like
> he's talking about simple and well-known sound card latency. I don't see
> it related to what you're talking about.
> 
> I don't know why latency could be a problem playing midi files, maybe
> it's another problem. It might happen that his system timer has a low
> resolution and thus MIDI file playing is affected, I think fluidsynth
> tries to get a 4ms period system timer. In Linux you can solve this
> easily by setting up a higher system timer resolution in the kernel, I
> don't know about other systems.
> 

I think the difference is between clocking MIDI events in MIDI files
based on the system timer versus using the sound card as the timing
source (how much audio has been played back).  It makes sense to me to
process the audio based on the audio playback.  This would lead to
identical playback between successive renders of a MIDI file, which is
what we want.  I don't see a problem with this change and I think it
would vastly improve things.  There might be a little more overhead as
far as MIDI event processing, but it would lead to more accurate timing
as well.

Does this adequately describe your solution Antoine?

Best regards,
Josh






Re: [fluid-dev] New development

2009-01-26 Thread Bernat Arlandis i Mañó

Josh Green wrote:

On Mon, 2009-01-26 at 15:10 -0500, Ebrahim Mayat wrote:
  

...and while we are on the topic of new development...one thing that has
been on my mind for a while is the subject of effects plug-ins. For the
last few releases, I have chosen not to add the option of LADSPA for the
simple reason that this often causes spontaneous crashes, particularly
when running qsynth.

Since lv2 is the new alternative to LADSPA, I think that it would be a
good idea to consider including lv2 as a feature that could be coded
into the new branch.


Such considerations would probably affect future planning.

E 




Indeed, LV2 is a good item for 2.x inclusion.

I updated the Future of FluidSynth page (now titled FluidSynth 2.0):
http://fluidsynth.resonance.org/trac/wiki/Future

Everyone, feel free to add to this.  At some point we'll need to make
some decisions about priority and focus of tasks and some ideas may not
get addressed during the 2.x development cycle.

Best regards,
Josh
  
This one has to be discussed, but it's a bit too soon for this, I guess. 
I think the current LADSPA implementation was a good idea back then in 
2002, but now, plugging FS into the Jack tools could be a much better option.


--
Bernat Arlandis i Mañó



___
fluid-dev mailing list
fluid-dev@nongnu.org
http://lists.nongnu.org/mailman/listinfo/fluid-dev


Re: [fluid-dev] New development

2009-01-26 Thread Bernat Arlandis i Mañó

Josh Green wrote:

On Mon, 2009-01-26 at 22:52 +0100, Pedro Lopez-Cabanillas wrote:
  
I would like to find more time to work on the PortAudio driver, as it was my 
plan for ticket #19. I will try, but don't hold your breath.


You are right: there is already a PortAudio driver, but it doesn't compile 
with PortAudio V19. This version includes ASIO support (Mac OSX and Windows), 
so we don't need to write another driver for it. Do you agree?



Pedro, I think you didn't like my comment in ticket #19. I thought this 
ticket was just a heads-up so we would know about this broken driver. I 
didn't know you were interested in this, so please don't take offense; I 
took it for granted that it wasn't important to anybody.


Since you might be working on this soon and have better knowledge, I 
hope you can answer some questions. Do you think this driver could 
replace the supported ones without losing any functionality? If so, 
do you think this could work well?


--
Bernat Arlandis i Mañó



___
fluid-dev mailing list
fluid-dev@nongnu.org
http://lists.nongnu.org/mailman/listinfo/fluid-dev


Re: [fluid-dev] New development

2009-01-26 Thread jimmy

> Date: Mon, 26 Jan 2009 15:25:13 -0500
> From: Ebrahim Mayat 
> 
> ...and while we are on the topic of new development...one thing that has
> been on my mind for a while is the subject of effects plug-ins. For the
> last few releases, I have chosen not to add the option of LADSPA for the
> simple reason that this often causes spontaneous crashes, particularly
> when running qsynth.
> 
> Since lv2 is the new alternative to LADSPA
> 
>  
> 
> I think that it would be a good idea to consider including lv2 as a
> feature that could be coded into the new branch. 
> 
> Such considerations would probably affect future planning.
> 

Ebrahim,

I think the crashes might be because, in many places in FS, variables are 
accessed directly, and chained structure.substructure.variable expressions 
are set and read without verifying the validity of the underlying data 
structures or variable values.
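
As an illustration of the kind of unchecked access being described (the
types and field names below are made up for the example, not the actual FS
internals), compare the two styles:

/* Hypothetical miniature of the FS object graph, for illustration only. */
typedef struct { int dummy; } preset_t;
typedef struct { preset_t *preset; } channel_t;
typedef struct { int nchan; channel_t **channel; } synth_t;

void use_preset(synth_t *synth, int chan)
{
    /* Unchecked chained access would crash if any link is NULL or stale:
       synth->channel[chan]->preset->...                                  */

    /* Defensive version: validate each link before dereferencing. */
    if (synth != NULL
        && chan >= 0 && chan < synth->nchan
        && synth->channel[chan] != NULL
        && synth->channel[chan]->preset != NULL) {
        /* ... only now is it safe to use the preset ... */
    }
}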

I think the original code cheated this way for speed's sake, which might 
have been a valid trade-off at the time.  But it is mighty hard trying to 
track down spontaneous crashes.

So while I think LV2 support might be a good thing to look forward to if 
someone can add it, it doesn't guarantee anything regarding spontaneous 
crashes.

I have seen one or two MIDI files that play at an extra fast tempo and 
cause FS to crash; I'm not sure I have the time to track it down for the 
time being, or even to report it.

Jimmy



  


___
fluid-dev mailing list
fluid-dev@nongnu.org
http://lists.nongnu.org/mailman/listinfo/fluid-dev


Re: [fluid-dev] New development

2009-01-26 Thread Bernat Arlandis i Mañó

Josh Green wrote:

On Mon, 2009-01-26 at 09:43 +0100, Antoine Schmitt wrote:
  

On 25 Jan 09 at 23:14, Josh Green wrote:


Things for the new 2.x branch:
  
About new development, there is an improvement to fluid that I had  
worked on two years ago in a private branch that I think should be  
integrated inside the main code base. I don't know if it should be  
integrated into 1.x or 2.x, because it changes the API a bit.


The bug is with the sequencer: in short, the sequencer uses the
computer clock to trigger its sequenced events, whereas the audio  
buffers are requested and created by the soundcard when it needs them  
which may be ahead of realtime, resulting in events not being  
triggered at the right moment in the audio stream. The fact is that in  
real life, audio cards do request audio buffers ahead of time,  
sometimes a lot ahead, in bulk (like 16 buffers at once), thus not  
leaving time for the sequencer to trigger its events at the right  
moment in the audio stream.

I have fixed this by:
- making the sequencer use the audio stream as a clock
- calling the sequencer from the synth audio callback so that  
sequenced events are inserted in the audio buffer right before the  
audio is rendered
This implied a small change in the API because now the sequencer  
depends on the synth to get its clock (which was not the case before).


How should I proceed to include it in the main code base?

++ as




What you mention here addresses what I had listed as "Sub audio buffer
MIDI event processing" in that new things for 2.x branch list.  I had
not realized that there was any work done on resolving this.  Great!

Could you send me an archive of this branch?  Also, if you could
describe what part of the API it changes, that would be helpful, but I
could probably gather that from doing a diff on the sources as well.

This sounds like something that could be useful to integrate into 1.x.

  

Maybe I'm not understanding what he's done, but it sounds to me like
he's talking about simple and well-known sound card latency. I don't see
how it relates to what you're talking about.

I don't know why latency could be a problem playing midi files; maybe
it's another problem. It might happen that his system timer has a low
resolution and thus MIDI file playing is affected; I think fluidsynth
tries to get a 4 ms period system timer. On Linux you can solve this
easily by configuring a higher timer resolution in the kernel; I don't
know about other systems.

--
Bernat Arlandis i Mañó


___
fluid-dev mailing list
fluid-dev@nongnu.org
http://lists.nongnu.org/mailman/listinfo/fluid-dev


Re: [fluid-dev] New development

2009-01-26 Thread Josh Green
On Mon, 2009-01-26 at 15:10 -0500, Ebrahim Mayat wrote:
> ...and while we are on the topic of new development...one thing that has
> been on my mind for a while is the subject of effects plug-ins. For the
> last few releases, I have chosen not to add the option of LADSPA for the
> simple reason that this often causes spontaneous crashes particularly
> when running qsynth.
> 
> Since lv2 is the new alternative to LADSPA
> 
>  
> 
> I think that it would be a good idea to consider including lv2 as a
> feature that could be coded into the new branch. 
> 
> Such considerations would probably affect future planning.
> 
> E 
> 

Indeed, LV2 is a good item for 2.x inclusion.

I updated the Future of FluidSynth page (now titled FluidSynth 2.0):
http://fluidsynth.resonance.org/trac/wiki/Future

Everyone, feel free to add to this.  At some point we'll need to make
some decisions about priority and focus of tasks and some ideas may not
get addressed during the 2.x development cycle.

Best regards,
Josh




___
fluid-dev mailing list
fluid-dev@nongnu.org
http://lists.nongnu.org/mailman/listinfo/fluid-dev


Re: [fluid-dev] New development

2009-01-26 Thread Josh Green
On Mon, 2009-01-26 at 09:43 +0100, Antoine Schmitt wrote:
> On 25 Jan 09 at 23:14, Josh Green wrote:
> > Things for the new 2.x branch:
> 
> 
> About new development, there is an improvement to fluid that I had  
> worked on two years ago in a private branch that I think should be  
> integrated inside the main code base. I don't know if it should be  
> integrated into 1.x or 2.x, because it changes the API a bit.
> 
> The bug is with the sequencer: in short, the sequencer uses the
> computer clock to trigger its sequenced events, whereas the audio  
> buffers are requested and created by the soundcard when it needs them  
> which may be ahead of realtime, resulting in events not being  
> triggered at the right moment in the audio stream. The fact is that in  
> real life, audio cards do request audio buffers ahead of time,  
> sometimes a lot ahead, in bulk (like 16 buffers at once), thus not  
> leaving time for the sequencer to trigger its events at the right  
> moment in the audio stream.
> I have fixed this by:
> - making the sequencer use the audio stream as a clock
> - calling the sequencer from the synth audio callback so that  
> sequenced events are inserted in the audio buffer right before the  
> audio is rendered
> This implied a small change in the API because now the sequencer  
> depends on the synth to get its clock (which was not the case before).
> 
> How should I proceed to include it in the main code base?
> 
> ++ as
> 

What you mention here addresses what I had listed as "Sub audio buffer
MIDI event processing" in that new things for 2.x branch list.  I had
not realized that there was any work done on resolving this.  Great!

Could you send me an archive of this branch?  Also, if you could
describe what part of the API it changes, that would be helpful, but I
could probably gather that from doing a diff on the sources as well.

This sounds like something that could be useful to integrate into 1.x.

Cheers.
Josh




___
fluid-dev mailing list
fluid-dev@nongnu.org
http://lists.nongnu.org/mailman/listinfo/fluid-dev


Re: [fluid-dev] New development

2009-01-26 Thread Josh Green
On Mon, 2009-01-26 at 22:52 +0100, Pedro Lopez-Cabanillas wrote:
> I would like to find more time to work on the PortAudio driver, as it was my 
> plan for ticket #19. I will try, but don't hold your breath.
> 
> You are right: there is already a PortAudio driver, but it doesn't compile 
> with PortAudio V19. This version includes ASIO support (Mac OSX and Windows), 
> so we don't need to write another driver for it. Do you agree?
> 

Sure, if it satisfies the ASIO support, that would be great!  I'm not
too keen on writing a driver myself, it being Windows and all ;)

> Regards,
> Pedro
> 

Cheers.
Josh




___
fluid-dev mailing list
fluid-dev@nongnu.org
http://lists.nongnu.org/mailman/listinfo/fluid-dev


Re: [fluid-dev] New development

2009-01-26 Thread Pedro Lopez-Cabanillas
Josh Green wrote:
> > > Some decisions should be made about what remains to put into 1.0.9.
> > >
> > > What of the following should be added?
> > > - PortAudio driver (it exists, does it just need to be improved?)
> > > - Jack MIDI driver
> > > - ASIO driver
> >
> > That's another discussion, we should think about two different and
> > independent development branches. Personally, I'm not interested, but
> > someone might want to do them for 1.x and we could merge them later in
> > the 2.x branch.
>
> I'll see if I feel inspired to work on any of those in the coming week.
> If not, then its time for a 1.0.9 release.
>
> Cheers!
>   Josh

I would like to find more time to work on the PortAudio driver, as it was my 
plan for ticket #19. I will try, but don't hold your breath.

You are right: there is already a PortAudio driver, but it doesn't compile 
with PortAudio V19. This version includes ASIO support (Mac OSX and Windows), 
so we don't need to write another driver for it. Do you agree?

Regards,
Pedro


___
fluid-dev mailing list
fluid-dev@nongnu.org
http://lists.nongnu.org/mailman/listinfo/fluid-dev


Re: [fluid-dev] New development

2009-01-26 Thread Ebrahim Mayat
...and while we are on the topic of new development...one thing that has
been on my mind for a while is the subject of effects plug-ins. For the
last few releases, I have chosen not to add the option of LADSPA for the
simple reason that this often causes spontaneous crashes particularly
when running qsynth.

Since lv2 is the new alternative to LADSPA

 

I think that it would be a good idea to consider including lv2 as a
feature that could be coded into the new branch. 

Such considerations would probably affect future planning.

E 




___
fluid-dev mailing list
fluid-dev@nongnu.org
http://lists.nongnu.org/mailman/listinfo/fluid-dev




Re: [fluid-dev] New development

2009-01-26 Thread Bernat Arlandis i Mañó

On Sun, 25 Jan 2009 18:29:27 -0800, Josh Green  wrote:
> I think breaking FluidSynth into multiple libraries would likely
> needlessly complicate things.  However, I think keeping the code well
> modularized is good and would make splitting things off into separate
> libraries easy, if and when it seems like a good idea.
> 
I agree, the possibility of building separate libraries or a library with
just the chosen components should only be implemented when needed. But
those components should have a well-defined API and be totally independent,
so that this is possible without touching the code. That's one of the
things I'd be most interested in achieving. At this point, I don't
understand how splitting into separate libs would complicate things,
though.

Still, keep in mind that my main goal is not splitting into separate libs
but having good modularization, to the point where this would be really
easy and practical.

> libInstPatch *does* handle 24 bit audio and floating point as well.  It's
> got its own audio management for converting formats internally and
> whatnot.  Swami and libInstPatch support 24 bit audio in SoundFont files.
> FluidSynth does not.  That is what I would like to change.
> 
I'll need help there with libInstPatch integration. I don't know exactly
what libInstPatch can do and what not, but using it seems a good idea.

> No that isn't quite right.  The SoundFont loader API is used for
> synthesis in FluidSynth (not loading SoundFont files themselves).
> libInstPatch and Swami do their own instrument management, but when they
> want to synthesize those instruments, the SoundFont loader API is used.
> This API abstracts the synthesis into a set of voices which can be
> created by the application.  The voices have a list of SoundFont based
> parameters, modulators and sample data.  In this way though, FluidSynth
> can be used to synthesize any format, well at least within the confines
> of SoundFont parameters.  It's a flexible API, but I think it could use
> some cleanup and expansion of its capabilities (different audio formats
> for example, like 24 bit).
> 
That's really interesting; this is what I like least in FS.
Theoretically this would help support every sound font format, but it
becomes very hard to do, mainly because in trying you'll end up
implementing a synthesis engine inside every font loader. There's another
solution that would work better. More on this later.

> I'm starting to think having libInstPatch be a dependency could be a
> good move.  libInstPatch is itself a glib/gobject based library.  It has
> some additional dependencies, but most of them are optional (the Python
> binding for example).  The components that would be of the most interest
> would be the instrument loading and synthesis cache objects.  The cache
> allows for the "rendering" of instrument objects into a list of
> potential voices.  When a MIDI note-on event occurs, these voices can be
> selected in a lock-free fashion, the cache is generated at MIDI program
> selection time.  It seems like FluidSynth should be able to take
> advantage of this code, whether it be used in something like Swami or
> standalone.
> 
I really think all the sound font loader stuff should go there, after
having moved the synthesis related parts to the synth component.

> Seems like you have some good ideas.  Lets try to keep a good balance
> though between modularity and simplicity.  Over-modularizing stuff can
> make things more complicated than they need to be, when its really
> supposed to have the opposite effect.
> 
I don't like complicating things; I always try to follow the keep-it-simple
approach. If things were getting more complicated than they are, I'd vote
for throwing the new branch in the can, and I don't want to get to that
point. Keep in mind, though, that new developments bring new things to
learn, and that's always a bit of work, but learning the new shouldn't be
harder than learning the old. Besides, as more features start to appear
they will add a bit more to learn, but that's unavoidable.

> I think the next question is: when should we branch?  It probably makes
> the most sense to release 1.0.9 and then branch off 2.x.  At some point
> 2.x would become the head and we would make a 1.x branch.
> 
These should be totally independent; don't think of them as related in any
way. We can branch when it's needed, and 1.0.9 can be released whenever you
want. If you can wait a few days I'll kick off the new branch with a
proposal, and I'll also throw in a couple of fixes; they started as playing
with the code but have become a bit more serious.

Usually, in most projects, new development goes to the trunk and stable
releases are branches. However, since people here are used to having a
stable trunk, we can start the experimental 2.x in a branch.

> Some decisions should be made about what remains to put into 1.0.9.
> 
> What of the following should be added?
> - PortAudio driver (it exists, does it just need to be improved?)
> - Jack MIDI driver
> - ASIO driver
> 
That's another discussion, we should think about two different and
independent development branches. Personally, I'm not interested, but
someone might want to do them for 1.x and we could merge them later in
the 2.x branch.

Re: [fluid-dev] New development

2009-01-26 Thread jimmy
> Date: Mon, 26 Jan 2009 01:47:12 +0100
> From: Bernat Arlandis i Mañó 
> 
> 
> Although it's mainly about whatever we can agree is good for the
> development of the project, I have some generic ideas that I think
> would work. These are modularization, decoupling the API from the
> internals, and also introducing the glib and other dependencies that
> would ease the work.
> 
> Specifically, on the modularization aspect I'd like to break FS down
> into several libraries that could build separately if needed. These
> libraries might be, guessing a bit: fluid-synth (the synthesis engine),
> fluid-soundfont (sound font loader), fluid-midi (midi playing
> capabilities incl. midi drivers), fluid-midir (midi routing),
> fluid-audio (audio conversion utilities and output drivers, maybe
> LADSPA too), fluid-ctrl (shell and server).
> 
> Some of these components could grow and become independent projects;
> in particular I think midi routing could become a general library able
> to read routing rules from an XML file, with a front end for editing
> these files. Some other components might just disappear if there's
> some external one that can do the same.
> 
> Being able to break it down and build it like this would be a good
> modularization test. It would also help 3rd party developers take just
> what they need and connect all the parts in more flexible ways than is
> possible now.
> 
> In some way, the code has already been developed with these goals in
> mind, so we're not that far. It's really difficult to fully reach
> these goals in one try, or even two, but we're already somewhat close.

Bernat,

I think those are good ideas.  Keep in mind I don't do any work for FS at all, 
except running into a few problem here and there.

While people want backward compatibility, I can see that the FS code is not 
that easy to change.  So a new version with a new API would be just as good. 

Allow me to also suggest that while you do the code decoupling, internal 
variable changes should be verified as valid before being allowed.  This is 
best done by funneling variable changes through a common function and 
calling it from the other places.

I have seen many places in the FS code that assign variables blindly, e.g. a 
midi program change or soundfont bank change that causes an invalid 
instrument selection; FS then messes up that midi channel, and the channel 
is silenced.
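
The funneling idea might look like this (a sketch with hypothetical names;
the real FS internals differ): every change to a channel's bank and program
goes through one setter that rejects invalid values instead of assigning
blindly.

/* Hypothetical illustration of funneled, validated assignment. */
typedef struct { int banknum; int prognum; } channel_t;

/* Assumed lookup, provided elsewhere: returns NULL if no such preset. */
void *find_preset(channel_t *chan, int banknum, int prognum);

int channel_set_program(channel_t *chan, int banknum, int prognum)
{
    if (chan == NULL || banknum < 0 || banknum > 16383
        || prognum < 0 || prognum > 127)
        return -1;            /* reject instead of corrupting state */
    if (find_preset(chan, banknum, prognum) == NULL)
        return -1;            /* no such instrument: keep the old one */
    chan->banknum = banknum;
    chan->prognum = prognum;
    return 0;
}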

Jimmy



  


___
fluid-dev mailing list
fluid-dev@nongnu.org
http://lists.nongnu.org/mailman/listinfo/fluid-dev


Re: [fluid-dev] New development

2009-01-26 Thread Antoine Schmitt


On 25 Jan 09 at 23:14, Josh Green wrote:

Things for the new 2.x branch:



About new development, there is an improvement to fluid that I had  
worked on two years ago in a private branch that I think should be  
integrated inside the main code base. I don't know if it should be  
integrated into 1.x or 2.x, because it changes the API a bit.


The bug is with the sequencer: in short, the sequencer uses the
computer clock to trigger its sequenced events, whereas the audio  
buffers are requested and created by the soundcard when it needs them  
which may be ahead of realtime, resulting in events not being  
triggered at the right moment in the audio stream. The fact is that in  
real life, audio cards do request audio buffers ahead of time,  
sometimes a lot ahead, in bulk (like 16 buffers at once), thus not  
leaving time for the sequencer to trigger its events at the right  
moment in the audio stream.

I have fixed this by:
- making the sequencer use the audio stream as a clock
- calling the sequencer from the synth audio callback so that  
sequenced events are inserted in the audio buffer right before the  
audio is rendered
This implied a small change in the API because now the sequencer  
depends on the synth to get its clock (which was not the case before).


How should I proceed to include it in the main code base?

++ as




___
fluid-dev mailing list
fluid-dev@nongnu.org
http://lists.nongnu.org/mailman/listinfo/fluid-dev


Re: [fluid-dev] New development

2009-01-26 Thread Bernat Arlandis i Mañó

On Sun, 25 Jan 2009 18:29:27 -0800, Josh Green  wrote:

I think breaking FluidSynth into multiple libraries would likely
needlessly complicate things.  However, I think keeping the code well
modularized is good and would make splitting things off into separate
libraries easy, if and when it seems like a good idea.


I agree, the possibility of building separate libraries or a library with
just the chosen components should only be implemented when needed. But
those components should have a well-defined API and be totally independent,
so that this is possible without touching the code. That's one of the
things I'd be most interested in achieving. At this point, I don't
understand how splitting into separate libs would complicate things,
though.

Still, keep in mind that my main goal is not splitting into separate libs
but having good modularization, to the point where this would be really
easy and practical.


libInstPatch *does* handle 24 bit audio and floating point as well.  It's
got its own audio management for converting formats internally and
whatnot.  Swami and libInstPatch support 24 bit audio in SoundFont files.
FluidSynth does not.  That is what I would like to change.


I'll need help there with libInstPatch integration. I don't know exactly
what libInstPatch can do and what not, but using it seems a good idea.


No that isn't quite right.  The SoundFont loader API is used for
synthesis in FluidSynth (not loading SoundFont files themselves).
libInstPatch and Swami do their own instrument management, but when they
want to synthesize those instruments, the SoundFont loader API is used.
This API abstracts the synthesis into a set of voices which can be
created by the application.  The voices have a list of SoundFont based
parameters, modulators and sample data.  In this way though, FluidSynth
can be used to synthesize any format, well at least within the confines
of SoundFont parameters.  It's a flexible API, but I think it could use
some cleanup and expansion of its capabilities (different audio formats
for example, like 24 bit).


That's really interesting; this is what I like least in FS.
Theoretically this would help support every sound font format, but it
becomes very hard to do, mainly because in trying you'll end up
implementing a synthesis engine inside every font loader. There's another
solution that would work better. More on this later.


I'm starting to think having libInstPatch be a dependency could be a
good move.  libInstPatch is itself a glib/gobject based library.  It has
some additional dependencies, but most of them are optional (the Python
binding for example).  The components that would be of the most interest
would be the instrument loading and synthesis cache objects.  The cache
allows for the "rendering" of instrument objects into a list of
potential voices.  When a MIDI note-on event occurs, these voices can be
selected in a lock-free fashion, the cache is generated at MIDI program
selection time.  It seems like FluidSynth should be able to take
advantage of this code, whether it be used in something like Swami or
standalone.


I really think all the sound font loader stuff should go there, after
having moved the synthesis related parts to the synth component.


Seems like you have some good ideas.  Lets try to keep a good balance
though between modularity and simplicity.  Over-modularizing stuff can
make things more complicated than they need to be, when its really
supposed to have the opposite effect.


I don't like complicating things; I always try to follow the keep-it-simple
approach. If things were getting more complicated than they are, I'd vote
for throwing the new branch in the can, and I don't want to get to that
point. Keep in mind, though, that new developments bring new things to
learn, and that's always a bit of work, but learning the new shouldn't be
harder than learning the old. Besides, as more features start to appear
they will add a bit more to learn, but that's unavoidable.


I think the next question is: when should we branch?  It probably makes
the most sense to release 1.0.9 and then branch off 2.x.  At some point
2.x would become the head and we would make a 1.x branch.


These should be totally independent; don't think of them as related in any
way. We can branch when it's needed, and 1.0.9 can be released whenever you
want. If you can wait a few days I'll kick off the new branch with a
proposal, and I'll also throw in a couple of fixes; they started as playing
with the code but have become a bit more serious.

Usually, in most projects, new development goes to the trunk and stable
releases are branches. However, since people here are used to having a
stable trunk, we can start the experimental 2.x in a branch.


Some decisions should be made about what remains to put into 1.0.9.

What of the following should be added?
- PortAudio driver (it exists, does it just need to be improved?)
- Jack MIDI driver
- ASIO driver


That's another discussion, we should think about two different and
independent development branches. Personally, I'm not interested, but
someone might want to do them for 1.x and we could merge them later in
the 2.x branch.

Re: [fluid-dev] New development

2009-01-26 Thread Josh Green
On Mon, 2009-01-26 at 12:06 +0100, Bernat Arlandis i Mañó wrote:
> Still, keep in mind that my main goal is not splitting into separate libs
> but having good modularization, to the point where this would be really
> easy and practical.
> 

Sounds good.

> > libInstPatch *does* handle 24 bit audio and floating point as well.  It's
> > got its own audio management for converting formats internally and
> > whatnot.  Swami and libInstPatch support 24 bit audio in SoundFont files.
> > FluidSynth does not.  That is what I would like to change.
> > 
> I'll need help there with libInstPatch integration. I don't know exactly
> what libInstPatch can do and what not, but using it seems a good idea.
> 

There is still a little work that I would like to do with libInstPatch
to bring it up to the task.  The sample caching code is what I have been
working on.  Swami loads instruments on demand, rather than a whole
SoundFont all at once.  I haven't yet implemented code for managing this
cache as it grows and freeing unused samples and what not, but its in
the works.

> > No that isn't quite right.  The SoundFont loader API is used for
> > synthesis in FluidSynth (not loading SoundFont files themselves).
> > libInstPatch and Swami do their own instrument management, but when they
> > want to synthesize those instruments, the SoundFont loader API is used.
> > This API abstracts the synthesis into a set of voices which can be
> > created by the application.  The voices have a list of SoundFont based
> > parameters, modulators and sample data.  In this way though, FluidSynth
> > can be used to synthesize any format, well at least within the confines
> > of SoundFont parameters.  It's a flexible API, but I think it could use
> > some cleanup and expansion of its capabilities (different audio formats
> > for example, like 24 bit).
> > 
> That's really interesting; this is what I like least in FS.
> Theoretically this would help support every sound font format, but it
> becomes very hard to do, mainly because in trying you'll end up
> implementing a synthesis engine inside every font loader. There's another
> solution that would work better. More on this later.
> 

Actually, it seems to work pretty well abstracting instruments into
voices.  I think the way it is modeled is OK; it's just that the API
needs some cleanup.  I'm not sure what you mean by "implementing a
synthesis engine inside every font loader".  If you mean, loading the
instruments into memory, calculating the parameters and creating the
appropriate voices when a note-on is pressed, that is what libInstPatch
and the FluidSynth Swami plugin does.

> > I'm starting to think having libInstPatch be a dependency could be a
> > good move.  libInstPatch is itself a glib/gobject based library.  It has
> > some additional dependencies, but most of them are optional (the Python
> > binding for example).  The components that would be of the most interest
> > would be the instrument loading and synthesis cache objects.  The cache
> > allows for the "rendering" of instrument objects into a list of
> > potential voices.  When a MIDI note-on event occurs, these voices can be
> > selected in a lock-free fashion, the cache is generated at MIDI program
> > selection time.  It seems like FluidSynth should be able to take
> > advantage of this code, whether it be used in something like Swami or
> > standalone.
> > 
> I really think all the sound font loader stuff should go there, after
> having moved the synthesis related parts to the synth component.
> 

I could probably get a libInstPatch enabled FluidSynth running pretty
quickly, since a lot of the code is already written.

> > I think the next question is: when should we branch?  It probably makes
> > the most sense to release 1.0.9 and then branch off 2.x.  At some point
> > 2.x would become the head and we would make a 1.x branch.
> > 
> These should be totally independent; don't think of them as related in any
> way. We can branch when it's needed, and 1.0.9 can be released whenever you
> want. If you can wait a few days I'll kick off the new branch with a
> proposal, and I'll also throw in a couple of fixes; they started as playing
> with the code but have become a bit more serious.
> 
> Usually, in most projects, new development goes to the trunk and stable
> releases are branches. However, since people here are used to having a
> stable trunk, we can start the experimental 2.x in a branch.
> 

Sounds good.

> > Some decisions should be made about what remains to put into 1.0.9.
> > 
> > What of the following should be added?
> > - PortAudio driver (it exists, does it just need to be improved?)
> > - Jack MIDI driver
> > - ASIO driver
> > 
> That's another discussion, we should think about two different and
> independent development branches. Personally, I'm not interested, but
> someone might want to do them for 1.x and we could merge them later in
> the 2.x branch.
> 

I'll see if I feel inspired to work on any of those in the coming week.
If not, then it's time for a 1.0.9 release.

Cheers!
  Josh


Re: [fluid-dev] New development

2009-01-25 Thread Josh Green
On Mon, 2009-01-26 at 01:47 +0100, Bernat Arlandis i Mañó wrote:
> Specifically, on the modularization aspect I'd like to break FS down into 
> several libraries that could build separately if needed. These libraries 
> might be, guessing a bit: fluid-synth (the synthesis engine), 
> fluid-soundfont (sound font loader), fluid-midi (midi playing 
> capabilities incl. midi drivers), fluid-midir (midi routing), 
> fluid-audio (audio conversion utilities and output drivers, maybe LADSPA 
> too), fluid-ctrl (shell and server).
> 
> Some of these components could grow and become independent projects; in 
> particular I think midi routing could become a general library able 
> to read routing rules from an XML file, with a front end for 
> editing these files. Some other components might just disappear if 
> there's some external one that can do the same.
> 
> Being able to break it down and build it like this would be a good 
> modularization test. It would also help 3rd party developers take just 
> what they need and connect all the parts in more flexible ways than is 
> possible now.
> 
> In some way, the code has already been developed with these goals in 
> mind, so we're not that far. It's really difficult to fully reach these 
> goals in one try, or even two, but we're already somewhat close.
> 

I think breaking FluidSynth into multiple libraries would likely
needlessly complicate things.  However, I think keeping the code well
modularized is good and would make splitting things off into separate
libraries easy, if and when it seems like a good idea.

> > - 24 bit sample support
> > - Sample streaming support
> >   
> 24bit support is needed for complete SF2.04 support, and sample 
> streaming would be good too, especially with 24bit samples. I thought 
> this belonged to libInstPatch, but no. These should be post-2.0.

libInstPatch *does* handle 24 bit audio and floating point as well.  It's
got its own audio management for converting formats internally and
whatnot.  Swami and libInstPatch support 24 bit audio in SoundFont files.
FluidSynth does not.  That is what I would like to change.

> > - Sub audio buffer MIDI event processing
> >   
> This one would be hard and I think it would hit performance hard. I 
> don't think it's important to have such a high MIDI resolution. Let's 
> talk about this later, post-2.0.

I agree with this.  I think this was mainly an issue in regards to some
audio drivers not processing audio at the buffer size to which they are
set to.  That seems to be more of an issue with the sound driver though.

> > - Faster than realtime MIDI file to audio rendering
> >   
> When doing modularization, I'd like to implement external timing, that 
> is, synthesis and MIDI timing controlled by external functions. That 
> would make it really easy to do.

Yeah, I think most of the timing related stuff in regards to MIDI
playback happens in realtime, rather than holding a queue of timing
events.  This is one area though, where I am a bit in the dark as far as
the FluidSynth code base.  It would be nice though, to be able to render
a WAV file from a MIDI and SoundFont file and get the exact same audio
output, every time.  This would also be extremely useful in SoundFont
compliance testing, something that I think really needs to be done with
FluidSynth.
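
As a sketch of what such offline rendering could look like with the public
API (this drives the note events by sample position by hand; a real MIDI
file renderer would take its timing from the file, and "soundfont.sf2" is
just a placeholder path):

#include <stdio.h>
#include <fluidsynth.h>

/* Render two seconds of audio to raw 16-bit stereo PCM, faster than
   realtime: the loop, not a wall-clock timer, decides when events fire. */
int main(void)
{
    fluid_settings_t *settings = new_fluid_settings();
    fluid_synth_t *synth = new_fluid_synth(settings);
    FILE *out = fopen("out.raw", "wb");
    short buf[2 * 64];
    int frame;

    fluid_synth_sfload(synth, "soundfont.sf2", 1);

    for (frame = 0; frame < 2 * 44100; frame += 64) {
        if (frame == 0)
            fluid_synth_noteon(synth, 0, 60, 100);  /* event at sample 0 */
        if (frame == 44100)
            fluid_synth_noteoff(synth, 0, 60);      /* event at 1 second */
        fluid_synth_write_s16(synth, 64, buf, 0, 2, buf, 1, 2);
        fwrite(buf, sizeof(short), 2 * 64, out);
    }

    fclose(out);
    delete_fluid_synth(synth);
    delete_fluid_settings(settings);
    return 0;
}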

> > - Overhaul SoundFont loader API (used only by Swami as far as I know)
> >   
> This means Swami depends on the FS SoundFont API; I thought libInstPatch 
> duplicated this functionality. This is in the pack, then.

No that isn't quite right.  The SoundFont loader API is used for
synthesis in FluidSynth (not loading SoundFont files themselves).
libInstPatch and Swami do their own instrument management, but when they
want to synthesize those instruments, the SoundFont loader API is used.
This API abstracts the synthesis into a set of voices which can be
created by the application.  The voices have a list of SoundFont based
parameters, modulators and sample data.  In this way though, FluidSynth
can be used to synthesize any format, well at least within the confines
of SoundFont parameters.  It's a flexible API, but I think it could use
some cleanup and expansion of its capabilities (different audio formats
for example, like 24 bit).
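
Roughly, a loader's preset note-on callback turns an instrument into voices
like this (a compressed sketch against the 1.x loader API; lookup_sample()
is the loader's own hypothetical helper, and real parameter setup is far
more involved):

#include <fluidsynth.h>

/* Hypothetical: the loader's own sample lookup, provided elsewhere. */
fluid_sample_t *lookup_sample(fluid_preset_t *preset, int key, int vel);

/* Sketch: the noteon callback of a custom fluid_preset_t. */
int my_preset_noteon(fluid_preset_t *preset, fluid_synth_t *synth,
                     int chan, int key, int vel)
{
    fluid_sample_t *sample = lookup_sample(preset, key, vel);
    fluid_voice_t *voice;

    voice = fluid_synth_alloc_voice(synth, sample, chan, key, vel);
    if (voice == NULL)
        return FLUID_FAILED;

    /* Apply SoundFont-style generators/modulators to the voice here,
       e.g. fluid_voice_gen_set(voice, GEN_ATTENUATION, att); */

    fluid_synth_start_voice(synth, voice);
    return FLUID_OK;
}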

> > - Leverage off of libInstPatch (optional dependency perhaps, maybe not?)
> > which would add support for other formats and flexible framework for
> > managing/manipulating instruments.
> >   
> You could certainly help a lot with this.

I'm starting to think having libInstPatch be a dependency could be a
good move.  libInstPatch is itself a glib/gobject based library.  It has
some additional dependencies, but most of them are optional (the Python
binding for example).  The components that would be of the most interest
would be the instrument loading and synthesis cache objects.  The cache
allows for the "rendering" of instrument objects into a list of
potential voices.  When a MIDI note-on event occurs, these voices can be
selected in a lock-free fashion; the cache is generated at MIDI program
selection time.  It seems like FluidSynth should be able to take
advantage of this code, whether it be used in something like Swami or
standalone.
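
The lock-free selection can be pictured like this (a sketch with
hypothetical types; build_cache() and retire_later() are assumed helpers,
and C11 atomics stand in for whatever libInstPatch actually does
internally): the program-change side builds a new cache and publishes it
with a single atomic pointer swap, while the audio thread only ever reads
the pointer, so note-on never blocks or allocates.

#include <stdatomic.h>

/* Hypothetical voice cache: built at program select, read at note-on. */
typedef struct voice_cache voice_cache_t;

voice_cache_t *build_cache(int bank, int prog);  /* assumed, may malloc */
void retire_later(voice_cache_t *old);           /* assumed, deferred free */

static _Atomic(voice_cache_t *) current_cache;

/* Non-realtime side: runs at MIDI program selection time. */
void select_program(int bank, int prog)
{
    voice_cache_t *fresh = build_cache(bank, prog);
    voice_cache_t *old = atomic_exchange(&current_cache, fresh);
    retire_later(old);  /* free once the audio thread cannot be using it */
}

/* Realtime side: note-on just loads the pointer -- no locks, no malloc. */
void note_on(int key, int vel)
{
    voice_cache_t *cache = atomic_load(&current_cache);
    (void)key; (void)vel;
    /* ... pick the matching precomputed voices out of 'cache' ... */
}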

Re: [fluid-dev] New development

2009-01-25 Thread Bernat Arlandis i Mañó

Pedro:
Changes breaking the API compatibility, not only for FluidSynth but for any 
ELF shared library (i.e., to be deployed in Linux), should require a change 
in the SONAME internal attribute for the library. This is usually 
accomplished changing the major version number.
I wasn't sure about this, I thought changing to 1.1.x would be enough. 
Thanks for the insight.
Why bother with this? Because libfluidsynth.so is used by many programs, such 
as QSynth (I'm a contributor to it) and the others reported by Julien.

True. I use QSynth myself, great to hear you're there too.
So, yes. I would open a new FS2 branch for changes breaking API compatibility 
and architecture changes. Maybe this would be the place to introduce the glib 
dependency that Josh has in mind. Anyway, I would like to ask that before 
coding a lot, please (briefly) explain your proposals.
  
Although it's mainly about whatever we can agree is good for the 
development of the project, I have some generic ideas that I think would 
work. These are modularization, decoupling the API from the internals, 
and also introducing the glib and other dependencies that would ease the work.


Specifically, on the modularization aspect I'd like to break FS down into 
several libraries that could build separately if needed. These libraries 
might be, guessing a bit: fluid-synth (the synthesis engine), 
fluid-soundfont (sound font loader), fluid-midi (midi playing 
capabilities incl. midi drivers), fluid-midir (midi routing), 
fluid-audio (audio conversion utilities and output drivers, maybe LADSPA 
too), fluid-ctrl (shell and server).


Some of these components could grow and become independent projects; in 
particular I think midi routing could become a general library able 
to read routing rules from an XML file, with a front end for 
editing these files. Some other components might just disappear if 
there's some external one that can do the same.


Being able to break it down and build it like this would be a good 
modularization test. It would also help 3rd party developers take just 
what they need and connect all the parts in more flexible ways than is 
possible now.


In some way, the code has already been developed with these goals in 
mind, so we're not that far. It's really difficult to fully reach these 
goals in one try, or even two, but we're already somewhat close.


Josh Green:

I think creating a 2.x branch would be the way to go.  One of the things
that has made FluidSynth so successful, is that the API has been pretty
solid.  It is indeed also one of the reasons why it is difficult to move
forward, as you point out.  I think that the 1.x branch should continue
to be compatible as much as possible, at least at a code API level (just
requires a recompile, but no source changes on the part of other
software using FluidSynth).
  
I agree, and it's also a requirement like Pedro said; that's why I 
mentioned that change in the ticket, since at the time I didn't know 
whether the trunk was going to always be the 1.0.x stable branch.

Things for the new 2.x branch:
- Move to glib
  

That's high in our list.

- 24 bit sample support
- Sample streaming support
  
24bit support is needed for complete SF2.04 support, and sample 
streaming would be good too, especially with 24bit samples. I thought 
this belonged to libInstPatch, but no. These should be post-2.0.

- Sub audio buffer MIDI event processing
  
This one would be hard and I think it would hit performance hard. I 
don't think it's important to have such a high MIDI resolution. Let's 
talk about this later, post-2.0.

- Faster than realtime MIDI file to audio rendering
  
When doing modularization, I'd like to implement external timing, that 
is, synthesis and MIDI timing controlled by external functions. That 
would make it really easy to do.

- Overhaul SoundFont loader API (used only by Swami as far as I know)
  
This means Swami depends on the FS SoundFont API; I thought libInstPatch 
duplicated this functionality. This is in the pack, then.

- Design in a fashion which allows for additional instrument formats to
be supported, if desired (a Fluid instrument format?).
  
Not my interest, but modularization would certainly help that, and we 
might reach a point where this is desirable.

- Leverage off of libInstPatch (optional dependency perhaps, maybe not?)
which would add support for other formats and flexible framework for
managing/manipulating instruments.
  

You could certainly help a lot with this.

I would imagine that as we work on the 2.x branch, the 1.x branch will
become less appealing to work on.  So it is likely that some of the
things which really should be done with 1.x, won't get done.  I think
this is OK though.  Other developers which use FluidSynth in their
applications will likely be more willing to change their own API code to
FluidSynth if it has lots of new features.  Those new features will be
easier for us to add, when the code base is more to our liking, so it is
to the benefit of all, for us to have the flexibility to change the API
as needed.  The 1.x branch can remain, with bug fixes and minor
functionality improvements, for those applications which don't move to
the new version immediately.

Re: [fluid-dev] New development

2009-01-25 Thread Josh Green
On Sun, 2009-01-25 at 09:27 -0500, Ebrahim Mayat wrote:
> By "break compatibility backwards", do you mean that old soundfont files
> could not be parsed successfully ?
> 
> Perhaps you are talking about the linking of FS libraries to other
> programs like SWAMI, MAX/MSP and  fluid~ ?  
> 
> Thanks for keeping us informed.
> E

It's about breaking library compatibility and API compatibility.  But as
mentioned by Pedro, the old FluidSynth library can be installed along
side the new one.  So older applications will continue to work, but any
program which wants to take advantage of newer developments will need to
change their code to interface to the new library.

Hope that explains it well enough.  To the user, there shouldn't be a
lot of difference, except for the added functionality of FluidSynth 2.0
of course.
Josh




___
fluid-dev mailing list
fluid-dev@nongnu.org
http://lists.nongnu.org/mailman/listinfo/fluid-dev


Re: [fluid-dev] New development

2009-01-25 Thread Josh Green
Hello Bernat,

On Sun, 2009-01-25 at 13:41 +0100, Bernat Arlandis i Mañó wrote:
> This is concerning ticket #11.
> 
> I'm thinking about a lot of changes to take FS forward as I review the 
> source code and fix some things, but most of these changes would break 
> compatibility backwards. Maybe we should think about making a branch for 
> something that could be FS2.0. Fixes that break compatibility could go 
> there, what do you think? Or we could take intermediate steps that would 
> be 1.1, 1.2,... I think it's better working in gradual steps with public 
> releases.

> I'm willing to push development for that branch since I'm getting to 
> know its internals and I like working with it very much, but it wouldn't 
> be good to do it completely on my own without your expertise and the 
> users' support. I don't want this to start a fork either, so there has 
> to be an initial agreement that we all support this branch while it 
> accomplishes some goals or we all forget about it.
> 

I think creating a 2.x branch would be the way to go.  One of the things
that has made FluidSynth so successful, is that the API has been pretty
solid.  It is indeed also one of the reasons why it is difficult to move
forward, as you point out.  I think that the 1.x branch should continue
to be compatible as much as possible, at least at a code API level (just
requires a recompile, but no source changes on the part of other
software using FluidSynth).

Things for the new 2.x branch:
- Move to glib
- 24 bit sample support
- Sample streaming support
- Sub audio buffer MIDI event processing
- Faster than realtime MIDI file to audio rendering
- Overhaul SoundFont loader API (used only by Swami as far as I know)
- Design in a fashion which allows for additional instrument formats to
be supported, if desired (a Fluid instrument format?).
- Leverage off of libInstPatch (optional dependency perhaps, maybe not?)
which would add support for other formats and flexible framework for
managing/manipulating instruments.


I would imagine that as we work on the 2.x branch, the 1.x branch will
become less appealing to work on.  So it is likely that some of the
things which really should be done with 1.x, won't get done.  I think
this is OK though.  Other developers which use FluidSynth in their
applications will likely be more willing to change their own API code to
FluidSynth if it has lots of new features.  Those new features will be
easier for us to add, when the code base is more to our liking, so it is
to the benefit of all, for us to have the flexibility to change the API
as needed.  The 1.x can remain, with bug fixes and minor functionality
improvements, for those applications which don't move to the new version
immediately.

> I know it's a short time since I'm working with the code, but I'm 
> already playing with it and there are simple things that can't be done 
> without breaking backwards compatibility since the code exposes a lot of 
> internals and there's high coupling between some components. These 
> are issues that keep FS from evolving.
> 
> Opinions, please.
> 

It's very inspiring to have you so interested in development.  I've been
the maintainer for several years now and haven't been able to quite give
FluidSynth as much attention as it needs, and deserves.  I look forward
to working on a new FluidSynth with you, and whomever else has the will
and enthusiasm to do so :)

One thing I really want to get into, is creating a new instrument
format.  This is slightly off topic, but I think it would be good to
keep in mind this kind of possibility, as we develop FluidSynth 2.0.
I'm thinking something XML based, which uses splines for describing
waveforms, envelopes, oscillators, audio samples, etc.  Kind of a
Structured Vector Audio.  I think Fluid instruments would be a fitting
name for this format.  There is still a bit of work to do, before
exploring this realm.  But I'm really excited about developing such a
format and hearing it synthesized.

Cheers.
Josh




___
fluid-dev mailing list
fluid-dev@nongnu.org
http://lists.nongnu.org/mailman/listinfo/fluid-dev


Re: [fluid-dev] New development

2009-01-25 Thread Ebrahim Mayat
On Sun, 2009-01-25 at 13:41 +0100, Bernat Arlandis i Mañó wrote:
> This is concerning ticket #11.
> 
> I'm thinking about a lot of changes to take FS forward as I review the 
> source code and fix some things, but most of these changes would break 
> compatibility backwards. Maybe we should think about making a branch for 
> something that could be FS2.0. Fixes that break compatibility could go 
> there, what do you think? Or we could take intermediate steps that would 
> be 1.1, 1.2,... I think it's better working in gradual steps with public 
> releases.
> 
By "break compatibility backwards", do you mean that old soundfont files
could not be parsed successfully ?

Perhaps you are talking about the linking of FS libraries to other
programs like SWAMI, MAX/MSP and  fluid~ ?  

Thanks for keeping us informed.
E




___
fluid-dev mailing list
fluid-dev@nongnu.org
http://lists.nongnu.org/mailman/listinfo/fluid-dev


Re: [fluid-dev] New development

2009-01-25 Thread Pedro Lopez-Cabanillas
Bernat Arlandis i Mañó wrote:
> This is concerning ticket #11.
>
> I'm thinking about a lot of changes to take FS forward as I review the
> source code and fix some things, but most of these changes would break
> compatibility backwards. Maybe we should think about making a branch for
> something that could be FS2.0. Fixes that break compatibility could go
> there, what do you think? Or we could take intermediate steps that would
> be 1.1, 1.2,... I think it's better working in gradual steps with public
> releases.
>
> I'm willing to push development for that branch since I'm getting to
> know its internals and I like working with it very much, but it wouldn't
> be good to do it completely on my own without your expertise and the
> users' support. I don't want this to start a fork either, so there has
> to be an initial agreement that we all support this branch while it
> accomplishes some goals or we all forget about it.
>
> I know it's a short time since I'm working with the code, but I'm
> already playing with it and there are simple things that can't be done
> without breaking backwards compatibility since the code exposes a lot of
> internals and there's high coupling between some components. These
> are issues that keep FS from evolving.
>
> Opinions, please.

Changes breaking API compatibility, not only for FluidSynth but for any 
ELF shared library (i.e., one to be deployed on Linux), require a change 
in the SONAME internal attribute of the library. This is usually 
accomplished changing the major version number. As reported by the objdump 
utility:

$ objdump -p /usr/lib/libfluidsynth.so | grep SONAME
  SONAME  libfluidsynth.so.1

Why bother with this? Because libfluidsynth.so is used by many programs, such 
as QSynth (I'm a contributor to it) and the others reported by Julien. Using 
different SONAMEs allows the old library version to be installed alongside 
the new one and used at runtime on the same box. Programs using the 
old library can still run until they evolve to adopt the new one.
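
For illustration (the version numbers are hypothetical), a box with both
versions installed would then have something like:

$ ls -l /usr/lib/libfluidsynth.so.*    (other columns omitted)
libfluidsynth.so.1 -> libfluidsynth.so.1.0.8
libfluidsynth.so.1.0.8
libfluidsynth.so.2 -> libfluidsynth.so.2.0.0
libfluidsynth.so.2.0.0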

So, yes. I would open a new FS2 branch for changes breaking API compatibility 
and architecture changes. Maybe this would be the place to introduce the glib 
dependency that Josh has in mind. Anyway, I would like to ask that before 
coding a lot, please (briefly) explain your proposals.

Regards,
Pedro


___
fluid-dev mailing list
fluid-dev@nongnu.org
http://lists.nongnu.org/mailman/listinfo/fluid-dev


Re: [fluid-dev] New development

2009-01-25 Thread Bernat Arlandis i Mañó

Julien Claassen wrote:

Hello Bernat!
  How do you mean: breaking compatibility? Would commands no longer 
work? Would you have a new syntax for the more complex parts of FS? 
Could you explain a bit more, so the stupid user understands? Because 
only then could I really think about it. Generally I'd have no problem 
with changes, as long as I know what to do to get my sounds playing...
  One thing though you should definitely do is document API changes, 
because csound uses fluidsynth stuff, there's a DSSI plugin and I 
don't know what else.

  But it's good to see someone put new force behind it and push it!
  Kindest regards
  Julien


Hi Julien.
It's about changes in the API, so users shouldn't have a problem, except 
for those caused by other applications not being updated to the new API 
and thus not working with the newer library versions.
For now it's not necessary to change the way FS is used from the command 
line; I don't see anything wrong there, although the commands will have to 
grow to accommodate new features.


--
Bernat Arlandis i Mañó



___
fluid-dev mailing list
fluid-dev@nongnu.org
http://lists.nongnu.org/mailman/listinfo/fluid-dev


Re: [fluid-dev] New development

2009-01-25 Thread Julien Claassen

Hello Bernat!
  How do you mean: breaking compatibility? Would commands no longer work? 
Would you have a new syntax for the more complex parts of FS? Could you 
explain a bit more, so the stupid user understands? Because only then could 
I really think about it. Generally I'd have no problem with changes, as long 
as I know what to do to get my sounds playing...
  One thing though you should definitely do is document API changes, because 
csound uses fluidsynth stuff, there's a DSSI plugin and I don't know what 
else.

  But it's good to see someone put new force behind it and push it!
  Kindest regards
  Julien


Music was my first love and it will be my last (John Miles)

 FIND MY WEB-PROJECT AT: 
http://ltsb.sourceforge.net
the Linux TextBased Studio guide
=== AND MY PERSONAL PAGES AT: ===
http://www.juliencoder.de


___
fluid-dev mailing list
fluid-dev@nongnu.org
http://lists.nongnu.org/mailman/listinfo/fluid-dev