I've actually been thinking about doing the same thing (although I haven't
started or done any kind of feasibility study).
However, I know my way around NaCl/Pepper pretty well (I have been working
with the Pepper team to add new APIs). I can't think of a reason why this
would be a problem (other
Hi,
Wow, I thought this was documented but I can't find it in the docs either.
It's the -F or --fast-render command-line flag (which is in the man page).
The command I use is this:
fluidsynth -lni -r 48000 -F $(SONG).wav $(SOUNDFONT) $(SONG).mid
Where 48000 is the sample rate you want.
HTH
Tested my three things:
- Building on Ubuntu: Good.
- Fast rendering: Good.
- As a DOSBox backend: Good.
I also tested that this new version plays a song up until the EOF instead
of just the last note (the fix I made for this release), and that worked.
Also, I note that the API
Since I mentioned on the RC testing program about using FluidSynth as a
DOSBox backend, I thought I'd explain.
I recently played through all of the King's Quest games (marvellous
historical look at the evolution of adventure games) in DOSBox and I came
up to King's Quest VII and the music didn't
A warning for anybody performing a test: you should ensure that you are
testing against the RC version of libfluidsynth, not just the fluidsynth
binary. This will depend on the specifics of the system, but the easiest
way to ensure this is to delete any old versions of fluidsynth you have
around.
Hi David, Team,
Sorry to be a late voice in this. If you put out release candidates, I'll
test them in a couple of use cases (basically all of the cases I use FS
for):
- Building under Linux (I'll typically be running the latest build of
Ubuntu).
- Fast-rendering a collection of MIDI
I really don't want to fuel the fire here, but I'd just like to speak with
some experience on both sides of the patch/pull game.
You and Pedro seem to stick with only what you prefer with the Soundfont
specs and GS-specs, both of which only deal with 128 soundbanks.
I've been observing this
Hi, I'm on holiday right now so I can't test this suggestion, but from
memory, here's what I do.
FluidSynth has an option to set the sampling rate of the output. Sox has an
option to specify the sampling rate of the input. Just set them both to the
same value (an appropriate one is 44100, IIRC)
I wouldn't bother with all that Rosegarden stuff. Since Dr. Leo has already
almost worked out how to use the fast exporter, it seems much easier than
having to manually wire up several GUI programs. Pedro's advice is exactly
what I do.
Perhaps I'm biased towards command line, but I find that much
Hi Christian,
Hi
if somebody wants to sell commercial software which uses FluidSynth
1 - is it possible?
Yes, it is possible. FluidSynth uses the LGPL license, which permits
commercial use, as long as you follow the rules of the license.
The full text of the license appears here:
Doesn't the -f option allow one to indicate a file containing shell
commands,
including reverb/chorus control? Sorry if I am missing that part of the
discussion.
Sorry I never replied on this. I finally got around to testing it: yes it
does work, rendering my previous source code
I'm sorry to read that. The fact is that there isn't any legal uncertainty;
you can use and run FluidSynth under iOS like any other operating system.
The
only concern was raised about distributing FluidSynth in Apple's AppStore,
but
that is another matter. You only need to respect the terms
Hi Mingfen,
I don't know enough about iOS to answer your question. But as an aside, I
will just inform you, if you aren't already aware, that there is unknown
legal status surrounding the use of FluidSynth on iOS due to the LGPL and
Apple licenses. Please consider the following page for
Ordinary users are not going to want to use the command line. Although it
is useful for wrappers to use, a non-technical user will shy away from
anything done in the command line.
Well that's why there is QSynth, right? (I haven't used QSynth much
personally.)
What we are talking about here
Remember also that I am using kubuntu (KDE desktop), though with the
low-fat settings (which seem to help a lot). I am experimenting with
kubuntu again because the missing-menu-bar syndrome of the Ubuntu Unity
desktop they are forcing on everybody (much like Microsoft attempted to
force
I'm just coming to terms with how to send commands to FluidSynth to control
things like reverb (specifically, I want to control reverb).
I'm under the impression that while a few things can be controlled from the
command-line (such as gain), most things (such as reverb) cannot be. There
are a few
You can also programmatically control almost everything via midi CC
messages and a carefully edited soundfont. That would work in your
case, although it's a bit of a faff to set up.
Right, so I tried adjusting the Reverb knob in Rosegarden (connected to
FluidSynth) and didn't hear much
I successfully compiled FluidSynth (1.1.3) on Ubuntu 11.10, and installed
it, and the problem went away.
OK so there are four configurations of interest, and it seems that (maybe?)
you've only tried three:
1. FS 1.1.3 on Ub 11.04 (i.e., default FS on Ub 11.04)
2. FS 1.1.3 on Ub 11.10
3. FS
Hi Aere,
Thanks so much. I wasn't expecting you to be this helpful! I skimmed the
guide -- it looks like it will be of use to me.
To the mailing list: I apologise for diverting this thread off from a bug
report. Hopefully others will find it helpful too.
Matt
Aere, thank you for reporting this issue. I'd be willing to test this out
but I actually have a deadline soon for some music I need to compose with
FluidSynth. I can't have FluidSynth break in the next few weeks. Therefore,
I'm very glad you warned me and I'll hold off upgrading to 11.10 until I
I need JACK because I include audio files in my sequences. I can even use
it on a 450 megahertz machine. I've learned a lot about using it in the
past few years. I use Rosegarden, and it's easiest to record audio tracks
using Rosegarden.
Okay. I might try it again if I need to include
It wouldn't be that explicit. You would do it by hooking a modulator
to some CC input and using it to extend the volume decay phase - you
can do that in the soundfont or the synth.
However, I don't see a way to do it with the sustain pedal, since that
suppresses all noteoff events while it
But to simulate the other piano strings resonating in sympathy when the
sustain pedal is pressed then you want to have some chorus/echo effect at
the same time as the sustain pedal is pressed.
If the piece has been recorded from live keyboard playing then removing the
sustain pedal will
Just on a side (musical) note. With Piano, the Sustain pedal undamps all
the strings so you do get sympathetic resonances AND the notes die away
slowly with the pedal held. So it seems the Yamaha is modelling this. But
then, on a piano, the notes die away even if you hold the keys; one of
Hi fluid-dev,
I have a question about the synth. I've noticed that my music keyboard's
built-in synth has a particularly nice property that if the key is released
while the sustain pedal is pressed, the note will begin to fade out. In
other words, pushing a key and holding it is *not* the same as
Please do send all events (including meta events). Perhaps you don't need
them with your MIDI files, but I have worked with MIDI segments that change
tempo (perhaps even time signature, 3/4 to 4/4) on the fly. So looping back
will need the meta events (tempo...) to play properly. There are
Yeah I guess that would work (although I'd prefer to properly detect when
all the sound has died out, rather than just waiting a few seconds).
I hadn't thought of that.
But there's still something dishonest about being told (by the MIDI file)
"this song ends right now", and FluidSynth going,
Fixing it on the FluidSynth side seems ugly to me: What if EOT occurs with
an everlasting note on? What about time to let the reverb decay after
playing?
The issue has been brought up before, and I vaguely remember that I saw
some other implementation had added a parameter, something like
This continues a discussion started in the bug ticket #101 (
https://sourceforge.net/apps/trac/fluidsynth/ticket/101) (between myself and
David Henningsson).
Recently, a feature was added allowing the user of the FS API to register a
custom playback_callback function, which is called every time a
Would the following text be suitable to put on the wiki, and represent the
sort-of consensus:
=== iOS and the App Store ===
It is questionable whether iOS and the App Store can fulfil the
requirements of the LGPL. From a long thread on the fluid-dev mailinglist
[insert link to archive], it
A BSD license is not equivalent to a liberal interpretation of the LGPL, that
is: allowing the distribution of FluidSynth and derived works by any channel,
including the App Store, with the conditions (required by the LGPL, not by the
BSD license) that 1) when the source code is modified, it
I finally got around to looking at this issue (thought I'd take a
break from arguing about licensing and actually contribute!)
It turns out this issue is really easy to fix, but much harder to
*really* fix. To elaborate:
1. To fix the issue, all I had to do was change fluid_midi so it
doesn't
I wasn't suggesting that remaining LGPL would negatively impact
FluidSynth. I was simply trying to find a solution to the issue at
hand, since I've worked a lot with embedded systems and thought that
the discussion may have just been about the static library issue. I
probably should have
Pedro:
Yes, you can release a GPL application that requires proprietary operating
systems and compilers. Nothing is said about money, though.
Yes, you can release a (L)GPL application that requires proprietary
operating systems and compilers. That is why it is valid to release
FluidSynth for
OK thanks for correcting, Graham,
So point 2 is false and point 3 is true if ONLY using the App Store...
I can get the sourcecode, compile it myself in my OSX environment and
connect and upload the app to 100 iOS devices...
Well we are assuming use of the App Store (jailbroken devices are
For point 3, I think it fails because you can choose to distribute the
modified source code outside App Store, and it'll be available to use for
anyone who fulfils points 1. and 2.
That's a good point, but I think it's all about whether you're
distributing the binaries or the source. Remember,
I don't agree. His interpretation of free software licenses is unacceptable
to me, so please don't use the FluidSynth Wiki to express personal opinions
in the name of the whole project team.
OK, I won't change anything.
___
fluid-dev mailing list
You are assuming too much here. We don't know under which license Rouet
Production is going to release his product in October.
Hmm, I possibly side-tracked this thread a bit. In my post, I was
speaking generally about coming up with a consistent policy for this
sort of behaviour, not
You said that Xcode is free.
I've said that it is gratis. And that GCC, the compiler included is also
free software.
Sorry. When I said free I meant gratis.
You are wrong to assume that free software implies gratis as well.
The free as in freedom of free software is not the same
That's a good idea. That way, you would be able to just type
'fluidsynth midifile' to play a song.
Can I also recommend having a standard environment variable
SOUNDFONTPATH or similar which contains a colon-separated (semicolon
on Windows) list of paths to search for soundfonts. That would be
Question is whether this stuff should actually be in FluidSynth or in
its clients. If the client does not specify a soundfont on the command
line, it might be that he/she wants to load it as a shell command.
Maybe yet another switch --load-default-soundfont (and corresponding
API thing)?
This issue has come up several times on the mailing list. It might be
helpful to have a statement on the FluidSynth trac page explaining the
project's position on use of the software in the Apple App Store, and
similar restricted environments.
There is already an FAQ question about this:
I have never got FluidSynth working with Jack. (I think I have once but the
sound wasn't right and Jack itself was quite crashy.) I just use the ALSA
back-end, which should be fine on Linux machines unless you seriously care
about real-time (from what I understand).
Just use -a alsa as a
that it might be a problem.)
Matt Giuca
I agree with Andrew's first paragraph (and I don't know enough about
libtool to endorse the second).
Is it something that alone calls for a new release (1.1.5) or can we just
update svn? What implications does bumping the SONAME have?
The Fluidsynth soname is currently libfluidsynth.so.1. As
Great work, David. Thanks for keeping everything going. It is definitely a
useful tool for a lot of musicians and game writers (of which I count myself
in both categories).
Matt
Wow, that sounds like a fantastic use of FluidSynth. If you perform it, can
you post a video to this mailing list?
Thanks very much David.
It was great working with you throughout the concept and implementation of
this new feature.
Also, the example added to the documentation and the easy links provided in
the email above were much appreciated!
No problem. I love Launchpad. I really recommend anyone
That's a good summary. I would just add one thing:
Can I include FluidSynth in my closed-source commercial project
I would add to this section: The same advice applies to projects
released under permissive licenses (e.g., BSD or MIT). Because such
licenses permit closed-source redistribution,
I have updated the midi-buffer branch with the previous two suggestions:
1. fluid_player_add_mem now copies the buffer before it stores it, so the
caller can now free it immediately (and it is now the caller's
responsibility to free).
2. Internally, fluid_midi_file now uses int buf_len and buf_pos
Hi Chris,
Note that FluidSynth is licensed under the GNU Lesser General Public
License (http://en.wikipedia.org/wiki/Lesser_General_Public_License),
version 2 (LGPL), not the main GPL.
This is a slightly different license which specifically allows what you are
trying to do (linking the LGPL library
I've never used Fluid with sockets, so I'm trying it out now.
What you're doing really is nothing to do with Python (I don't think), so we
can simplify it just by running it on the command-line, and using
netcat (http://en.wikipedia.org/wiki/Netcat) (or telnet) to communicate
over the socket:
$
The code has been working this way all along; there are no behavior changes
regarding FS handling of GM-mode bank select in any way, with or without that comment.
I will leave that part of the comment back in there.
OK cool. What I meant was, if a comment says "this might need to be fixed"
and you don't
I mean that the (channel == 9) test was the hard-coded hack at the time, and
it is now replaced by the new hack, the is_drum_channel field. At initialization,
the new code still hard-codes a 9, but not anywhere else as it did
previously.
Yeah, but the comment was about bank selection (and that it
I'm not a FS dev either. I just saw that something I may want to use isn't
there, so I took a crack at it. Everyone on this [fluid-dev] list is
one way or another interested in the coding/development side of FS.
That's why I ask for opinions instead of contacting just
Hi David,
Thanks for replying. If you don't mind, I'll take this conversation (about
MIDI buffers) to the other thread (Making MIDI player read from a buffer),
and stop CCing James, Sam and Ryan.
PS. Ryan, if you're reading this, I heard your FLOSS Weekly interview a
couple of years back (on
Hi David,
In response to your post here:
http://lists.nongnu.org/archive/html/fluid-dev/2011-01/msg3.html
Well, for the memory allocation issue it sounds like we both are leaning
towards #5 as in copying the memory. I don't think the
inefficiency is an issue and it gives FS the most
Hi Jimmy,
I am not a FluidSynth developer, just an interested person, so my opinions
don't represent the view of the FluidSynth project.
This seems like a valuable generalisation of a previously hard-coded value.
Given that your new flag will default to 0 for all channels and 1 for
channel 9, it
Matt, how are things on your end? Were you waiting for some feedback?
Hi James,
Yes, I wasn't sure how to proceed, because I basically finished the
patch. There was a discussion about the memory allocation strategy
(since the memory needs to be allocated by the user) which was never
decided on,
Many thanks for this. I never did find the time to finish up my
SDL_mixer patch and send it upstream because life has taken over but
this looks great.
Tell me about it ... took a long time to get this buffer patch done as well.
I look forward to cutting my patch in half. :)
Well the one
These README instructions belong to the autotools-based build system.
I've updated it with the specific details for each build system.
Ah that works now. Thanks. I must have been trying to run it from the root
directory, rather than the build directory.
to malloc and copy the buffer;
this copying approach would let me simply send a pointer to the global
buffer in to FluidSynth.
Matt Giuca
I had a bit of trouble building the Doxygen docs for FluidSynth. I've sort
of resolved it now, but I'll still report the problems I had.
It looks like it has undergone CMakeization but I can't get it to work
properly on Ubuntu 10.10 (doxygen 1.7.1).
I tried the following:
1. cd doc ; cmake .
I would like to find a way to sync midi playback with some recorded audio I
have. I'm not planning on using any audio driver. I want to pull the audio
data using fluid_synth_write_s16().
I'm not quite sure how the fluid_synth_write_s16 interface works, but I
don't think it does its own
It still works fine for me, on Ubuntu 10.04.
Are you on a Mac? I'm guessing you might be, because the commit log for r388
(which you claim to be the start of the breakage) is "Mac CoreAudio driver
adapted to AuHAL".
MIDI file rendering works in revisions 387 down to 385
Do you mean it is also
I have opened a new ticket for a minor issue in fluid_midi.c (which I bumped
into trying to merge to my midi-buffer branch (
https://code.launchpad.net/~mgiuca/fluidsynth/midi-buffer)).
https://sourceforge.net/apps/trac/fluidsynth/ticket/94
I got an email from SourceForge when I registered the
So this bug (ticket #92) was fixed in r392. There's a slight problem with
the fix. In the patch I supplied, I had it return -1 in the event of no
characters being read. Since it's emulating getc, I perhaps should have
returned the constant EOF (-1), but since all the calls to this function
check
Hi David,
Thanks for that good clarification. I've been confused as to whether we were
supposed to be using trac or not.
Perhaps we can create some kind of virtual sf.net user, that owns all new
tickets, and has the fluid-dev list registered as his home email so that
all ticket notifications
(Btw I am not a FluidSynth developer, I'm just interested.)
Oh? Should I also post a bug report somewhere then?
There is a bug tracker here:
http://sourceforge.net/apps/trac/fluidsynth/report
I don't think it gets used much though. It only has 6 active tickets. I get
the feeling that bug
Matt Giuca
[Attachment: example-eof.mid (MIDI audio)]
=== modified file 'fluidsynth/src/midi/fluid_midi.c'
--- fluidsynth/src/midi/fluid_midi.c 2010-07-28 20:21:17 +
+++ fluidsynth/src/midi/fluid_midi.c 2010-10-20 12:44:55 +
@@ -95,6 +95,9 @@
mf->c = -1;
} else {
n = FLUID_FREAD(c, 1, 1
I have started making the changes (the major internal change to fluid_midi
is done; it now loads the file to memory at the start and does all the
parsing from the memory buffer). It currently owns the pointer (garbage
collection choice #2), but we can discuss that further.
The code is in a branch
Since I was told off for not using the bug tracker (even though it doesn't
seem to be very much in use...) ;)
https://sourceforge.net/apps/trac/fluidsynth/attachment/ticket/92/
We could also copy the memory. While copying memory around is suboptimal,
that might be a secondary concern. At least that would give us the most
future flexibility.
Oh yeah, that was my other thought. I think the problem with that is that
again, in almost all cases I can think of, you'll
So to summarize, the possible approaches are:
1. Stealing from client-malloc; fluid will call free(). Won't work with
different allocators other than malloc.
2. Stealing from client-fluid_alloc; fluid will call fluid_free(). At
least it lets fluid control the allocator.
3. Borrowing; fluid
Hey Max,
Last time I used fluidsynth to render midis was in january 2010 I
think (I used the version in ubuntu 10.04, don't remember the number).
Everything was fine. However after I updated to 1.1.1 (the current
Ubuntu default), I had the "all instruments are pianos" bug and
decided to update
There was a discussion in the thread FluidSynth backend for
SDL_mixer about the difficulty with the current implementation of the
FluidSynth API's MIDI file player, in that it will only load a MIDI
file via a filename (fluid_player_add --
Hi David, James, Pedro,
Thanks for a quick discussion.
David Henningsson wrote:
...although this would only be in fluid_midi.c, right? It wouldn't affect
other files.
Correct.
I'd personally prefer the client handling memory entirely. This has two
advantages:
* We won't have to do free
Of course, we have this ongoing discussion about FluidSynth accepting a
buffer. As I've said, it would make my code simpler but it's not vital.
Right. Well even though that might help you, whether you can use it
depends on whether it's acceptable for your SDL_mixer backend to
require FluidSynth
Is anyone going to step up and try this? I could have a go but I'll
probably have to change the way things currently work a little.
I'll have a go at writing a void* memory loader into FluidSynth. I've been
meaning to do some hacking for awhile (which is why I've been hanging around
this
Mix_LoadMUS does take a filename but an application can also call
Mix_LoadMUS_RW, which takes an SDL_RWops. All the existing backends
support this.
Ah, OK then. I was sort of expecting that (but didn't find it when I
looked), since most SDL things let you take a RWop.
This may be
I did. The main problem was that it only takes a file path, not a file
handle. I looked at whether it would be easy to add a function for this
but it didn't seem to fit into the playlist design.
True, the fluid MIDI file player doesn't extend to anything other than files
(by name). But
, and noted in the changelog.
Matt Giuca
Hi Victor,
1. It resets all channels to program 1, so when you play a MIDI file, all we get is
piano from GM soundfonts. I have asked about this here, but no one responded.
With a call to
fluid_synth_sfload(synth, p->soundfont, 0);
it seems this behaviour is prevented. But there does not seem
You should be able to set any settings option with the -o switch.
Example:
fluidsynth -o player.reset-synth=0
That should work, if in fact there is a real reason why these files are
all piano-sounding (i.e., there's something weird about the midi files
that's causing them to be reset).
But
, makefiles, etc, for your program. Alternatively,
you could get a separate agreement from the FluidSynth developers
which falls outside of the LGPL.
Matt Giuca
In the meantime, I'm glad to see that (thanks to Sven Meier and Alessio
Treglia) that 1.1.2 of FluidSynth will reach Ubuntu Maverick! (And yes, it
will build with CMake :-) )
That's excellent, because it seems 1.1.2's new threading code fixed a lot of
random issues, like my everything is a
Fixed in r359 (plcl) and r361 (me), it should now at least compile.
Don't know if it's working though, care to test it?
With ./configure --enable-ladspa, I am now (as of r361) getting these linker
errors:
./.libs/libfluidsynth.so: undefined reference to `new_fluid_LADSPA_FxUnit'
Okay. Just tried with CMake and not autotools. Does it work better in r362?
Sorry .. nope, now autogen.sh fails.
src/Makefile.am:43: LADSPA_SUPPORT does not appear in AM_CONDITIONAL
I've tried adding this line to configure.ac:
AM_CONDITIONAL(LADSPA_SUPPORT, test $ENABLE_LADSPA = xyes)
It
OK great, that works.
Now having said that, I'm not actually testing LADSPA because I don't
know what it does. I'm just confirming that it compiles with the
autotools version.
David,
my congratulations, 1.1.2 is a big step forward. We owe this success to you.
Yes. Can I just say the mutex fixes in 1.1.2 were well worth it. I had
a bug which I'm not sure you knew about (I brought it up at
http://ubuntuforums.org/showthread.php?t=1564839), in which when using
the -F
The easy fix is to build without the --enable-ladspa flag. Sorry for the
inconvenience, currently nobody on this list is actively
requesting/using/testing LADSPA on FluidSynth.
OK, that's what I did.
Just a heads up then, since the Debian package uses --enable-ladspa, you
might have to