Re: [LAD] Devs needed for opensource virtual analog softsynth idea
O.k., for my part I think I will pull my skills more into the direction of GUI development, for it seems that GUI guys are needed too. I don't know if I will focus on FLTK, gtkmm or Qt4 ... will spend the weekend RTFM-ing.

regards, saschas

2011/1/7 Jeremy jeremyb...@gmail.com:

Hi Malte,

So I've been working on converting it some more. If you could give me some pointers as to the meaning of the variables, that would be useful. What are the EG... variables, like EG, EGFaktor, EGtrigger, and EGState?

Also, if you're looking for a channel-stealing algorithm, try this. Say the type of a synth engine is synth:

    typedef struct _synthblock {
        struct _synthblock *next;
        struct _synthblock *previous;
        synth item;
    } synthblock;

Initially, you start out using the synthblock as an element of a singly linked list of free synths. You only need the next pointer and can ignore the previous pointer. You can either keep track of the head only and use it as a stack, or keep track of both the head and the tail and use it as a queue. Either way, adding is a constant-time operation, and taking the most recently or least recently used block is also a constant-time operation.

Then, you have an array which keeps track of which notes are on:

    synthblock *currentnotes[NUM_MIDINOTES];

When you get a note-on signal, you pop the first synth block off of the free-synth list, and then you add a pointer to it in this array, indexed according to what note it is playing. You also add it to the doubly linked list of playing synths; again a constant-time operation, because you are just twiddling with the next and previous pointers of two blocks. Now the array contains a pointer to a block which is in the doubly linked list.

When you want all the synths to process, you can iterate through the doubly linked list, and thus you only need to process the ones that are playing notes.
When you receive a note-off signal, you look up the note in the array, remove that block from the doubly linked list, and add it back to the singly linked one. In the end, you can do everything in constant time (or O(number of notes being played)).

Anyway, I don't know if it's pointless for me to put my ideas here, but I'll probably implement it too, if this doesn't make sense now.

Jeremy

On Thu, Jan 6, 2011 at 3:36 PM, Malte Steiner stei...@block4.com wrote:

On 06.01.2011 12:48, Jeremy wrote:

Yes. Except it seems that you can select different settings for each of your voices. This doesn't really make sense if you are automatically assigning the notes to synth engines. I think perhaps the best way would be to have one set of settings for *all* copies of the synth engine, and if you want different settings, then you'd have to create another copy of the plugin.

Yes, each voice has a different sound and responds to a fixed MIDI channel: 1 for the first voice, 2 for the second, and so on. Actually, I find it rather interesting to have different settings between automatically assigned notes. For instance, with slightly different sounds it would become even more alive. But yes, for average usage it would be great to just copy the settings across the voices. The channel-stealing algorithm kept me from implementing polyphony so far; got to study that...

A while ago I was against the idea of plugins, but now I actually find it useful for recalling sessions. It would be great to stuff Pd, Csound or AlsaModularSynth into a sequencer. So far I know that you can create LADSPA plugins with Faust and Csound, but instruments?
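Jeremy's free-list / doubly-linked-list scheme can be sketched in C roughly as follows. The synth type is reduced to a stub, and the function names (pool_init, note_on, note_off, active_count) are made up for illustration; the real engine would do its DSP where active_count merely walks the list:

```c
#include <stddef.h>

#define NUM_MIDINOTES 128

/* Stub standing in for the real synth engine state. */
typedef struct { int note; } synth;

typedef struct synthblock {
    struct synthblock *next;
    struct synthblock *prev;
    synth item;
} synthblock;

typedef struct {
    synthblock *free_head;                    /* singly linked free list, used as a stack */
    synthblock *active_head;                  /* doubly linked list of playing voices */
    synthblock *currentnotes[NUM_MIDINOTES];  /* note number -> playing block, or NULL */
} voice_pool;

void pool_init(voice_pool *p, synthblock *blocks, int n) {
    p->free_head = NULL;
    p->active_head = NULL;
    for (int i = 0; i < NUM_MIDINOTES; i++)
        p->currentnotes[i] = NULL;
    for (int i = 0; i < n; i++) {             /* push every block onto the free stack */
        blocks[i].next = p->free_head;
        p->free_head = &blocks[i];
    }
}

/* Note-on: pop a free block, index it by note, link it into the active list. O(1). */
int note_on(voice_pool *p, int note) {
    if (!p->free_head || p->currentnotes[note])
        return 0;                             /* no free voice, or note already sounding */
    synthblock *b = p->free_head;
    p->free_head = b->next;
    b->item.note = note;
    b->prev = NULL;
    b->next = p->active_head;
    if (p->active_head)
        p->active_head->prev = b;
    p->active_head = b;
    p->currentnotes[note] = b;
    return 1;
}

/* Note-off: unlink from the active list, push back onto the free stack. O(1). */
int note_off(voice_pool *p, int note) {
    synthblock *b = p->currentnotes[note];
    if (!b)
        return 0;
    if (b->prev) b->prev->next = b->next; else p->active_head = b->next;
    if (b->next) b->next->prev = b->prev;
    p->currentnotes[note] = NULL;
    b->next = p->free_head;
    p->free_head = b;
    return 1;
}

/* Iterate only over playing voices; a real engine would render audio here. */
int active_count(const voice_pool *p) {
    int n = 0;
    for (const synthblock *b = p->active_head; b; b = b->next)
        n++;
    return n;
}
```

Note that inside the struct definition the self-referential pointers need the `struct` keyword in C (`struct synthblock *next;`); omitting it only compiles in C++.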
Cheers, Malte

--
media art + development
http://www.block4.com
new on iTunes: Notstandskomitee Automatenmusik
http://itunes.apple.com/us/album/automatenmusik/id383400418

___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev
Re: [LAD] Devs needed for opensource virtual analog softsynth idea
Yeah, that would probably be a good idea. I'm pretty bad at designing and implementing GUIs.

Jeremy

On Fri, Jan 7, 2011 at 3:37 AM, Sascha Schneider ungleichkl...@gmail.com wrote:

O.k., for my part I think I will pull my skills more into the direction of GUI development, for it seems that GUI guys are needed too. I don't know if I will focus on FLTK, gtkmm or Qt4 ... will spend the weekend RTFM-ing.

regards, saschas
Re: [LAD] Devs needed for opensource virtual analog softsynth idea
On 01/06/2011 08:57 AM, Sascha Schneider wrote:

Hi Loki,

2011/1/6 Loki Davison loki.davi...@gmail.com:

On Mon, Jan 3, 2011 at 12:35 AM, Sascha Schneider ungleichkl...@gmail.com wrote:

Hi folks, inspired by a plan of a German online magazine called amazona.de, I came up with the idea that a virtual analogue open-source softsynth natively running on Linux would be really nice. (A nice filter bank too, but that's another thing.) Amazona planned a complete synth based on user polls (only in German, sorry): http://www.amazona.de/index.php?page=26&file=2&article_id=3191 which is now realized as a VST (only German, too): http://www.amazona.de/index.php?page=26&file=2&article_id=3202

I know that ZynAddSubFX/Yoshimi has a really strong sound engine, and I asked myself if it would be possible to take this engine or the DSSI API and build a polyphonic softsynth with a nice UI like the new Calf plugins or Guitarix, a bit like the Loomer Aspect, with some discoDSP, a bit from the Tyrell or the Roland Gaia SH-01, with MIDI learn, ...

The problem I have is my programming skills, which are not good enough to code this kind of software by myself. Are there some LADs willing to join/take/realise this idea? If there is interest, I could translate the ideas of amazona.de and we all could share our visions for a new kind of controllable virtual analogue softsynth.

kind regards, saschas

You do have the required skills, you just need to choose the right tool.

Actually, that is my problem: my terrain until now was more in web development (CMS, CRM, custom modules). I did Java and Python, mainly object-oriented. Most synth apps I see on Linux are coded in C, at least the engine, and stuff like pointers really doesn't fit into my brain ... might be my age ...
Something for a rainy afternoon: http://cslibrary.stanford.edu/102/PointersAndMemory.pdf

Just reading pages 3 and 5 of the PDF should make it clear. There's a lot of nice, tidy code you can write without knowing about pointers. But once you learn to use the power of pointers, you can never go back.

As for Java: there is a concept like C/C++ pointers; it's called references. Pointers are also common in many scripting languages, e.g. in PHP using '&$variable' or the backslash operator in Perl.

A bit over-simplified, these are the two main reasons why some programming languages are not suitable for writing *reliable* audio engines:

- Memory allocation cannot be done in real time.
- Some scripting languages (f.i. Python) have a global lock (meaning program execution can block and wait for some event, causing audio drop-outs).

Besides, C/C++ provides for fine-grained optimizations (such as binding variables to CPU registers).

Use Ingen. It is far too awesome to describe in simple words. :) http://drobilla.net/blog/software/ingen/

I will have a look at that ...

Loki

regards, Sascha

A higher-level programming environment, e.g. http://faust.grame.fr/, does abstract many, many gory details, but I don't know if it is the right tool for the job at hand.

2c, robin
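Robin's two constraints (no memory allocation in real time, nothing that can block) suggest the usual split in audio code: allocate everything before the callback starts, and keep the process function allocation-free. A minimal C sketch, with made-up names (engine, engine_new, engine_process) standing in for a real plugin API:

```c
#include <stdlib.h>

/* Hypothetical engine state: everything it needs is allocated up front. */
typedef struct {
    float gain;
} engine;

/* Runs in a normal (non-real-time) context: allocation is fine here. */
engine *engine_new(void) {
    engine *e = malloc(sizeof *e);
    if (e)
        e->gain = 0.5f;
    return e;
}

/* Runs in the real-time audio callback: no malloc, no locks, no I/O.
 * The in/out pointers reference buffers owned by the audio system,
 * which is exactly the kind of pointer use the PDF above teaches. */
void engine_process(const engine *e, const float *in, float *out, int nframes) {
    for (int i = 0; i < nframes; i++)
        out[i] = e->gain * in[i];
}
```

The same pattern appears in JACK and LADSPA hosts: the instantiate/activate path may allocate, the run/process path must not.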
Re: [LAD] Devs needed for opensource virtual analog softsynth idea
Hi!

On Fri, Jan 7, 2011 at 2:01 PM, Robin Gareus ro...@gareus.org wrote:

Something for a rainy afternoon: http://cslibrary.stanford.edu/102/PointersAndMemory.pdf

Thanks for the pointer to that! :-D Short, concise, very informative ... downloaded for future reference! There wouldn't happen to be something similar you know of for threads / GLib threading, by chance?

-Harry
[LAD] glib-threads and c-pointers - was Re: Devs needed for opensource virtual analog softsynth idea
On 01/07/2011 03:50 PM, Harry Van Haaren wrote:

There wouldn't happen to be something similar you know of for threads / GLib threading, by chance?

Alas no, not really. Maybe someone else does. I like the manual pages `man 3 pthread_create` and `man pthread_mutex_init`; both include SEE ALSO references as well as example code.

As far as documentation with a bit of introduction goes, check out the Description sections of:

http://library.gnome.org/devel/glib/unstable/glib-Threads.html
http://library.gnome.org/devel/glib/unstable/glib-The-Main-Event-Loop.html
http://library.gnome.org/devel/glib/unstable/glib-Thread-Pools.html

There was some glib-thread discussion on this list last July (Subject: "Can someone add 2 features to Kluppe?", http://lists.linuxaudio.org/pipermail/linux-audio-dev/2010-July/028694.html), and I whipped up a small example demonstrating glib thread creation and mutexes: http://rg42.org/_media/wiki/async-timer2.c It's not annotated, but it may get you started.

best,
robin

--
Robin Gareus
web: http://gareus.org/  mail: ro...@gareus.org
lab: http://citu.fr/  chat: xmpp:rgar...@ik.nu
Public Key at http://pgp.mit.edu/ http://gareus.org/public.asc
Fingerprint: 7107 840B 4DC9 C948 076D 6359 7955 24F1 4F95 2B42
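For readers without the linked example at hand, the core of what those man pages describe (thread creation plus a mutex protecting shared state) fits in a short sketch. The names here are illustrative, and it uses plain POSIX threads rather than the GLib wrappers; build with `-pthread`:

```c
#include <pthread.h>

typedef struct {
    pthread_mutex_t lock;
    long counter;          /* shared state touched by both threads */
} shared_t;

static void *worker(void *arg) {
    shared_t *s = arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&s->lock);    /* serialize access to counter */
        s->counter++;
        pthread_mutex_unlock(&s->lock);
    }
    return NULL;
}

/* Spawn two workers, wait for both, and return the final count.
 * With the mutex in place the result is deterministic: 200000. */
long run_two_workers(void) {
    shared_t s;
    pthread_mutex_init(&s.lock, NULL);
    s.counter = 0;

    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, &s);
    pthread_create(&t2, NULL, worker, &s);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    pthread_mutex_destroy(&s.lock);
    return s.counter;
}
```

The GLib pages linked above expose the same ideas through g_thread_new and GMutex; the pthread version is shown because its man pages are the ones recommended in the message.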