Re: [linux-audio-dev] LV2 buffersize extensions (was: LADSPA...)
On Monday 29 January 2007 18:22, Steve Harris wrote:
> > http://tapas.affenbande.org/lv2/ext/fixed-buffersize
> > http://tapas.affenbande.org/lv2/ext/power-of-two-buffersize
> Great idea. I've got some plugins that will benefit a lot by this. We
> should link to known extensions on the http://lv2plug.in/ site.
>
> FWIW, my provisional plan was to wait until it seemed like time for a
> LV2 1.1 (hopefully not too soon :), then roll all the "popular"
> extensions into that.

Ah, i don't mean this extension has to become part of the core LV2 spec. Nonono. I was just wondering whether it makes sense that i maintain this separately and keep the extension URI on my site. Is there a plan to host some very common extensions on the lv2 site (URI having lv2plug.in in it and docs on the lv2 site), too? If so i would like to see these extensions included.

> It doesn't make a huge amount of difference whether they're included or
> not though.

Well, it's just a visibility thing. By having some extensions documented and "hosted" on lv2plug.in they probably get more visibility than others. For certain "almost core" functionality this would make sense i think.

> Before you ask, no I don't have a definition for "popular".

Hehe :)

Flo
--
Palimm Palimm!
http://tapas.affenbande.org
Re: [linux-audio-dev] LV2 buffersize extensions (was: LADSPA...)
On Monday 29 January 2007 09:08, Steve Harris wrote:
> Ah, well the host is not supposed to change port values during run()
> anyway, the idea in LADSPA (and LV2) is that the host should chop the
> run() block where port values change. In practice not all hosts do
> that, some just pick a suitably small block size, eg. 32 frames and
> quantise the changes to that rate.

Hi, let me chime in because it kinda fits into the subject. I have defined two very, very simple LV2 extensions:

"The extension's URI is http://tapas.affenbande.org/lv2/ext/fixed-buffersize
All that a plugin needs to check is whether a host feature with this URI exists; its data will be a uint32 containing the buffersize. The host is only allowed to call the plugin's run function with a buffersize equal to the one specified by the host feature.

There's a second extension: http://tapas.affenbande.org/lv2/ext/power-of-two-buffersize
which is identical to the above but with the additional requirement that the fixed buffersize has to be a power of two."

I don't need to have the URI point to my site. If you want to integrate it into the official LV2 standard i'd be more than happy..

For anyone who might ask "why do we need this?": the answer is that some algorithms (especially fft based ones) perform much better when the buffer size is known in advance (because they must operate with a fixed buffersize internally). With either of these two extensions provided by the host, those plugins can avoid the additional delay from buffering, etc..

We discussed on #lad whether a guarantee that the framecounts of subsequent run() calls add up to a fixed buffersize is enough.. I wasn't sure about this, but i think now that it's not a good idea. Here's a good counterargument: imagine an fft based plugin that uses the host buffer size as its internal fft size. With only this guarantee it would have to collect data until an fft buffer is full. While waiting for this and processing these subsequent smaller buffers, it would have to produce output even though it doesn't have the fft result yet, causing unavoidable delay (of one full buffersize)..

Flo
--
Palimm Palimm!
http://tapas.affenbande.org
[linux-audio-dev] Re: [linux-audio-announce] [ANN] jack_mixer version 2
On Monday 08 January 2007 15:51, Nedko Arnaudov wrote:
> jack_mixer version 2 released.
>
> jack_mixer is a GTK (2.x) JACK audio mixer with a look similar to its
> hardware counterparts. It has lots of useful features, apart from being
> able to mix multiple JACK audio streams.
>
> Changes since version 1:
>
> * Fix compilation issue for 64-bit platforms (-fPIC)
> * Add new meter scale - iec268, fewer marks
> * Add hints in documentation for compiling on Ubuntu
> * Fix compilation with offsetof macro definition
>
> Homepage with screenshots: http://home.gna.org/jackmixer/
>
> Download: http://download.gna.org/jackmixer/

works nicely.. and it has LASH :) great.

Thanks,
Flo
--
Palimm Palimm!
http://tapas.affenbande.org
Re: [linux-audio-dev] kernel 2.6.18, -19 etc
On Friday 15 December 2006 21:24, Jens M Andreasen wrote:
> On Fri, 2006-12-15 at 22:00 +0200, Jussi Laako wrote:
> > Jens M Andreasen wrote:
> > > Now that mingo's (et al) RT patches are coming into mainstream, what is
> > > the corporate rationale behind it and the running order of urgency?
> > >
> > > I am fishing for some information on; if it is the disk-drives, the
> > > network drivers, the usb stack or something else that I am too ignorant
> > > to have noticed?
> > >
> > > What worries me the most, is corner-cases on network, blocking multiple
> > > cpu's.
> >
> > Isn't the functionality conditional and selected at configure time? I
> > wouldn't be too worried about it. It also forces broken drivers to be
> > fixed, which is only a good thing. This is a bit similar to the situation
> > when kernel pre-emption was introduced.
>
> Yes, but the patches are introduced and applied a bit at a time for
> each official kernel version. At 2.6.19 we can read mingo's own comment
> over at slashdot that 'now 50% has gone in'. A friend of order would like
> to know which half is accepted and why.

Can you post a link to the story? A quick search didn't find it here..

Flo
--
Palimm Palimm!
http://tapas.affenbande.org
Re: [linux-audio-dev] about MIDI timing...
On Wednesday 25 October 2006 15:19, Clemens Ladisch wrote:
> Mulyadi Santosa wrote:
> > I also read that not all Linux kernel sound card drivers enable the
> > internal card timer, thus the software must rely on the system timer.
>
> Most sound cards don't have an internal timer that could be used for
> MIDI timing. ALSA uses whatever timer is configured, the default for
> this is the RTC timer.

...if snd_rtctimer gets loaded, which for example isn't the case here on my debian box. I suppose a modules.conf et al. entry should fix this though..

Flo
--
Palimm Palimm!
http://tapas.affenbande.org
Re: [linux-audio-dev] about MIDI timing...
On Wednesday 25 October 2006 18:28, Chris Cannam wrote:
> Me:
> > I'm not aware of anyone these days successfully
> > using Rosegarden with snd-rtctimer - if anyone out
> > there is, do say so.
>
> To test:
>
> * start RG (version 1.0 or newer)
> * go to Settings -> Configure Rosegarden -> Sequencer -> Synchronisation
> * change the sequencer timing source option to RTC
> * close configuration window, press play.
>
> It probably doesn't matter whether you have a file
> loaded or not.
>
> Success -> play pointer moves smoothly
> Failure -> system locks up solid, reboot required.
>
> If it does lock up, you may need to edit your
> rosegardenrc to restore the timer setting before
> you can run RG again.

It doesn't lock up here running 2.6.18-rt7 with tickless kernel, hi res timers and some debugging enabled. Interestingly enough though, the second time i tried to run it from a terminal to be able to observe the output, i got a kernel BUG ;) Sadly my kernel is tainted with the NVIDIA binary only driver, so this tells us exactly nothing :) Except that i should reboot and not load the nvidia driver ;)

Flo

[ cut here ]
kernel BUG at kernel/rtmutex.c:673!
invalid opcode: [#1] PREEMPT
Modules linked in: snd_rtctimer nvidia iptable_nat ip_tables snd_intel8x0 ppdev parport_pc lp parport snd_seq_dummy snd_seq_oss snd_seq_midi snd_seq_midi_event snd_seq kqemu usb_storage scsi_mod ipt_MASQUERADE ip_nat x_tables ip_conntrack bsd_comp ppp_deflate zlib_deflate ppp_async ppp_generic slhc crc_ccitt snd_ice1712 snd_ice17xx_ak4xxx snd_cs46xx snd_ak4xxx_adda snd_cs8427 gameport snd_i2c snd_ac97_codec snd_ac97_bus snd_mpu401_uart snd_pcm_oss snd_mixer_oss snd_rawmidi snd_seq_device snd_pcm snd_timer i2c_sis630 snd evdev ohci_hcd epic100 i2c_sis96x sis900 mii usbcore crc32 snd_page_alloc sis_agp agpgart i2c_core soundcore
CPU: 0
EIP: 0060:[] Tainted: P VLI
EFLAGS: 00010082 (2.6.18-rt7 #3)
EIP is at rt_spin_lock_slowlock+0x19f/0x1e0
eax: 0020   ebx: 0282   ecx: 0001   edx:
esi: c0336fe0   edi: c41bf600   ebp: f7d22e94   esp: f7d22e30
ds: 007b   es: 007b   ss: 0068   preempt: 0002
Process IRQ 8 (pid: 651, ti=f7d22000 task=f7d202b0 task.ti=f7d22000)
Stack: c02ed14d c02f14dd 02a1 c02d7264 f7d22e64 008c f7d22e48 f7d22e48
       f7d22e50 f7d22e50 008c f7d22e60 f7d22e60 f7d22e68 f7d22e68 c0336fe0
Call Trace:
 [] rt_spin_lock+0xe/0x50
 [] rtc_control+0x3a/0x80
 [] rtctimer_stop+0x2b/0x50 [snd_rtctimer]
 [] snd_timer_interrupt+0x2b0/0x2f0 [snd_timer]
 [] rtctimer_interrupt+0x19/0x20 [snd_rtctimer]
 [] rtc_interrupt+0x72/0x120
 [] handle_IRQ_event+0x6e/0x100
 [] thread_simple_irq+0x62/0xa0
 [] do_irqd+0x2a7/0x310
 [] kthread+0xe9/0xf0
 [] kernel_thread_helper+0x5/0x10
DWARF2 unwinder stuck at kernel_thread_helper+0x5/0x10
Leftover inexact backtrace:
 [] show_stack_log_lvl+0xa9/0xd0
 [] show_registers+0x1e6/0x270
 [] die+0x122/0x2e0
 [] do_trap+0x98/0x100
 [] do_invalid_op+0xa0/0xb0
 [] error_code+0x39/0x40
 [] rt_spin_lock+0xe/0x50
 [] rtc_control+0x3a/0x80
 [] rtctimer_stop+0x2b/0x50 [snd_rtctimer]
 [] snd_timer_interrupt+0x2b0/0x2f0 [snd_timer]
 [] rtctimer_interrupt+0x19/0x20 [snd_rtctimer]
 [] rtc_interrupt+0x72/0x120
 [] handle_IRQ_event+0x6e/0x100
 [] thread_simple_irq+0x62/0xa0
 [] do_irqd+0x2a7/0x310
 [] kthread+0xe9/0xf0
 [] kernel_thread_helper+0x5/0x10
---
| preempt count: 0002 ]
| 2-level deep critical section nesting:
..
[] __spin_lock_irqsave+0x1d/0x60
.[] .. ( <= rt_spin_lock_slowlock+0x24/0x1e0)
..
[] __spin_lock_irqsave+0x1d/0x60
.[] .. ( <= die+0x42/0x2e0)
Code: 25 00 f0 ff ff 8b 00 c7 00 00 00 00 00 eb b7 c7 44 24 08 a1 02 00 00 c7 44 24 04 dd 14 2f c0 c7 04 24 4d d1 2e c0 e8 81 ef e3 ff <0f> 0b a1 02 dd 14 2f c0 e9 b6 fe ff ff 89 cf c7 45 a8 00 00 00
EIP: [] rt_spin_lock_slowlock+0x19f/0x1e0 SS:ESP 0068:f7d22e30
<6>note: IRQ 8[651] exited with preempt_count 1
BUG: sleeping function called from invalid context IRQ 8(651) at fs/inode.c:247
in_atomic():1 [0001], irqs_disabled():1
 [] show_trace_log_lvl+0x1ec/0x200
 [] show_trace+0x1b/0x20
 [] dump_stack+0x26/0x30
 [] __might_sleep+0xd6/0xf0
 [] clear_inode+0x1f/0x160
 [] proc_delete_inode+0x94/0xc0
 [] generic_delete_inode+0x7e/0x120
 [] iput+0x6d/0xa0
 [] dentry_iput+0x7b/0xd0
 [] prune_one_dentry+0x62/0x90
 [] prune_dcache+0x16c/0x190
 [] shrink_dcache_parent+0xdc/0x120
 [] proc_flush_task+0x62/0x210
 [] release_task+0x1f3/0x3c0
 [] do_exit+0x71d/0xb10
 [] die+0x2d5/0x2e0
DWARF2 unwinder stuck at die+0x2d5/0x2e0
Leftover inexact backtrace:
 [] show_trace+0x1b/0x20
 [] dump_stack+0x26/0x30
 [] __might_sleep+0xd6/0xf0
 [] clear_inode+0x1f/0x160
 [] proc_delete_inode+0x94/0xc0
 [] generic_delete_inode+0x7e/0x120
 [] iput+0x6d/0xa0
 [] dentry_iput+0x7b/0xd0
 [] prune_one_dentry+0x62/0x90
 [] prune_dcache
Re: [linux-audio-dev] [ANN]: Kontroll updated
On Tuesday 26 September 2006 15:34, stefan kersten wrote: > Florian Schmidt wrote: > > And here's the question: A user suggested (and i'd like > > this idea very much) that kontroll be able to make use of > > other input devices attached to the computer (additional > > mice, joysticks, etc). Now i would like to avoid playing > > with /dev/input directly, cause i imagine it to be a > > drag. So does anyone of you guys know a small and easy to > > use input-library that makes accessing these devices a > > breeze? If so, please let me know. > > while using the input layer is not very complicated (see > SC_LID.cpp in supercollider for some examples), it's of > course limited to linux. i've had a glance at libggi/libgii > as a cross-platform alternative, but haven't used it yet ... > > http://www.ggi-project.org/packages/libgii.html Thanks for the hints. Will take a look at libgii and the SC stuff. I also took a look at the evdev documentation in the kernel tree, but it left quite a few questions unanswered.. Flo -- Palimm Palimm! http://tapas.affenbande.org
Re: [linux-audio-dev] Re: [ANN]: Kontroll updated
On Tuesday 26 September 2006 14:20, Nicola Larosa wrote:
> Florian Schmidt wrote:
> > And here's the question: A user suggested (and i'd like this idea very
> > much) that kontroll be able to make use of other input devices attached
> > to the computer (additional mice, joysticks, etc). Now i would like to
> > avoid playing with /dev/input directly, cause i imagine it to be a drag.
> > So does anyone of you guys know a small and easy to use input-library
> > that makes accessing these devices a breeze? If so, please let me know.
>
> Is SDL too much?
>
> http://www.libsdl.org/

While SDL is fine for joysticks, it doesn't seem to provide access to additional mice / tablets, etc.. I'll add preliminary joystick support using SDL though as a first step.

Flo
--
Palimm Palimm!
http://tapas.affenbande.org
Re: [linux-audio-dev] Re: [linux-audio-user] [ANN]: Kontroll updated
On Sunday 24 September 2006 22:26, Florian Schmidt wrote: > On Sunday 24 September 2006 20:36, Florian Schmidt wrote: > > P.S.: Ah, LASH support is still missing. Will add it right away (or at > > least try) ;) > > done. have fun. Ok and since i was bored, i also added OSC message sending support (single float messages. you can specify the range). http://tapas.affenbande.org/?page_id=42 Have fun :) Flo -- Palimm Palimm! http://tapas.affenbande.org
[linux-audio-dev] Re: [linux-audio-user] [ANN]: Kontroll updated
On Sunday 24 September 2006 20:36, Florian Schmidt wrote: > P.S.: Ah, LASH support is still missing. Will add it right away (or at > least try) ;) done. have fun. Flo -- Palimm Palimm! http://tapas.affenbande.org
[linux-audio-dev] [ANN]: Kontroll updated
Hi, this is a small announcement for a minor update to a minor piece of software, and at the same time a question :) So here it goes:

Kontroll is a small utility that generates midi cc messages from the mouse position. It is inspired by the MouseX and MouseY UGens in SuperCollider. It simply creates an alsa sequencer port which you can then connect with your favourite patchbay. The mouse position is independent of window focus and is relative to the screen origin at the upper left.

- Another small update to kontroll: the controller and channel numbering now range from 1-128 and 1-16 as commonly seen in other midi applications and hardware. Previously it was 0-127 and 0-15, which was probably confusing to non-computer people.

- A minor update to this little program of mine called "Kontroll": on shutdown it saves the last used parameters to a file called ~/.kontroll and reads it again on startup. This saves setting everything up all over again on each start of the program. You can also save special setups via the "File" menu.

Grab it here: http://tapas.affenbande.org/?page_id=42
Or directly: http://affenbande.org/~tapas/kontroll.tgz

And here's the question: A user suggested (and i'd like this idea very much) that kontroll be able to make use of other input devices attached to the computer (additional mice, joysticks, etc). Now i would like to avoid playing with /dev/input directly, cause i imagine it to be a drag. So does anyone of you guys know a small and easy to use input-library that makes accessing these devices a breeze? If so, please let me know.

Regards,
Flo

P.S.: Ah, LASH support is still missing. Will add it right away (or at least try) ;)
--
Palimm Palimm!
http://tapas.affenbande.org
Re: [linux-audio-dev] very nice looking HW
On Wednesday 06 September 2006 16:30, nescivi wrote: > On Wednesday 06 September 2006 16:25, Alfons Adriaensen wrote: > > Any chance of this ever being supported in Linux ? > > > > http://www.marian.de/en/products/ucon_cx > > I believe they just need someone to write the drivers > > they did not seem completely against Linux, when I talked to them on the > AES (for the second time..)... Would they be willing at all to pay for the development? Flo -- Palimm Palimm! http://tapas.affenbande.org
Re: [linux-audio-dev] LADSPA 2
On Sun, 23 Apr 2006 13:39:52 +0100 Steve Harris <[EMAIL PROTECTED]> wrote:
> On Sun, Apr 23, 2006 at 01:10:55 +0200, Florian Schmidt wrote:
> > thanks for taking the initiative on this! I would like to see a way for
> > the host to pass its native buffer size to the plugin though. I know,
> > this is really kind of contrary to how LADSPA is supposed to work (i.e.
> > the run() function should be able to handle an arbitrary number of
> > frames), but it has some serious advantages for fft-based algorithms.
> > And i think it should be possible to merge the two approaches somewhat.
>
> That's a new feature. It'll have to wait til after 2.0 as far as I'm
> concerned.

I tend to disagree, as it is kinda orthogonal to the other proposed changes. What time other than a major version change is better to add such a feature? Aw hell, i'll try to get this into DSSI 2 then ;)

Regards,
Flo
--
Palimm Palimm!
http://tapas.affenbande.org
Re: [linux-audio-dev] LADSPA 2
On Sat, 22 Apr 2006 10:53:58 +0100 Steve Harris <[EMAIL PROTECTED]> wrote:
> Almost two years ago at the LA conference a bunch of us agreed that
> something needed to be done to improve LADSPA, and on the approximate
> direction it should take.
>
> Anyway, I finally got round to making a sketch plugin and .h file:
> http://plugin.org.uk/ladspa2/

Hi, thanks for taking the initiative on this! I would like to see a way for the host to pass its native buffer size to the plugin though. I know, this is really kind of contrary to how LADSPA is supposed to work (i.e. the run() function should be able to handle an arbitrary number of frames), but it has some serious advantages for fft-based algorithms. And i think it should be possible to merge the two approaches somewhat.

A hint which the plugin could set to say it needs power of two periodsizes for optimal operation would be nice, too. The host should be free to ignore this and the plugin should still work (doing what it sees fit, i.e. use non power of two fft window sizes or use double buffering again), but at least the host could notify the user that this plugin might work suboptimally.

The approach of chopping up the period into smaller chunks (i.e. to update control data at non period boundaries) would still be available, but the plugin should be able to rely on the chunks being in this form:

---> time
|   128   |   128   |
| | |  |  |  |  | | |

Native buffer size being 128 frames [upper row] and the lower row is how it is chopped up [as an example]. We see that all chopped up chunks add up to the native period size. This enables an fft based plugin to collect data over a whole period and do its stuff once for all these samples, without the need for extra double buffering, which is a problem with current ladspa and dssi.

There would be an additional requirement put on the host: it has to use a fixed underlying buffersize [optimally a power of two] even if it's not bound to one by the hardware/sound interface it uses, e.g. a host that does its processing freewheeling.

What do you think?

Flo
--
Palimm Palimm!
http://tapas.affenbande.org
Re: [linux-audio-dev] Patch storage for simple DSSI plugins?
On Mon, 17 Apr 2006 12:08:19 +0100 Gordonjcp <[EMAIL PROTECTED]> wrote:
> What's the consensus? Implement patch storage for plugins that don't
> really have that many controls, or just let the host worry about it?

Dunno if there's a consensus, but i'd say let the host worry about it. There are many plugins where patches aren't really necessary (simple stuff like e.g. my dssi_convolve).

Flo
--
Palimm Palimm!
http://tapas.affenbande.org
Re: [linux-audio-dev] Re: GPL Audio Hardware
On Tue, 04 Apr 2006 15:10:36 -0400 Lee Revell <[EMAIL PROTECTED]> wrote: > On Tue, 2006-04-04 at 12:27 +0200, Alfons Adriaensen wrote: > > For me: > > > > * operate at 48 and 96 kHz. > > Many users also demand 44.1 support, although I don't quite understand > why. errm, to be able to play audio cd's without resampling? Flo -- Palimm Palimm! http://tapas.affenbande.org
Re: [linux-audio-dev] Re: MusE 0.8.1 released
On Tue, 28 Mar 2006 18:43:06 +0100 Robert Jonsson <[EMAIL PROTECTED]> wrote:
> On Tuesday 28 Mar 2006 10:13, Florian Schmidt wrote:
> <...>
> > thing and now it _seems_ to build. We'll see..
> >
> > Flo
>
> Just curious, did it work out?

Nope:

make[4]: Entering directory `/home/tapas/source/build_stuff/muse-0.8.1/muse'
if g++ -DHAVE_CONFIG_H -I. -I. -I.. -Imidiedit -Iarranger -Iliste -Iwidgets -Imixer -Idriver -Iwaveedit -Implugins -Iinstruments -DINSTPREFIX=\"/usr/local\" -g -fno-exceptions -Wall -W -D_GNU_SOURCE -D_REENTRANT -DQT_CLEAN_NAMESPACE -DQT_NO_COMPAT -I.. -I../muse/widgets -I/usr/share/qt3/include -I.. -I../synti -I../muse/widgets -DQT_SHARED -DQT_THREAD_SUPPORT -DQT_PLUGIN -I/usr/include/alsa -I/usr/local/include/lash-1.0 -g -O2 -MT app.o -MD -MP -MF ".deps/app.Tpo" -c -o app.o app.cpp; \
then mv -f ".deps/app.Tpo" ".deps/app.Po"; else rm -f ".deps/app.Tpo"; exit 1; fi
widgets/canvas.h:93: warning: unused parameter 'item'
widgets/canvas.h:111: warning: unused parameter 'item'
widgets/canvas.h:111: warning: unused parameter 'n'
widgets/canvas.h:111: warning: unused parameter 'pt'
/usr/share/qt3/include/qtooltip.h:86: warning: 'class QToolTip' has virtual functions but non-virtual destructor
midiedit/drumedit.h:61: warning: 'class DHeaderTip' has virtual functions but non-virtual destructor
./synth.h:75: warning: 'class SynthIF' has virtual functions but non-virtual destructor
./synth.h:173: warning: 'class MessSynthIF' has virtual functions but non-virtual destructor
/usr/share/qt3/include/qnetworkprotocol.h:58: warning: 'class QNetworkProtocolFactoryBase' has virtual functions but non-virtual destructor
/usr/share/qt3/include/qfiledialog.h:78: warning: 'class QFilePreview' has virtual functions but non-virtual destructor
app.cpp: In member function 'void MusE::toplevelDeleted(long unsigned int)':
app.cpp:1642: warning: format '%x' expects type 'unsigned int', but argument 2 has type 'long unsigned int'
app.cpp: At global scope:
app.cpp:1766: warning: unused parameter 'e'
app.cpp: In member function 'void MusE::lash_idle_cb()':
app.cpp:2730: error: jump to case label
app.cpp:2720: error: crosses initialization of 'int ok'
app.cpp:2719: error: crosses initialization of 'const char* name'
app.cpp:2736: error: jump to case label
app.cpp:2720: error: crosses initialization of 'int ok'
app.cpp:2719: error: crosses initialization of 'const char* name'
app.cpp:2743: error: jump to case label
app.cpp:2720: error: crosses initialization of 'int ok'
app.cpp:2719: error: crosses initialization of 'const char* name'
make[4]: *** [app.o] Error 1
make[4]: Leaving directory `/home/tapas/source/build_stuff/muse-0.8.1/muse'
make[3]: *** [all-recursive] Error 1
make[3]: Leaving directory `/home/tapas/source/build_stuff/muse-0.8.1/muse'
make[2]: *** [all] Error 2
make[2]: Leaving directory `/home/tapas/source/build_stuff/muse-0.8.1/muse'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/home/tapas/source/build_stuff/muse-0.8.1'
make: *** [all] Error 2

Looks like Frieder's patch might fix it.. Testing..

Flo
--
Palimm Palimm!
http://tapas.affenbande.org
Re: [linux-audio-dev] Re: MusE 0.8.1 released
On Tue, 28 Mar 2006 10:44:08 +0200 Florian Schmidt <[EMAIL PROTECTED]> wrote:
> I _do_ have cvs lash installed and debian's ladcca package, too. I
> assumed that because of the differing naming schemes i wouldn't run
> into trouble. Seems i was wrong.

Oh well, it seems it was a problem with debian's libfluidsynth package. Running

./configure --disable-fluidsynth --enable-lash

seems to change something about it, as configure now fails with

checking for SNDFILE... configure: error: need libsndfile >= 1.0.0

but:

~/source/build_stuff/muse-0.8.1$ dpkg -l libsndfile1-dev
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Installed/Config-files/Unpacked/Failed-config/Half-installed
|/ Err?=(none)/Hold/Reinst-required/X=both-problems (Status,Err: uppercase=bad)
||/ Name           Version        Description
+++-==============-==============-==========================================
ii  libsndfile1-de 1.0.15-1       Library for reading/writing audio files

Anyways, Thorsten Willms mentioned the

export PKG_CONFIG=/usr/bin/pkg-config

thing and now it _seems_ to build. We'll see..

Flo
--
Palimm Palimm!
http://tapas.affenbande.org
Re: [linux-audio-dev] Re: MusE 0.8.1 released
On Tue, 28 Mar 2006 00:31:03 -0500 Dave Robillard <[EMAIL PROTECTED]> wrote:
> > What lash is this anyway? Is it the modern, useful, actually handy for
> > stuff lash as in http://www.nongnu.org/lash/ or the ladcca from dawn
> > of time one?
>
> 0.5.0+ versions of LASH do not contain the phrase "ladcca" in any way
> (including filenames).
>
> From a quick glance, neither does Muse 0.8.1, so I would assume this is
> a system configuration problem.

I _do_ have cvs lash installed and debian's ladcca package, too. I assumed that because of the differing naming schemes i wouldn't run into trouble. Seems i was wrong.

Flo
--
Palimm Palimm!
http://tapas.affenbande.org
Re: [linux-audio-dev] [Announce] MusE 0.8.1 released
On Mon, 27 Mar 2006 22:52:49 +0100 Robert Jonsson <[EMAIL PROTECTED]> wrote:
> This release note is for MusE 0.8.1.
> It is basically a bug fix release for a note-off bug that crept into 0.8.
> [Known issues]
> See the errata section on the homepage for the latest:
> http://www.muse-sequencer.org/wiki/index.php/Errata0.8

Hmm, i don't know what's up with that, but ./configure says "build without lash support" and later on it fails:

/bin/sh ../../libtool --tag=CXX --mode=link g++ -g -fno-exceptions -Wall -W -D_GNU_SOURCE -D_REENTRANT -DQT_CLEAN_NAMESPACE -DQT_NO_COMPAT -I../.. -I../../muse/widgets -I/usr/share/qt3/include -I.. -I../../synti -I../../muse/widgets -DQT_SHARED -DQT_THREAD_SUPPORT -DQT_PLUGIN -fPIC -O3 -ffast-math -fno-exceptions -g -O2 -o fluidsynth.la -rpath /usr/local/lib/muse/synthi -module -avoid-version fluidsynti.lo fluidsynthgui.lo fluidsynthguibase.lo moc_fluidsynthgui.lo ../libsynti/libsynti.la -lfluidsynth -lasound -lm -ldl -L/usr/share/qt3/lib -lqt-mt -lqui
grep: /usr/lib/libladcca.la: No such file or directory
/bin/sed: can't read /usr/lib/libladcca.la: No such file or directory
libtool: link: `/usr/lib/libladcca.la' is not a valid libtool archive
make[5]: *** [fluidsynth.la] Error 1
make[5]: Leaving directory `/home/tapas/source/build_stuff/muse-0.8.1/synti/fluidsynth'
make[4]: *** [all] Error 2
make[4]: Leaving directory `/home/tapas/source/build_stuff/muse-0.8.1/synti/fluidsynth'
make[3]: *** [all-recursive] Error 1
make[3]: Leaving directory `/home/tapas/source/build_stuff/muse-0.8.1/synti'
make[2]: *** [all] Error 2
make[2]: Leaving directory `/home/tapas/source/build_stuff/muse-0.8.1/synti'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/home/tapas/source/build_stuff/muse-0.8.1'
make: *** [all] Error 2

configure: WARNING: LASH support is disabled
[snip]
configure: MusE configured
  using rtcap:         no
  LASH support:        no
  setuid root install: no
  setuid root build:   no
  VST/win support:     no
  jade:                jade
  doxygen:             /usr/bin/doxygen
  graphviz:            no
  perl:                /usr/bin/perl
  treeviews in doxygen html output: yes
  C++ compiler:        g++
  optimizing:          no
  debug:               no
  optimise for arch:   none
  installation prefix: /usr/local

Software synths
---
  FluidSynth:          yes

System is debian unstable (gcc 4.0.3)..

Flo
--
Palimm Palimm!
http://tapas.affenbande.org
[linux-audio-dev] Re: [linux-audio-user] [ANN] Kontroll
On Mon, 27 Mar 2006 00:49:04 +0200 Florian Schmidt <[EMAIL PROTECTED]> wrote: > http://tapas.affenbande.org/?page_id=42 Oh and i forgot to ask a question: What is the canonical way for a gtk app to receive hotkey press events regardless of window focus? Flo -- Palimm Palimm! http://tapas.affenbande.org
Re: [linux-audio-dev] Linux Audio Conference 2006: Register now!
On Fri, 24 Mar 2006 00:24:07 +0100 Frank Neumann <[EMAIL PROTECTED]> wrote:
> Hi all,
>
> this mail is just to inform you that registration for the 4th International
> Linux Audio Conference (or "LAC2006" for short), April 27th-30th, in
> Karlsruhe, Germany, is possible as of now by visiting our web page at
>
> http://lac.zkm.de
>
> and clicking on the link "Registration" on the left.

Hi, the "profession" part of the registration says "multiple checks are ok", but they aren't, as the radio buttons allow only one selection.

Regards,
Flo
--
Palimm Palimm!
http://tapas.affenbande.org
Re: [linux-audio-dev] OSS, Line in directly to Line out?
On Wed, 15 Mar 2006 10:17:49 +0100 "Tobias Scharnberg" <[EMAIL PROTECTED]> wrote:
> Hello List,
> this might really be a dumb question, but anyway: When I have an
> audio source on Line-In or MIC, what do I have to do to directly
> output it to Line-Out?
>
> Is it possible to directly put it through by using /dev/mixer? Or do
> I have to record the Line-In audio stream into a buffer and then read
> from the buffer for output? At least duplex capability is given in my
> device!
>
> And: I really can't use ALSA for that device, which is a shame.

It depends on whether the soundcard supports this. Usually on consumer grade hw you do have the option to route the analog line input directly to the analog line output. I just don't have an idea how to do this with OSS. In ALSA you just go to the playback page of the mixer and pull up the line-in fader there (which is distinct from the line-in fader on the recording page of the mixer, which determines the level of the input when actually recording from that line in).

Flo
--
Palimm Palimm!
http://tapas.affenbande.org
Re: [linux-audio-dev] a bit off topic: GUI-lib-programming (how does it usually work?)
On Mon, 6 Mar 2006 16:34:05 +0100 (CET) Julien Claassen <[EMAIL PROTECTED]> wrote:
> Hi!
> Florian: What can such a lib do? I already have toggle-buttons, progressbars,
> labels and I'm working on text-entry-fields. Next thing is menus and
> multiple-choice-buttons (radio-buttons, lists) and sliders. That is all stuff
> I can imagine.

Hi Julien, i kinda hijacked your thread. I was wondering whether such a lib generally existed, because it would make programming apps for all kinds of user groups easier, as one could create different interfaces from the same interface description. I didn't want to imply that you should write such a lib. I was just generally interested in the subject and wondered how many typical day-to-day UI usecases could be abstracted.

Regards,
Flo
--
Palimm Palimm!
http://tapas.affenbande.org
Re: [linux-audio-dev] a bit off topic: GUI-lib-programming (how does it usually work?)
On Sun, 5 Mar 2006 17:15:04 +0100 (CET) Julien Claassen <[EMAIL PROTECTED]> wrote:
> Hi!
> First of all: Thanks for your prompt responses.
> Second: the lib is written in C++. Is it ok for my toggle (on/off) buttons to
> just have a function that returns either 1/0 (true/false)?
> Third: Chris: This lib will be ncurses-based. But I made some special
> adjustments for cursor movement, and generalised how things look. I also
> predefined the keys to move/activate/etc. It should be usable for blind people
> and perhaps people with other disabilities. First thing in mind was of course
> audio software. If you're interested in this, or if anyone else is and feels
> it's not a topic for lad, just post back by private mail.
> Kindest regards
> Julien

I always wondered whether it was possible to do something like a generic/abstract UI thing. You bind yourself to ncurses at the moment, but wouldn't it be cool if the same code could sit on top of either ncurses or X/Gtk/Qt, or could even be communicated via an audio interface with the user talking? I doubt though that all user interface aspects can be generalized in this way, but it would already be cool to get as large a subset as possible.

All X gui stuff is basically representable on an ncurses screen as long as there's no real graphical stuff involved (the typical preferences dialog serves as a good example: just a bunch of checkboxes, comboboxes and ok and cancel buttons). I can even imagine how this could be presented to the user via a voice interface, though this is more difficult to get right.

The most obvious example where this scheme would fail would be something like the GIMP. Operating on a per-RGB-pixel basis is very difficult to translate to ncurses or voice. Something like ardour would be almost equally tough, as its interface is highly graphical (at least the editor and mixer windows; all the preferences stuff could again be dealt with).

Anyone know if such a thing exists yet? Or any thoughts on the matter?

Flo
--
Palimm Palimm!
http://tapas.affenbande.org
[linux-audio-dev] Re: [Jackit-devel] [ANN] das_watchdog 0.0.1 and jack_capture v0.2.3
On Sun, 12 Feb 2006 20:46:08 -0800 (PST) "Kjetil S. Matheussen" <[EMAIL PROTECTED]> wrote: > Das_Watchdog > > > ABOUT > - > Das_Watchdog is a program heavily and shamefully inspired by the > rt_watchdog program made by Florian Schmidt: > http://tapas.affenbande.org/?page_id=38 Hehe, why shamefully? This is open source, baby. So i'm glad there's some alternative to my messy code ;) And btw: the two programs are still a bit different. rt_watchdog is a daemon. I have wondered about how to make it known to the user that it has kicked in. The only solution i found was to write into the logs. Opening an xwindow is an interesting solution. Does linux maybe even have a standardized way for this kinda stuff? > However, this one has some improvements: > > 1. It works with 2.4 kernels as well as 2.6. (well, at least I think it > works with 2.6...) > 2. Instead of permanently setting all realtime processes to run > non-realtime, das_watchdog only sets them temporarily. > 3. When the watchdog kicks in, an X window should pop up that tells you > what's happening. (just close it after reading the message). > > > INSTALLING > -- > make > cp das_watchdog /usr/local/sbin/ > echo '/usr/local/sbin/das_watchdog & >/dev/null' >>/etc/rc.local This assumes an initscript style that's not used on all linux systems. > reboot Also i wonder: Is it safe to simply use a static int as "event counter"? Might this not fail on SMP boxes? I think i make a similar mistake by using a volatile int (not as a counter, just as an exit state indicator) instead. Any gurus care to comment? Flo -- Palimm Palimm! http://tapas.affenbande.org
Re: [linux-audio-dev] xt2 coming to linux
On Fri, 27 Jan 2006 10:02:06 -0800 [EMAIL PROTECTED] wrote: > I would do it so that I have a potentially viable alternative to the > current state of affairs, which is that I boot windows to do music. > > eXT is a decent program. The reason we don't get commercial apps on Linux > is because no one buys them. Linux folks want everything free. > > I am as big a believer in Open Source as anyone else here. But I am also > a pragmatist, and until such time as a viable free alternative exists, I > have to use commercial apps. Given that, would I rather use and support a > commercial app that runs on Windows only or one that runs on Linux, too? > > If you can't cope with the idea that someone makes a living doing > software, don't buy eXT, don't help Jorgen, and just ignore it. I, for > one, will do anything I can to help him make Linux into a competitive > migration path for Windows users. Oh i can live perfectly well with the idea of people making a living doing linux software. I myself code linux apps for money. But i have a problem with opening parts of it as open source in the hope that users fill in the blanks. Users that have to pay for the software in the first place. Well you might argue that you can get the open source part free. Yeah, but it's useless without the pay package. But this is just my personal opinion. Besides. Depending on the app design adding jack support is dead simple. Have fun, Flo -- Palimm Palimm! http://tapas.affenbande.org
Re: [linux-audio-dev] xt2 coming to linux
On Fri, 27 Jan 2006 08:29:51 -0800 [EMAIL PROTECTED] wrote: > in another thread, Jorgen said that the input/output part of eXT on Linux > will be Open Source, so the JACK wizards can JACKify it as soon as it is > released. Why would anyone do this? To generate more revenue for the author? This is not really how open source works. Well, people might do this to help out their fellow users, but for me above would leave a bitter taste. Flo -- Palimm Palimm! http://tapas.affenbande.org
Re: [linux-audio-dev] A sequencer example
On Sun, 15 Jan 2006 10:46:14 +0100 [EMAIL PROTECTED] wrote: > Does someone have a very simple sequencer example? > Something sending one note in 4/4? > I don't get along with the timing of the alsa sequencer. > I hope you can help me. > Scar Hi, i don't know whether you want to send midi events directly or schedule them via alsa_seq. In the former case, have a look at my small test programs in this tarball http://affenbande.org/~tapas/midi_timer-1.tgz There's two programs in that tarball, one using the RTC for timing and the other using the system timer and usleep() (the RTC based one does achieve better accuracy). Both programs simply try to send a stream of note-on events at very regular intervals. If you want to use alsa_seq's queues to schedule events for delivery at a later time, you might have a look at rosegarden. Have fun, Flo -- Palimm Palimm! http://tapas.affenbande.org
Re: [linux-audio-dev] Alsa device to Jack ports?
On Wed, 04 Jan 2006 17:42:44 + Melanie <[EMAIL PROTECTED]> wrote: > Been there, done that. > > IT actually enumerates _hardware_ sound cards - nothing in .asoundrc > is recognized by this app. Broken app. Every sane ALSA app should use the pcm device "default" by default and should allow the user to enter any pcm device string. It might additionally provide a list of devices from which the user might choose (for convenience). Write a bug report ;) Flo -- Palimm Palimm! http://tapas.affenbande.org
Re: [linux-audio-dev] Audio/Midi system - RT prios..
On Fri, 30 Dec 2005 17:04:24 +0100 Florian Schmidt <[EMAIL PROTECTED]> wrote: > I also stumbled across some problems with sleep() and especially waking > up when the sleep time has expired in the course of writing my > rt_watchdog program. Sometimes the high prio SCHED_FIFO thread wasn't > woken up as long as a lower SCHED_FIFO prio thread hugged the cpu even > when the sleep time of the high prio thread was long expired.. Ingo told > me back then that there's extra kernel threads for the timing subsystem > which need to be setup to high prios too for this to work correctly. > Haven't really investigated further into this. > > I need to write another small test app that uses sleep based timing and > a high prio, too, to drive ughsynth. Will report what results i get. Ok, as Ingo has told me before (and which i just remembered again today): To make sure threads actually do get woken up after their sleep() time elapses, one needs to make the softirq-timer/0 kernel thread (only on -rt systems) high priority, too (higher than jack). I actually implemented a small sleep() based midi note generator (similar to the rtc based one) and the results i get are a little worse than with the rtc based timer, but not by thaaat much:

diff: 6047
diff: 5999
diff: 5999
adjusted midi event [frame offset >nframes] - handed to next period
diff: 5999
diff: 6048
diff: 5998
diff: 6000
diff: 6047
diff: 5999
diff: 5999
diff: 5999
diff: 6047
diff: 5999
diff: 6000
diff: 5999
diff: 6047

That's in the 1-2% jitter range which is still fine for me. Rosegarden on 2.6.14 vanilla and with the adjusted softirq-timer/0 kernel thread works a lot better than w/o the adjustment (naturally) when using the system timer as timing source. So all -rt users beware, make your softirq-timer/0 thread high prio, too :) Benefits seq24, too.
Regards, Flo P.S.: if you want to make your own experiments, here's the updated tarball with the sleep() and rtc based test note generators (they only produce note-on events though): http://affenbande.org/~tapas/midi_timer-0.tgz -- Palimm Palimm! http://tapas.affenbande.org
Re: [linux-audio-dev] [ANN] sverb 0.9
On Tue, 3 Jan 2006 15:44:12 +0100 (CET) Cedric Roux <[EMAIL PROTECTED]> wrote: > Hello Linux Audio People, > > sverb is a CFDN order 15 reverb. > It is available at: > http://sed.free.fr/sverb > > 0.9 is the first public release. > > Any help/hacking/whatever is very welcome. build failure reports, too? Here's one in any case:

gcc -Wall -O3 -ffast-math -fomit-frame-pointer `pkg-config gtk+-2.0 --cflags` -o sverb_gui.o -c sverb_gui.c
sverb_gui.c: In function 'audio_ready':
sverb_gui.c:124: error: invalid storage class for function 'impulse_play_callback'
sverb_gui.c:125: error: invalid storage class for function 'alert'
sverb_gui.c:126: warning: implicit declaration of function 'impulse_play_callback'
sverb_gui.c:127: warning: implicit declaration of function 'alert'
sverb_gui.c: At top level:
sverb_gui.c:331: warning: conflicting types for 'alert'
sverb_gui.c:331: error: static declaration of 'alert' follows non-static declaration
sverb_gui.c:127: error: previous implicit declaration of 'alert' was here
sverb_gui.c: In function 'file_event':
sverb_gui.c:365: warning: implicit declaration of function 'strlen'
sverb_gui.c:365: warning: incompatible implicit declaration of built-in function 'strlen'
sverb_gui.c:370: warning: implicit declaration of function 'strcpy'
sverb_gui.c:370: warning: incompatible implicit declaration of built-in function 'strcpy'
sverb_gui.c: In function 'do_load':
sverb_gui.c:520: warning: incompatible implicit declaration of built-in function 'strcpy'
sverb_gui.c: In function 'do_save':
sverb_gui.c:611: warning: incompatible implicit declaration of built-in function 'strcpy'
sverb_gui.c: In function 'update_menu_callback':
sverb_gui.c:637: error: invalid storage class for function 'update_impulse_callback'
sverb_gui.c:638: warning: implicit declaration of function 'update_impulse_callback'
sverb_gui.c: In function 'play_menu_callback':
sverb_gui.c:643: error: invalid storage class for function 'impulse_play_callback'
sverb_gui.c: In function 'preset_callback':
sverb_gui.c:694: warning: implicit declaration of function 'strcmp'
sverb_gui.c: At top level:
sverb_gui.c:878: warning: conflicting types for 'update_impulse_callback'
sverb_gui.c:878: error: static declaration of 'update_impulse_callback' follows non-static declaration
sverb_gui.c:638: error: previous implicit declaration of 'update_impulse_callback' was here
sverb_gui.c: In function 'update_impulse_callback':
sverb_gui.c:886: error: invalid storage class for function 'impulse_play_callback'
sverb_gui.c: At top level:
sverb_gui.c:904: warning: conflicting types for 'impulse_play_callback'
sverb_gui.c:904: error: static declaration of 'impulse_play_callback' follows non-static declaration
sverb_gui.c:126: error: previous implicit declaration of 'impulse_play_callback' was here
In file included from sverb_gui.c:1371:
help_gui.h: In function 'sverb_gui_init':
help_gui.h:3: warning: missing sentinel in function call
help_gui.h:6: warning: missing sentinel in function call
help_gui.h:9: warning: missing sentinel in function call
help_gui.h:12: warning: missing sentinel in function call
help_gui.h:15: warning: missing sentinel in function call
help_gui.h:18: warning: missing sentinel in function call
help_gui.h:21: warning: missing sentinel in function call
help_gui.h:24: warning: missing sentinel in function call
help_gui.h:27: warning: missing sentinel in function call

...plus more of these. Flo -- Palimm Palimm! http://tapas.affenbande.org
Re: [linux-audio-dev] Audio/Midi system - RT prios..
On Sun, 1 Jan 2006 13:18:09 +0100 [EMAIL PROTECTED] wrote: > > in a low latency *live* system, "timing" doesn't really exist outside of > > the current period. there is no concept of "when" that exists beyond the > > end of the current period. > to remove jitter i would delay all events i receive during one period > calculation by one period. so exact timestamping is very vital. > > there are only 2 things a live setup can do with an event received > during calculating the current audio buffer. > 1. play as fast as possible... results in midievents jittering to period > boundaries. > 2. add some fixed delay so that if the events were received in 10 sample > clocks distance they are injected into the system with a 10 sample time > distance. Of course in my original post i assumed that softsynths use this second scheme (with exactly one period extra delay), otherwise all the hassle with midi thread priorities would be void anyways: "Anyways, one more thing to note is for this to work nicely, the softsynth needs to have an extra midi handling thread that is also running with a priority in the 90s range, so it can timestamp the event properly when it arrives." "The interesting number is the "diff" output as it tells us the difference of the previous midi event timestamp to the current one. The "next" field is the offset into the currently to-be-processed period." Only implicitly so, but let's assume the scheme you mentioned as given from now on. Paul's post was a bit ambiguous though i must admit. Flo -- Palimm Palimm! http://tapas.affenbande.org
Re: [linux-audio-dev] Audio/Midi system - RT prios..
On Fri, 30 Dec 2005 19:01:46 +0100 Werner Schweer <[EMAIL PROTECTED]> wrote: > higher priority thread can interrupt lower priority threads. What do > you gain if the soundcard can interrupt the jack thread? I believe > it does not matter. Midi input is generating IRQ's, too (at least it appears so by watching /proc/interrupts with midi activity only (i.e. hook up your midi in to a midi monitor, as it seems without connections ALSA doesn't bother about the MIDI at all)). So having the soundcard IRQ handler thread prio higher than jackd makes sense to get stable midi input timing. > Interrupt routines on a well behaved system are using only some > microseconds so it should not matter at all for audio purposes. > Or do i miss something here? I'm not sure. But experience shows (to me at least) that a -rt kernel, where i can make i.e. the hard disk controller IRQ's lower prio than jackd and the soundcard irq, handles additional load on the system better than a non -rt kernel without this tuning. > > It is very useful to be able to do other stuff while audio/midi is > > working uninterrupted. I got used to be able to compile a kernel > > alongside running jackd with a periodsize of 32 or 16 frames :) which > > means, i can play e.g. guitar while waiting for the damn compile to > > finish. > > 32 or 16 frames is IMHO insanely low. Lets assume your keyboard is > only 3.5m away from the drummer, you are about 10msec out of sync, which > translates to about 256 frames. This works reliably on a vanilla kernel > whatever you are doing in the background. It didn't for me. Vanilla generally is a bit more prone to xruns than -rt is, even at large periodsizes (> 64 frames). But if its good enough for you.. ;) I suppose it also depends on what exact hw you use. > An interesting question is what max. latencies are accepted for real live > situations? Well, we had this discussion earlier :) Always keep in mind that latencies are accumulative and people are different.
I like to use 32 frames when playing my guitar through the computer, although 64 frames is good enough, too. 128 or 256 frames definitely start to feel weird, especially when effects add additional latency. Keep in mind that i additionally run around in my room :) So the distance from the speakers produces additional latency. > I can comfortably play keyboard at 20msec latency. > Something really bad is a timing _jitter_ of midi events. For > some drumloops you can hear a jitter of 2ms or lower. Latencies are > not so important for me but low jitter is. 20ms would be way too much for me. I agree though that midi jitter is also bad :) Flo -- Palimm Palimm! http://tapas.affenbande.org
Re: [linux-audio-dev] Audio/Midi system - RT prios..
On Fri, 30 Dec 2005 18:40:20 -0500 Lee Revell <[EMAIL PROTECTED]> wrote: > The relative priorities of the JACK and soundcard IRQs really don't > matter because they never contend - one is woken up by the other. This is true for the audio only case. Imagine for now that MIDI activity is handled by IRQ's, too. Let's further assume jackd's prio is higher than the soundcard IRQ (which is there for both audio and midi in this scenario). Ok, some audio irq has happened and jackd is doing its thing. Now a soundcard IRQ is generated for an incoming midi event. As the soundcard handler thread prio is lower than jackd's, it will not get to run until jackd's process loop is finished. All of this depends on whether physical port midi activity is really handled by IRQ's, too. Anyone know more? Flo -- Palimm Palimm! http://tapas.affenbande.org
Re: [linux-audio-dev] Audio/Midi system - RT prios..
On Fri, 30 Dec 2005 20:54:53 +0100 "Ralf Beck" <[EMAIL PROTECTED]> wrote: > Suppose a jack thread is running and your midiin device irq comes in but not > through, > because the device's irq thread has lower priority and does not get > scheduled! This is an interesting question: How is midi activity on physical ports handled? Is it polled? Or is the soundcard IRQ used for this, too? In the latter case the soundcard IRQ should definitely have a higher prio than all jack stuff. Flo -- Palimm Palimm! http://tapas.affenbande.org
Re: [linux-audio-dev] Audio/Midi system - RT prios..
On Fri, 30 Dec 2005 15:17:04 + GMT "Chris Cannam" <[EMAIL PROTECTED]> wrote: > - ALSA sequencer can sync to RTC, but the > associated module (snd-rtctimer) appears to hang > some kernels solid when loaded or used. I don't have > much information about that, but I can probably find > out some more. Yeah, i got a nice and juicy BUG in it (see below). So this is what kills rosegarden regularly here when run with RTC timing source. I'm not sure this is ALSA's fault, though, might be the -rt kernel, too. But nonetheless after this happens rosegarden is hung and syslog tells me "rtc: lost some interrupts at 1024hz" over and over until infinity (or reboot):

Dec 30 17:30:27 mango kernel: BUG at include/linux/timer.h:83!
Dec 30 17:30:27 mango kernel: [ cut here ]
Dec 30 17:30:27 mango kernel: kernel BUG at include/linux/timer.h:83!
Dec 30 17:30:27 mango kernel: invalid operand: [#1]
Dec 30 17:30:27 mango kernel: PREEMPT
Dec 30 17:30:27 mango kernel: Modules linked in: snd_rtctimer snd_seq_dummy snd_seq_oss snd_seq_midi snd_seq_midi_event snd_seq realtime iptable_nat ipt_addrtype ipt_state iptable_filter agpgart snd_intel8x0 usb_storage scsi_mod ohci_hcd usbcore ipt_MASQUERADE ip_nat ip_tables ip_conntrack snd_ice1712 snd_ice17xx_ak4xxx snd_ak4xxx_adda snd_cs8427 snd_i2c snd_mpu401_uart bsd_comp ppp_deflate zlib_deflate ppp_async ppp_generic slhc crc_ccitt sis900 mii crc32 snd_cs46xx gameport snd_rawmidi snd_seq_device snd_ac97_codec snd_ac97_bus snd_pcm_oss snd_mixer_oss snd_pcm snd_timer snd soundcore snd_page_alloc
Dec 30 17:30:27 mango kernel: CPU:0
Dec 30 17:30:27 mango kernel: EIP:0060:[]Not tainted VLI
Dec 30 17:30:27 mango kernel: EFLAGS: 00210296 (2.6.15-rc7-rt1)
Dec 30 17:30:27 mango kernel: EIP is at rtc_do_ioctl+0x9c1/0xa00
Dec 30 17:30:27 mango kernel: eax: 0024 ebx: 0001 ecx: 00200246 edx: 0001
Dec 30 17:30:27 mango kernel: esi: 00200202 edi: 00821192 ebp: d680ea40 esp: e45e1d94
Dec 30 17:30:27 mango kernel: ds: 007b es: 007b ss: 0068 preempt: 0001
Dec 30 17:30:27 mango kernel: Process rosegardenseque (pid: 6134, threadinfo=e45e task=dcad58c0 stack_left=7520 worst_left=-1)
Dec 30 17:30:27 mango kernel: Stack: c02e8de1 c02ed6bf 0053
Dec 30 17:30:27 mango kernel: c0331b20 e45e e45e c013953a e45e 0001
Dec 30 17:30:27 mango kernel: 00200246 00200246 d680ea40 c02dda53 c0331b20 7005 f09f7e44
Dec 30 17:30:27 mango kernel: Call Trace:
Dec 30 17:30:27 mango kernel: [] sub_preempt_count+0x1a/0x20 (56)
Dec 30 17:30:27 mango kernel: [] _spin_lock_irqsave+0x23/0x60 (28)
Dec 30 17:30:27 mango kernel: [] rtctimer_start+0x43/0x70 [snd_rtctimer] (40)
Dec 30 17:30:27 mango kernel: [] snd_timer_start1+0x89/0xa0 [snd_timer] (20)
Dec 30 17:30:27 mango kernel: [] snd_timer_start+0xaf/0xe0 [snd_timer] (16)
Dec 30 17:30:27 mango kernel: [] snd_seq_timer_continue+0x41/0x70 [snd_seq] (36)
Dec 30 17:30:27 mango kernel: [] snd_seq_queue_process_event+0x144/0x160 [snd_seq] (16)
Dec 30 17:30:27 mango kernel: [] snd_seq_control_queue+0x57/0xb0 [snd_seq] (32)
Dec 30 17:30:27 mango kernel: [] snd_seq_deliver_single_event+0x181/0x190 [snd_seq] (28)
Dec 30 17:30:27 mango kernel: [] snd_seq_deliver_event+0x42/0xa0 [snd_seq] (52)
Dec 30 17:30:27 mango kernel: [] snd_seq_client_enqueue_event+0x91/0x160 [snd_seq] (28)
Dec 30 17:30:27 mango kernel: [] snd_seq_write+0x16b/0x200 [snd_seq] (44)
Dec 30 17:30:27 mango kernel: [] vfs_write+0xd5/0x1b0 (80)
Dec 30 17:30:27 mango kernel: [] sys_write+0x4b/0x80 (36)
Dec 30 17:30:27 mango kernel: [] syscall_call+0x7/0xb (40)
Dec 30 17:30:27 mango kernel: ---
Dec 30 17:30:27 mango kernel: | preempt count: 0001 ]
Dec 30 17:30:27 mango kernel: | 1-level deep critical section nesting:
Dec 30 17:30:27 mango kernel:
Dec 30 17:30:27 mango kernel: .. [] add_preempt_count+0x1a/0x20
Dec 30 17:30:27 mango kernel: .[<>] .. ( <= _stext+0x3feffde0/0x60)
Dec 30 17:30:27 mango kernel:
Dec 30 17:30:27 mango kernel: --
Dec 30 17:30:27 mango kernel: | showing all locks held by: | (rosegardenseque/6134 [dcad58c0, 1]):
Dec 30 17:30:27 mango kernel: --
Dec 30 17:30:27 mango kernel:
Dec 30 17:30:27 mango kernel: #001: [e1d2d4e8] {&timer->lock}
Dec 30 17:30:27 mango kernel: ... acquired at: snd_timer_start+0x8e/0xe0 [snd_timer]

-- Palimm Palimm! http://tapas.affenbande.org
Re: [linux-audio-dev] Audio/Midi system - RT prios..
On Fri, 30 Dec 2005 11:54:56 -0500 Paul Davis <[EMAIL PROTECTED]> wrote: > you don't know the correct priority to use. i imagine an api along the > lines of: true. > > jack_create_thread (pthread_t*, void* (thread_function)(void*), > void* arg, int relative_to_jack); > > the last argument would specify that the thread should run at, above or > below the jack RT thread(s) by a given amount. typical values would be > +1, 0, -1 etc. Why not simply

/*
 * returns the priority (1-99) of the jack main loop (which is already one
 * above the clients' process() threads) or 0 if not realtime. Clients having
 * a midi handling thread should create it with a priority at least one
 * above the return value of this function.
 */
int jack_get_rt_priority();

Then the app can decide itself about how to create the thread. > > Agreed. why not make it prio 98 by default then? (system timer should > > still be higher i suppose). With a difference of only 10 between main > > jack loop and the watchdog, it might get a little crowded :) > > good point. OTOH, i'm not really sure if 9 priority levels isn't enough. It seems one above jack's main thread should be good enough for most midi purposes. Flo -- Palimm Palimm! http://tapas.affenbande.org
Re: [linux-audio-dev] Audio/Midi system - RT prios..
On Fri, 30 Dec 2005 17:37:13 +0100 Werner Schweer <[EMAIL PROTECTED]> wrote: > its right that MusE uses a RT midi thread to schedule midi > events. ALSA is used only to deliver (route) midi events. > I think this is the easiest possible solution and gives the app > the best control over timing. > Using the ALSA seq api means that ALSA has to operate the RT thread which > only moves the problems to ALSA. This is my understanding also. > The ALSA seq api is from ancient time where no realtime threads were > available in linux. Only a kernel driver could provide usable > midi timing. But with the introduction of RT threads the > ALSA seq api is obsolete IMHO. I wouldn't say obsolete, but IMHO RT thread based midi dispatching is easier to get right. > Midi is synced to audio in MusE by using JACK frame timing to > schedule midi events which is also easy and straightforward. > There is nothing for a user to configure except he changes the > priority of the JACK RT thread. > The priority of the MusE midi RT thread has to be at least one above the > JACK RT priority. The point is that this allows the midi thread > to interrupt the JACK audio process thread which is necessary > to provide acceptable midi timing. Yep. I agree. Is this "one above the JACK RT priority" automated in muse? Your first sentence seems to imply otherwise. It's probably a reasonable approach to do it this way. Although manual user override would be nice, too. > Last note about RT-linux kernels: its not _that_ important. Its > only a micro optimization. A normal recent kernel works pretty well. > If your normal kernel does not operate with sufficient low latencies, > the RT-kernel will most likely also not work. I do not agree. While this is true for an otherwise unloaded system, it is rather easy (on a vanilla kernel) to produce xruns by putting other load on the system. The IRQ prioritization provided by -rt kernels is extremely useful to avoid these.
It is _vital_ to run jackd with a priority higher than those IRQ handlers not doing audio/midi stuff (network, disk, etc). The soundcard IRQ handler must run with a high prio for this to work, too. I'm not all too sure about whether it matters which of the two (jack or soundcard irq) is higher though, as long as both are higher than other irq handlers. It is very useful to be able to do other stuff while audio/midi is working uninterrupted. I got used to being able to compile a kernel alongside running jackd with a periodsize of 32 or 16 frames :) which means i can play e.g. guitar while waiting for the damn compile to finish. Flo P.S.: softsynths need to have their midi thread higher prio than jackd, too, for the flawless midi timing to work. So i suppose it's time for some bug reports to softsynth authors :) -- Palimm Palimm! http://tapas.affenbande.org
Re: [linux-audio-dev] Audio/Midi system - RT prios..
On Fri, 30 Dec 2005 10:41:46 -0500 Paul Davis <[EMAIL PROTECTED]> wrote: > several people have wanted JACK to export a thread create call that > would take care of the RT-ness. that way, if you can run JACK with RT > scheduling, you can run a MIDI thread too, with no extra steps. it would > also be useful for people doing FFT in JACK clients using a separate > thread. actually, with realtime-lsm, there's really no need for this, except for some convenience. Every app can create its own RT threads these days. The 2.4.x capabilities days are (thank gawd) over :) > i don't agree with florian that the MIDI thread should run with higher > priority than the JACK watchdog, btw. i think the watchdog should be > higher than anything else until such a time as the kernel guarantees > "watchdog" functionality itself. Agreed. why not make it prio 98 by default then? (system timer should still be higher i suppose). With a difference of only 10 between main jack loop and the watchdog, it might get a little crowded :) Flo -- Palimm Palimm! http://tapas.affenbande.org
Re: [linux-audio-dev] Audio/Midi system - RT prios..
On Fri, 30 Dec 2005 15:17:04 + GMT "Chris Cannam" <[EMAIL PROTECTED]> wrote: Hi Chris, > > and midi events are not queued > > for future delivery but always delivered immediately. > > but this isn't -- Rosegarden always queues events > from a non-RT thread and lets the ALSA sequencer > kernel layer deliver them. (Thru events are delivered > directly, with potential additional latency because of > the lower priority used for the MIDI thread.) In > principle this should mean that only the priority of > the receiving synth's MIDI thread is significant for > the timing of sequenced events. Hi, i tested Rosegarden running with the system timer as timing source (RTC is a bit broken atm on -rt kernels for me), and i do not get satisfactory results. I used my ughsynth again (which is heavy on the cpu which makes the problems just clearer) with its midi thread at priority 95. Here's example output with rosegarden producing a supposedly steady stream of 16th notes at 120 bpm:

note on, frame_time: 205200106 next event: 744 next event: 746 diff: 5998
note on, frame_time: 205206104 next event: 599 next event: 600 diff: 6042
note on, frame_time: 205212146 next event: 497 next event: 498 diff: 6157
note on, frame_time: 205218303 next event: 510 next event: 511 diff: 6140
note on, frame_time: 205224443 next event: 506 next event: 507 diff: 6145
note on, frame_time: 205230588 next event: 507 next event: 508 diff: 5511
note on, frame_time: 205236099 next event: 898 next event: 899 diff: 6000
note on, frame_time: 205242099 next event: 754 next event: 755 diff: 5998
note on, frame_time: 205248097 next event: 608 next event: 609 diff: 6034
note on, frame_time: 205254131 next event: 498 next event: 499 diff: 6153
note on, frame_time: 205260284 next event: 507 next event: 508 diff: 6141
note on, frame_time: 205266425 next event: 504 next event: 505 diff: 6148
note on, frame_time: 205272573 next event: 507 next event: 509 diff: 5521
note on, frame_time: 205278094 next event: 908 next event: 910 next event: 510

which is again in the range as with my test program and ughsynth having a low midi thread prio. This is clearly audible, too: http://affenbande.org/~tapas/rosegarden_ughsynth.ogg this is the rosegardenfile used for this: http://affenbande.org/~tapas/test16th.rg This would imply to me that either the way the events are scheduled in rosegarden is buggy (unlikely as it works fine when there's less audio load on the system) or that the event queue delivery by ALSA is somehow happening with only SCHED_OTHER priority as well. I have not yet found an option for ALSA to configure this. > - ALSA sequencer uses kernel timers by default and > of course they only run at 100 or 250Hz in many > kernels. In my case i have compiled the kernel to use a system timer frequency of 1000hz. It would be interesting to know though what priority the ALSA event queue handling gets. I also stumbled across some problems with sleep() and especially waking up when the sleep time has expired in the course of writing my rt_watchdog program. Sometimes the high prio SCHED_FIFO thread wasn't woken up as long as a lower SCHED_FIFO prio thread hugged the cpu even when the sleep time of the high prio thread was long expired.. Ingo told me back then that there's extra kernel threads for the timing subsystem which need to be setup to high prios too for this to work correctly. Haven't really investigated further into this. I need to write another small test app that uses sleep based timing and a high prio, too, to drive ughsynth. Will report what results i get. > - ALSA sequencer can sync to RTC, but the > associated module (snd-rtctimer) appears to hang > some kernels solid when loaded or used. I don't have > much information about that, but I can probably find > out some more. I have never bothered to try this either.
> - ALSA sequencer can sync to a soundcard clock, > but this induces jitter when used with JACK and has > caused confusion for users who find themselves > inadvertently sync'd to an unused soundcard (the > classic "first note plays, then nothing" symptom). > The biggest advantage of course is not having to run > an RT MIDI timing thread. My impression is that this > aspect of MusE (which does that, I think) causes > as many configuration problems for its users as using > ALSA sequencer queue timers does for Rosegarden's. > > Any more thoughts on this? From my point of view just setting up an RT midi thread driven by RTC and with a sufficiently high prio for dispatching midi events immediately is the best way. As it seems to work well, at least for my small test case. Further testing needs to be done though. I will report back. I haven't really tried muse. Will do so if i find the time though.. Flo -- Palimm Palimm! http://tapas.affenbande.org
Re: [linux-audio-dev] Audio/Midi system - RT prios..
On Fri, 30 Dec 2005 16:03:44 +0100 Jens M Andreasen <[EMAIL PROTECTED]> wrote: > Flo! > > Is it important for the midi thread priority to be above the soundcard > IRQ, or is it enough to be above jackd? This is not 100% clear to me. I'd figure it should be above soundcard irq, too, just to be safe. I don't know enough about the internals of how and if priority is inherited by threads waiting for IRQ's. The previous posts were also all about midi routing from different apps on the same machine. I do not know how sending/receiving MIDI to physical ports is handled. Maybe the soundcard IRQ does play a role here. But a higher prio thread waiting for a lower prio IRQ will not really cause any troubles from my understanding (it will just work). > How will having several sound/midi cards fit into this scheme? Well, to get several audio cards working you will need to get them to work in a single jack server first, so the part about the jack priority setup still holds. Then again just put all midi stuff again at the higher prios. I don't see much changing here. Flo -- Palimm Palimm! http://tapas.affenbande.org
Re: [linux-audio-dev] Audio/Midi system - RT prios..
On Fri, 30 Dec 2005 01:52:10 +0100 Florian Schmidt <[EMAIL PROTECTED]> wrote:

> On Fri, 30 Dec 2005 00:47:46 +0100
> Florian Schmidt <[EMAIL PROTECTED]> wrote:
>
> [snip]
>
> Hmm,
>
> forget this post :) I need to do some more testing first..

Ok, my synth was buggy (damn copy and paste). Now it works like a charm with a setup as described in my original post. To illustrate the difference a proper priority setup can make, here's debug output of my ughsynth driven by the RTC based midi_timer note generator (links for both programs at the bottom of the mail):

note on, frame_time: 56249918 next event: 574 diff: 6071
note on, frame_time: 56255989 next event: 501 diff: 6144
note on, frame_time: 56262133 next event: 501 diff: 6144
note on, frame_time: 56268277 next event: 501 diff: 6143
note on, frame_time: 56274420 next event: 500 diff: 5500
note on, frame_time: 56279920 next event: 880 diff: 6000
note on, frame_time: 56285920 next event: 736 diff: 6000
note on, frame_time: 56291920 next event: 592 diff: 6054
note on, frame_time: 56297974 next event: 502 diff: 6145
note on, frame_time: 56304119 next event: 503 diff: 6142
note on, frame_time: 56310261 next event: 501 diff: 6144
note on, frame_time: 56316405 next event: 501 diff: 5517
note on, frame_time: 56321922 next event: 898 diff: 6001
note on, frame_time: 56327923 next event: 755 diff: 6000
note on, frame_time: 56333923 next event: 611 diff: 6035
note on, frame_time: 56339958 next event: 502 diff: 6144
note on, frame_time: 56346102 next event: 502 diff: 6143
note on, frame_time: 56352245 next event: 501 diff: 6144

The interesting number is the "diff" output, as it tells us the difference between the previous midi event timestamp and the current one. The "next" field is the offset into the currently to-be-processed period. In the above output the midi handling thread of ughsynth ran with a priority of 59, which is below the jackd stuff on my system (-P 70).
Here's output with the midi handling in ughsynth running at a priority of 95:

note on, frame_time: 71319937 next event: 385 diff: 6000
note on, frame_time: 71325937 next event: 241 diff: 6000
note on, frame_time: 71331937 next event: 97 diff: 6002
note on, frame_time: 71337939 next event: 979 diff: 6000
note on, frame_time: 71343939 next event: 835 diff: 6000
note on, frame_time: 71349939 next event: 691 diff: 6000
note on, frame_time: 71355939 next event: 547 diff: 6000
note on, frame_time: 71361939 next event: 403 diff: 6001
note on, frame_time: 71367940 next event: 260 diff: 6000
note on, frame_time: 71373940 next event: 116 diff: 6001
note on, frame_time: 71379941 next event: 997 diff: 6000
note on, frame_time: 71385941 next event: 853 diff: 6001
note on, frame_time: 71391942 next event: 710 diff: 6000
note on, frame_time: 71397942 next event: 566 diff: 6000
note on, frame_time: 71403942 next event: 422 diff: 6000
note on, frame_time: 71409942 next event: 278 diff: 6001
note on, frame_time: 71415943 next event: 135 diff: 6000
note on, frame_time: 71421943 next event: 1015

The difference now stays within a frame or two of 6000, which at a framerate of 48000 Hz is tightly around 1/(6000/48000) = 8 Hz, which is exactly what the midi note generator is set up to do. The variance is several orders of magnitude lower than in the previous example output above with a midi handling prio of 59, and it does make an audible difference: http://affenbande.org/~tapas/stable_timing.ogg as opposed to: http://affenbande.org/~tapas/unstable_timing.ogg

To summarize, here's how a well tuned -rt system for combined midi/audio usage should look (prioritywise):

99 System timer IRQ (you cannot change this anyways)
98 RTC IRQ
95
 .
 . Midi handling threads of softsynths/midi sequencers (preferably
 .
85
82 Soundcard IRQ
80 Jackd watchdog thread
70 Jackd main loop
69 Jackd client (softsynths/midi sequencers) audio process callbacks
60
 .
 . Other IRQ handlers (disk, network, USB, GFX)
 .
40
 0 (SCHED_OTHER) All other software in the system

Sadly not many app authors are aware of this (as my unsuccessful quest to get stable midi timing with available linux software showed), so i hope this post raises awareness of the issue. It would be ideal if app authors allowed the user to finetune the realtime priorities of each component of their software (well, especially the midi handling part, as the audio processing priorities are determined by what priority jack is given). Here's the software i used for the test:

http://affenbande.org/~tapas/midi_timer.tgz
http://affenbande.org/~tapas/ughsynth-0.0.3.tgz

Regards, Flo

P.S.: i also summarized the results a little bit on this page: http://tapas.affenbande.org/?page_id=40 Please let me know if there are any big errors on that page.

P.P.S.: Additionally to the RTC or sleep() based mechanism (which relies right now on the system timer frequency (which is a mere 250hz in def
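A priority table like the one above can be applied with chrt. The following is only a sketch under stated assumptions: a 2.6 -rt kernel with threaded IRQ handlers whose threads show up as "IRQ-8" (RTC) etc.; the names, IRQ numbers and priorities are system-dependent, so check `ps -eLo pid,rtprio,comm` on your own box first.

```shell
#!/bin/sh
# Sketch: pin the RTC IRQ thread to prio 98 as in the table above.
# "IRQ-8" is the RTC IRQ thread name on my assumed -rt setup.
rtc_pid=$(ps -e -o pid,comm | awk '$2 == "IRQ-8" { print $1 }')
[ -n "$rtc_pid" ] && chrt -f -p 98 "$rtc_pid"

# jackd gets its priority at startup instead of via chrt:
#   jackd -R -P 70 -d alsa
```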
Re: [linux-audio-dev] Audio/Midi system - RT prios..
On Fri, 30 Dec 2005 00:47:46 +0100 Florian Schmidt <[EMAIL PROTECTED]> wrote: [snip] Hmm, forget this post :) I need to do some more testing first.. Flo -- Palimm Palimm! http://tapas.affenbande.org
[linux-audio-dev] Audio/Midi system - RT prios..
Hi, i was wondering: With the new shiny -rt kernels and realtime scheduling available to non root users via the usual mechanisms, there's the possibility of really finetuning an audio/midi system. The main issue i am interested in is the interplay between midi and audio in such a system. How to tune the audio side to get a very reliable system is pretty easy these days, thanks to the great jack audio connection kit, alsa and the new -rt kernels. But now i wonder how midi software fits into this. I'm interested here in the special case of a software sequencer (like e.g. Rosegarden) driving a softsynth (like e.g. om-synth or supercollider3) or whatever. Ok, on a normal audio tuned -rt equipped linux system the SCHED_FIFO priorities which are used for the different components look something like this:

99 - system timer
98 - RTC
81 - soundcard IRQ handler
80 - jack watchdog
70 - jack main loop
69 - jack clients' process loops
50 - the other IRQ handlers

Now i wonder how midi threads would best fit into this scheme. Let's assume our midi sequencer uses either sleep() or the RTC to get woken up at regular intervals, and let's further assume that it properly deals with these timing sources to get relatively jitter free midi output, given that it gets woken up often enough by the scheduler. I further assume that the alsa seq event system is used and midi events are not queued for future delivery but always delivered immediately. All this implies that for midi delivery timing not to be influenced by audio processing on the system (which becomes a problem especially at large buffer sizes, where quite a bit of work is done at a time), all the stuff that handles midi should run with realtime priorities above the jack stuff (i.e. around 90). I wonder whether it also needs to have a higher priority than the soundcard irq handler, too. Does the jackd main loop "inherit" the priority of the soundcard irq handler?
Anyways, one more thing to note: for this to work nicely, the softsynth needs to have an extra midi handling thread that is also running with a priority in the 90s range, so it can timestamp the event properly when it arrives. So i wonder now: Assuming our system is set up as described above and all midi handling is done from threads with sufficiently high priorities not to get disturbed by audio stuff, will the alsa event system play nice? I ask this because i have set up a system as above with a simple midi generator (see code below) and some different softsynths (one of which i have written myself; it does have its midi thread at an appropriate priority. You can get a tarball here: http://affenbande.org/~tapas/ughsynth-0.0.3.tgz Beware, it eats unbelievable amounts of cpu and is in no way considered finished. It just lay around handy for this test ;)). But i still get some regular jitter in my sound. Here's recorded example output (running jackd at a periodsize of 1024; the test notes are produced at a frequency of 8 Hz). First with ughsynth, then with jack-dssi-host hexter.so. The effect is less prominent with hexter, i suppose because the jack load with it is only at 2 or 3% as opposed to ughsynth, which uses 50% here on my athlon 1.2 ghz box. In case you don't hear what i mean: The timing of every ca. 7th or 8th note is a little bit off. http://affenbande.org/~tapas/midi_timing.ogg So i wonder: what's going wrong? Is the priorities setup described above not correct? Is alsa seq handling somehow not done with RT priority? What else could be wrong? Please enlighten me :) And yeah, i do _not_ want to hear about jack midi. It's a good thing, and i'm all for it as it will make at least some scenarios work great (sequencer and softsynth both being jack midi clients), but not all.
Thanks in advance, Flo

midi_timer.cc:

#include <iostream>
#include <string>
#include <cerrno>
#include <cstdio>
#include <cstdlib>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/rtc.h>
#include <alsa/asoundlib.h>

#define RTC_FREQ 2048.0
#define NOTE_FREQ 8.0
#define RT_PRIO 85

int main()
{
    int fd;

    fd = open("/dev/rtc", O_RDONLY);
    if (fd == -1) {
        perror("/dev/rtc");
        exit(errno);
    }

    int retval = ioctl(fd, RTC_IRQP_SET, (int)RTC_FREQ);
    if (retval == -1) {
        perror("ioctl");
        exit(errno);
    }

    std::cout << "locking memory" << std::endl;
    mlockall(MCL_CURRENT);

    // std::cout << "sleeping 1 sec" << std::endl;
    // sleep(1);

    snd_seq_t *seq_handle;
    int err, port_no;

    err = snd_seq_open(&seq_handle, "default", SND_SEQ_OPEN_OUTPUT, 0);
    if (err < 0) {
        std::cout << "error" << std::endl;
        exit(0);
    }

    std::string port_name = "midi_timer";

    // set the name to something reasonable..
    err = snd_seq_set_client_name(seq_handle, port_name.c_str());
    if
Re: [linux-audio-dev] jack_callback <-> rest of the world
On Wed, 7 Dec 2005 09:30:28 +0100 Stéphane Letz <[EMAIL PROTECTED]> wrote:

> jackd (or jackdmp in "synch" mode) where the server waits for all
> clients to finish in a given cycle require the used synchronization
> primitive to have a "wait with time-out" operation. Fifo can do that
> (using poll), Mach semaphore on OSX can do that, but POSIX named
> semaphore not.
>
> Do process shared mutexes/CVs have a "wait with time-out" operation?

Well, there's

int pthread_cond_timedwait(pthread_cond_t *cond, pthread_mutex_t *mutex, const struct timespec *abstime);

(from man "pthread_cond_wait") which probably works with a condition variable/mutex pair in shared memory, too. But i don't know anything about any timing guarantees. I suppose POSIX simply doesn't make any. Anyways, the precision of the timed wait is probably implementation dependent. Regards, Flo -- Palimm Palimm! http://tapas.affenbande.org
Re: [linux-audio-dev] NPTL jack+ardour: large memlock required (was: Jack and NPTL (again?))
On Fri, 25 Nov 2005 11:10:05 +1030 (CST) Jonathan Woithe <[EMAIL PROTECTED]> wrote: > One thing I will try next is recompiling ardour; perhaps there's something > funny there. In any case though, does any of this ring a bell with anyone? Did you try the --unlock jack switch? Flo -- Palimm Palimm! http://tapas.affenbande.org
Re: [linux-audio-dev] LASH and LASH_Terminal client flag problem
On Wed, 23 Nov 2005 16:41:35 +1100 Dave Robillard <[EMAIL PROTECTED]> wrote:

> > This remains though :) So basically apps who save state to files should
> > ignore state files specified on the commandline when in LASH mode,
> > except for those apps that are unable to change the state file selection
> > later on upon user demand (i.e. little terminal helper tools, that don't
> > have a GUI or other means to load a different state file).
>
> If it's specified on the command line, why save it as a key and/or file?
> Pick one. :)

No, the issue is that the user might very well invoke the client with a commandline option specifying a file when initially adding it to the session. Later on he changes his mind and uses the app's menu to load a different one. This is all about apps which can _optionally_ specify a file via commandline (like ermmm, almost every single one) at startup. Then there's conflicting state info in LASH, making the app load the one state file via commandline first and the other a moment later via the restore event.

> I can put something in the docs, but it's a bit obvious and/or app
> dependant. Ignoring some command line arguments is an acceptable
> solution, but so is ignoring the configure key and/or file.

No, ignoring the state file from LASH is IMHO absolutely not an option, as this would then mean the session is not restored in the state which the user saved it in. I'd say apps should rather ignore their commandline option when they made a successful LASH connection right at startup. Example usecase:

- add seq24 and om/om_gtk to a LASH session. load stuff via the respective menus of the apps.
- add jamin to the LASH session and specify a setup file on the commandline
- drats, wrong jamin setup. So user selects a different one from the menu.
- user hits LASH session save in lash_panel, causing ardour to send its session file to LASH and jamin to save its whole setup file into the LASH specified dir.
- user closes LASH session
- user restores it at some later time -> here the jamin setup file which was stored in the lash dir should be the only one getting opened.

Ignoring the lash data is not really an option. Ignoring the commandline option in the first place is one. Regards, Flo -- Palimm Palimm! http://tapas.affenbande.org
Re: [linux-audio-dev] LASH and LASH_Terminal client flag problem
On Tue, 22 Nov 2005 20:01:05 +0100 Florian Schmidt <[EMAIL PROTECTED]> wrote:

> In ardour i try to hack around this by stripping "offending" commandline
> options from argv before passing it to lash_init..

Actually this doesn't work right.

> so the docs should at least point out the problem

This remains though :) So basically apps who save state to files should ignore state files specified on the commandline when in LASH mode, except for those apps that are unable to change the state file selection later on upon user demand (i.e. little terminal helper tools, that don't have a GUI or other means to load a different state file). Flo -- Palimm Palimm! http://tapas.affenbande.org
Re: [linux-audio-dev] LASH and LASH_Terminal client flag problem
On Tue, 22 Nov 2005 15:34:14 +1100 Dave Robillard <[EMAIL PROTECTED]> wrote:

> > this line seems to be the culprit (liblash/loader.c):
> >
> > #define XTERM_COMMAND_EXTENSION "&& sh || sh"
>
> I don't know of anyone else actually using the terminal client stuff
> anyway, so I might just remove that part if there's no objections...

Personally i have no objection to removing the XTERM_COMMAND_EXTENSION. There's another issue though with LASH, which again might be due to my limited understanding of it. Imagine starting a lash client where you specify a state file on the commandline (i.e. someone sent you a synth patch and you want to use it in one of your projects now). Ok, so you start your synth as

mySynth AwesomePatchStateFile

This is the commandline LASH will remember for this client app. Now during the course of the LASH session the user chooses to load a different statefile via the app's menu. Then the user chooses to save the LASH project and then close it (the synth will then save the current statefile to the dir specified by LASH). Now what happens on restore? Well, the client will be started by LASH with the commandline

mySynth AwesomePatchStateFile

and right after starting up it will get a Restore_Data_File (or whatever it was called) event from LASH which tells it to use a different statefile (the one from the LASH project dir). So basically the client loads two statefiles right after each other. Which is a waste of cpu cycles and might generally be a nuisance for all kinds of reasons. In ardour i try to hack around this by stripping "offending" commandline options from argv before passing it to lash_init.. If this is the recommended way of solving the problem, i'd suggest the docs should be updated to reflect this. For different app designs the solutions might of course look different, so the docs should at least point out the problem. Any thoughts? Regards, Flo -- Palimm Palimm! http://tapas.affenbande.org
Re: [linux-audio-dev] LASH and LASH_Terminal client flag problem
On Sun, 20 Nov 2005 16:32:21 +0100 Florian Schmidt <[EMAIL PROTECTED]> wrote: > The problem i see is: when the client in the term exits (either by means > of LASH telling it to, or by sending i.e. a SIGINT), it just drops to a > bash prompt instead of exiting the terminal. Hmm, this line seems to be the culprit (liblash/loader.c): #define XTERM_COMMAND_EXTENSION "&& sh || sh" as this basically makes sure the xterm is not exited. So maybe this is even expected behaviour. Hmm, i simply replaced it with "" and am happy now. Maybe this should be made configurable. Perhaps as an option which can be specified when restoring a session (default should be xterms close after the client in them terminates). Regards, Flo -- Palimm Palimm! http://tapas.affenbande.org
[linux-audio-dev] LASH and LASH_Terminal client flag problem
Hi, in the course of LASH'ifying jack_convolve i stumbled across the LASH_Terminal client flag, which specifies to LASH that the client wants to be run in its own terminal when the session is restored. The code for this is in liblash/loader.c:121. The problem i see is: when the client in the term exits (either by means of LASH telling it to, or by sending i.e. a SIGINT), it just drops to a bash prompt instead of exiting the terminal. For reference i have included the code in question below. I also wonder why it is necessary to start another bash anyways? I tried to remove the extra bash call and use xterm -e command_buffer directly, but then the program doesn't even start correctly. Any other thoughts on this?

static void
loader_exec_program_in_xterm(int argc, char **argv)
{
    size_t command_size;
    char *command_buffer;
    char *xterm_argv[6];

    command_size = get_command_size(argc, argv);
    command_buffer = lash_malloc(command_size);
    create_command(command_buffer, argc, argv);

    xterm_argv[0] = "xterm";
    xterm_argv[1] = "-e";
    xterm_argv[2] = "bash";
    xterm_argv[3] = "-c";
    xterm_argv[4] = command_buffer;
    xterm_argv[5] = NULL;

    /* execute it */
    execvp("xterm", xterm_argv);
    ...

Regards, Flo -- Palimm Palimm! http://tapas.affenbande.org
Re: [linux-audio-dev] Channels and best practice
On Tue, 15 Nov 2005 14:49:01 +0100 Alfons Adriaensen <[EMAIL PROTECTED]> wrote: > The only place where I've seen prefetch used explicitly is in Brutefir's > sse and 3dnow routines which I recently modified for use in one of my own > projects. I think libDSP does prefetch and cache alignment, SIMD, yadayada :) http://libdsp.sourceforge.net/overview.html I don't know though to which degree each one of the functions is optimized. Best to ask Jussi himself (CC'ed) :) Regards, Flo -- Palimm Palimm! http://tapas.affenbande.org
[linux-audio-dev] Re: [linux-audio-user] [ANN] rt_watchdog
On Fri, 4 Nov 2005 20:21:51 +0100 Florian Schmidt <[EMAIL PROTECTED]> wrote: > > Hi, > > here > > http://affenbande.org/~tapas/rt_watchdog.tgz Hi again. Something is fishy. It just stopped working. And i don't know why. Maybe something with kernel timer priorities. I don't know. So better don't use it unless you want to help debugging :) i idle in #lad on irc.freenode.org. Have fun, Flo -- Palimm Palimm! http://tapas.affenbande.org
[linux-audio-dev] [ANN] rt_watchdog
Hi, here

http://affenbande.org/~tapas/rt_watchdog.tgz

you find a small program which acts as a watchdog daemon that kills runaway SCHED_FIFO tasks. It does so by setting up two threads:

- one high priority (99) consumer that runs every 3 seconds
- one low priority (1) producer that runs every second

The producer fills a ringbuffer and the consumer drains it. When the ringbuffer runs empty, a shell script will be run (via the system() function) that tries to change the scheduling policy of all threads in the system from SCHED_FIFO to SCHED_OTHER. The offending task may naturally not run at prio 99, otherwise the watchdog would never get to run. Here's a potential problem: the shell script potentially changes its own scheduling policy to SCHED_OTHER prior to changing the offending runaway task. Ugh! Any hint? Also, all IRQ handler threads have their policy changed as well. Dunno whether that's good or not. For reference i pasted the script and the c file here: unfifo_stuff.sh (the two arguments are the two thread id's of the rt_watchdog threads, as these are not supposed to have their policy changed). There's also a small test program in the tarball (compiles to test_rt) that wastes cycles SCHED_FIFO (locking the system) but exits eventually, so it's quite safe to test the watchdog with it. Maybe someone else finds it useful. Recommendations, tips and critique are all welcome..
unfifo_stuff.sh:

#!/bin/bash
for i in $( ps -eL -o pid ); do
    if [ "$i" != "$1" -a "$i" != "$2" ]
    then
        chrt -o -p 0 $i
    fi
done

rt_watchdog.c:

#include <pthread.h>
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <syslog.h>
#include <time.h>
#include <unistd.h>
#include <linux/unistd.h>

_syscall0(pid_t,gettid)

#include "ringbuffer.h"

/* how long to sleep between checks in the high prio thread */
#define SLEEPSECS 3

/* how long to sleep between writing "alive" messages to the
   ringbuffer from the low prio thread */
#define LP_SLEEPSECS 1

/* the priority of the high prio thread */
#define PRIO 99

/* the priority of the low prio thread */
#define LP_PRIO 1

/* the ringbuffer used to transfer "alive" messages from low prio
   producer to high prio consumer */
jack_ringbuffer_t *rb;

pthread_t low_prio_thread;

/* the thread id's for the low prio and high prio threads. these get
   passed to the unfifo_stuff.sh script to make sure the watchdog
   doesn't repolicy itself to SCHED_OTHER */
pid_t lp_tid;
pid_t hp_tid;

void signalled(int signal)
{
}

volatile int thread_finish;

/* this is the low prio thread. it simply writes to the ringbuffer to
   signal that it got to run, meaning it is still alive */
void *lp_thread_func(void *arg)
{
    char data;
    struct timespec tv;

    lp_tid = gettid();
    /* syslog(LOG_INFO, "lp tid: %i", gettid()); */

    data = 0;
    while (!thread_finish) {
        /* we simply write stuff to the ringbuffer and go back to
           sleeping. we can ignore the return value, cause, when it's
           full, it's ok. the data doesn't have any meaning. it just
           needs to be there. running full shouldn't happen anyways */
        jack_ringbuffer_write(rb, &data, sizeof(data));

        /* then sleep a bit, but less than the watchdog high prio thread */
        tv.tv_sec = LP_SLEEPSECS;
        tv.tv_nsec = 0;
        // sleep(LP_SLEEPSECS);
        nanosleep(&tv, NULL);
    }

    return 0;
}

int main()
{
    pid_t pid, sid;
    int done;
    struct sched_param params;
    char data;
    int err;
    int consumed;
    int count;
    struct timespec tv;
    char unfifo_cmd[1000];

    /* Fork off the parent process */
    pid = fork();
    if (pid < 0) {
        exit(EXIT_FAILURE);
    }
    /* If we got a good PID, then we can exit the parent process. */
    if (pid > 0) {
        exit(EXIT_SUCCESS);
    }

    /* Change the file mode mask */
    umask(0);

    /* Open any logs here */
    openlog("rt_watchdog", 0, LOG_DAEMON);
    syslog(LOG_INFO, "started");

    /* Create a new SID for the child process */
    sid = setsid();
    if (sid < 0) {
        /* Log any failure here */
        syslog(LOG_INFO, "setsid failed. exiting");
        exit(EXIT_FAILURE);
    }

    /* Change the current working directory */
    if ((chdir("/")) < 0) {
        /* Log any failure here */
        syslog(LOG_INFO, "chdir failed. exiting");
        exit(EXIT_FAILURE);
    }

    /* Close out the standard file descriptors */
    close(STDIN_FILENO);
    close(STDOUT_FILENO);
    close(STDERR_FILENO);
    // syslog(LOG_INFO, "closed fd's");

    /* syslog(LOG_INFO, "hp tid: %i", gettid()); */
    hp_tid = gettid();
[linux-audio-dev] Re: [linux-audio-user] jack and setuid
On Wed, 2 Nov 2005 18:46:03 +0100 conrad berhörster <[EMAIL PROTECTED]> wrote: > > Am Mittwoch, 2. November 2005 18:38 schrieb Florian Schmidt: > > > > Just for completeness sake: You can use the realtime lsm for 2.6.13 and > > above, too. I would even recommend it, since it's much less of a hassle > > to setup (rt_limits being the "correct" solution or not). > > > > Flo > > Just to be sure . Do you mean lsm instead of rt_limits or both. > > and do you have a link for the lsm patch. > > sizu c~ > yep, instead of rt_limits (they both achieve the same goal with vastly different approaches). http://sourceforge.net/projects/realtime-lsm/ -- Palimm Palimm! http://tapas.affenbande.org
[linux-audio-dev] Re: [linux-audio-user] jack and setuid
On Wed, 2 Nov 2005 14:25:45 +0100 conrad berhörster <[EMAIL PROTECTED]> wrote: > Am Mittwoch, 2. November 2005 14:02 schrieb Paul Davis: > thanks paul, > > > bash-3.00# chmod ugo+s /usr/local/bin/jackd > > > bash-3.00# exit > > > bash-3.00$ ls -la /usr/local/bin/jackd > > > -rwsr-sr-x 1 root root 206476 2005-11-01 15:23 /usr/local/bin/jackd > > > > this is a really, really, really bad thing to do. > yes, i have read that, because of security. but don't know a better way. > > > there is no reason to > > run jackd as root or set it up as setuid root. you should be using some > > kernel-based technique that allows you to get realtime priviledges > > without being root (capabilities on 2.4 kernels, realtime-lsm for 2.6.12 > > or lower, or the new rtlimits code for 2.6.13 or above). > since i'm using 2.6.14 , you mean set_rtlimits from > http://www.physics.adelaide.edu.au/~jwoithe/set_rtlimits-1.1.0.tgz ? > > but if i run jack as a user, there are no capture ports, and i have tons of > xruns. Just for completeness sake: You can use the realtime lsm for 2.6.13 and above, too. I would even recommend it, since it's much less of a hassle to setup (rt_limits being the "correct" solution or not). Flo -- Palimm Palimm! http://tapas.affenbande.org
Re: [linux-audio-dev] xruns
On Wed, 2 Nov 2005 15:30:49 +0100 conrad berhörster <[EMAIL PROTECTED]> wrote:

> Hello list,
> well i try to understand the reason of xruns. when will they appear?
>
> for me it's curious, that, while copy a big file (> 50MB) or many small
> one, there are xruns. so, it seems, that it has nothing to do with the
> soundcard buffers.
>
> any comments?

Well, yeah. First of all, your question is very imprecise. I will try to guess the blanks:

1) you are probably talking about jackd, as most other alsa apps don't even report their xruns
2) you are probably not running a realtime preemption or other low latency kernel
3) you are not running jack with the realtime flag (-R)

The reason for an xrun is basically: the process consuming/producing audio did not do so fast enough (audio is processed in chunks, and you have the time equivalent to one chunk of audio to produce/consume it). This can have many reasons:

- you ask too much of your computer (like the computations involved are simply too complex). This would produce a constant stream of xruns though. I suppose you probably see much less than 1 per periodsize/samplerate sec.
- this is the more probable reason: some other process on your system kept your audio producing/consuming process from doing its thang.

This second one can be remedied by changing points 2 and 3 above. There are two more potential reasons which i can think of right now:

4) your jack temp dir is not mounted on a tmpfs or shmfs filesystem
5) NPTL hell (google for this one)

Have fun, Flo -- Palimm Palimm! http://tapas.affenbande.org
Re: [linux-audio-dev] jack_callback <-> rest of the world
On Wed, 2 Nov 2005 11:05:34 +0100 Stéphane Letz <[EMAIL PROTECTED]> wrote: > In Jackdmp we have tested 2 system for inter-process synchronization: > fifo (the way it was done in regular jackd) and POSIX named semaphore > (which are built on top of futex on recent system version) > > In both cases, each already running client get access to the > synchronization primitive (fifo or POSIX named sema) defined by a new > coming client. The synchronization primitive is "opened" once when a > new client appears and is "closed" when the client quits. The > synchronization primitive that has to be signaled then depends of the > graph topology. > > > But ISTR that OSX only has named shared futexes (i.e. accessed > > via a file descriptor), and then of course the problem remains. > > On OSX, on can use Mach semaphore (internal and non portable...) > POSIX named semaphore or fifo. > > Stephane What results did you get? Did the semaphore perform better/worse than the fifo? What about pthread condition variables with pshared flag set? I read somewhere it should be implemented by now (at least on 2.6 systems). Flo -- Palimm Palimm! http://tapas.affenbande.org
Re: [linux-audio-dev] jack_callback <-> rest of the world
On Mon, 31 Oct 2005 10:57:46 -0500 Lee Revell <[EMAIL PROTECTED]> wrote: > > Btw: i just discovered that pthread mutexes and condvars can have a > > "process shared" flag which makes it possiblo to synchronize threads > > across processes as it seems. Could be useful for jack, no? > > > > pthread_condvar_setpshared() > > pthread_mutexattr_setpshared() > > > > Or do i misread that manpage? > > What manpage? I don't have those on my system. Hi, i misspelled the first one: pthread_condattr_setpshared is the name. See here, too: http://www.google.de/search?q=pthread_condattr_setpshared SCNR ;) Flo -- Palimm Palimm! http://tapas.affenbande.org
Re: [linux-audio-dev] jack_callback <-> rest of the world
On Sun, 30 Oct 2005 14:14:19 +0100 fons adriaensen <[EMAIL PROTECTED]> wrote:

> On Sun, Oct 30, 2005 at 01:53:48PM +0100, Florian Schmidt wrote:
>
> > Oh i thought i read somewhere that when pthread_cond_wait it is not
> > guaranteed that anyone actually signalled. Will do some more reading.
>
> It can return on unix signals, so you have to test for EINTR.
> I don't think it will wake up unexpectedly otherwise.
>
> I'm thinking of rewriting the whole ITC object so it uses a
> futex instead of the CV (that would also enable it to work
> in shared memory across process boundaries), but then I really
> need a lock free implementation for the linked lists.
> I guess the required primitives are platform dependant.
> Is there some library that provides them?

Btw: i just discovered that pthread mutexes and condvars can have a "process shared" flag which makes it possible to synchronize threads across processes, as it seems. Could be useful for jack, no?

pthread_condvar_setpshared()
pthread_mutexattr_setpshared()

Or do i misread that manpage? Flo -- Palimm Palimm! http://tapas.affenbande.org
Re: [linux-audio-dev] jack_callback <-> rest of the world
On Sun, 30 Oct 2005 13:36:41 +0100 fons adriaensen <[EMAIL PROTECTED]> wrote:

> This can still be so if the access to the shared data structure is
> regulated by the same mutex that protects the condition variable,
> provided that the operation on the shared data is very simple and fast.

True..

> In libclthreads, the operation performed while the mutex is held is
> either incr/decr of a counter, or adding/removing an element at
> the end/start of a linked list. Both operations are so trivial
> that it would be silly to use a separate mutex for them.
> The nice thing about condition variables is that they allow you
> to re-use the mutex in this way, and in fact that's why they
> exist at all.

Yeah, but i figured with RT constraints on the signaller it looked a bit different. Thinking about it a bit more, it seems i was wrong :) As in the other case (a data structure where operations carried out by the signallee might take a long time (the signaller op naturally needs to be short-time RT safe)) there's also nothing gained by separating the two mutexes.

> Thinking a bit further, the conclusion is that the use of lock-free
> data structures is warranted only iff the event passing mechanism
> is lock-free as well, otherwise nothing is gained.

Naturally.

> False wakeups are easy to avoid as well in this way. In libclthreads
> the sender will only signal the single CV iff the change of state of
> the ITC object (all the semas and mailboxes) would trigger some
> action in the receiver, i.e. it checks what the receiver is waiting
> for. In the other case only the state is updated.

Oh, i thought i read somewhere that when pthread_cond_wait returns it is not guaranteed that anyone actually signalled. Will do some more reading. Thanks for your insights, Flo -- Palimm Palimm! http://tapas.affenbande.org
Re: [linux-audio-dev] jack_callback <-> rest of the world
On Sun, 30 Oct 2005 14:03:39 +1100 Dave Robillard <[EMAIL PROTECTED]> wrote:

> On Sun, 2005-30-10 at 00:42 +0200, Richard Spindler wrote:
> > I read the tutorial at http://userpages.umbc.edu/~berman3/ , it uses
> > mutex+condition, is it okay to do this? Are there better ways?
>
> That's a wonderful tutorial... on how NOT to write a Jack client.
> (There's no lock free data structures "in C" or Linux? There's one
> _included with Jack_...)
>
> On mutexes, calling pthread_mutex_trylock in the process thread is okay,
> but pthread_mutex_lock is not. Don't Do That(TM).

The tutorial client can be fixed pretty easily with this (see below), though as the comment says, the code was written when no lockfree ringbuffer implementation existed. This has changed (jack provides one). I know that you know this, David - just restating the obvious for less experienced readers. Of course the tutorial client still needs to be considered broken even with the trylock.

Also, the basic problem of signalling (in this case telling the disk thread that there is work to do) still persists even with lockless ringbuffers. The other conceivable approach would be to make the disk thread wake up regularly and check whether data is available in the ringbuffer. This is nasty, too, and unsuitable for some situations. So there are basically these approaches nowadays:

- using named pipes, with the signaller passing single bytes through the pipe to wake up the signallee that blocks on it. This is really not correct, though it might work ok in practice (see jack/ardour - in jack it is used to work around the fact that there's no inter-process mechanism for this, though futexes could be used, too, if i understand correctly).
- using condition variables.

Btw: i think when using a single mutex only for the condition variable (as opposed to the tutorial client, which uses the same mutex for the signalling _and_ for synchronizing access to its data structure), contention becomes rather improbable, given that the signaller acquires the lock (again, of course, via trylock) right before the pthread_cond_signal/broadcast and releases it right after. Same for the signallee: as pthread_cond_wait releases the mutex while waiting and reacquires it before handing back control, the contention case becomes very improbable when acquiring and releasing the lock right before and after the pthread_cond_wait. Of course with this strategy the case that trylock fails on the condition mutex needs to be handled gracefully (i.e. remembering to try again in the next process callback). Besides the one mutex used only for the condition var, additional mutexes may be needed in this scenario for the shared data structures. Of course from the process callback pthread_mutex_lock is still a no-no (again: use trylock and handle failure gracefully). But i suppose most locking for the shared data can be worked around with lockless data structures...

Btw: in the general case the signallee should always check for spurious wakeups, which is not necessary with the capture_client from the jack distribution, as it doesn't hurt to wake up once or twice too often every once in a while (at least not when using a ringbuffer - when it's empty the signallee simply goes back to waiting). Maybe the pthread_mutex_lock calls should be moved tightly around the pthread_cond_wait call - the disk thread currently has the mutex locked all the time during writing.
int
process (jack_nframes_t nframes, void *arg)
{
	thread_info_t *info = (thread_info_t *) arg;
	jack_default_audio_sample_t *in;
	sample_buffer_t *buf;
	unsigned int i;

	if (!info->can_process) {
		return 0;
	}

	/* we don't like taking locks, but until we have a lock free
	   ringbuffer written in C, this is what has to be done */
	if (pthread_mutex_trylock (&buffer_lock) != 0) {
		/* this is the improbable contention case. we will simply
		   do nothing here. audio will be lost. better than an
		   xrun though */
		return 0;
	}

	/* ok, acquired the mutex, so we can do our thing */
	buf = get_free_buffer (nframes, nports);

	for (i = 0; i < nports; i++) {
		in = (jack_default_audio_sample_t *)
			jack_port_get_buffer (ports[i], nframes);
		memcpy (buf->data[i], in,
			sizeof (jack_default_audio_sample_t) * nframes);
	}

	put_write_buffer (buf);

	/* tell the disk thread that there is work to do */
	pthread_cond_signal (&data_ready);
	pthread_mutex_unlock (&buffer_lock);

	return 0;
}

Have fun and correct me where wrong,

Flo

-- 
Palimm Palimm!
http://tapas.affenbande.org
Re: [linux-audio-dev] Radio receiver.
On Fri, 28 Oct 2005 02:16:16 +0200 fons adriaensen <[EMAIL PROTECTED]> wrote:

> You could indeed make some processor that would regenerate
> a Morse signal (and also decode it on the fly). But you would
> have a difficult time trying to outperform human hearing in
> that application. For example, our hearing mechanisms are not
> limited by the usual 'uncertainty principle' that limits the
> product of resolution in time and in frequency, they can do
> much better.

Interesting subject. Got any pointers on it? I suppose that the ear, especially with composite signals, might be able to determine relative timing differences with very high precision (even for very low freq signals).

Flo

-- 
Palimm Palimm!
http://tapas.affenbande.org
Re: [linux-audio-dev] External audio interface (edirol FA/UA-101)
On Fri, 30 Sep 2005 21:38:17 +0300 Jussi Laako <[EMAIL PROTECTED]> wrote:

> Hmmh, at least the documentation didn't state anything like that a while
> back (if I remember correctly). And at least my applications won't
> expect anything about the buffer size.

http://jackit.sourceforge.net/docs/reference/html/types_8h.html#a9

Well, for an app that can handle any buffersize this doesn't really matter, as it, well, can handle any buffer size :)

> In fact when using sample rate converted ports (some of my experimental
> stuff) it may not be even same size for all callbacks. But hey, that's
> why there's period size argument passed to the callback, IMO.

Well, the nframes argument is really there for apps that don't need to update any significant state upon buffer size changes and thus don't have a buffersize_callback registered. They simply use what they get from the nframes argument. Nowadays the jack API guarantees to clients that

a] the buffersize is a power of two
b] the buffersize will not change between two buffersize_callbacks.

http://jackit.sourceforge.net/docs/reference/html/types_8h.html#a13

I suppose your sample rate conversion stuff is violating the jack API requirements.

> Converting the buffer size on application side to some specific 2^x is
> anyway needed when doing FFT or similar.

Well, my jack_convolve simply uses what jack is providing (chopping up the response in buffersized chunks so each block can be directly processed without extra delay). jack_convolve relies on both a] and b] from above (well, it really ignores the possibility of the buffer size changing, so it really is broken, too, in a sense).

> There may be applications or hardware passing something other than 2^x
> buffers for example because it's some specific length in milliseconds,
> etc.

Which really is a problem i think. I don't use any USB hw for example, so i don't really know how this is solved in these cases.

Flo

-- 
Palimm Palimm!
http://tapas.affenbande.org
Re: [linux-audio-dev] External audio interface (edirol FA/UA-101)
On Fri, 30 Sep 2005 01:02:47 +0300 Jussi Laako <[EMAIL PROTECTED]> wrote:

> On Thu, 2005-09-29 at 23:14 +0400, Dmitry S. Baikov wrote:
> > Jackd needs buffers to be power of 2, and usb-audio - multiples of 1ms.
>
> I just verified jack to work with any buffer size using OSS backend.

Hi, i think the jack API guarantees power-of-2 buffer sizes to clients. So while jack itself might work, some clients that depend on a power-of-2 buffersize will fail horribly if something different is used.

Flo

-- 
Palimm Palimm!
http://tapas.affenbande.org
Re: [linux-audio-dev] JACK error 4294967295
On Wed, 14 Sep 2005 08:16:26 -0600 Hans Fugal <[EMAIL PROTECTED]> wrote: > That's good to know about, thanks. I googled and found your page > http://tapas.affenbande.org/?page_id=6 which shows me how to set the irq > handler prio, however my soundcard seems to be sharing with my video > card. Do you know a way to set which IRQs are used for devices? I have a > cs46xx card as well. AFAIK, this depends on the BIOS on your mainboard. On my mainboard i only had the option of switching slots around until i found one in which the soundcard got its own irq. Sometimes the irq line routing is documented in the mainboard manual, but more often than not it's not. Beware though that sometimes BIOSes also take the class of the device into account. So, it's mostly trial and error switching slots. Regards, Flo -- Palimm Palimm! http://tapas.affenbande.org
Re: [linux-audio-dev] JACK error 4294967295
On Wed, 14 Sep 2005 16:01:12 +0200 Florian Schmidt <[EMAIL PROTECTED]> wrote:

> > Ok, I understand that. So I take it if you set qjackctl priority to 0,
> > it will not specify it and therefore use the JACK default which is >=2?
> > I tried setting -P to 2 in qjackctl, and it works fine.
>
> I suppose jackd should exit with an error when the prio is undefined. What

Oops, hit send too early. I didn't mean "undefined", i actually meant "out of range". When the prio is "undefined" as in "not specified by the user", of course the default should be used.

BTW: i think it would make sense to raise the default prio to 70, which would make jackd work better out of the box with RP kernels (which put the prios of the irq handlers all around 50). The only thing left for the user would then be to raise the soundcard irq handler prio.

Regards,

Flo

-- 
Palimm Palimm!
http://tapas.affenbande.org
Re: [linux-audio-dev] JACK error 4294967295
On Wed, 14 Sep 2005 07:55:16 -0600 Hans Fugal <[EMAIL PROTECTED]> wrote:

> > there's no priority 0 for SCHED_FIFO threads AFAIK, thus, as jackd runs
> > at the prio specified via the comandline, the watchdog at prio +10 and
> > the clients at prio -1, you effectively get a prio of 0 for the clients
> > when starting jackd with -P 1. Which doesn't work. So, a checking
> > whether the argument to -P is >= 2 should be enough. Plus the
> > documentation might need some updating to document the behaviour. I'll
> > send in a patch for the Documentation in a little while.
>
> Ok, I understand that. So I take it if you set qjackctl priority to 0,
> it will not specify it and therefore use the JACK default which is >=2?
> I tried setting -P to 2 in qjackctl, and it works fine.

I suppose jackd should exit with an error when the prio is undefined. What

BTW: if you run a RP system you usually want the jack threads to be higher prio than all irq handlers other than the soundcard's.

--- jackd.1.in.orig	2005-09-14 15:52:00.0 +0200
+++ jackd.1.in	2005-09-14 16:00:58.0 +0200
@@ -74,7 +74,7 @@
 .TP
 \fB\-P, \-\-realtime\-priority \fIint\fR
 When running \fB\-\-realtime\fR, set the scheduler priority to
-\fIint\fR.
+\fIint\fR. The valid range of values for this switch is between 2 and 89 (This is due to the fact that jackd runs its main thread, the watchdog thread and the clients threads at different priorities). The default value is 10.
 .TP
 \fB\-\-silent\fR
 Silence any output during operation.

-- 
Palimm Palimm!
http://tapas.affenbande.org

jackd.1.in.patch
Description: Binary data
Re: [linux-audio-dev] JACK error 4294967295
On Wed, 14 Sep 2005 07:33:58 -0600 Hans Fugal <[EMAIL PROTECTED]> wrote:

> So you're saying jackd should run at priority 1 or higher, and we ought
> to check for that? I could probably manage such a patch, but running at
> priority 1 is what was causing this error for me with jack apps. Is
> there a way to match priority from jack apps automatically?

Hi, there's no priority 0 for SCHED_FIFO threads AFAIK. Thus, as jackd runs at the prio specified via the command line, the watchdog at prio +10 and the clients at prio -1, you effectively get a prio of 0 for the clients when starting jackd with -P 1. Which doesn't work. So, checking whether the argument to -P is >= 2 should be enough. Plus the documentation might need some updating to document the behaviour. I'll send in a patch for the documentation in a little while.

Flo

-- 
Palimm Palimm!
http://tapas.affenbande.org
Re: [linux-audio-dev] Re: [ANN] dssi_convolve
On Wed, 17 Aug 2005 20:32:17 +0100 James Stone <[EMAIL PROTECTED]> wrote:

> Problem with libconvolve: doesn't automatically create libconvolve.a with
> resulting problems from ld not being able to find it.
>
> I had to manually do:
>
> ar cru libconvolve.a libconvolve.so.0.0.4
>
> ranlib libconvolve.a (not sure whether this is needed??)

Hmm, weird. i didn't know this file was needed for dynamic linking against it. Is this in some way distribution dependent? Isn't a .a file a static lib? Why would ld complain if this file didn't exist?

> cp libconvolve.a /usr/local/lib
>
> Oh, also while I am on the subject of bugreports, your
> libconvolve.0.0.5.tgz is actually a tar file, not tar.gz

Right, fixed that along with the target name :)

Regards,

Flo

-- 
Palimm Palimm!
http://tapas.affenbande.org
[linux-audio-dev] Re: [linux-audio-user] Re: [ANN] dssi_convolve
On Wed, 17 Aug 2005 19:30:00 +0100 James Stone <[EMAIL PROTECTED]> wrote:

> > I cannot build libconvolve though because it depends on libdsp which is
> > not available for Debian any more (AFAIK) and the source tarball seems to
> > depend on NPTL headers (which do not seem to exist on my machine):
> >
> > DynThreads.cc:28:26: nptl/pthread.h: No such file or directory
>
> The problem was with the line:
>
> #ifndef USE_NPTL
>
> which was not defined, but which for some reason my machine was ignoring
> (perhaps some gcc related problem??), leading it to look for
> /usr/include/nptl/pthread.h instead of /usr/include/pthread.h
>
> Fixed it with a little hand-editing!
>
> James

Thanks for the report. I put Jussi (author of libdsp) on the CC list to let him know.

Flo

-- 
Palimm Palimm!
http://tapas.affenbande.org
[linux-audio-dev] [ANN] dssi_convolve
Hi, this is the first public tarball of dssi_convolve, a DSSI wrapper around libconvolve.

Grab it here:

http://affenbande.org/~tapas/jack_convolve/dssi_convolve-0.0.2.tgz

You need libconvolve-0.0.5.tgz for this, too (fixed some minor bugs):

http://affenbande.org/~tapas/jack_convolve/libconvolve-0.0.5.tgz

Features:

- no GUI yet. Send OSC commands directly to load a responsefile. If you use the included test_dssi_convolve.om patch in om-synth (you need latest cvs for this), you can use sclang [supercollider3] like this to send it stuff:

a = NetAddr("localhost", 16180);
a.sendMsg("/dssi/test/Convolve_0/configure", "rtprio", "0");
a.sendMsg("/dssi/test/Convolve_0/configure", "responsefile", "/home/tapas/sound/ResponseFiles/room/Thick Room.wav");

The included om patch provides stereo input and stereo output. A copy of the stereo input is delayed (by 0.68s, which is just the delay introduced by dssi_convolve at a samplerate of 48khz and a partition size of 16384) and mixed back into the output. Both input signals are also mixed together and fed into the single convolution input. The stereo convolution output is mixed into the stereo patch output. While this kinda abuses the convolution to do stuff it shouldn't, it still sounds nice :)

Configure keys understood:

"responsefile" - value: filename. Loads the specified responsefile.
"rtprio" - value: the desired SCHED_FIFO prio for the worker thread. When 0 is specified, SCHED_OTHER is used.

- the convolution runs in a lower prio thread and a huge buffer [default 16384] is used to decouple the convolution size from the host's periodsize. This introduces 2*partition size frames of additional latency, which is reported on the "latency" control output. The partition size can be changed by altering DEFAULT_PARTITION_SIZE in the source code. A configure key for this will be added.

Bugs:

- loads only in om, not in jack-dssi-host. [any clue?]
- might have problems loading mono files. Not tested.
- SConstruct broken.
Use the Makefile.

- might lock up your boxen. So if this is a problem for you, inspect the source first and fix all bugs ;)
- millions more, i'm sure. If you find any, please let me know :)

TODO:

- GUI
- different channel versions (mono, 4 channel, 6 channel). This always provides a single input only, but the loaded response files may then be mono, stereo, 4-channel or 6-channel.
- realtime mode (where partition size == host's buffer size -> no additional delay). Need to figure out some DSSI specifics to find out how to discover the host buffer size before the initial configure call is done (which might load a responsefile and for which the partition size needs to be specified).

TODOs might take a while due to my limited time atm [studying]. Help appreciated. Drop me a mail. Feedback is most welcome.

Regards,

Flo

P.S.: yes, i will remove the audio rate gain input port in the next release. I don't need it and i figure it might confuse DSSI host apps that try to figure out themselves how to hook it up.

-- 
Palimm Palimm!
http://tapas.affenbande.org
[linux-audio-dev] www.linuxdj.com - down!
See subject. Down for me for quite a while already.

Flo

-- 
Palimm Palimm!
http://tapas.affenbande.org
Re: [linux-audio-dev] threading in DSSI plugins
On Wed, 10 Aug 2005 14:09:55 +0200 Alfons Adriaensen <[EMAIL PROTECTED]> wrote:

Hi, thanks for your answer.

> Sorry, I didn't express myself correctly. There is indeed more delay,
> but what I'd wanted to say is: there is no need for copying your signal
> into one more layer of intermediate buffers.

Right.

> On output you probably already have a circular buffer to keep the
> partial results in between processing calls for one partition. If you
> make it one partition size longer than what it would normally be,
> and set the initial output pointer one partition size before the end,
> there is no need to make extra copies.

Yep..

> The result of this all is that if period_size >= partition_size, you
> have one more partition delay than strictly necessary. In that case
> just advance the initial output pointer to the start of the circular
> buffer.

I plan to have two modes:

a] 0-latency, where partition size == periodsize, without any extra buffering etc. This is practically zero latency, but with the expected cpu usage.

b] buffered/threaded, where partition size > periodsize, and the buffering/threading scheme we talk about here is used.

The case where periodsize > partition size is really not common, i'd reckon. The user could use the 0-latency mode there.

> For synchronisation, you need to wake up the worker thread somehow
> when a partition of input is available. A Posix sema or condition
> variable will do. Assuming the system is not overloaded, there is
> no need to sync in the other direction - just assume that your
> worker thread has done its job and read the results.

Sounds good. Thanks for the info.

Flo

-- 
Palimm Palimm!
http://tapas.affenbande.org
Re: [linux-audio-dev] threading in DSSI plugins
On Wed, 10 Aug 2005 12:01:17 +0200 Alfons Adriaensen <[EMAIL PROTECTED]> wrote:

> On Wed, Aug 10, 2005 at 11:34:39AM +0200, Florian Schmidt wrote:
>
> > a] is it possible to use threading in a DSSI?
>
> I've done this in some LADSPAs, it works.

Great..

> > b] would a RT prio of 1 (for the convolution thread) be an OK
> > compromise? It will be lower than all audio stuff on a typical jack
> > system? What is jackd's default RT prio again?
>
> When running in JACK, you can obtain JACKS thread id and then
> look up its priority. Otherwise, you can query the thread's prio
> the first time your process() is called and then create a thread
> just below it.

Great tips. Thanks.

> > [2] - yes, i'm aware that this needs again some extra buffering ;)
>
> It doesn't need to: the extra buffering can be absorbed into the
> buffers you need anyway - zero overhead. I've got a C++ implementation
> of this, just drop me a line if you want it.

Let's play this through with an example. For simplicity's sake let's assume the host always calls the plugin's run() method with a constant buffersize of 1024 frames (there's still no requirement for this though - important to keep in mind) and the internal partition size of the convolution is 2048 frames. Let's further assume the convolution thing has internal input and output buffers of 2048 frames each. The input buffer is initially empty and the output buffer is initially full (filled with 0's).

- run() 1: 1024 frames are filled into the input buffer. Also 1024 frames are output from the output buffer.

- run() 2: 1024 frames are filled into the input buffer and 1024 frames are consumed from the output buffer. The input buffer is now full and the output buffer empty. We have enough data now to do tha thang. So we somehow (how? [1]) communicate to the convolution thread that it should start processing the input buffer now.
We don't know yet when it will finish, but to meet any sort of deadline it should finish before another 2048 frames have arrived.

- run() 3: 1024 frames are filled into the input buffer -> ouch, the input buffer is what the convolution thread operates on right now. 1024 frames are consumed from the output buffer -> ouch, it's empty, plus this is where the convolution thread puts its output.

The problem i see should be clear now. The solution i would use would be to make both input and output buffers twice the partition size (you can probably fill in the blanks :). I have also ignored the remaining synchronization issues. Damn, need to run to university now. I'll read up later on..

Flo

[1] - the typical rt-safe-kicking-off-another-thread-to-do-something problem raises its ugly head again. What other options are there? Sleeping for short intervals and going back to sleep when there's nothing to do?

-- 
Palimm Palimm!
http://tapas.affenbande.org
[linux-audio-dev] threading in DSSI plugins
Hi, i played around with extra buffering of the input/output of libconvolve (new tarball [1] and updated jack_convolve [1], which understands the --partitionsize=frames argument now, making it use the specified size for the partition size instead of the jack buffersize), and as expected this doesn't do CPU usage any good. Easy to see in this example:

jack_buffersize = 1024
partitionsize = 2048

Now the convolution code is executed only every second jack process() cycle. If the previous DSP usage was like 20% in every process cycle, then it's ca. 25% in every other cycle now (estimate). The solution to even out the load is to use an extra thread [2]. For best performance i would assume that the DSSI needs an extra thread with RT scheduling (if available) and an RT prio which should be lower than all the other jack and midi threads of i.e. the DSSI host and other jack clients.

So i got basically two questions:

a] is it possible to use threading in a DSSI?

b] would a RT prio of 1 (for the convolution thread) be an OK compromise? It would be lower than all audio stuff on a typical jack system. What is jackd's default RT prio again?

Regards,

Flo [3]

[1] - http://tapas.affenbande.org/?page_id=5

[2] - yes, i'm aware that this needs again some extra buffering ;) But this whole larger-partitionsize-than-jack-buffersize thing is all about trading latency for cpu niceness. If the convolution is used as a non-RT effect [like i.e. in a DAW for prerecorded material], then latency doesn't matter as long as the host compensates for it.

[3] - i'll probably be offline from the 12th on, as i can't pay my phone bill, so be quick with answers ;)

-- 
Palimm Palimm!
http://tapas.affenbande.org
[linux-audio-dev] NPTL hell on debian might come to an end (Fw: Bug#266507 acknowledged by developer (Bug#266507: fixed in glibc 2.3.5-3))
Let's hope for the best :)

Begin forwarded message:

Date: Fri, 05 Aug 2005 03:35:40 -0700
From: [EMAIL PROTECTED] (Debian Bug Tracking System)
To: Florian Schmidt <[EMAIL PROTECTED]>
Subject: Bug#266507 acknowledged by developer (Bug#266507: fixed in glibc 2.3.5-3)

This is an automatic notification regarding your Bug report #266507: NPTL (0.60) quirks with pthread_create (ignores attributes), which was filed against the libc6 package. It has been closed by one of the developers, namely GOTO Masanori <[EMAIL PROTECTED]>. Their explanation is attached below.

If this explanation is unsatisfactory and you have not received a better one in a separate message then please contact the developer, by replying to this email.

Debian bug tracking system administrator (administrator, Debian Bugs database)

Source: glibc
Source-Version: 2.3.5-3

We believe that the bug you reported is fixed in the latest version of glibc, which is due to be installed in the Debian FTP archive.
Re: [linux-audio-dev] Libs for reading/writing midis
On Fri, 05 Aug 2005 11:52:44 +0200 Mario Lang <[EMAIL PROTECTED]> wrote:

> Reusability of code is a quite valid point, and I thought the OP's
> question was quite interesting.
>
> We shouldn't define ourselves in terms of what windows does. Frankly,
> I don't care anymore, its now 8 years since I switched to Linux completely.
>
> libmidifile would be cute, is any of the existing codebases flexible
> enough so that it could be massaged into a nice lib?

vote++ to everything you said. Dunno about any existing midi code flexible enough to be put into a lib, though.. After taking a glance at rosegarden's and muse's source, it seems there are always app specifics intertwined. Wasn't a midi file pretty much a simple dump of midi events anyway?

Flo

-- 
Palimm Palimm!
http://affenbande.org/~tapas/
Re: [linux-audio-dev] IO priorities
On Fri, 29 Jul 2005 13:45:09 -0400 Lee Revell <[EMAIL PROTECTED]> wrote: > Disk IO priorities have been discussed on the list before, and they are > now in the mainline kernel (search LKML for "IO priorities" for > details). I think they're only supported by the CFQ scheduler. > > This might be fun for someone to experiment with, I'm sure any HDR > application would like this feature. Good to hear! This makes linux even more awesome for soft RT work :) Flo -- Palimm Palimm! http://affenbande.org/~tapas/
Re: [linux-audio-dev] [ANN] E-Radium V0.61b
On Wed, 13 Jul 2005 12:44:32 -0400 Eric Dantan Rzewnicki <[EMAIL PROTECTED]> wrote: > I expected something like this. But, I guess my question was more, who > is complaining about HZ=1024? To which I guess the answer would be > everyone who is more concerned about throughput than latency. Though, > somehow I think that everyone needs a good balance between the two. I suppose to make everyone happy this should be runtime configurable. Incorporating which would be quite a task :) Regards, Florian Schmidt -- Palimm Palimm! http://affenbande.org/~tapas/
Re: [linux-audio-dev] [ANN] E-Radium V0.61b
On Wed, 13 Jul 2005 10:49:11 -0400 Eric Dantan Rzewnicki <[EMAIL PROTECTED]> wrote: > > Correct, it's not an issue for apps driven by hardware interrupts like > > JACK, because the sound card consumes data at a constant rate. But for > > MIDI or video where you have to periodically push data to the device it > > matters. > > What is driving the kernel-devs to regress on this issue? Well, i suppose it's a tradeoff between throughput and responsiveness. Larger timeslices increase system throughput (less time is spent in the scheduler) while smaller timeslices increase responsiveness. Flo -- Palimm Palimm! http://affenbande.org/~tapas/
Re: [linux-audio-dev] [ANN] E-Radium V0.61b
On Tue, 12 Jul 2005 22:35:24 +0200 (CEST) Kjetil Svalastog Matheussen <[EMAIL PROTECTED]> wrote:

> > Can you please explain why 100HZ would be a problem for your app? Right
> > now the kernel people are trying to change the default HZ for 2.6 to
> > 250. I have told them that this is insane but they seem inclined to do
> > it anyway.
>
> The program uses poll to sleep. If the resolution of the kernel is
> 100Hz, there would sometimes be a too long delay of up to 10ms (and
> probably beyond) before the program is woken up, and before a midi
> message is sent, which can cause music to stutter.
>
> Simple as that. :-)

Some semi-educated blabbering ahead (might be all wrong): i think i once read that interrupt handling "short circuits" the linux scheduler, in the sense that not only at every timer interrupt but also at the end of any interrupt handler, the kernel checks which processes are ready to run - and maybe there's a high prio process waiting just for that interrupt (e.g. by polling or reading on a device file). So for all those realtime processes that depend on events that trigger interrupts (like soundcards' irqs), the timer interrupt really doesn't matter. I'm not sure at all, though, that this applies to midi handling (and especially to alsa_seq when routing from one app to another), or that it is even correct in any sense at all :) Anyone can shed light?

Regards,

Florian Schmidt

-- 
Palimm Palimm!
http://affenbande.org/~tapas/
Re: [linux-audio-dev] desktop audio resumed
On Wed, 6 Jul 2005 22:43:39 +0200 Christoph Eckert <[EMAIL PROTECTED]> wrote:

> > I know my opinion is unpopular, but afaik OSS is more of a
> > standard in the unix (not linux) world than ALSA is.
> > There's no way around providing an OSS emulation that works
> > (i mean sw mixing, etc.) for all those OSS apps that are
> > multiplatform.
>
> Most of these applications use /dev/dsp. Wouldn't it be
> possible to link it to DMIX/DSNOOP so OSS applications can
> transparently use DMIX/DSNOOP?

Yeah, you can always use the aoss script to preload a shared lib "hijacking" function symbols, so that access to /dev/dsp is routed to a regular alsa pcm device (which can be dmix/dsnoop/asym/whatever). I'm not sure, though, that this works in 100% of cases; mmap access, for example?

Regards,
Flo

--
Palimm Palimm!
http://affenbande.org/~tapas/
Re: [linux-audio-dev] desktop audio resumed
On Sat, 02 Jul 2005 00:17:38 -0400 Lee Revell <[EMAIL PROTECTED]> wrote: > No!!! That's exactly the wrong approach, it will only encourage > applications to use the OSS API. Do you really still want to be using > the same ancient binary-only flashplayer/realplayer plugin for 5 more > years? > > Why don't you ask the Skype developers when they plan to support ALSA? > Or figure out why it crashes with aoss? I know my opinion is unpopular, but afaik OSS is more of a standard in the unix (not linux) world than ALSA is. There's no way around providing an OSS emulation that works (i mean sw mixing, etc.) for all those OSS apps that are multiplatform. Flo -- Palimm Palimm! http://affenbande.org/~tapas/
Re: [linux-audio-dev] Arbitrary bufsizes in plugins requiring power of 2 bufsizes, Was: jack_convolve-0.0.10, libconvolve-0.0.3 released
On Wed, 29 Jun 2005 23:07:24 +0200 fons adriaensen <[EMAIL PROTECTED]> wrote:

> > Another very useful feature would be tail extension (combine convolution
> > with traditional reverb processing to lighten the CPU load)
>
> I don't think this requires special support from the convolution
> engine - it's just an application of it.
>
> Suppose you have the start of a reverb response:
> (imagine the y-scale is logarithmic)
>
> |\
> | \
> | |
> | |
> | |
> ---
>
> and you feed back the output with the correct gain and delay,
> then the result will be a complete exponential decay.
>
> This is too simplistic, and will probably give an 'echo' like
> character to the reverb, but that's the principle. It should
> be applied to the first part of the reverb 'tail' only, not
> the early reflections. But you only need a short convolution.

To avoid the echo-like character one could do some preprocessing: take a chunk of the tail and make it constant volume (if it decays exponentially, this should be possible), then fiddle with it some more to make it loop cleanly. Then use that as the response, apply gain and do the feedback-delay thing. This would allow arbitrary envelopes on the tail.

Regards,
Flo

--
Palimm Palimm!
http://affenbande.org/~tapas/
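For illustration, the feedback scheme Fons describes can be sketched in a few lines of Python (the function name and parameters are hypothetical, not part of libconvolve):

```python
def extend_tail(x, delay, gain, n_out):
    """Feed a short reverb tail back into itself with a fixed delay
    and gain, producing an exponentially decaying extension."""
    y = list(x) + [0.0] * (n_out - len(x))
    for n in range(delay, n_out):
        y[n] += gain * y[n - delay]
    return y

# An impulse through a 4-sample loop at gain 0.5 decays by half
# every 4 samples: 1.0, 0.5, 0.25, ...
tail = extend_tail([1.0], delay=4, gain=0.5, n_out=16)
```

With the preprocessing suggested in the reply, x would be the looped, constant-volume chunk of the tail, and an arbitrary envelope could then be imposed on the extended result afterwards.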
Re: [linux-audio-dev] Arbitrary bufsizes in plugins requiring power of 2 bufsizes, Was: jack_convolve-0.0.10, libconvolve-0.0.3 released
On Wed, 29 Jun 2005 16:19:34 +0200 Alfons Adriaensen <[EMAIL PROTECTED]> wrote:

> I wrote a convolver library / JACK app similar to Florian's at
> about the same time (which is why it was never released).
> Main differences are that the API is a bit more general, it's C++,
> and it has the required I/O buffering built right into the data
> structures of the convolver engine, so there is no extra overhead
> in copying. The API is such that the extra delay can be easily
> avoided if the conditions permit it.

What other goodies does it have? What do you mean by the API being "a bit more general"? Different data types, etc.?

I'd say dump mine, if it weren't for mine being plain C (which is sometimes preferable to C++).

Flo

--
Palimm Palimm!
http://affenbande.org/~tapas/
Re: [linux-audio-dev] Arbitrary bufsizes in plugins requiring power of 2 bufsizes, Was: jack_convolve-0.0.10, libconvolve-0.0.3 released
On Wed, 29 Jun 2005 13:20:31 +0200 Benno Senoner <[EMAIL PROTECTED]> wrote:

> assume we run convolution at 512 samples.
>
> process(float *input, float *output, int numframes) {
>
>     if (numframes == 512) {
>         convolve(input, output, 512);
>         return;
>     }

This has a subtle bug afaict. Assume the host called process() several times with numframes != 512 first, then once with numframes == 512, i.e.:

1. 123
2. 432
3. 234
4. 512

The 4th process() call disregards data already put into the ringbuffer by the previous calls with numframes != 512. There needs to be an additional test for whether the ringbuffer is empty. In the case that the host uses a constant buffer size this works out alright, and is indeed identical to the behaviour with the additional test.

I can imagine, though, that subdividing periods into smaller parts is very useful, especially when automating plugins (control data changes which do not fall on period boundaries), so the fact that a constant buffer size is not guaranteed makes a lot of sense. But for a plugin which operates with fixed buffer sizes (and uses internal buffering as described) this approach wouldn't help, as the control data change wouldn't have any effect anyway; or it would be nontrivial to hack into the algorithm, even where it is possible (e.g. gain changes could take effect at non-buffer-size boundaries for partitioned convolution).

So i'm very much in favour of Chris' proposal to add a hint that makes the host use the same buffer size all the time. I would actually be in favour of making this the default behaviour and timestamping control change events, as it's done in VST [see below]..

[snip]

> the first time process() is called the >=512 condition is not satisfied
> and thus a 0 filled buffer is returned (silence).
> At the second process() call, the >=512 condition is satisfied (there
> are exactly 512 frames in the buffer).
> And the convolve() function is called, eating 9.2msec of CPU.
> Since 9.2msec > 5.5msec ... sh*t happens ... XRUN.

exactly.

> If numframes supplied by the host is bigger than 512 then there are no
> CPU spike problems.
> For example if the host supplies 1024 frames, the above code would call
> convolve() 2 times, outputting 1024 frames (eating 2x9.2msec out of the
> 22msec available).
> It would be a bit inefficient, because if the plugin knows that the host
> supplies at least 1024 frames then you could run the convolution at 1024,
> achieving greater efficiency.
>
> If the host guarantees that it always supplies the same number of frames
> then the convolver could adjust its internal framesize to achieve
> optimal CPU usage.

Right. That's why the hint suggested by Chris would be useful..

> If not, then a scheme like the above one is unavoidable.
>
> Just for curiosity, does anyone know what's the current status of the
> variable/fixed buffer size scenarios supplied to plugins by hosts on
> various plugin platforms like VST, AU etc.?

Afaik in VST [i heard it somewhere, no guarantees about correctness] the plugin knows about the host's buffer size, and the plugin will always be called with that buffer size [dunno if power of two is guaranteed, but it would be sensible]. Control params are timestamped and provided as a list of value/frame pairs for the current process buffer, so there is no need to subdivide the buffer for finer-grained automation etc., at the cost of some extra work on the plugin's part. Personally i like this approach better than the LADSPA no-guaranteed-buffer-size approach (but i am biased).

> Florian, since we would like to add convolution to LinuxSampler over
> time, it would be cool if you could add the above ideas to libconvolve
> so that one can use the lib without worrying about supplying the right
> buffer sizes etc., and in plugin host environments it would be handy
> too, since we don't always know what the host will do.
Actually, how to solve this problem is application specific: are cpu spikes preferable to the context switches a threading solution (which would even out the load) would require, plus some extra latency?

I mean, i could add the above (non-threading) mechanism and provide an extra function call for it, but i'd rather not, since it hides a fundamental aspect of partitioned convolution which every user should be aware of, and for which a different solution might be better suited to the application at hand.. Plus i personally don't like the cpu-spikey non-threading solution at all, for exactly the reasons you mentioned ;)

Flo

--
Palimm Palimm!
http://affenbande.org/~tapas/
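To make the buffering scheme under discussion concrete, here is a sketch in Python (names are hypothetical): input is queued until a full internal block is available, and output is padded with silence until the core has produced data. Note that the resulting delay depends on how the host happens to chop its blocks, which is exactly the problem a fixed-buffer-size guarantee would remove:

```python
from collections import deque

class FixedBlockWrapper:
    """Adapts a DSP core that needs a fixed block size to a host
    that calls process() with arbitrary frame counts."""
    def __init__(self, kernel, block=512):
        self.kernel = kernel      # e.g. a convolve(block) callable
        self.block = block
        self.inbuf = deque()
        self.outbuf = deque()

    def process(self, inp):
        self.inbuf.extend(inp)
        # Run the core whenever a full internal block is queued.
        while len(self.inbuf) >= self.block:
            blk = [self.inbuf.popleft() for _ in range(self.block)]
            self.outbuf.extend(self.kernel(blk))
        # Emit exactly len(inp) frames, padding with silence while
        # the core has not yet produced enough; this padding is the
        # extra buffering delay.
        return [self.outbuf.popleft() if self.outbuf else 0.0
                for _ in range(len(inp))]
```

With a constant host block size equal to `block`, the output tracks the input with no added delay; irregular host blocks introduce (and then keep) a silence offset, which is why the ringbuffer-empty test discussed above matters.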
Re: [linux-audio-dev] jack_convolve-0.0.10, libconvolve-0.0.3 released
On Tue, 28 Jun 2005 21:38:32 +0100 Chris Cannam <[EMAIL PROTECTED]> wrote:

> It does seem a shame not to end up with a DSSI plugin as well, then,
> given that it would then have much the same structure already.

Yeah, you're definitely right. I wonder though: why stop at guaranteeing a fixed buffer size for the whole runtime? The thing with partitioned convolution is that, when used purely as an effect on recorded material (i.e. not playing through it in realtime, in a host that can compensate for plugin delay), large buffers are definitely desirable. Even larger than, e.g., the maximum period size of my soundcard (which is 2048 frames).

So it would be cool if the plugin could use a fixed buffer size which also differs from the buffer size used by the underlying audio system (e.g. jack). For this the host would have to do some extra work: set up an extra thread to process the plugin (with a slightly lower priority than the jack audio callback thread, for example, to even out the load), ringbuffers to feed/consume data to/from it, and latency compensation for tracks sent through it. Or should the plugin do this internally and simply report to the host that it needs a fixed buffer size (which then corresponds to the audio system's buffer size)?

Are dssi/ladspa plugins allowed to do threading? Without threading i wouldn't know how to do it. And even if threading were allowed, how would the dssi know which priorities to use, etc.? (On an RP kernel it should have a prio higher than e.g. the hd and net irqs, but lower than the jack audio thread.) Plus i wonder whether the (then fixed) buffer size should be user configurable in any way, or whether the plugin would simply report "16k frames is what i want" :)

Sometimes it does make sense to use it in realtime mode (with the same buffer size as the audio system), if you have the cpu power or the responses are short enough.

Regards,
Flo

--
Palimm Palimm!
http://affenbande.org/~tapas/
Re: [linux-audio-dev] jack_convolve-0.0.10, libconvolve-0.0.3 released
On Mon, 27 Jun 2005 14:24:36 +0000 (GMT) "Chris Cannam" <[EMAIL PROTECTED]> wrote:

> Florian Schmidt:
> > P.S.: I am also currently working on a qt app which
> > uses libconvolve
>
> Any interest in making a DSSI plugin?

There are some problems with wrapping partitioned convolution into a ladspa/dssi plugin. One of them is that ladspa/dssi do not guarantee a chunk size in which the processing is done. Partitioned convolution, like many other fft based algorithms, operates with a fixed chunk size. It basically looks like this:

0] initialization phase ("loading a response file"): split the response file into chunks, zero pad them and fft them. Store the fft'ed chunks somewhere.

Then, for every chunk of audio:

1] zero pad and fft the input chunk
2] store the fft'ed chunk in a ringbuffer (which has space for as many chunks as the response has, too)
3] multiply the fft'ed input chunks from the input ringbuffer with the corresponding fft'ed response chunks and add all the products
4] ifft the result from step 3]
5] save the overlap, add the previous overlap

So the partition/chunk size is determined in the initialization phase and all later processing happens in chunks of this size. Now, ladspa/dssi make no guarantees about the buffer size with which the processing is done, thus some extra buffering would be needed. But to distribute the load (assuming the partition size is bigger than the ladspa/dssi processing size) it would be necessary to either

a] split the above algorithm into pieces (which is rather difficult, as each phase depends on the result of the previous one), or
b] use threading.

Both methods are kinda ugly or work intensive. Another reason is that it is impossible to dynamically change the number of input/output ports of a ladspa/dssi, which would restrict the loadable responses to stereo files if the ladspa/dssi has e.g. two outputs. This would then require separate versions for different channel counts.
All in all, and as i am an energy saver (mostly of my own energy though), i will refrain from implementing a ladspa/dssi plugin. libconvolve isn't so hard to use though, so someone else might give it a shot. Contact me though; the api might still change a bit in the future (as i'm adding/removing experimental features, like for example looped responses).

After a discussion with Mario Lang i think i will OSC'ify jack_convolve though, and make the qt gui only an OSC controller for jack_convolve.

Flo

--
Palimm Palimm!
http://affenbande.org/~tapas/
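The partitioned convolution steps 0] to 5] above can be sketched compactly. This is purely illustrative Python (it uses a naive O(N²) DFT from the standard library where a real implementation would use an FFT, and it bears no relation to libconvolve's actual API):

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)).real / N for n in range(N)]

class PartitionedConvolver:
    def __init__(self, response, P):
        # 0] chop the response into P-sized chunks, zero pad to 2P, fft
        self.P = P
        self.H = []
        for i in range(0, len(response), P):
            part = response[i:i + P]
            self.H.append(dft(part + [0.0] * (2 * P - len(part))))
        self.X = [[0j] * (2 * P) for _ in self.H]  # ring of input spectra
        self.overlap = [0.0] * P

    def run(self, block):          # block must hold exactly P samples
        P = self.P
        # 1] + 2] zero pad, fft, push into the spectrum ringbuffer
        self.X.insert(0, dft(block + [0.0] * P))
        self.X.pop()
        # 3] multiply each stored input spectrum with the matching
        #    response partition and accumulate
        acc = [0j] * (2 * P)
        for Xk, Hk in zip(self.X, self.H):
            for i in range(2 * P):
                acc[i] += Xk[i] * Hk[i]
        # 4] back to the time domain
        y = idft(acc)
        # 5] add the saved overlap, save the new one
        out = [y[i] + self.overlap[i] for i in range(P)]
        self.overlap = y[P:]
        return out
```

The output of successive run() calls matches a direct convolution of the input stream with the response, which is easy to check for small partition sizes.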
Re: [linux-audio-user] Re: [linux-audio-dev] jack_convolve-0.0.10, libconvolve-0.0.3 released
On Mon, 27 Jun 2005 16:06:54 +0200 fons adriaensen <[EMAIL PROTECTED]> wrote:

> On Mon, Jun 27, 2005 at 01:57:34PM +0200, Florian Schmidt wrote:
>
> > Assume a samplerate of 96khz, then there's quite a bit of signal which
> > doesn't need to be processed since it's far out of the range of human
> > perception.
>
> This seems like a sensible idea, but one could wonder why in that case
> the sample frequency needs to be 96 kHz (*).

Well, the argument i often heard, and which IMHO does make sense, is that when heavy processing is used, the higher samplerate keeps many artefacts out of the audible range for longer than 48khz would, for example. So when the output of jack_convolve is subject to additional heavy processing, it would probably make sense to use the whole 48khz bandwidth. But in the case that the convolution is used as a send reverb, and the only additional stages of processing afterwards are mixing it back into the sum and then maybe some dynamics, it would make sense to only process, e.g., the lower half of the spectrum (this would still leave a bandwidth of 24khz at a samplerate of 96khz).

> It's probably possible to minimize the artefacts by using a gentle
> cutoff, e.g. a raised cosine from bin k1 to k2, k1 < k2.

I will try that in the next few days..

> (*) Recent experiments by prof. Angelo Farina (Univ. of Parma, Italy)
> suggest strongly that when the DA conversion is done properly, there is
> no audible difference between a sample rate of 48 kHz and any higher
> value. OTOH, he only found one type of DAC that was good enough to be
> completely free of audible artefacts at 48 kHz (by Apogee, and they are
> quite expensive).

Thanks for the info..

Flo

--
Palimm Palimm!
http://affenbande.org/~tapas/
[linux-audio-dev] Re: jack_convolve-0.0.10, libconvolve-0.0.3 released
On Mon, 27 Jun 2005 13:57:34 +0200 Florian Schmidt <[EMAIL PROTECTED]> wrote: > i added an experimental feature to libconvolve/jack_convolve which might > be able to preserve some of those precious cpu cycles from being burnt. nyuk nyuk, link here: jack_convolve: http://www.affenbande.org/~tapas/wordpress/?page_id=5 libconvolve: http://www.affenbande.org/~tapas/wordpress/?page_id=9 -- Palimm Palimm! http://affenbande.org/~tapas/
[linux-audio-dev] jack_convolve-0.0.10, libconvolve-0.0.3 released
Hi,

i added an experimental feature to libconvolve/jack_convolve which might preserve some of those precious cpu cycles from being burnt. jack_convolve now has commandline switches --min_bin=bin_no and --max_bin=bin_no which can be used to specify which bins of the fourier transformed signal and response to multiply. The range for both is always 0..periodsize+1, but min_bin must always be < max_bin.

An example, assuming a jack periodsize of 2048 and a samplerate of 48khz:

jack_convolve response_file.wav --max_bin=1200

This leaves the top 849 bins out of the multiplication and thus reduces cpu load by ca. 2/5. The cost is that the high frequency spectrum is cut off. And it's not even a really clean cutoff, as

jack_convolve response_file.wav --max_bin=40

would show (you get crackles due to edge effects, etc.). This feature is probably most useful for getting a preview with degraded quality but less cpu consumption. Although at some samplerates it even makes sense on its own: assume a samplerate of 96khz; then there's quite a bit of signal which doesn't need to be processed, since it's far out of the range of human perception.

The min_bin parameter is not really useful and only included for completeness' sake :) It cuts low frequency bins out of the equation. Due to the crackle effects when throwing away audible bins, one should not use these settings for filtering the convolution in the audible frequency range..

In one of the previous releases (which went unannounced i think) i also added a gain argument which can be used to boost the level of the convolution output. Example:

jack_convolve response_file.wav --gain=4.0

to get a 300% increase in level..

Regards,
Flo

P.S.: I am also currently working on a qt app which uses libconvolve, but it might still take a while until an initial release (due to my time constraints induced by studying).
Preliminary screenshot here: http://www.affenbande.org/~tapas/wordpress/?page_id=27 Help would, of course, be appreciated :) Anyone know of waveform or vu meter widgets for qt which are reusable? Flo -- Palimm Palimm! http://affenbande.org/~tapas/
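The bin-range idea above boils down to skipping part of the spectral multiply-accumulate. A minimal sketch in Python (the function name is hypothetical, not libconvolve's API):

```python
def truncated_multiply(X, H, min_bin=0, max_bin=None):
    """Multiply signal and response spectra only for bins in
    [min_bin, max_bin); all other bins are simply zeroed."""
    if max_bin is None:
        max_bin = len(X)
    return [X[k] * H[k] if min_bin <= k < max_bin else 0.0
            for k in range(len(X))]

# With 2049 bins (periodsize 2048) and --max_bin=1200, the
# per-partition multiply work drops by 849/2049, roughly the
# "ca. 2/5" figure from the announcement.
saved = 849 / 2049
```

Zeroing bins is a brick-wall operation in the frequency domain, which is exactly why the announcement warns about crackles from edge effects; a gentle rolloff instead of a hard cutoff would reduce them.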
Re: [linux-audio-dev] What parts of Linux audio ....Tempo Maps
On Mon, 20 Jun 2005 22:26:54 +0200 Tim Orford <[EMAIL PROTECTED]> wrote:

> > Now there's Apps B and C. B should be in sync with A's first track and C
> > should be in sync with A's second track.
> >
> > Don't know if this is really overkill though. I would find it useful..
>
> i wouldnt use it personally, but doing N is not much more work than 1.

True :) Except for the management overhead, plus the added complexity for application developers, who would have to present a choice to the user.

> > The idea which lingers in the back of my head is not a "tempo map" as a
> > structure which has meters/tempi for certain time ranges, but rather
> > arbitrary mapping functions which map BBT -> frames and frames -> BBT
> > (or even BBT -> time and time -> BBT, and another pair of predefined
> > mappings time -> frame and frame -> time). With this scheme an app could
> > easily do linear or even nonlinear tempo changes, loops, etc..

And i surely was on crack when i wrote this. There's no efficient interprocess function call mechanism on linux (one of the reasons jack uses these fifo thingies to trigger execution of the clients' audio callbacks).

> (not sure i fully understood all that.)
> Sure its nice for an app to provide a line/curve based gui for this,
> but should it be "rendered" to meter/tempo pairs before exporting? This
> would depend on how much resolution you need for a smooth change.

Yeah, since directly calling the mapping functions is out of the question, the mappings need to be "rendered". Hmm, then all the nice properties go out of the window.

> I guess that even as little as a 1/32th would be fine, as long as tempo
> information wasnt used somewhere as the basis of, say, pitch adjustment,
> but i dont really know.
> As the resolution went up it would become more impractical to share the
> map in rendered form, but on the other hand, multiple apps rendering the
> same line/bezier might be a bit wasteful. Anyone know how much
> resolution is needed?
> > possible average case: 32 * 200 bars => 6400 meter/tempo pairs
>
> i assume you dont see a need to share the actual curve description
> between apps, for the purposes of, eg, editing?

No, as one app provides it and another uses it. The editing has to be done in the app that provides the tempo mapping..

> > ticks_per_beat, but depending on how this calculation is done, different
> > apps might come to different results. But this might be a non issue as
> > the BBT info is updated by the timebase master in each process cycle.
>
> ah, thats a very good point. (But still i cant help thinking that separately
> calculated positions will not be completely trusted even if they are in fact
> identical :-). And for example, an editor at v high zoom wont be interested
> in the current process cycle. Plus, a tempo change can happen mid cycle.)

I don't understand this remark..

Flo

--
Palimm Palimm!
http://affenbande.org/~tapas/
Re: [linux-audio-dev] What parts of Linux audio ....Tempo Maps
On Mon, 20 Jun 2005 20:49:36 +0200 Tim Orford <[EMAIL PROTECTED]> wrote:

> > The reason for this opinion of mine is that musical time is simply too
> > complex to be handled by one general mechanism. Think of different apps
> > using different meters, etc.. Or even a single app using different
> > meters on different tracks syncing to another app with yet another
> > meter.
>
> I dont see how a shared tempo map could be useful for these complex
> situations, unless you are arguing for multiple shared tempo maps, which
> actually is not much more complicated to design than a single map
> (although more confusing for app devs)?

I was thinking along the lines of having multiple tempo maps for this. App A is a sequencer with two tracks in two different meters (i'm thinking polyrhythmic here, so the meters/tempi might be related (just to make a point about the usefulness of this)). Now there's Apps B and C. B should be in sync with A's first track and C should be in sync with A's second track.

Don't know if this is really overkill though. I would find it useful.. Other features i would like to see would be smoothly (or non-smoothly) changing tempi, etc..

The idea which lingers in the back of my head is not a "tempo map" as a structure which has meters/tempi for certain time ranges, but rather arbitrary mapping functions which map BBT -> frames and frames -> BBT (or even BBT -> time and time -> BBT, and another pair of predefined mappings time -> frame and frame -> time). With this scheme an app could easily do linear or even nonlinear tempo changes, loops, etc..

These mapping functions could be "exported" by apps for other apps to use (in the above example, A would export two mapping functions, one for each track; in apps B and C the user would then choose which of A's mappings to use). Again, no idea if this is at all doable (is there a fast enough ipc mechanism??)
> Usage of the tempo map by an app would be optional of course, so a
> "simple" map in Jack would not preclude use of other sharing mechanisms.
>
> Surely having each app separately calculate its own musical positions
> is a recipe for disaster, ie lack of solid sync?

Yeah, i agree the mapping must be deterministic. I'm not sure it is atm: the jack_position_t struct carries ticks_per_beat and beats_per_minute info, but not frames_per_tick. It does have frame_rate, though, so frames_per_tick could be derived from frame_rate, beats_per_minute and ticks_per_beat; but depending on how this calculation is done, different apps might come to different results. This might be a non-issue, though, as the BBT info is updated by the timebase master in each process cycle.

Arr, this whole thing is complex and i might be full of sh*t :)

Flo

--
Palimm Palimm!
http://affenbande.org/~tapas/
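To make the determinism point concrete, here is a sketch of the derivation every app would have to perform identically (helper names are hypothetical; frame_rate, beats_per_minute, ticks_per_beat and beats_per_bar are real jack_position_t fields, and a single constant tempo/meter is assumed where a real map would be piecewise):

```python
def frames_per_tick(frame_rate, beats_per_minute, ticks_per_beat):
    # This quantity is not itself carried in jack_position_t, so every
    # app must compute it the same way to agree on positions.
    return (frame_rate * 60.0) / (beats_per_minute * ticks_per_beat)

def frame_to_bbt(frame, frame_rate, bpm, ticks_per_beat, beats_per_bar):
    """Map an absolute frame to a (bar, beat, tick) triple."""
    tick = int(frame / frames_per_tick(frame_rate, bpm, ticks_per_beat))
    beat, tick = divmod(tick, ticks_per_beat)
    bar, beat = divmod(beat, beats_per_bar)
    return bar + 1, beat + 1, tick   # BBT is conventionally 1-based
```

At 48000 Hz, 120 bpm and 1920 ticks per beat, frames_per_tick is exactly 12.5; a rounding difference of even half a tick in this derivation would make two apps disagree on a position, which is the sync hazard discussed above.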
Re: [linux-audio-dev] What parts of Linux audio simply suck ?
On Mon, 20 Jun 2005 11:57:24 -0300 Juan Linietsky <[EMAIL PROTECTED]> wrote: > I talked with florian schmidt on irc about this, some time ago. > But my impression from what he said is that most jack > developers wanted to keep tempo map sharing in a > separate library, not inside jack. I think he suggested nah, that was simply my opinion :) I don't think many others agree yet ;) [see my other mail] Flo -- Palimm Palimm! http://affenbande.org/~tapas/
Re: [linux-audio-dev] What parts of Linux audio simply suck ?
On Mon, 20 Jun 2005 09:50:53 -0400 Paul Davis <[EMAIL PROTECTED]> wrote: > > -Jack lack of OSC or any way to do parameter automation from the sequencer > > name one platform that allows this. just one. There's none that i know of, but it would be cool to have anyways :) > > -It is Impossible to do any sort of offline render, or high quality > > render of a song (like, in 32/192khz) using JACK/Alsa > > i think you don't understand jack_freewheel(). Up to now, due to the lack of jack midi it wasn't really possible to freewheel a complex setup (like several transport aware apps of which one is a midi sequencer which drives softsynths). And it will again take a while until jack midi has really been adopted. BTW: does jack midi work in freewheeling mode, too? > > -Saving/Restoring your project is just painfully hard. LASH doesnt help, > > and even when I came up with the idea of it in the first place. > > that's because LASH's initial implementation was wrong and that > following Bob's description of how to do it "right", nobody has found > time to do it right. Is there a design document for the Right Way out there? I would like to take a look.. > > -Adding/Removing softsynths, linking connections, etc takes a while > > having to use qjackctl, etc > > tell me a system in which this not true. i use the patchbay in qjackctl; > if you don't like qjackctl, i'm sorry and i am sure rui is as well. or use one of the console driven patchbay programs, like jack_plumbing or jack_snapshot > > -Lack of send%.. I just cant have a jack client doing a very high > > quality reverb, only as wet processing and have clients send different > > amounts of the signal to it, thus saving CPU > > this is completely ridiculous. the client can attenuate on its inputs. > where would you rather have these controls - distributed across N apps > or on the control interface for just one? I agree with Paul here. Doing send/return/inserts, etc, is the job of a jack mixer app. 
The ardour mixer is a good starting point for this..

> > -Lack of tempo-map based transport, I cant adapt my midi-only
> > sequencer, which works in bars
>
> you can't do tempo-map based transport without sharing the tempo map.
> nobody has suggested a way to do this yet. please feel free.

Here i would like to chime in :) I think the mapping of BBT to frames and the other way around isn't something that jack should be concerned about. IMHO jack should only provide transport based on frame numbers. Everything else, like different apps trying to agree with other apps on which frame corresponds to which BBT, should really be handled by a different mechanism.

The reason for this opinion of mine is that musical time is simply too complex to be handled by one general mechanism. Think of different apps using different meters, etc.. Or even a single app using different meters on different tracks, syncing to another app with yet another meter. I know that dance music people, for example, would wipe this argument away, but there's more complex music out there :) [sorry if i offended someone; i do not imply more complex == better].

IMHO the notion of BBT/tempo/etc. should be local to apps and, if needed, be shared via another lib (which, if it finally settled down and became quasi-standard, could again become part of jack). Now the question is what this mechanism should look like. I'm not yet 100% sure, but i do have some ideas which i will ponder some more..

Flo

--
Palimm Palimm!
http://affenbande.org/~tapas/
Re: What Parts of Linux Audio Simply Work Great? (was Re: [linux-audio-dev] Best-performing Linux-friendly MIDI interfaces?)
On Thu, 16 Jun 2005 23:54:01 +0200 fons adriaensen <[EMAIL PROTECTED]> wrote: > Strange... If you would program a timer using the info available from > jackd's DLL, it would never generate its interrupt before the HW is > ready (i.e. has at least a period available). It would actually trigger > just after the interrupt it is derived from (the small average latency > that is not compensated). So I wonder what problem CoreAudio has with > this. Maybe the timers used aren't precise enough for this.. I don't know. Anyone? Flo -- Palimm Palimm! http://affenbande.org/~tapas/
Re: What Parts of Linux Audio Simply Work Great? (was Re: [linux-audio-dev] Best-performing Linux-friendly MIDI interfaces?)
On Thu, 16 Jun 2005 20:20:41 +0200 fons adriaensen <[EMAIL PROTECTED]> wrote:

> > The price for this is afaik an extra period worth of latency. I'm not
> > sure this is the way to go. Sure it makes handling of devices easier
> > that do not generate irq's like pci soundcards do (all this USB and
> > IEEE1394 stuff), but isn't the price too high?
>
> Why should this take an extra period of latency ?

Ah, i remembered slightly incorrectly. Thanks, Paul, for setting me straight in #ardour. The thing is that the DLL based client thread wakeup has the ever so slight possibility of happening too early; thus coreaudio waits a bit longer (the "safety offset"). It seems this safety offset is driver specific but usually ranges from 32 to 64 frames (i have no definite source for this, just a bit of googling). And with a sufficiently low period size this accounts for pretty much an extra period of latency..

Flo

--
Palimm Palimm!
http://affenbande.org/~tapas/
Re: What Parts of Linux Audio Simply Work Great? (was Re: [linux-audio-dev] Best-performing Linux-friendly MIDI interfaces?)
On Thu, 16 Jun 2005 10:30:29 -0400 Paul Davis <[EMAIL PROTECTED]> wrote: > true, but i take it you get the way CoreAudio is doing it: it means you > can drive audio processing from a different interrupt source (e.g. > system timer) because you have very accurate idea of the position of the > h/w frame pointer. In CoreAudio, the "callback" is decoupled from any > PCI, USB or ieee1394 interrupt. Tasty. The price for this is afaik an extra period worth of latency. I'm not sure this is the way to go. Sure it makes handling of devices easier that do not generate irq's like pci soundcards do (all this USB and IEEE1394 stuff), but isn't the price too high? Flo -- Palimm Palimm! http://affenbande.org/~tapas/