Re: [linux-audio-dev] Realtime problems with midi/osc sequencer
You could also try sigaction and setitimer. I've had good timing results with this approach in the past. (I haven't tried it for audio tasks though.) Steve

On 3/11/07, Robin Gareus [EMAIL PROTECTED] wrote: Christian wrote: Robin Gareus schrieb: usleep( iTick-( passedTime-startTime ) ); AFAIR usleep is not exact! - did you echo 1024 > /proc/sys/dev/rtc/max-user-freq ? try something like:

  void select_sleep (int usec) {
    fd_set fd;
    int max_fd = 0;
    struct timeval tv = { 0, 0 };
    tv.tv_sec = 0;
    tv.tv_usec = usec;
    FD_ZERO(&fd);
    if (remote_en) { max_fd = remote_fd_set(&fd); }
    select(max_fd, &fd, NULL, NULL, &tv);
  }

Interesting timing approach. But I can't find remote_en and remote_fd_set in the man pages. What do these arguments stand for? sorry, cut the 3 if(remote_en) lines - I was too quick pasting before sending the mail - remote_en is some global var. that allows the sleep to be interrupted if some other event occurs... - actually you'd only need select(0, NULL, NULL, NULL, &tv); anyway clock_nanosleep seems better; at least it takes less code to set it up. I did not know about it, and it's even POSIX, how cool! #robin
Re: [linux-audio-dev] audiogui
PS: does anyone know where I can 'GPL' a decent OSC server implementation in C++? The LibLo implementation is GPL, very easy to use, and available in many distros including Ubuntu. http://liblo.sourceforge.net/ I'm using it for a project and it seems very good. I think having an OSC-controlled audio back-end is a Good Thing. Steve
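For reference, the OSC wire format that liblo implements is simple enough to sketch by hand: a NUL-padded address string, a NUL-padded type-tag string, then big-endian arguments. A minimal encoder for a single-float message (this is an illustration of the protocol, not liblo's API):

```c
#include <string.h>
#include <stdint.h>
#include <stddef.h>

/* Write s into buf, NUL-terminated and zero-padded to a 4-byte
 * boundary, as OSC requires.  Returns the number of bytes written. */
static size_t osc_pad_string(char *buf, const char *s)
{
    size_t n = strlen(s) + 1;              /* include the NUL */
    size_t padded = (n + 3) & ~(size_t)3;  /* round up to multiple of 4 */
    memset(buf, 0, padded);
    memcpy(buf, s, n - 1);
    return padded;
}

/* Encode an OSC message with one float argument into buf.
 * Returns the total message length in bytes. */
static size_t osc_message_f(char *buf, const char *path, float value)
{
    size_t off = osc_pad_string(buf, path);
    off += osc_pad_string(buf + off, ",f");   /* type-tag string */
    uint32_t bits;
    memcpy(&bits, &value, sizeof bits);
    buf[off++] = (char)(bits >> 24);          /* arguments are big-endian */
    buf[off++] = (char)(bits >> 16);
    buf[off++] = (char)(bits >> 8);
    buf[off++] = (char)bits;
    return off;
}
```

In practice you would just call lo_send() and let liblo do this, but seeing the byte layout makes it clear why OSC is so easy to implement on embedded back-ends.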
Re: [linux-audio-dev] Python
Highly doubtful. Python is fantastic for lots of jobs. This isn't one of them. Python isn't so good at real-time audio jobs, but I think it would be pretty decent as an audio control language: using it to specify networks of C-code unit generators that run independently, then fielding OSC/MIDI messages, changing parameters, etc. I guess there are a lot of languages that do this kind of thing. Snd is an example of a Lisp-like language for these tasks. SuperCollider is pretty nice, and definitely worth exploring. However, for sample-level control you want C/C++, for example with STK/RtAudio, or a sound language like Csound. Lately I have been exploring ChucK, which so far seems fantastic. The time-based control it gives you is really simple and nice to work with. I think a good project would be to write a Python interface to a ChucK VM: use Python to program a GUI which modifies variables of a ChucK run-time. Anyways, some things to think about. As Paul said, Python can do lots of things, but real-time audio is not one of them. Right tool for the job, and all... but you have lots of tools available. I recommend exploring them. Steve
Re: [linux-audio-dev] Python
chuck already has its own pure-OpenGL GUI toolkit, used for things like Audicle and Tapestrea. i doubt you'd get anything similar performance-wise with python+canvas-of-choice. not sure how you program the chuck canvas though. i don't think it's actually in chuck the language? Tapestrea and Audicle are C++ programs using GLUT and OpenGL for their interface. Nothing specific about ChucK; they just happen to use ChucK for their sound engine. (Or rather, Audicle is a front-end specifically for ChucK, but it's written in C++.) There's nothing about ChucK that says you have to use OpenGL. Or even Audicle. Personally I find miniAudicle a nicer environment than Audicle. Or even just Emacs. In fact, after playing with it for a couple of days I'm seriously starting to consider re-writing some of my apps to use it for the audio back-end instead of my hacky C++ code. You automatically get good timing, OSC support, all the filters you could want, etc... Anyways, my post wasn't really intended to promote ChucK; it's just something I've been looking at lately, so it came to mind. Steve
Re: [linux-audio-dev] a new patent for us to challenge
here's the uspto page for it: http://appft1.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&Sect2=HITOFF&d=PG01&p=1&u=%2Fnetahtml%2FPTO%2Fsrchnum.html&r=1&f=G&l=50&s1=%2220060074637%22.PGNR.&OS=DN/20060074637&RS=DN/20060074637 I see that it is dated April 6, 2006. Is this the date of application or the date that it was granted? steve On 1/24/07, Paul Davis [EMAIL PROTECTED] wrote: http://www.freshpatents.com/Low-latency-real-time-audio-streaming-dt20060406ptan20060074637.php?type=description in which Microsoft patents designs partially implemented by OSS 10 years ago and fully implemented by ALSA 5 years ago. Wrapping this up in Windows API nonsense obscures the basic fact that this design is not innovative in any way unless compared only to existing Windows audio driver architectures.
Re: [linux-audio-user] Re: [linux-audio-dev] LAD/LAU/LAA/Consortium/...
*I* suggest moving to forums as *I* think it's a better way to exchange info than those '90s-ish, Mailman-powered mailing lists where you can't even search for posts or whatever. Personally, ever since switching to gmail, which handles lists using tags and has excellent searching capabilities, I far prefer lists over forums/boards. However, I think it's sort of silly to have to choose at all. Mailing lists and boards are so similar, there's no reason you can't just have both. For instance, there is a plugin for phpBB that allows people to sign up to the board and treat it like a mailing list. Of course, since mailman is probably a better mailing list handler than phpBB, it would be better to have some sort of web-based interface to the mailing lists (still handled by mailman), accessible through linuxaudio.org. Anyone know of a good solution for that? By the way, there is at least a risk that web-accessible forums will increase spam. Usually requiring registration helps with that, but I don't know if it's enough. Steve
Re: [linux-audio-dev] an relevant link about Vista
It's as if McDonald's would announce that the new and improved Big Mac comes with shards of broken glass inside. best DRM analogy.. ever. steve
Re: [linux-audio-dev] about MIDI timing...
On the other hand, last night I observed how timidity++ works by using strace, and I found no *sleep() (nanosleep, msleep and friends). Does that mean major MIDI software synthesizers use a non-system-sleep mechanism for timing? I believe Timidity++ just uses its synthesizer to convert the MIDI information into an audio stream, which it then either writes to a file or plays through the soundcard. So it doesn't need precise timing, since the audio callback it uses is already timed by the sound system. When dealing with real MIDI, timing is more critical, though usually you simply use timestamps which tell the sound system when a MIDI event should be played, instead of dealing with the timing yourself. (At least I know this is the case with PortMidi; I've never programmed with ALSA directly.) By the way, there are more ways to time things than just *sleep(). For example, using SIGALRM and setitimer. I also read that not all Linux sound card drivers enable the internal card timer, so the software must rely on the system timer. Is that correct? Don't know anything about that. I think all soundcards use a built-in timer for playing their FIFO. As for interrupting the computer to tell it when it needs more data, I guess it's possible that sometimes it uses the card's timer and sometimes the system timer. Someone more informed may have something to say. Steve
Re: [linux-audio-dev] MIDI is playing but no sound
What was I doing wrong here? Hi, I'm pretty sure that KMid and aplaymidi are both just simple players that direct MIDI output to your soundcard's MIDI interface. (Someone correct me if I'm wrong.) They are not midi _synthesizers_, so you won't hear any sound unless you have a synthesizer attached to your soundcard. Instead, try Timidity++, which is probably in your distribution. http://timidity.sourceforge.net/#info Steve
Re: [linux-audio-dev] pink noise generation
http://en.wikipedia.org/wiki/Pink_noise http://www.firstpr.com.au/dsp/pink-noise/ On 9/27/06, Andrew Gaydenko [EMAIL PROTECTED] wrote: Hi! Can anybody point me to theoretical and algorithmic fundamentals of real-time (JACK-oriented) (pseudo)pink noise generation at given frequency range? Andrew
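The firstpr.com.au page above covers several algorithms, including Paul Kellet's filtered-white-noise approximation. A sketch of the cheap three-pole version, with coefficients as I recall them from that page (verify against the source before trusting the spectrum):

```c
#include <stdlib.h>

/* State for a three-pole pink-noise filter: three leaky integrators
 * fed with the same white-noise sample, summed with different gains
 * to approximate a -3 dB/octave slope. */
typedef struct { float b0, b1, b2; } pink_t;

/* Feed one white sample in [-1, 1], get one (unnormalized) pink
 * sample out.  Coefficients after Paul Kellet's "economy" filter. */
static float pink_tick(pink_t *p, float white)
{
    p->b0 = 0.99765f * p->b0 + white * 0.0990460f;
    p->b1 = 0.96300f * p->b1 + white * 0.2965164f;
    p->b2 = 0.57000f * p->b2 + white * 1.0526913f;
    return p->b0 + p->b1 + p->b2 + white * 0.1848f;
}

/* Uniform white noise in [-1, 1].  rand() is fine for a sketch, but a
 * real-time JACK callback would want its own lock-free PRNG rather
 * than a libc call. */
static float white_noise(void)
{
    return 2.0f * ((float)rand() / (float)RAND_MAX) - 1.0f;
}
```

In a JACK process callback you would keep one pink_t per output port and call pink_tick() once per frame, scaling the result down to taste.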
Re: [linux-audio-dev] logomania redux
Is it just me, or is the speaker in this image: http://linux-sound.org/th_snd1.gif taken from Windows 2000??? Now THAT would be distasteful. ;-) Steve On 9/3/06, Dave Phillips [EMAIL PROTECTED] wrote: Greetings: I've been adding some logos to the top page at linux-sound.org, and I thought it might be time to make some remarks regarding them. Some are nice, some are very cool, and some are pretty awful. The logo for LilyPond really needs an update, and where is Ardour's bitchin' cool logo ? Some need titles (IMO), such as Aeolus, Common Music, LilyPond, ChucK, Dino, and Khagan. I'm not a graphic artist, and I'm not going to go through the process of overlaying titles. If the devs for those apps are happy with the logos, that's perfectly cool by me, but IMO it's better to have a title. Some other logos are in need of an update, e.g. PlanetCCRMA, LAU, and RTcmix. Is Thorsten our only graphic artist, or does anyone else here have good graphics skills ? I'm sure I'm missing logos for other apps. If you have an application listed on linux-sound.org, please contact me if you'd like to add a logo to the top page. Best, dp
Re: [linux-audio-dev] Linux kernel HZ, audio latency and how to measure?
Audio doesn't use setitimer()-driven sleeping. It's interrupt-driven, not timer-driven. Yes, the driver is interrupt driven, but the driver interrupt handler is only responsible for getting the data off the card's FIFO and storing it in memory (i.e., initiating a DMA transfer). It doesn't do anything with the audio data itself, except pass it on to user space. Won't HZ make a difference to the user application code which must wake up to do something with this audio data? Hence the need for SCHED_FIFO and the like.. Steve
Re: [linux-audio-dev] Linux kernel HZ, audio latency and how to measure?
The user application code is woken up by the interrupt from the audio interface, not from a timer firing - in addition to getting data from the card and storing it in memory, the interrupt handler wakes up any processes that are waiting on the audio data. So HZ is irrelevant. SCHED_FIFO is needed because otherwise, if there are multiple runnable processes when the audio interrupt fires, the kernel could decide to run a different process. With SCHED_FIFO it must run the audio process. Ah, interesting. Thanks. I feel like I should have known that. I'm going to go read up on audio drivers now.. Steve
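Requesting SCHED_FIFO from user space is a one-call affair; a minimal sketch (the priority value is an arbitrary example, and without root or an RLIMIT_RTPRIO grant the call fails with EPERM, so always check the return value):

```c
#include <sched.h>
#include <stdio.h>

/* Try to move the calling process into the SCHED_FIFO class so the
 * kernel must run it as soon as the audio interrupt wakes it.
 * Returns 0 on success, -1 if we lack real-time privileges. */
static int go_realtime(int priority)
{
    struct sched_param sp = { .sched_priority = priority };
    if (sched_setscheduler(0, SCHED_FIFO, &sp) == -1) {
        perror("sched_setscheduler");  /* typically EPERM without root */
        return -1;
    }
    return 0;
}
```

JACK does essentially this for its client threads (via pthread_setschedparam), which is why it wants real-time privileges configured.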
Re: [linux-audio-dev] New version of extreme-time stretching (with real-time support)
The samples sound so good ! I love it :-D That's really quite amazing! I've written a timestretching program before, using simple fft-based phase adjustment, and although it sounded good, there were always some artifacts. I can't believe the quality of this one. Steve
Re: [linux-audio-dev] Re: Akai's MPC4000 Sampler/Workstation Open Source Project
Variants and Solutions sections. There seem to be a few hard-realtime solutions (unlike Molnar's patch, which gives you soft-realtime), but they seem quite hard to implement... haven't tried them, tho'. I have done some development with RTLinux, a hard-realtime Linux solution, and it is a bit of a pain-in-the-ass to program for, since you are working in kernel-space and anything wrong you do will crash the whole computer. However, there is a way of using gdb with it, to catch errors before they cause a system crash. This makes things a little easier. And the fact is you can get VERY low latency using it. However, the project for which I was using it migrated our drivers to Linux 2.6, and when we did so, we found that we were able to get very, very good timing out of it. This is because 2.6 incorporated kernel preemption, and when compiled with an HZ value of 1000 (that is, the basic OS interval timer for task switching), we were able to write code in user mode with latency of less than 1 ms. (This is including some I/O with a PCI board - a data acquisition card, to be precise.) So I would recommend not worrying too much about using a hard realtime system - these days soft realtime seems to be rather good for most purposes. (The difference is that in soft realtime, you have no *guarantee* of the timing. Something could happen in an unrelated part of the OS that causes your process to wait longer than usual, and you could miss something. However, in practice I've found that running under a properly configured 2.6 kernel, this is rare enough that it's not really worth worrying about!) In this particular case, something to watch out for is whether there is an MMU. I think likely there is not. (I haven't checked the linked docs.) You'll probably have to run Linux sans-MMU, meaning that processes can step on each other if you're not careful! However, it's not *that* big a problem. Just makes development slightly more annoying. 
I don't have any information on latency reports for MMU-less Linux. Steve
Re: [linux-audio-dev] [OT] Language fanboys [was Re: light C++ set for WAV]
I'm not so much a specific language fanboy as a languages fanboy. There are so many languages out there that are outside the C, C++, Java and C# bucket that offer features that people in the C/C++/Java/C# camp don't even know about. I agree... Programming languages are amazing tools... just as natural languages affect how we think, programming languages affect how we code. However, to bring this conversation back to a thread from a few weeks ago, I find it interesting, and sometimes frustrating, that most new languages that differ from C/C++ tend to target interpreters and virtual machines. Does anyone know any interesting and powerful languages that can be used just like C? That can link to C libraries, and can be compiled to native machine code, and can express the same low-level concepts as C, but in a more powerful and intuitive way? In short, does anyone know any languages other than C and C++ that would be interesting for audio programming? This list has made me aware of FAUST and some other interesting examples of meta-languages that compile to C code. I do find this interesting, but I would like a more common ground: something that can be used in a more general-purpose way (like C), but is still useful for audio, realtime programming, and maybe even operating systems (like C). I'm not one to argue against C or C++ actually, but having experienced Python and other high-level languages, I find myself wanting to use such a syntax for natively compiled code. I suppose that one could argue that a lot of the power of these interpreted languages comes from the fact that they are often dynamically and loosely typed, which is much more difficult to express in optimized, compiled code. It's precisely the strong typing and well-defined memory usage that makes C useful for things like operating systems and realtime programming. I do understand that. 
I am only suggesting that maybe there is some middle-ground between the likes of C and Python, that happens to not be C++. Anyone? I have often wondered what I might do if I tried to design such a language, but I think it's just too big a task. (For now anyways.) And I would hate to re-invent the wheel yet again. Steve
Re: [linux-audio-dev] memory-mapped wav files
when asked about it, linus said i know and i intend to keep it that way (paraphrasing). ah. i take it it's not a good idea then.. ;-) thanks for the answers, they were informative. steve
Re: [linux-audio-dev] modular sequencing environment/synth // any projects to dig in?
ah, cool. just curious, what is your dev environment then? (distro, etc.) i might be interested in contributing some code eventually... (partly cause i was once considering re-writing pd from scratch as well, but decided it was too big a project for the amount of time i have right now..) steve On 7/8/06, Tim Blechmann [EMAIL PROTECTED] wrote: I only took a few minutes to try it, I downloaded the source, but I think the version of SCons in my Ubuntu (Dapper) system wasn't new enough to work with the build script... In any case I played around but couldn't get it to compile. pnpd requires a pretty recent version of scons to build ... however, even if you compile it, don't expect too much ... it will only play a test sine wave :) tim -- [EMAIL PROTECTED] ICQ: 96771783 http://www.mokabar.tk I had nothing to offer anybody except my own confusion Jack Kerouac
Re: [linux-audio-dev] modular sequencing environment/synth // any projects to dig in?
I also hadn't heard of pnpd... Sounds really interesting. I only took a few minutes to try it, I downloaded the source, but I think the version of SCons in my Ubuntu (Dapper) system wasn't new enough to work with the build script... In any case I played around but couldn't get it to compile. Steve On 7/7/06, jaromil [EMAIL PROTECTED] wrote: hi, On Wed, Jul 05, 2006 at 09:05:36PM +0200, Niklas Klügel wrote: wow, thanks! I am going to look into the source of the projects the next days, currently pnpd triggers most of my attention. indeed, an interesting list! are there any plans to make pnpd compatible with pd patches? that would be really good IMHO. at least *some* layer of compatibility that could support documentation and examples. ciao -- jaromil, dyne.org rasta coder, http://rastasoft.org
Re: [linux-audio-dev] Envelopes
You might want to check out the STK. It has an object called Asymp, which can generate simple exponential envelopes, and also an ADSR object. It also has tons of other goodies. As a bonus, it's cross-platform. http://ccrma.stanford.edu/software/stk/ Here's the class list: http://ccrma.stanford.edu/software/stk/hierarchy.html Steve On 6/29/06, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote: Hi peeps. I've been writing an app that requires volume envelopes. I've implemented the envelope part myself, but I was wondering if there was a library about to do it. I ask because I realised that probably everyone on this list has written something that uses envelopes, and probably written it better and with more features than I have. More generally, is there a library/toolkit of audio bits and bobs about? James
Re: [linux-audio-dev] Envelopes
check out the STK. I don't think this is free software, btw. They aren't too specific about the license, but I think it's public domain. I should ask Gary to be more clear about that on the site... Anyways, it is included as a package in Debian (libstk0c2a), and they are one of the best judges of software freedom I know of. Here's the copyright file: http://packages.debian.org/changelogs/pool/main/s/stk/stk_4.2.0-9/libstk0c2a.copyright - Debianized by Guenter Geiger (Debian/GNU) [EMAIL PROTECTED] on Thu, 22 Apr 2004 10:55:08 +0200. http://ccrma.stanford.edu/software/stk/ Authors: Perry Cook and Gary P. Scavone Copyright: This software was designed and created to be made publicly available for free, primarily for academic purposes, so if you use it, pass it on with this documentation, and for free. --- But you're right in the sense that they aren't specific in their license about derivative works, which is an important part of the GPL. Steve