Re: [LAD] Jitter analysis
On Sun, Sep 17, 2017 at 7:17 AM, benravin wrote:
[...]
> is indeed a slow varying timing jitter, for example every 400ms, the timing
[...]

Context and environment...? Is there by any chance sample rate conversion going on somewhere? (Hardware or software; usually behaves in about the same manner.)

Since buffer sizes in most environments need to stay fixed, and usually also have further restrictions, this tends to affect buffer/callback timing. As a result, input-to-process and process-to-output latencies drift over time, and pop back (buffer drop, or extra buffer) on a regular basis.

--
//David Olofson - Consultant, Developer, Artist, Open Source Advocate
.--- Games, examples, libraries, scripting, sound, music, graphics ---.
| http://consulting.olofson.net http://olofsonarcade.com |
'-'
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev
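To make that drift-and-pop mechanism concrete, here is a toy simulation (plain Python; the function name and all clock/buffer figures are my own illustrative assumptions, not numbers from the thread). With fixed 256-frame buffers and slightly mismatched input/output clocks, the input queued ahead of the converter grows by a fraction of a buffer every period, until a whole buffer's worth has accumulated and the latency pops back:

```python
# Toy model of latency drift under sample rate conversion with fixed buffers.
# Illustrative numbers only: input clock 44100 Hz, output clock 44056 Hz,
# fixed 256-frame buffers. Each output period, the resampler consumes
# slightly more than one buffer of input, so the input backlog (and thus
# the latency) creeps up until a whole extra buffer has accumulated --
# at which point a buffer is dropped and the latency "pops back".

def drift_events(fs_in=44100.0, fs_out=44056.0, frames=256, periods=20000):
    """Return the output-period indices at which a buffer drop occurs."""
    backlog = 0.0  # input frames queued beyond one buffer
    events = []
    # Extra input frames consumed per output period:
    per_period = frames * fs_in / fs_out - frames
    for n in range(periods):
        backlog += per_period
        if backlog >= frames:   # a whole extra buffer accumulated
            backlog -= frames   # drop it: latency pops back
            events.append(n)
    return events

ev = drift_events()
```

With these (made-up) clocks, the pop-back recurs every thousand periods or so; a slow, regular pattern of the kind described in the thread.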
Re: [LAD] Open Source to be or not to be?
On Mon, Jun 30, 2014 at 11:58 PM, Fons Adriaensen wrote:
[...]
> And what's the point of running a concert hall reverb in a web
> browser ?
[...]

I don't know what *this* particular thing is all about, but generally speaking, the main point of running stuff in the browser is that the exact same build works on any operating system, any CPU and (in a perfect world) any non-ancient standards compliant browser. It's a platform where an easy-to-optimize subset of JavaScript is your target CPU architecture, and the browser APIs serve as your interface to the hardware and the outside world.

It's potentially relevant to pretty much anything people actually want to use on a computer or similar device...

As to Google and all that, it's no different from distributing binaries for any other platform. These binaries just happen to run on most of them out of the box, requiring no extra VMs, guest OSes, emulators or anything.

If you don't like any random company making money off of it, you use a license that doesn't allow that. If you have nightmares about piracy, you're still free to fuck your paying customers over with DRM that the freeloaders will be happily unaware of. Nothing new there. You just reach many more users with less effort.

I don't see any (new) problems here, really. But, perspectives... I develop games and stuff - not embedded realtime applic... Actually, I do that too. :-)
[LAD] ANN: Audiality 2 now on GitHub
Audiality 2 is now hosted on GitHub! Figured it's about time to make an announcement, now that it's been on there for a few weeks... ;-)

Overview:
Audiality 2 is a realtime audio and music engine, primarily intended for video games. While it supports traditional sample playback as well as additive, subtractive and granular synthesis, the distinctive feature is subsample accurate realtime scripting.

Some of the changes since 1.9.0:
* Subvoice spawn and event timing fixed - now truly subsample accurate! (For granular synthesis etc.)
* Added generic stream API for wave uploading, audio streaming etc.
* 'filter12' no longer blows up at high cutoffs.
* 'filter12' supports 2 channels/dual channel mode.
* More accurate pitch control in 'wtosc'.
* More logical unit autowiring: Now wires outputs to any inputs down the chain by default; not just the next unit.
* 'run' statement removed.
* Comma as a statement delimiter now deprecated.

Next few upcoming changes:
* Command line player.
* Boolean and comparison operators.
* Normalize, crossfade and reverse mix processing for wave uploads.
* Render-to-wave, for creating complex waves.
* Buffered taps/inserts, for easy implementation of GUI oscilloscopes and the like without realtime callbacks.

Official site: http://audiality.org/
GitHub: https://github.com/olofson/audiality2
Related; Kobo II site: http://kobo2.net/
[LAD] ANN: Audiality 2 1.9.0 - Refactored, renamed, rearmed!
Audiality 2 1.9.0 - Refactored, renamed, rearmed!

"Audiality 2 is a realtime audio and music engine, primarily intended for video games. While it supports traditional sample playback as well as additive, subtractive and granular synthesis, the distinctive feature is subsample accurate realtime scripting."

Audiality 2 (previously released as ChipSound) is used for sound effects and music in the game Kobo II. The name originates from an old structured audio and sampleplayer engine, originally developed as part of the XKobo port Kobo Deluxe. The old engine is no longer maintained, so the new one, which has similar goals but much greater potential, is now inheriting the name.

Key features:
* Microthreaded realtime scripting engine
* Modular voice structures
* Subsample accurate timing
* Designed for hard realtime applications
* No hardwired voice/channel/bus structures
* No hardwired "MIDI-isms" or similar
* No hardwired instruments or songs - only programs
* Lock-free timestamping C API
* Audio I/O drivers:
  * SDL audio
  * JACK
  * Application provided drivers
* System drivers:
  * libc malloc (soft RT)
  * Application provided drivers
* Implemented in portable C
* zlib license

Official site: http://audiality.org/
Direct download: http://audiality.org/download/Audiality2-1.9.0.tar.bz2
Related; Kobo II site: http://kobo2.net/
Re: [LAD] [LAU] Linux Audio 2012: Is Linux Audio moving forward?
On Friday 12 October 2012, at 17.41.38, Nils Gey wrote:
[...]
> > > make more music
> > > make it public
> > > make other people want to use the same tools as you
[...]
> > On that note, some stuff I've done for one of my current projects, Kobo
> > II; chip themed music and sound effects:
> > http://soundcloud.com/david-olofson
> >
> > No proper home yet, but the latest release as of now is found here:
> > http://olofsonarcade.com/2012/03/13/chipsound-0-1-0-released-zlib-license/
>
> It worked! I want to use the same tools as you.
> I have searched for something like this for a long time.
> Downloading the source right now...

Awesome! :-D

Well, it's still an inhouse tool in development, so the documentation is incomplete and might not be up to date. Also, the JACK support isn't in that release, in case you're looking for that.

I'm going to set up a proper web site for it shortly, with some documentation and examples. At this rate I'm going to need it myself, as I'm forgetting details between the times I do some proper work with it! ;-)
Re: [LAD] [LAU] Linux Audio 2012: Is Linux Audio moving forward?
On Friday 12 October 2012, at 10.27.39, Nils Gey wrote:
[...]
> make more music
> make it public
> make other people want to use the same tools as you
[...]

On that note, some stuff I've done for one of my current projects, Kobo II; chip themed music and sound effects:
http://soundcloud.com/david-olofson

My focus shifted away from music many years ago, and I've more or less been out of the loop ever since The Great API Discussions. (JACK, LADSPA, GMPI, XAP etc.) These days, I'm running my own business, and since part of that is developing games, I'm kind of getting back into music again. However, I'm pretty much exclusively using weird custom tools (as always!), so I'm not sure I can contribute much to The Cause anyway, I'm afraid...

The tracks above are all realtime synthesis on a custom engine, ChipSound, using geometric waveforms and noise only. It's a very simplistic synth from the DSP point of view, but it's driven by a per-voice microthreaded realtime scripting engine, which is how it can still produce somewhat interesting sounds. No pre-rendered waveforms, filters or anything so far, but there's off-line rendering, modular voices and stuff in my development tree.

All sounds and music coded in a standard code editor (KDE Kate) so far, but I'm planning on throwing the MIDI master keyboard in the mix later on.

No proper home yet, but the latest release as of now is found here:
http://olofsonarcade.com/2012/03/13/chipsound-0-1-0-released-zlib-license/

Of course, I'm still developing and running everything on Linux! The ChipSound development tree has JACK support (too many issues with the SDL->PulseAudio->JACK->ALSA stack), and I'm using mhWaveEdit and JAMin for recording and mastering the demo tracks.
[LAD] ANN: ChipSound 0.1.0 under zlib license
ChipSound 0.1.0 under zlib license

"I've decided to officially release the engines (scripting, physics, sound and 2.5D rendering) behind Kobo II as Free/Open Source, in order to make the game available to basically anyone with OpenGL and a C compiler. [...] First out is ChipSound, which is now under the zlib license. [...]"

NOTE: ChipSound currently depends on SDL for audio I/O.

Full story: http://olofsonarcade.com/2012/03/13/chipsound-0-1-0-released-zlib-license/
Direct download: http://www.olofson.net/download/ChipSound-0.1.0.tar.bz2
Related; Kobo II site: http://kobo2.net/
Re: [LAD] "bleeding edge html5" has interesting Audio APIs
On Monday 21 November 2011, at 02.44.00, Giuseppe Zompatori wrote:
> Wrong.
> Android has the highest smart phone market share (Samsung alone became the
> largest smart phone producer in the world). It's recent news.

...but, that's not really Java, is it? Different VM (Dalvik) with its own form of bytecode.

Also, Dalvik doesn't (yet) have JIT, so you'll want to use native code for anything seriously demanding on those devices anyway. Doesn't really matter, as you have to compile specific Android builds regardless.
Re: [LAD] RAUL?
[...] the "big guys"; If the game is great, some "pirates" will convert into customers. (Other statistics show that the kind of people that pirate anything that's released also buy a lot of games. They basically just want uncrippled "demos" to make informed decisions. $50+ price tags probably amplify this, as that's just too much for taking chances.)

However, if the game sucks, widely available cracks warn would-be buyers, rendering devious strategies like refusing to provide demos, threatening review sites and whatnot, ineffective.
Re: [LAD] RAUL?
On Wednesday 16 November 2011, at 10.57.47, Louigi Verona wrote:
> Thanks for replying.
> Allow me to comment on a few things.
>
> "The concept of property just is artificial in general."
>
> All ideas and concepts are artificial in a way, however the concepts of
> property are based on an inescapable property of things
> to be scarce.
[...]

I don't know about other people here, but I've only got 24 hours a day, some of which are lost to sleep and other activities, and I've most likely got only a few decades left to live. Time is a very scarce resource indeed. Unfortunately, I've "wasted" most of my life so far on programming, music and various other things that are hard enough to make a living off of regardless of laws and other tools.

Selling anything is really just "suggesting" that people pay for it, as only a few percent of the consumers will anyway. That part is not really a problem, though! The *actual* market is only those few percent anyway, the rest consisting mostly of people that would just not bother if they couldn't get it for free. More importantly, that last group gives you free distribution and marketing!

So, from that point of view, copyright law is pretty much useless anyway. It's fighting a hopeless battle against the very nature of these things, and trying to enforce it is anything from pointless to devastatingly counterproductive. So, it could definitely be argued that copyright law is irrelevant in this context.

However, if just any business was legally allowed to take anyone's "intellectual property" and make money off of it, paying no royalties or anything, that would be a problem. How would one prevent that without copyright law or similar tools? Never releasing any source code? Never releasing anything at all? (No significant difference in this context. Comments are mostly garbage anyway, and machine code is just another language.)

As to motivation, well sure, there will always be people doing all sorts of stuff just for fun, self-education etc - or because they just need it to get some job done. Unfortunately, in the case of music, video games and various other things, the interesting part of making a polished, thoroughly enjoyable and/or useful product is generally only some 10% of the work. The rest is just hard, boring, frustrating work that will rarely ever get done without some other motivation than the work itself. It's not the kind of work that attracts contributors to a Free/Open Source project either, in cases where that is even applicable.

One could argue that "entertainment products" aren't really necessary anyway, so it wouldn't matter if people stopped making them. By that logic however, if you don't need the products, you don't need to pay for them either, so where's the problem...?

(Personally, I'll never rely on a proprietary, closed source engine or development tool ever again if I can help it, but that's a different situation altogether.)
Re: [LAD] Conciderations of Design
On Friday 11 November 2011, at 23.19.44, harryhaa...@gmail.com wrote:
> Hi All,
>
> Recent thoughts of mine include changing the "direction" of operations in
> real-time program design:
> Eg: Don't call a set() on a Filter class to tell it its cutoff, every time
> there's a change,
> but make it ask a State class what its cutoff should be every time it runs.

There are issues with both methods, depending on what you want to do.

Function calls add overhead that can become significant if you're doing very frequent parameter changes.

Polling, as I understand the latter approach to be, might be a great idea if you're only reading the parameter once per "block" of processing. You'll need to get that number from *somewhere* no matter what; be it a private closure, or some more public structure. However, if there are expensive calculations between that parameter and what you actually need inside the DSP loop, it might not be all that great. Things get even worse if you want to handle parameter changes with sample accurate timing. (Function calls can handle that just fine; just add a timestamp argument, and have plugins/units handle that internally in whatever way is appropriate.)

Some sort of event queue can offer some of the advantages of both of these, if designed properly. If events are delivered in timestamp order (either by system design or by means of priority queues or similar), processing them becomes very efficient, and scalable to large numbers of "control targets"; you only have to check the timestamp of the next event, then process audio until you get there, and you only ever consider changes that actually occurred - no polling.

All that said, how far do you need to take it? Unless you're going to throw tens of thousands of parameter changes at your units while processing, this overhead may not be as significant as one might think at first. It might be a better idea to focus on features and interfaces first. Remember, premature optimization is the root of all evil... :-)

> Along the same lines, say I have a single linked list of AudioElements, and
> the 3rd element needs info, should
> it request it, be told it, or have some other system to inform it of
> events?

I tend to go with "connection" logic and some sort of direct references when designing that sort of thing - but again, that depends on the application and usage patterns you're designing for.

For example, in a physics engine (game or simulation), you have potentially hundreds or even thousands of bodies moving around, and you have to rely on spatial partitioning of some sort to figure out which bodies *can* potentially collide within the time frame currently being evaluated. In a naïve design, you essentially have to check every body against every other body, every single frame, and that... doesn't scale very well at all. :-)

As a more relevant (I think) extreme in the other direction, we have musical synthesis systems: hundreds or even thousands of units processing audio in various ways (I'm thinking modular synthesis here, obviously - no way any sane person would use that many units otherwise... I think ;-) - but you won't normally see random communication between arbitrary units! What you will normally have is a number of (relatively) long "conversations" between units, usually best abstracted as some sort of persistent connections. Obviously, this saves a lot of time, as there is no overhead for looking units up, except possibly when making new connections. (Probably no need for that if you wire things as you build the graph.)

> I'm seeing downsides to each approach:
> 1: Tell it on every change -> performance hit

How are you going to avoid that anyway? Even if you do want to filter high frequency control data down, you'll need to deal with all the data there is, or risk "random" behavior due to aliasing distortion. (Like downsampling audio without filtering or interpolation.)

Or, if you're going to use a lot of potentially high frequency control data, why not use audio rate control ports? Or some sort of hybrid, allowing you to switch as needed - but that quickly becomes a complexity explosion...

> 2: Request it every time it runs -> Keeping control over the many values &
> unique ID's of class instances

Well, if designed properly, this should scale with the graph. Basically, each connectable entity should only ever need to know what it's connected to - if even that. (See LADSPA ports.) Also, keeping such connection state data along with the state data of units might be a good idea performance wise, as it can make memory access more cache friendly.

But of course, the ultimate answer to all such questions is: benchmarking! Though having a rough idea about how modern hardware works can help
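As a sketch of the timestamp-ordered event queue idea described above (plain Python, all names invented; a real engine would use a lock-free queue rather than heapq): the renderer peeks at the next event's timestamp, renders audio up to that frame, and applies the change exactly there, so cost scales with the number of actual changes rather than with the number of control targets.

```python
import heapq

class Voice:
    """Toy unit with one 'cutoff' parameter; 'renders' by counting frames."""
    def __init__(self):
        self.cutoff = 1000.0
        self.rendered = 0

    def render(self, frames):
        # Real DSP would go here; we just tally the frames processed.
        self.rendered += frames

def process_block(voice, events, block_frames):
    """Render one block, applying timestamped (frame, param, value) events
    with sample accuracy. 'events' is a heap ordered by frame time."""
    pos = 0
    while pos < block_frames:
        # Peek at the next event, if it falls inside this block:
        if events and events[0][0] < block_frames:
            next_t = events[0][0]
        else:
            next_t = block_frames
        voice.render(next_t - pos)          # audio up to the event
        pos = next_t
        while events and events[0][0] == pos:
            _, param, value = heapq.heappop(events)
            setattr(voice, param, value)    # applied exactly on time

v = Voice()
ev = [(0, "cutoff", 500.0), (100, "cutoff", 800.0), (100, "cutoff", 900.0)]
heapq.heapify(ev)
process_block(v, ev, 256)
# v.rendered == 256; v.cutoff == 900.0 (last change at frame 100 wins)
```

Note that nothing is polled: between events, the DSP loop runs uninterrupted, and events for later blocks simply stay queued.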
Re: [LAD] Kobo II: Another song WIP
Hello!

Thanks! I'm basically thinking chiptune sound and methods - but I'm using 50-150 voices rather than the usual 3-8 or so. Sort of like what the old grand masters of game music might have done if the C64 had had 16 SID chips instead of one. :-)

It's an interesting exercise, trying to create a "full" sound (also including sound effects for the game) using only the built-in waveforms and noise. Sticking with that might make for an interesting item in the Tech Trivia section about the game as well. :-D

I was planning on adding a resonant filter early on, but I have yet to come to the point of actually needing it...! Being able to actually script your own waveforms (ie switching waveforms and modulating oscillator parameters at different points during each period) covers a whole lot of things that would normally require specific features in the synth engine. Sure, one could always phatten things up with a 24 dB resonant LPF - but then again, that would just sound like all the other five billion virtual analog synths out there...! ;-)

I'll get to samples, filters, effects and all that eventually, but I'm actually more interested in running ChipSound programs off-line to render complex waveforms that are then played by other programs in real time. For example, for the strings/pad sounds, I use a dozen or two "nervously" modulated saw waves. Imagine using a dozen voices, each one playing a pre-rendered loop of saw banks like that. I was planning on using IFFT synthesis for that kind of sound (like strings that actually sound real), but I don't really see much need for that now...

As to the control, that's pretty much defined by how you implement your sounds. The single-argument "Velocity" and "Pitch" entry points are just a convenient "standard" I've been using so far.

As of now ("arbitrary" implementation limits, to keep things simple, small and fast), you can have up to 8 entry points (0 being the main program launched when starting a new voice), and each entry point can take up to 8 arguments. What these entry points do is entirely defined by the code; nothing hardwired there.

Think of ChipSound as a huge pile of sound chips, each with an MCU running user defined code, but without the noise, artifacts and timing issues. :-)

Anyway, thanks for your response! Hopefully, I'll get around to updating the documentation and releasing that thing soon - but now, time to get that game out the door and see what happens! :-)

Regards,
//David

On Sunday 06 November 2011, at 20.40.22, Julien Claassen wrote:
> Hello David!
> This sounds really old-school 80s. The language looks a little
> restrictive in its control, but more than sufficient for this type of
> sound. Nice work! Thanks for sharing this!
> Warm regards
> Julien
>
> =-=-=-=-=-=-=-=-=-=-=-=-
> Such Is Life: Very Intensely Adorable;
> Free And Jubilating Amazement Revels, Dancing On - FLOWERS!
>
> == Find my music at ==
> http://juliencoder.de/nama/music.html
> .........
> "If you live to be 100, I hope I live to be 100 minus 1 day,
> so I never have to live without you." (Winnie the Pooh)
[LAD] Kobo II: Another song WIP
Figured this might be of some interest to someone around here - and either way, it's all done on Linux, and it will be released on Linux. ;-) (Original announcement post at the end!)

The whole game engine will probably go Free/Open Source eventually; older versions of parts of it already are. The synth engine will be JACKified and open sourced as soon as I get around to it! Going to support JACK in the game as well, as I use it on my devsystem all the time anyway.

No idea if anyone will ever understand or care for this strange beast of a sound engine, but anyway... :-D

For your amusement, here's the "lead synth" used for the theme and some other melodic features in the song:

CuteSaw(P V=1)
{
    !P0 sp, +P sp, w saw, p P, a 0, !er .05, !vib 0, !tvib .01
.rt wg (V + a - .001)
    {
        sp (vib * 6 + rand .01)
        12 { -sp vib, +p (P - p * .8), +a (V - a * er), d 5 }
        12 { +sp vib, +p (P - p * .8), +a (V - a * er), d 5 }
        +vib (tvib - vib * .1)
    }
    a 0, d 5

1(NV)
    {
        V NV
        if NV { vib .005, tvib .005, er .05, wake rt }
        else { tvib .02, er .02 }
    }

2(NP)
    {
        P (P0 + NP), p P
    }
}

Yeah, I was in a neurotically minimalistic kind of mood when I designed that language... But, it Works For Me(TM)! Less typing ==> quicker editing. ;-) (The original version of ChipSound, with a more assembly-like scripting language, was less than 2000 lines of C code. It's slightly below 4500 lines now, compiler included.)

When playing a note, a voice with its own VM is started, and set to run this script. The VM runs in unison with the voice, alternating between audio processing and code execution. Thus, timing is sub-sample accurate, allowing the implementation of hardsync, granular synthesis and the like without specific engine support. Timing commands can deal in milliseconds or musical ticks, making it easy to implement rhythm effects, or even to write the music in the same language, as I've done here.

Voices (microthreads) are arranged in a tree structure, where each voice can spawn any number of sub-voices it needs. Messages can be sent to these voices (broadcast or single voice), allowing full real time control.

Oscillators currently available are wavetable (mipmapping, Hermite interpolating, arbitrary waveform length sampleplayers) and "SID style" S&H noise. It's possible to use arbitrary sampled sounds and waveforms, but so far I've only been using the pre-defined sine, triangle, saw and square waveforms, and the noise generator.

- Kobo II: Another song WIP -

"Yesterday, I started playing around with a basic drum loop made from sound effects from the game. The usual text editor + ChipSound exercise, using only basic waveforms. I sort of got caught in the groove, and came up with this: [...]"

Full story: http://olofsonarcade.com/2011/11/06/kobo-ii-another-song-wip/
Direct download: http://www.olofson.net/music/K2Epilogue.mp3
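For the curious, the "VM runs in unison with the voice" idea above can be caricatured with Python generators standing in for ChipSound's microthreads (all names here are my own, not the engine's): the script yields the number of frames to render before its next instruction, so script timing is expressed in frames rather than in callback-sized blocks.

```python
def script(voice):
    """A toy voice program: set pitch, wait, bend, stop.
    Each yield says how many frames of audio to render before resuming."""
    voice["pitch"] = 440.0
    yield 100                 # render 100 frames at 440 Hz
    voice["pitch"] = 880.0
    yield 50                  # then 50 frames at 880 Hz
    voice["on"] = False       # then stop

def run_voice(program):
    """Alternate between executing script code and 'rendering' audio,
    recording (start_frame, frames, pitch) segments instead of samples."""
    voice = {"pitch": 0.0, "on": True}
    timeline = []
    pos = 0
    for frames in program(voice):   # resume script, get frames to render
        timeline.append((pos, frames, voice["pitch"]))
        pos += frames
    return voice, timeline

voice, tl = run_voice(script)
# tl == [(0, 100, 440.0), (100, 50, 880.0)]; voice["on"] is False
```

Since the scheduler decides exactly how many frames to process before the script runs again, a real engine built this way can honor script timing at (sub)sample granularity regardless of the audio callback's block size.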
Re: [LAD] Internally representing pitch : A new approach
On Thursday 06 October 2011, at 05.43.20, Jens M Andreasen wrote:
> On Tue, 2011-10-04 at 09:19 +1300, Jeff McClintock wrote:
> > * Support the concept of re-triggering a voice that's already playing,
> > this is important for any percussive instrument. E.g. hitting a cymbal
> > twice in quick succession should not trigger the sound of two cymbals
> > playing together.
>
> Well, actually it should. The waves generated by the first strike are
> unaware of the waves generated by the second strike and will pass
> through them as if they did not exist. Compare to dropping two pebbles
> in a bucket of water.

That is true for a linear system - but is a cymbal linear...?

Either way, I don't think re-trig vs new voice is all that important for that kind of sound. The major problem is that the human ear is annoyingly good at finding patterns in apparent randomness, so it's probably more worthwhile to focus on dynamics and variations, to eliminate that retro sampler feel.

However, consider a single string on a guitar. There, you *specifically* want each new note to kill any previous note on that string, or it just won't sound anything like a guitar. Same deal with mono synth style sounds and whatnot.

Re-trig, continuous pitch and that sort of thing are all about being able to handle different kinds of sounds and play styles without cumbersome hacks and workarounds. Thinking about what you can do with your fingers acting directly on various physical objects, and trying to express that information digitally, might be a good start. More "mechanical" instruments just limit that freedom a bit.
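A minimal sketch of that per-string voice-stealing policy (plain Python; all names invented): notes carry an optional exclusion group - the "string" - and a new note in the same group kills the old one, while ungrouped notes, like the cymbal hits discussed above, are free to pile up.

```python
class Synth:
    """Toy voice allocator demonstrating per-group note stealing."""
    def __init__(self):
        self.active = []              # list of (group, note) voices

    def note_on(self, note, group=None):
        if group is not None:
            # A new note on the same "string" silences the previous one:
            self.active = [v for v in self.active if v[0] != group]
        self.active.append((group, note))

s = Synth()
s.note_on(60, group="string3")
s.note_on(62, group="string3")   # kills the 60 - guitar-string behavior
s.note_on(49)                    # cymbal hit
s.note_on(49)                    # second hit overlaps the first
# s.active == [("string3", 62), (None, 49), (None, 49)]
```

The point is that "re-trig vs. new voice" is a property of the sound, not of the engine: the same allocator serves both by letting each instrument choose its grouping.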
Re: [LAD] realtime kernel based on linux-3.0-rc7
On Thursday 21 July 2011, at 00.36.55, Philipp Überbacher wrote:
[...]
> > ..and this latency plot is stunning:
> > https://www.osadl.org/Latency-plot-of-system-in-rack-4-slot.qa-latencyplot-r4s6.0.html?latencies=&showno=&slider=57
[...]
> The plot really does look stunning, strangely (?) not on other machines.
> https://www.osadl.org/Latency-plot-of-system-in-rack-4-slot.qa-latencyplot-r4s7.0.html
> https://www.osadl.org/Latency-plot-of-system-in-rack-4-slot.qa-latencyplot-r4s8.0.html
>
> No idea what those plots tell about real world usage. It's good to get
> another set of patches though.

In terms of worst case figures, these plots look a bit like what I've seen with RT-Linux and RTAI on various hardware (PII/III workstations via Geode SBCs through Intel Core based Celerons on industrial Mini-ITX boards), though with "true" RT kernels, one tends to get a lot of very low latency points, and only the occasional peak.

SBCs with low-power CPUs (Geode and the like) tend to perform a lot worse than "proper" laptop and desktop CPUs. Memory and/or cache bandwidth issues, maybe?
Re: [LAD] a *simple* ring buffer, comments pls?
On Monday 11 July 2011, at 22.32.08, James Morris wrote:
> On 11 July 2011 20:19, Olivier Guilyardi wrote:
> > Good catch... Multi-core ARM devices are actually arriving massively.
> > With Android, there's the Motorola Atrix, the Samsung Galaxy S II, etc..
>
> What about my toaster? :-P
>
> I've ended up going back to Fons's pragmatism. If
> non-blocking/lock-free programming is so impossibly difficult,
> requiring intimate hardware knowledge of numerous different
> architectures then there's only one solution available to people like
> me, and that's to code for AMD64/Intel and use the existing ringbuffer
> implementations.

Also, if/when it's time to port, find the code and/or information you need at the time, and test thoroughly on the actual hardware. These things can usually be done on "anything", one way or another, more or less efficiently.

The only thing that's worse than missing support for some new platform is "support" that doesn't actually work properly. Lots of debugging fun to be had that way... :-)
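For completeness, here is the textbook single-producer/single-consumer ring buffer the thread is about, sketched in Python (where the hard part - memory ordering between two cores - simply does not arise; a portable C version would need exactly the atomics and barriers discussed above):

```python
class RingBuffer:
    """SPSC ring buffer sketch. Only the producer advances 'w' and only
    the consumer advances 'r', so each index has a single writer -- the
    property that makes the lock-free C versions possible, given correct
    memory ordering. Indices grow monotonically; the mask wraps them."""
    def __init__(self, size=16):
        assert size & (size - 1) == 0, "size must be a power of two"
        self.buf = [None] * size
        self.mask = size - 1
        self.r = 0                        # consumer index
        self.w = 0                        # producer index

    def write(self, item):
        if self.w - self.r > self.mask:   # full: size items in flight
            return False
        self.buf[self.w & self.mask] = item
        self.w += 1                       # "publish" after storing data
        return True

    def read(self):
        if self.r == self.w:              # empty
            return None
        item = self.buf[self.r & self.mask]
        self.r += 1
        return item

rb = RingBuffer(4)
for i in range(5):
    rb.write(i)                           # fifth write fails: buffer full
# subsequent reads return 0, 1, 2, 3, then None
```

In C, the order of the two stores in write() (data first, then the index) is precisely what has to be enforced with a write barrier or release-store on weakly ordered architectures - which is the portability problem being debated.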
Re: [LAD] [LAU] FW: Frequency Space Editors in Linux
On Tuesday 12 April 2011, at 09.16.36, Philipp Überbacher wrote: [...] > > If your bike has a flat tyre, there's a good chance it'll stay flat > > until you fix it. > > > > You either need to break out some code, or some cash. > > > > Gordon MM0YEQ > > Does someone have some code to fix bikes? The symptoms are flat tires, > shifty gears and funky noises. Unfortunately, the closest I have is the custom dashboard software I hacked for my race car. :-) -- //David Olofson - Consultant, Developer, Artist, Open Source Advocate .--- Games, examples, libraries, scripting, sound, music, graphics ---. | http://consulting.olofson.net http://olofsonarcade.com | '-' ___ Linux-audio-dev mailing list Linux-audio-dev@lists.linuxaudio.org http://lists.linuxaudio.org/listinfo/linux-audio-dev
Re: [LAD] the future of display/drawing models (Was: Re: [ANN] IR: LV2 Convolution Reverb)
> A workstation is not about 3D gaming but about making some work. 2D rendering is just a subset of 3D... Where does "3D" start? Non-rectangular blits? Scaling? Rotation? That's all basic 2D stuff, and a 3D accelerator doesn't add much more than z-buffering and perspective-correct transforms on that level - and you can disable that to save the occasional GPU cycle. > 3D cards are very hungry for electricity, and they will be an overkill for > anyone that is not working on some kind of 3D development. The > electricity providers will certainly like them very much, but my wallet > and the environment don't like them. A CPU consumes a lot more power doing the same job. Actually, it can't even do the job properly unless you want it to take forever, and it'll still be much slower. Dedicated hardware makes all the difference in the world when it comes to efficiency. Sure, my (now "ancient") rig burns around 1000 W when working at full speed - but that means blending transformed textures all over my 2560x1600 screen at a few THOUSAND frames per second! Obviously, it only takes a tiny, tiny fraction of that power to get your average GUI application to feel responsive. And, unless you're scrolling or somehow animating major areas of the screen all the time, you don't even need that. > So, I think than a complete discussion on that matter should include > the hardware part, that is how to make power and computational > efficient 2D video cards. I don't see how one could realistically design anything that'll come close to a down-clocked low end 3D accelerator in power efficiency. What are you going to remove, or implement more efficiently...? Also, 3D accelerators are incredibly complex beasts, with ditto drivers. (Partly because of the many very clever optimizations that both save power and increase performance!) But, hardcore gamers and other power users need or want them, so they get developed no matter how insanely overkill and pointless they may seem. 
As a result, slightly downscaled versions of that technology are available dirt cheap to everyone. Why not just use it and be done with it? -- //David Olofson - Consultant, Developer, Artist, Open Source Advocate .--- Games, examples, libraries, scripting, sound, music, graphics ---. | http://consulting.olofson.net http://olofsonarcade.com | '-' ___ Linux-audio-dev mailing list Linux-audio-dev@lists.linuxaudio.org http://lists.linuxaudio.org/listinfo/linux-audio-dev
Re: [LAD] Allocating 96 but reading 16 (multple times)
On Wednesday 16 February 2011, at 00.33.49, Jens M Andreasen wrote: [...] > The same effect can be observed with 64 frames and 16 appears to be the > optimal value in the loop. [...] Too quick to post there! This could mean that you simply have too little buffering between input and output. Does your application actually define that? (With read()/write() APIs, you usually have to do this by "pre-buffering" on the output before entering the actual processing loop.) The reason why it works better with 16 frames is probably that the driver starts by waiting for your write()s to fill up the output buffer, thus adding one DMA buffer's worth of extra buffering between the input and output. -- //David Olofson - Consultant, Developer, Artist, Open Source Advocate .--- Games, examples, libraries, scripting, sound, music, graphics ---. | http://consulting.olofson.net http://olofsonarcade.com | '-' ___ Linux-audio-dev mailing list Linux-audio-dev@lists.linuxaudio.org http://lists.linuxaudio.org/listinfo/linux-audio-dev
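The pre-buffering point can be illustrated with a toy simulation (purely illustrative — no real audio API, and the names are invented): the "device" consumes one period of output per cycle while the application reads, processes and writes one period. Without priming the output queue before entering the loop, the very first cycle underruns.

```python
# Toy full-duplex simulation (not a real driver model): every cycle,
# the output device consumes PERIOD frames, then the application
# writes the PERIOD frames it just read and processed.

PERIOD = 64

def run(cycles, prebuffer_periods):
    out_queue = PERIOD * prebuffer_periods  # frames of silence primed
    underruns = 0
    for _ in range(cycles):
        out_queue -= PERIOD          # device consumes one period
        if out_queue < 0:
            underruns += 1
            out_queue = 0            # device inserts silence instead
        out_queue += PERIOD          # we write the processed input
    return underruns
```

With `prebuffer_periods=0` the first cycle underruns; priming with one period of silence keeps the queue from ever going dry — which is exactly the extra DMA buffer's worth of buffering described above, just added deliberately instead of by accident.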
Re: [LAD] Allocating 96 but reading 16 (multple times)
On Wednesday 16 February 2011, at 00.33.49, Jens M Andreasen wrote: [...] > (blocking) 96 frames at a time - makes sense, yes? [...] Theoretically, yes, but 96 is a rather odd value in these circumstances. Most drivers will deal only in power-of-two sized buffers, so that would be 64, 128 or something. So, looking at the input side, you have to wait for *at least* 96 frames to come in before read() can wake up, and as most drivers/sound cards have IRQs only for complete DMA buffers, that probably means you wait for something like two 64-frame buffers to arrive before you get your first block. This means you wake up one "buffer cycle" too late! Obviously, at this point, you're in a real hurry to get the output to write(), not to miss the deadline. If you're doing full duplex with minimal buffering, you've already missed it, and your latency will be shifted 64 frames higher than intended. What happens when your code deals with smaller numbers of frames is that you're approaching the "one frame at a time" streaming case, which automatically results in blocking at the right places (that is, on DMA buffer boundaries), only with a lot of overhead. -- //David Olofson - Consultant, Developer, Artist, Open Source Advocate .--- Games, examples, libraries, scripting, sound, music, graphics ---. | http://consulting.olofson.net http://olofsonarcade.com | '-' ___ Linux-audio-dev mailing list Linux-audio-dev@lists.linuxaudio.org http://lists.linuxaudio.org/listinfo/linux-audio-dev
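The arithmetic behind the late wakeup is simple enough to sketch (a model, not any driver's actual code): if IRQs fire only at whole-DMA-buffer boundaries, a blocking read of N frames can't return until the first multiple of the DMA buffer size that covers N has arrived.

```python
import math

# Frames that must arrive before a blocking read() of 'request' frames
# can return, given IRQs only at whole DMA-buffer boundaries.
def frames_until_readable(request, dma_buffer):
    return math.ceil(request / dma_buffer) * dma_buffer
```

So with 64-frame DMA buffers, a 96-frame read first returns after 128 frames — one full 64-frame cycle later than a 64-frame read would, matching the "one buffer cycle too late" observation above.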
Re: [LAD] [OT] Richard Stallman warns against ChromeOS
On Thursday 16 December 2010, at 23.39.21, Ralf Mardorf wrote: [...] > Aaargh, I had an amp from an established company, can't remember > this company, everything was connected by wire-wrapping, so indeed a > good discrete circuit, but the wire-wrapping did cause defects. Btw. I > prefer good old leaded solder, but leaded solder in Germany isn't > allowed anymore. We should start to wire-wrap all electronic devices for > our politicians here :p. Wire-wrapping... *hehe* For my university project (some puny 15 years ago ;-), I decided to have some serious fun, and designed a MIDI synthesizer around a 68HC11, a battery-backed RAM, some EEPROM and three 8580 (newer SID) chips. 700 pins, all wire-wrapped. And it even worked! :-) Actually, it's quite a swift and effective method; it's the cutting and stripping of the wires that's a PITA, unless you have a really good tool - which I didn't at the time! :-D -- //David Olofson - Consultant, Developer, Artist, Open Source Advocate .--- Games, examples, libraries, scripting, sound, music, graphics ---. | http://consulting.olofson.net http://olofsonarcade.com | '-' ___ Linux-audio-dev mailing list Linux-audio-dev@lists.linuxaudio.org http://lists.linuxaudio.org/listinfo/linux-audio-dev
Re: [LAD] [OT] Richard Stallman warns against ChromeOS
On Wednesday 15 December 2010, at 19.56.04, Arnold Krille wrote: [...] > Some months back fbi had to admit that current encryption is to good for > them. After a year of trying they returned a hard-disk (which Mexican > police asked them to decrypt) admitting they couldn't do anything to get > the data... Went through fefe's blog... ...or maybe the files were just truly random noise from an analog source? ;-) ...or the FBI just *said* they couldn't do it, to lull us all into a false sense of security. -- //David Olofson - Consultant, Developer, Artist, Open Source Advocate .--- Games, examples, libraries, scripting, sound, music, graphics ---. | http://consulting.olofson.net http://olofsonarcade.com | '-' ___ Linux-audio-dev mailing list Linux-audio-dev@lists.linuxaudio.org http://lists.linuxaudio.org/listinfo/linux-audio-dev
Re: [LAD] panning thoughts
On Saturday 13 November 2010, at 18.07.22, Philipp Überbacher wrote: [...] > One thing I wonder about is the exact value of the center. I've seen > panning in software between -1 and +1 and a center of +/- 0 where it > made a difference whether it was + or -. That sounds like a bug to me. Incorrect sign special-casing, where the signal is inverted in one of the cases...? Although -0.0f is physically different from 0.0f on many platforms, that sign bit shouldn't affect the results in this case, as the value is still 0. -- //David Olofson - Consultant, Developer, Artist, Open Source Advocate .--- Games, examples, libraries, scripting, sound, music, graphics ---. | http://consulting.olofson.net http://olofsonarcade.com | '-' ___ Linux-audio-dev mailing list Linux-audio-dev@lists.linuxaudio.org http://lists.linuxaudio.org/listinfo/linux-audio-dev
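The signed-zero point can be demonstrated directly (illustrative Python; `pan_gains` is an invented example of a simple linear pan law, not code from any of the software discussed): IEEE 754 negative zero compares equal to positive zero, so a center of -0.0 vs +0.0 must not change any computed sample value — if it does, something is special-casing the sign bit.

```python
import math

# IEEE 754: -0.0 and +0.0 are distinct bit patterns but compare equal.
assert -0.0 == 0.0
assert math.copysign(1.0, -0.0) == -1.0   # the sign bit is still there

# In an ordinary pan-law computation, either zero gives identical gains:
def pan_gains(pos):           # pos in [-1, +1], simple linear law
    return (1.0 - pos) / 2.0, (1.0 + pos) / 2.0
```

`pan_gains(0.0)` and `pan_gains(-0.0)` both come out as `(0.5, 0.5)`, which is why hearing a difference between "+0" and "-0" center points out a bug (an inverted branch, most likely) rather than a floating point subtlety.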
Re: [LAD] What do you do for a living?
Hi, I've spent the last 12-13 years developing software and firmware for lab instruments, and I spent a great deal of that time working on Free/Open Source software that runs on the instruments. Obviously, the generation of instruments I was involved in developing run Linux + RTAI. ;-) However, my former employer is no more (the business side completely broke down as our boss passed away about a year ago), and in the end, nothing came out of the negotiations with the former partners and competitors. Fortunately, I got a small bonus out of it: that Free/Open Source software I worked on (the EEL scripting engine, most importantly) is still mine. So, right now I'm in the process of starting my own business. While waiting for the bureaucracy to get to the point where I can actually start working, I'm working on my new web sites, bringing various projects back to life and stuff like that. I'll be focusing on two main activities: software development consulting services and game development. As to Google, not me... either! That doesn't have much to do with Google in particular, though. It's just that I've been doing this since I was a kid, but I still haven't actually achieved much of what I've wanted to do ever since I started. Well, I did implement my own programming language, which I'm using for various stuff now, but that's about it... As far as game development goes, Kobo Deluxe is about it, and that's not even my own design. That is, it's about bl**dy time to take a step in the right direction...! And just thinking about it while wasting all time and energy working on the "wrong" projects doesn't work. :-) -- //David Olofson - Consultant, Developer, Artist, Open Source Advocate .--- Games, examples, libraries, scripting, sound, music, graphics ---. | http://consulting.olofson.net http://olofsonarcade.com | '-' ___ Linux-audio-dev mailing list Linux-audio-dev@lists.linuxaudio.org http://lists.linuxaudio.org/listinfo/linux-audio-dev
Re: [LAD] on the soft synth midi jitters ...
On Sunday 10 October 2010, at 10.01.09, Ralf Mardorf wrote: > On Sun, 2010-10-10 at 17:59 +1300, Jeff McClintock wrote: > > I do use licensed software. I am quite anti-piracy > > If so, than pardon :). Anyway strange, a lot of the famous studios did > use Cubase without getting jitter for soft synth, today those studios do > use Nuendo. Are they actually using the softsynths for monitor sound when *recording* "live" MIDI? I don't know how most people work these days, but in my experience, one tends to have the MIDI stuff sequenced and arranged already when arriving at the studio, in which case "live" MIDI latency and jitter are non-issues. > I never experienced jitter for soft synth, when using Cubase > and I do hear allegedly inaudible jitter when using external MIDI > devices. There has to be quite a bit of jitter before one actually hears it as such, and as to fixed latency, tolerances are even higher. Most people apparently don't even hear the "random" timing that's applied to anything you play on a hardware synth driven via standard MIDI - but if you're used to oldschool trackers and other software with sample-accurate timing, you can tell something is "off". (Obviously, this would be next to impossible to notice unless we're dealing with 100% quantized electronic music. "Human feel" would probably mask anything that's off by less than one or two ms or so.) As to live playing, I doubt a normal human being would even know what (s)he's missing before actually trying something with sub 3 ms latency and sub 1 ms jitter. You can't hear the difference, but you can certainly feel it! I suspect drummers would be particularly sensitive to this. -- //David Olofson - Developer, Artist, Open Source Advocate .--- Games, examples, libraries, scripting, sound, music, graphics ---. 
| http://olofson.net http://kobodeluxe.com http://audiality.org | | http://eel.olofson.net http://zeespace.net http://reologica.se | '-' ___ Linux-audio-dev mailing list Linux-audio-dev@lists.linuxaudio.org http://lists.linuxaudio.org/listinfo/linux-audio-dev
Re: [LAD] on the soft synth midi jitters ...
On Tuesday 05 October 2010, at 14.39.25, Arnout Engelen wrote: [...] > > Hence it's impossible to accurately honor the frame/time stamp of a midi > > event. That's what drove drove the experimentation with splitting the > > audio generation down to tighter blocks. > > Yes, that could be an interesting way to reduce (though not eliminate > entirely) jitter even at large jack period sizes. Not only that. As long as the "fragment" initialization overhead can be kept low, smaller fragments (within reasonable limits) can also improve throughput as a result of smaller memory footprint. Depending on the design, a synthesizer with a large number of voices playing can have a rather large memory footprint (intermediate buffers etc), which can be significantly reduced by doing the processing in smaller fragments. Obviously, this depends a lot on the design and what hardware you're running on, but you can be pretty certain that no modern CPU likes the occasional short bursts of accesses scattered over a large memory area - especially not when other application code keeps pushing your synth code and data out of the cache between the audio callbacks. -- //David Olofson - Developer, Artist, Open Source Advocate .--- Games, examples, libraries, scripting, sound, music, graphics ---. | http://olofson.net http://kobodeluxe.com http://audiality.org | | http://eel.olofson.net http://zeespace.net http://reologica.se | '-' ___ Linux-audio-dev mailing list Linux-audio-dev@lists.linuxaudio.org http://lists.linuxaudio.org/listinfo/linux-audio-dev
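The fragment idea above can be sketched in a few lines (illustrative only — `FRAGMENT`, `process_period` and `render_fragment` are invented names, not from any particular synth): instead of rendering one large period through full-period intermediate buffers, the period is walked in small fixed-size pieces so the scratch buffers stay cache-sized.

```python
# Render a large period in small fixed-size fragments, so that all
# intermediate/scratch buffers only ever need to hold FRAGMENT frames.

FRAGMENT = 64

def process_period(in_buf, render_fragment):
    out = []
    for start in range(0, len(in_buf), FRAGMENT):
        frag = in_buf[start:start + FRAGMENT]
        out.extend(render_fragment(frag))   # one small scratch area, reused
    return out
```

The per-fragment setup cost is the "initialization overhead" mentioned above; as long as that stays small relative to FRAGMENT frames of actual DSP work, the smaller working set tends to win on cache-sensitive hardware.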
Re: [LAD] Real-time plotting of audio/ oscilloscope.
On Friday 18 June 2010, at 02.09.43, Jeremy wrote: [...] > After doing some more experimenting, I realize that SDL does not in fact > cause any xruns if I don't ask it to refresh. Of course. Depending on how you set up the SDL display, the pixel plotting may actually just be raw memory access in a system RAM shadow buffer. No scheduling/xrun issues with that, obviously. Refreshing, however, is where the real work is done. Whether you're using SDL or the underlying APIs directly is of no relevance here. Realistically, for reliable operation in your average environment (X11, Windows GDI, whatever...), there is only one solution: You need to move the plotting into a separate thread. Could be a global GUI thread. What's critical is that it's NOT the audio thread. You may get away with just passing the coordinates (or raw samples) from the audio thread to the rendering thread via a shared buffer, but the proper, solid solution is to pass it in some synchronized manner, preferably lock-free. -- //David Olofson - Developer, Artist, Open Source Advocate .--- Games, examples, libraries, scripting, sound, music, graphics ---. | http://olofson.net http://kobodeluxe.com http://audiality.org | | http://eel.olofson.net http://zeespace.net http://reologica.se | '-' ___ Linux-audio-dev mailing list Linux-audio-dev@lists.linuxaudio.org http://lists.linuxaudio.org/listinfo/linux-audio-dev
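One way the handoff can look (an invented sketch, not SDL code — class and method names are made up): the audio thread pushes samples into a preallocated buffer and *never* blocks; if the GUI thread falls behind, samples are simply dropped, which a scope display tolerates fine. The GUI thread drains and plots at its own pace.

```python
# Sketch of a non-blocking audio-to-GUI sample feed. The audio thread
# only ever calls push() (which never blocks or allocates); the GUI
# thread only ever calls drain().

class ScopeFeed:
    def __init__(self, size=1024):
        self.buf = [0.0] * size
        self.w = 0          # advanced only by the audio thread
        self.r = 0          # advanced only by the GUI thread
        self.dropped = 0

    def push(self, sample):             # audio thread
        if self.w - self.r >= len(self.buf):
            self.dropped += 1           # overflow: drop, don't block
            return
        self.buf[self.w % len(self.buf)] = sample
        self.w += 1

    def drain(self):                    # GUI thread
        out = []
        while self.r < self.w:
            out.append(self.buf[self.r % len(self.buf)])
            self.r += 1
        return out
```

As with any such sketch, a C implementation for a true multi-core target would additionally need proper memory ordering on the two indices; the structure, however, is the whole trick.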
Re: [LAD] A small article about tools for electronic musicians
On Saturday 01 May 2010, at 20.57.36, "Tim E. Real" wrote: [...] > I used to be fanatical about floating point (remember the co-processor > days?) But I've grown to dislike it. > Bankers won't use it for calculations. > (Have you ever been stung by extra or missing pennies using a 'NUMBER' > database field instead of a 'BCD' field? I have.) > > So why do we use floating point for scientific and audio work? Dynamic range, performance and ease of use. (However, as most FPUs - apart from SIMD implementations that generally don't have denormals at all - lack a simple switch to disable denormals, the last point is pretty much eliminated, I think...) > Considering audio can have really small values, does it not lead to errors > upon summation of signals? Yes, and no. If you add values in the same general order of magnitude, it's pretty much like adding integers. If you add a very small value to a very large one, and the difference is so large that the mantissas don't overlap, nothing happens! >:-) > Why do we not use some sort of fixed-point computations? I do, sometimes. ;-) However, it's a PITA, and I do it only when the code is supposed to scale to hardware with slow FPUs or no FPUs at all. I suspect floating point implementations would run faster on current PC/workstation CPUs - but then again, *correct* (ie denormal handling) code may not...! I'm not sure. -- //David Olofson - Developer, Artist, Open Source Advocate .--- Games, examples, libraries, scripting, sound, music, graphics ---. | http://olofson.net http://kobodeluxe.com http://audiality.org | | http://eel.olofson.net http://zeespace.net http://reologica.se | '-' ___ Linux-audio-dev mailing list Linux-audio-dev@lists.linuxaudio.org http://lists.linuxaudio.org/listinfo/linux-audio-dev
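The "mantissas don't overlap" effect is easy to show concretely (Python doubles, 52-bit mantissa; the same happens with 32-bit floats at correspondingly coarser thresholds):

```python
# Adding a value far below the ULP of the accumulator does nothing:
big = 1.0
assert big + 1e-17 == big      # ~2^53 times smaller: silently lost
assert big + 1e-15 != big      # still within the mantissa: survives

# Summing many tiny values into a large accumulator discards them all,
# while grouping the small values first preserves them:
naive = 1e8
for _ in range(1000):
    naive += 1e-9              # each add is below the ULP of 1e8
grouped = 1e8 + 1000 * 1e-9    # small-with-small first
```

Here `naive` comes out exactly `1e8` while `grouped` does not — which is the summation error the quoted question worries about, and why mix-bus implementations sometimes sum in stages or at higher precision.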
Re: [LAD] successive note on midi events
On Monday 12 April 2010, at 18.51.26, "Pedro Lopez-Cabanillas" wrote: > > i dont see how real instruments and midi events are related. > > this is pretty OT :P > > MIDI has been always about real musical instruments. Do you think that > electronic instruments are not real? The Yamaha Disklavier is not a real > instrument? A MIDIfied pipe organ? ...or any pipe organ at all, for that matter. Look at the labels on the stops. It's a pneumatic synthesizer! :-) -- //David Olofson - Developer, Artist, Open Source Advocate .--- Games, examples, libraries, scripting, sound, music, graphics ---. | http://olofson.net http://kobodeluxe.com http://audiality.org | | http://eel.olofson.net http://zeespace.net http://reologica.se | '-' ___ Linux-audio-dev mailing list Linux-audio-dev@lists.linuxaudio.org http://lists.linuxaudio.org/listinfo/linux-audio-dev
Re: [LAD] MIDI jitter - was: automation on Linux (modular approach)
On Thursday 25 March 2010, at 14.55.41, Ralf Mardorf wrote: [...] > Mobo: M2A-VM HDMI > Graphics: ATI Radeon X1250-based graphics, *onboard* > Slot for another graphics is an PCI Express slot, no AGP etc. [...] Have you disabled all BIOS and other power management, CPU clock throttling etc? Are you using fbdev (VESA graphics) or similar, or are you using the "hardware" text mode console? The latter can cause trouble with some integrated graphics solutions that don't actually *have* text modes, but emulate them through BIOS super-NMIs. This can block any normal IRQs for tens or even hundreds of milliseconds! I doubt a desktop-oriented motherboard with ATI or nVidia graphics would have this problem, but you never know. > I suspect the graphics and the USB device causing to much MIDI jitter. [...] Have you tested it with another "known working" computer? Could be an issue with the driver or the hardware... Also, are there any other USB devices connected to the same hub? (Integrated or external.) Are other devices sharing IRQ with the USB hub the MIDI interface is connected to? -- //David Olofson - Developer, Artist, Open Source Advocate .--- Games, examples, libraries, scripting, sound, music, graphics ---. | http://olofson.net http://kobodeluxe.com http://audiality.org | | http://eel.olofson.net http://zeespace.net http://reologica.se | '-' ___ Linux-audio-dev mailing list Linux-audio-dev@lists.linuxaudio.org http://lists.linuxaudio.org/listinfo/linux-audio-dev
Re: [LAD] MIDI jitter - was: automation on Linux (modular approach)
On Thursday 25 March 2010, at 13.32.43, David Olofson wrote: > On Thursday 25 March 2010, at 12.49.31, Ralf Mardorf dsl.net> wrote: > [...] > > > Btw. the > > graphics has access to the main memory, unfortunately it's a shared RAM, > > OTOH I used HPET so unwanted interrupts because of a shared RAM > > shouldn't be the cause, if I do understand the workings of HR timers > > correctly. [...interrupts, DMA etc...] BTW, the most common problem with graphics and realtime systems seems to be drivers abusing PCI port blocking as a performance hack. When the command buffer on the video card is full, the PCI bus blocks the CPU (completely - no IRQs, no nothing), instead of the driver going to sleep and waiting for an IRQ or some other proper solution. Might improve the 3D framerates slightly, but kills lowlatency audio... -- //David Olofson - Developer, Artist, Open Source Advocate .--- Games, examples, libraries, scripting, sound, music, graphics ---. | http://olofson.net http://kobodeluxe.com http://audiality.org | | http://eel.olofson.net http://zeespace.net http://reologica.se | '-' ___ Linux-audio-dev mailing list Linux-audio-dev@lists.linuxaudio.org http://lists.linuxaudio.org/listinfo/linux-audio-dev
Re: [LAD] MIDI jitter - was: automation on Linux (modular approach)
On Thursday 25 March 2010, at 12.49.31, Ralf Mardorf wrote: [...] > Btw. the > graphics has access to the main memory, unfortunately it's a shared RAM, > OTOH I used HPET so unwanted interrupts because of a shared RAM > shouldn't be the cause, if I do understand the workings of HR timers > correctly. Interrupts can't save you if bus-master DMA is blocking the CPU's access to RAM. Whatever IRQ source and kernel interrupt handling code you're using has no bearing on this. Whether the interrupts are generated by external or internal (to the CPU) timers shouldn't matter either. (Even if external IRQs are delayed while the bus is busy, it doesn't make a difference, as the CPU can't respond until the bus is free anyway.) Either way, I don't think that should be a problem, as busmaster DMA is normally done in short bursts, rather than large blocks. Unless you're pushing the limits of RT-Linux or RTAI, it should affect nothing but bandwidth from a practical POV. Then again, perhaps not all hardware is that well behaved...? -- //David Olofson - Developer, Artist, Open Source Advocate .--- Games, examples, libraries, scripting, sound, music, graphics ---. | http://olofson.net http://kobodeluxe.com http://audiality.org | | http://eel.olofson.net http://zeespace.net http://reologica.se | '-' ___ Linux-audio-dev mailing list Linux-audio-dev@lists.linuxaudio.org http://lists.linuxaudio.org/listinfo/linux-audio-dev
Re: [LAD] automation on Linux (modular approach)
On Wednesday 24 March 2010, at 22.21.53, "Jeff McClintock" wrote: > > From: David Olofson > > > > These issues seem orthogonal to me. Addressing individual notes is just a > > matter of providing some more information. You could think of it as MIDI > > using > > note pitch as an "implicit" note/voice ID. NoteOff uses pitch to > > "address" notes - and so does Poly Pressure, BTW! > > Not exactly note-pitch. That's a common simplification/myth. > MIDI uses 'key-number'. E.g. key number 12 is *usually* tuned to C0, but > is easily re-tuned to C1, two keys can be tuned to the same pitch yet > still be addressed independently. > It's a common shortcut to say MIDI-key-number 'is the pitch', it's > actually an index into a table of pitches. Synths can switch that tuning > table to handle other scales. True. My point was really just that pitch is somehow "hardwired" to the key number on the receiver end, rather than explicitly specified by some CC or similar. (Well, there is the SysEx extension you mentioned, but that's not helping much unless you have the bandwidth to spare, and equipment that supports it - but that goes for the existing and proposed alternatives as well.) [...] > > Virtual voices are used by the "sender" to define and > > address contexts, whereas the actual management of physical voices is > > done on the receiving end. > > You have re-invented MIDI with different nomenclature ;-). Precisely. Or, I just solved the problem in the most straightforward way I could think of. Design and implementation done in a few hours. (Ok; it's a pretty high level language, but still. Maybe I can score a bonus point for having designed and implemented the language too? ;-) I could add SysEx key-based control to the MIDI parser, of course - but I'm still wondering where I'd get such data from anyway! Perpetuating the catch-22, that is; sorry about that. :-) So... How about a library for sending and parsing MIDI with these SysEx extensions? 
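The "key number is an index into a table of pitches" model quoted above can be sketched in a few lines (illustrative Python; the function name and 12-TET defaults are mine, not from any MIDI library):

```python
# MIDI key numbers index a pitch table; the table itself is what a
# tuning change replaces. Default: 12-TET with key 69 (A4) = 440 Hz.

def equal_tempered_table(a4=440.0):
    return [a4 * 2.0 ** ((key - 69) / 12.0) for key in range(128)]

table = equal_tempered_table()

# Retune key 60 to the same pitch as key 61: the two keys now sound
# identical, yet NoteOff/Poly Pressure still address them separately,
# because addressing is by key number, not by pitch.
table[60] = table[61]
```

This is exactly why "key number equals pitch" is a simplification: the receiver is free to swap the table for another scale while the addressing scheme stays untouched.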
I'm thinking along the lines of providing a nice API, but using an existing standard protocol. The idea would be to use ALSA and/or JACK (would need an extension for SysEx, or does it support that these days?) for transport layer, to avoid reinventing that too. Later, when the API has proved to work, and stuff can be wired, one might add more efficient protocols and transport layers for special cases, such as between JACK clients, or between plugin hosts and plugins. (LV2 extension?) Or, one starts with that, just looking at the feature set of MIDI + SysEx, and add MIDI gateways later, if/when needed. Depends on how widespread support for these extensions is, I guess. Just thinking out loud here... -- //David Olofson - Developer, Artist, Open Source Advocate .--- Games, examples, libraries, scripting, sound, music, graphics ---. | http://olofson.net http://kobodeluxe.com http://audiality.org | | http://eel.olofson.net http://zeespace.net http://reologica.se | '-' ___ Linux-audio-dev mailing list Linux-audio-dev@lists.linuxaudio.org http://lists.linuxaudio.org/listinfo/linux-audio-dev
Re: [LAD] automation on Linux (modular approach)
proper APIs, file formats etc in the Linux domain will probably only make it MORE likely that these issues will be solved, actually. Why spend time making various devices work with Linux if you have no software that can make much use of them anyway? A bit of a Catch-22 situation, maybe... > Or do we need to buy special mobos, Yes, or at least the "right" ones - but that goes for Windows too... > do we need to use special MIDI interfaces etc. If you can do cv<->MIDI mapping in the interface, you may as well do it somewhere between the driver and the application instead. If you want to network machines with other protocols, I don't think there's a need for any custom hardware for that. Just use Ethernet, USB, 1394 or something; plenty of bandwidth and supported hardware available for any OS, pretty much. Of course, supporting some "industry standards" would be nice, but we need open specifications for that. NDAs and restrictive per-user licenses don't mix very well with Free/Open Source software. > to > still have less possibilities using Linux, than are possible with usual > products of the industry? > We won't deal with the devil just by using the possibilities of MIDI. > Today Linux doesn't use the possibilities of MIDI, I wonder if having a > Linux standard e.g. cv would solve any issues, while the common MIDI > standard still isn't used in a sufficient way. Well, being able to wire Linux applications, plugins, machines etc together would help, but I'm not sure how that relates to what you're thinking of here... > I do agree that everybody I know, me too, sometimes do have problems > when using MIDI hardware, because of some limitations of MIDI, but OTOH > this industry standard is a blessing. Indeed. Like I said, it gets the job done "well enough" for the vast majority of users. 
So, replacing MIDI is of little interest unless you want to do some pretty advanced stuff, or just want to design a clean, simple plugin API or something - and the latter has very little to do with connectivity to external hardware devices. > Networking of sequencers, sound > modules, effects, master keyboards, sync to tape recorders, hard disk > recorders etc. is possible, for less money, without taking care from > which vendor a keyboard, an effect, a mobo is. Linux is an exception, we > do have issues when using MIDI. But is it really MIDI that is bad? I > guess MIDI on Linux needs more attention. > > Internal Linux most things are ok, but networking with usual MIDI > equipment musicians, audio and video studios have got still is a PITA. > Cv would solve that? Still not quite sure I'm following, but looking at some other posts in this thread, I get the impression that this cv thing is more about application implementation, APIs and protocols, and not so much about interfacing with external hardware. From that POV, you can think of cv (or some Linux Automation Data protocol, or whatever) as a way of making automation data easier to deal with inside applications, and a way of making applications communicate better. Wiring that to MIDI and other protocols is (mostly) orthogonal; you just need something that's at least as expressive as MIDI. Nice bonus if it's much more expressive, while nicer and simpler to deal with in code. -- //David Olofson - Developer, Artist, Open Source Advocate .--- Games, examples, libraries, scripting, sound, music, graphics ---. | http://olofson.net http://kobodeluxe.com http://audiality.org | | http://eel.olofson.net http://zeespace.net http://reologica.se | '-' ___ Linux-audio-dev mailing list Linux-audio-dev@lists.linuxaudio.org http://lists.linuxaudio.org/listinfo/linux-audio-dev
Re: [LAD] Has anyone ever played a plugin in realtime ... [related to:] hard realtime performance synth
On Wednesday 10 February 2010, at 22.45.40, Emanuel Rumpf wrote: > 2010/2/8 Paul Davis : > > not all PCs can do it. but its simply not true that "PCs can't do it". > > Accepted. > > > When running any 32 polyphonic hw synth, > it is able to do those 32 voices anytime. > When running out of the voices, something will > happen (e.g. voice killing/stealing). But it won't start any noise. > > Something to concider > Would a check for guaranteed voices be possible for a soft-synth ? Any polyphonic synth will have to check for voices as part of the "allocate voice" action for any note started, so the question seems somewhat odd to me. Are you thinking of voice stealing logic? I would think most softsynths have this. The alternative would be to dynamically allocate voices as needed, and I think some actually do this - but this will be troublesome in a realtime implementation, unless you're on a full realtime OS using physical RAM only. You may pre-allocate and lock a "huge" block of memory for the synth, but then you still have a limited voice count of sorts... > That would require an extensive deterministic behavior, I think. > Instead of noise generation, maybe some sort of interpolation/silence > for sample values could be used, when running out of processing power. There are various solutions, such as stealing the "least audible" note, or just grabbing the oldest playing note. Figuring out which note is *actually* the least audible is a lot trickier than it might seem at first, so any real implementation would be an approximation at best. Anyway, I don't think there's much of a difference between "hardware" and software synths in this regard. (They all do an equally bad job of it! ;-) After all, most "hardware" synths are actually one or more DSPs and (sometimes) MCUs running some sort of software, and even "real" hardware synths will have an MCU or logic circuitry to implement this. 
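The "grab the oldest playing note" strategy mentioned above is simple enough to sketch (illustrative Python; `VoicePool` and its methods are invented names, not any particular synth's API): a fixed voice pool, and when it runs dry, the voice that has been sounding longest is recycled.

```python
# Oldest-note voice stealing sketch: a fixed pool of voice slots;
# when all are busy, the longest-sounding voice is recycled.

class VoicePool:
    def __init__(self, voices):
        self.free = list(range(voices))
        self.active = []      # (start_order, voice, key), oldest first
        self.counter = 0

    def note_on(self, key):
        if self.free:
            v = self.free.pop()
        else:
            _, v, _ = self.active.pop(0)   # steal the oldest voice
        self.active.append((self.counter, v, key))
        self.counter += 1
        return v

    def note_off(self, key):
        for i, (_, v, k) in enumerate(self.active):
            if k == key:
                self.active.pop(i)
                self.free.append(v)
                return v
        return None           # voice was stolen earlier (or never started)
```

Note that everything here is bounded and allocation-free in spirit, which is what makes this approach (unlike on-demand voice allocation) realtime-safe; "least audible" stealing would replace the `pop(0)` with an envelope/amplitude heuristic.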
-- //David Olofson - Developer, Artist, Open Source Advocate .--- Games, examples, libraries, scripting, sound, music, graphics ---. | http://olofson.net http://kobodeluxe.com http://audiality.org | | http://eel.olofson.net http://zeespace.net http://reologica.se | '-' ___ Linux-audio-dev mailing list Linux-audio-dev@lists.linuxaudio.org http://lists.linuxaudio.org/listinfo/linux-audio-dev
Re: [LAD] hard realtime performance synth
> o wants a DAW. I'd be happy a while with a stable minimoog emulator.

Same difference. If you have sufficiently solid scheduling for the realtime processing part, you can build pretty much anything around that.

[...]
> Well there are affordable synths (mostly wavetable ones) that don't appear any more sophisticated hardware-wise than a PC.

It's not about sophistication. A low cost singleboard computer with an AMD Geode, VIA C7, some Intel Celeron or whatever you need in terms of raw power, will do just fine - as long as the chipset, BIOS and connected devices are well behaved and properly configured.

If you, as a manufacturer of synths or similar devices, don't want to try a bunch of different motherboards for every new revision you make, you might decide to design your own board instead. Then again, if your product is low volume and requires enormous CPU power, carefully selected mainstream hardware may still be a better option.

> The PC may be such a "generalized" piece of hardware as to make it impractical as a dedicated synth (unless it's of a "super" computer variety). I haven't heard anything yet that quite "put the nail in the coffin" yet. The SMI issue mentioned earlier might be such an issue.

SMI is one of them. In my experience, nearly every motherboard at least has some BIOS features you must stay away from, so even "known good" hardware sometimes needs special tuning for this sort of work. General purpose computers just aren't built for low latency realtime work - but most of them can still do it pretty well, with some tweaking.

[...]
> > ... process a bunch of samples at a time, usually somewhere around one millisecond's worth of audio.
[...]
> Well I understand it from that perspective, but for a performance instrument I would think no buffering would be the ideal.
That's just pointless, as the ADC and DAC latencies are already several sample periods, and the way DMA works on any PCI, USB or 1394 soundcard will add somewhere around 64 bytes' worth of latency or more to that.

Also note that your average MIDI synth has anywhere from a few through several tens of milliseconds of latency! You can only send around 1000 messages per second over a standard MIDI wire anyway, so where would you get the timing information to make use of less than 1 ms latency? Actually, going below a few ms only guarantees that the notes in a chord can never be triggered simultaneously.

[...]
> Well my question is if you took something like a Bristol synth, and operated multiple control streams (pitch bend, filter sweeps, etc) if you would experience latency (ie you turn the knob and the pitch bends 1/2 hour later)

For knobs and similar "analog" controls, I'd say it takes at least tens of ms before you start to notice any latency. For keys, I personally think it starts to feel weird if the latency approaches 10 ms.

More importantly though, latency must be *constant*! A synth that just grabs all pending events once per buffer cycle won't be playable with more than a few ms of latency, as the "random" response times quickly become very noticeable and annoying as the "average" latency increases. If incoming events are properly timestamped and scheduled, this is much less of an issue, and latency has the same effect as varying the distance to the monitor speakers.

-- //David Olofson - Developer, Artist, Open Source Advocate .--- Games, examples, libraries, scripting, sound, music, graphics ---. | http://olofson.net http://kobodeluxe.com http://audiality.org | | http://eel.olofson.net http://zeespace.net http://reologica.se | '-' ___ Linux-audio-dev mailing list Linux-audio-dev@lists.linuxaudio.org http://lists.linuxaudio.org/listinfo/linux-audio-dev
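The "properly timestamped and scheduled" handling described in the message above can be sketched like this: instead of applying all pending events at the top of each buffer, the buffer is rendered in slices, with each event applied at its intended sample offset. This is a toy illustration under my own made-up names (`ToySynth`, `render_buffer`), not a real synth API:

```python
# Sample-accurate event handling sketch: render the buffer in slices,
# applying each event at its exact sample offset instead of all at
# offset 0. All names are hypothetical.

class ToySynth:
    """Trivial 'synth' whose only state is an output level."""
    def __init__(self):
        self.level = 0.0

    def handle(self, event):
        self.level = event          # event payload = new output level

    def render(self, n):
        return [self.level] * n     # n samples at the current level

def render_buffer(synth, events, buffer_size):
    """events: list of (sample_offset, event) pairs, sorted by offset,
    with offsets relative to the start of this buffer."""
    out = []
    pos = 0
    for offset, event in events:
        if offset > pos:
            out.extend(synth.render(offset - pos))  # render up to the event
            pos = offset
        synth.handle(event)                         # apply at exact sample
    if pos < buffer_size:
        out.extend(synth.render(buffer_size - pos))
    return out

synth = ToySynth()
buf = render_buffer(synth, [(3, 1.0)], 8)   # level change lands at sample 3
```

With this structure, the response to an event is delayed by a constant amount (the buffer latency) rather than jittering by up to one buffer period, which is the point being made above.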
Re: [LAD] hard realtime performance synth
On Tuesday 26 January 2010, at 21.15.43, David McClanahan wrote:
[...]
> 3. I'm a little worried about what some are calling realtime systems. The realtime system that is part of Ubuntu Studio and others may be more preemptible than the normal kernel (as in kernel calls themselves can be preempted), but that's not a hard realtime system. A hard realtime system (simplistic I know) might entail a task whose sole job is to pump out a sinusoidal sound sample to the D-to-A on the sound card. A hard realtime scheduler would run that task at 44 kHz no matter what. This would entail developing code that, when the machine instructions were analyzed, would run in the time constraints (aka the 44 kHz). RTLinux appears to be suitable and RTAI might be. Perhaps others.

The relevant definition of "hard realtime system" here is "a system that always responds in bounded time." That bounded time may be one microsecond or one hour, but as long as the system can meet its deadline every time, it's a hard realtime system. The definition doesn't really imply any specific time frames.

Now, in real life, the "every time" part will never be quite accurate. After all, you may see some "once in a billion" combination of hardware events that delays your IRQ a few microseconds too many, or you lose power, or the hardware breaks down, or a software bug strikes... There are countless things that can go wrong in any non-trivial system.

Of course, there's a big difference between a DAW that drops out a few times a day, and one that runs rock solid for weeks - but a truly glitch-free system would probably be ridiculously expensive, if it's even possible to build. Triple redundancy hardware, code verified by NASA, various other things I've never even thought of; that sort of stuff...

As to the 44 kHz "cycle rate" on the software level: although possible, it is a big waste of CPU power on any general purpose CPU, as the IRQ and context switching overhead will be quite substantial.
Further, even the (normally irrelevant) worst case scheduling jitter starts making a significant impact on the maximum safe "DSP" CPU load. (Double the cycle rate, and the constant jitter makes twice the impact.)

Therefore, most low latency audio applications (whether on PCs/workstations or dedicated hardware) process a bunch of samples at a time, usually somewhere around one millisecond's worth of audio. This allows you to use nearly all available CPU power for actual DSP work, and you don't even need to use an "extreme" RTOS like RTAI/LXRT or RT-Linux to make it "reasonably reliable".

With a properly configured "lowlatency" Linux system on decent hardware (as in, no BIOS super-NMIs blocking IRQs and stuff; raw performance is less of an issue), you can probably have a few days without a glitch, with a latency of a few milliseconds.

I haven't kept up with the latest developments, but I remember stress-testing the first generation lowlatency kernels by Ingo Molnar, at 3 ms latency with 80% "DSP" CPU load. Hours of X11 stress, disk I/O stress, CPU stress and combined stress, without a single drop-out. This was back in the Pentium II days, and IIRC, the fastest CPU I tested on was a 333 MHz Celeron. Not saying this will work with any lowlatency kernel on any hardware, but it's definitely possible without a "real" RT kernel.

-- //David Olofson - Developer, Artist, Open Source Advocate .--- Games, examples, libraries, scripting, sound, music, graphics ---. | http://olofson.net http://kobodeluxe.com http://audiality.org | | http://eel.olofson.net http://zeespace.net http://reologica.se | '-' ___ Linux-audio-dev mailing list Linux-audio-dev@lists.linuxaudio.org http://lists.linuxaudio.org/listinfo/linux-audio-dev
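A quick back-of-the-envelope check of the per-sample vs. per-buffer argument above. The 5 µs per-wakeup cost is an assumption picked purely for illustration (real IRQ + context switch overhead varies by machine); the point is how it scales with wakeup rate:

```python
# How much real time is eaten by fixed per-wakeup overhead alone,
# as a function of buffer size? The 5 us figure is an assumption
# for illustration only.

rate = 44100          # sample rate in Hz
overhead = 5e-6       # assumed per-wakeup cost (IRQ + context switch), seconds

def overhead_fraction(frames_per_buffer):
    """Fraction of wall-clock time spent on wakeup overhead alone."""
    wakeups_per_sec = rate / frames_per_buffer
    return wakeups_per_sec * overhead

per_sample = overhead_fraction(1)     # one wakeup per sample: ~22% lost
per_ms     = overhead_fraction(64)    # ~1.45 ms buffers: ~0.34% lost
```

Under these assumed numbers, sample-per-wakeup operation burns over a fifth of the CPU before any DSP work happens, while millisecond-sized buffers reduce that to a fraction of a percent, which is the argument for buffered processing made above.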
Re: [LAD] Search: Free, Crossplatform Soundlib for a game
On Saturday 16 January 2010, at 21.47.09, Nils Hammerfest wrote:
> Hi,
>
> I search for a free and open source sound/music lib to integrate sound and music in a game. The game runs on Lin/Win32/OSX so it has to be crossplatform.

What sort of sound and music formats do you have in mind? mp3, ogg or similar? "Structured music" (MIDI, MODs etc) or streaming?

> I know openAL, but this is too big and tons of people have problems with it.

I don't know about Win32 and OS X, but my 64-bit Linux libopenal.so.1.9.563 is just under 180 kB, which doesn't seem all that much for any non-trivial library...

> Anything slimmer around?

What kind of size do you have in mind (ballpark), and what should it do, more specifically?

For comparison, my Audiality sound/music engine (used in Kobo Deluxe) compiles to 205 kB (0.1.2 maintenance branch; 64 bit x86), and provides roughly the following functionality, as configured and built here:

* MIDI file loading and playback
* "Live" MIDI input
* Support for ALSA and SDL audio I/O, and output to file
* Off-line modular synthesis, which is driven by a...
* ...simple scripting/data language (ancient predecessor to EEL)
* Realtime synth with resampling (pitch), looping and envelopes
* Mono/poly note/voice management with sustain pedal logic
* Mixer with arbitrary routing (send from anywhere to anywhere)
* Realtime reverb, chorus, delay, auto-wah and limiter

It's not the most well designed piece of code out there (which is the main reason I'm rewriting it from scratch), so one could certainly squeeze some more features in that code size, or shrink the code a bit.

If you want support for various audio file formats - especially compressed ones, such as FLAC, mp3, Ogg etc - code size is going to increase a bit, except maybe if you dare rely on CODECs built into the OS where possible.
OTOH, you probably won't have much use for a modular synthesizer, so remove that and add a suitable audio decoder, and you might end up with about the same size.

Either way, as long as you're going to use raw or encoded samples (as opposed to structured audio), why worry so much about engine code size...? It'd have to be a pretty bloated engine to get a code size anywhere near the size of the data to be played!

Turning it around, using structured audio pretty much eliminates data size from the equation (all sounds of Kobo Deluxe, including two MIDI songs, compress to 15 kB), so it might not be the end of the world if the engine is a bit larger than a trivial WAV player...

The downside of this is that you can't just grab (or record) some samples and use them in your game. Sound effects and instruments must be created in about the same way as you would for machines using YM, OPL, SID and similar chips; no sample playback - "pure" synthesis only. However, the virtual "chips" are much more powerful, so it doesn't really have to sound synthesized, unless explicitly intended to.

Anyway, although Audiality isn't exactly an optimal example, I think it should give you some idea of what you can expect in terms of features vs code size. I'd personally consider anything that does less and/or is larger to be very bloated, but standards differ...! ;-)

For some perspective, people are coding intros and demos in 64 kB and less, TOTAL, with music, graphics and everything. And no, they don't sound, nor look like something running on an old 8 bit computer. Pretty impressive stuff, worth checking out.

The bottom line is that there is really no lower (or upper) limit to code or data size here, and no distinct relation between size and quality. It basically comes down to creativity and resources.

-- //David Olofson - Developer, Artist, Open Source Advocate .--- Games, examples, libraries, scripting, sound, music, graphics ---.
| http://olofson.net http://kobodeluxe.com http://audiality.org | | http://eel.olofson.net http://zeespace.net http://reologica.se | '-' ___ Linux-audio-dev mailing list Linux-audio-dev@lists.linuxaudio.org http://lists.linuxaudio.org/listinfo/linux-audio-dev
Re: [LAD] Atomic Operations
On Monday 14 December 2009, at 21.21.55, Paul Davis wrote:
> On Mon, Dec 14, 2009 at 3:04 PM, Nick Copeland wrote:
> >> guaranteed for 24 bits of a 32 bit value. funny old world, eh? :)
> >
> > Wasn't that just for 32 bit floats though? They were implemented as a structure that implied multiple write operations, they only guaranteed the mantissa, not the exponent. I am pretty sure integer memory IO was guaranteed as that is where the 24 bits came from.
>
> the kernel implementation of atomic integer writes suggested otherwise, if my memory holds correct. since the kernel doesn't have atomic float ops, and i did see kernel code for this that noted the issue, i think my memory is correct. but it's been wrong so many times before.

I have a faint memory of this having to do with the memory subsystem of SPARC SMP systems... I'm not so sure this has much to do with atomic reads and writes, though.

A quick search gave me this: http://lwn.net/Articles/71732/

-- //David Olofson - Developer, Artist, Open Source Advocate .--- Games, examples, libraries, scripting, sound, music, graphics ---. | http://olofson.net http://kobodeluxe.com http://audiality.org | | http://eel.olofson.net http://zeespace.net http://reologica.se | '-' ___ Linux-audio-dev mailing list Linux-audio-dev@lists.linuxaudio.org http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev
Re: [LAD] Open Keyboard: Request for velocity curve information
On Sunday 04 October 2009 19:43:38, Fons Adriaensen wrote:
> On Sun, Oct 04, 2009 at 06:50:38PM +0200, David Olofson wrote:
> > Well, I don't know about linearity, or how linear MIDI velocity is actually supposed to be, but it's common to at least have various ways of scaling velocity. Many (most?) synths will support mapping of velocity per voice to create "analog" layers (crossfade rather than switch at a distinct velocity), and to exaggerate, reduce or disable velocity response and things like that.
>
> The first question to ask is what is actually measured and how, and with which resolution.

Most likely the time elapsed from when the first key switch is made to the second. (IIRC from cleaning mine, the Fatars have the usual "bubbles" with conductive rubber switches; two per key. I also had some small Roland controller way back: same design.)

> Then a range of these values must be mapped to 0..127, so the first thing to be decided is the limits of this range,

I suppose 0 would be somewhere around where a real piano would make no audible noise whatsoever, whereas 127 would be somewhere between where your fingers start hurting and where the hardware breaks. :-)

This would be the absolute limits *before* mapping. One would most likely want to map only a part of this range to MIDI velocity in the normal case. One just wants to avoid random zero velocities when playing very softly, as well as an obvious dynamic "brickwall" when playing real hard.

> next what kind of function maps it to the MIDI range.

The original Studiologic MCU has two integer parameters for this in the user interface; SHAPE [-4,4] and VELOCITY [1-8], where the former selects the mapping function and the latter scales the dynamic range of that function. Unfortunately, I don't know how this is related to actual key velocities, and I'm quite sure one can't make it out from the instruction manual.
;-) -- //David Olofson - Developer, Artist, Open Source Advocate .--- Games, examples, libraries, scripting, sound, music, graphics ---. | http://olofson.net http://kobodeluxe.com http://audiality.org | | http://eel.olofson.net http://zeespace.net http://reologica.se | '-' ___ Linux-audio-dev mailing list Linux-audio-dev@lists.linuxaudio.org http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev
Re: [LAD] Open Keyboard: Request for velocity curve information
On Sunday 04 October 2009 10:21:09, Albin Stigo wrote:
> Hi,
>
> Many thanks for all feedback on my original posting about what features you would like to see in an open midi controller...
>
> Based on your "requests" I've started experimenting a bit with adding rotary encoders and a 20x4 lcd display. I've changed to a more powerful (but still cheap and easy to work with) microcontroller atmega128 (from atmega16). If you're not familiar with microcontrollers you might be familiar with the arduino platform which also uses an atmega (but the lesser atmega168).

I have a few Atmel MCUs lying around, actually... :-) I should start playing around with them some time. Any year now... :-D

> Now to my question: Does anyone have some sample velocity curves, like a mapping from key velocity to midi velocity for different instruments (grand piano etc)? There doesn't seem to be any convention on how to handle velocity data??

Don't know... I would expect the velocity (as in m/s) to have a rather non-linear relation to the MIDI velocity values. I mean, kinetic energy is velocity squared, so one would expect double actual speed (ie m/s) to add 12 dB to the resulting output level, assuming the instrument response is linear. But, I'm only guessing! I suspect it'll come down to trial and error, finding something that "feels good".

> Do some synths expect linear mapping and do the "mapping" in software??

Well, I don't know about linearity, or how linear MIDI velocity is actually supposed to be, but it's common to at least have various ways of scaling velocity. Many (most?) synths will support mapping of velocity per voice to create "analog" layers (crossfade rather than switch at a distinct velocity), and to exaggerate, reduce or disable velocity response and things like that.

I have a faint memory of the Roland JV-1080 having some sort of programmable exponential velocity mapping as well, but I could be wrong... Don't have mine here, so I can't check.
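Purely as an illustration of the kind of curve being discussed: the sketch below maps the measured time between the two key contacts to a MIDI velocity, with an adjustable exponent bending the curve (loosely in the spirit of a SHAPE-style parameter). Every constant here (`t_min`, `t_max`, the exponent) is a made-up placeholder, not data from any real keyboard:

```python
def midi_velocity(dt, t_min=0.002, t_max=0.100, shape=1.0):
    """Map time `dt` (seconds) between the two key contacts to a MIDI
    velocity 1..127. Short dt = fast key press = high velocity.
    `shape` bends the curve (1.0 = linear in the normalized speed).
    All constants are illustrative, not from any real instrument."""
    dt = min(max(dt, t_min), t_max)         # clamp to the usable range
    x = (t_max - dt) / (t_max - t_min)      # 0.0 (slowest) .. 1.0 (fastest)
    return 1 + round(126 * x ** shape)

fast = midi_velocity(0.002)   # fastest measurable press -> 127
slow = midi_velocity(0.100)   # slowest -> 1
```

Clamping at both ends addresses the point made elsewhere in the thread: no surprise zero velocities when playing softly, and no hard "brickwall" artifacts beyond the measurable range.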
-- //David Olofson - Developer, Artist, Open Source Advocate .--- Games, examples, libraries, scripting, sound, music, graphics ---. | http://olofson.net http://kobodeluxe.com http://audiality.org | | http://eel.olofson.net http://zeespace.net http://reologica.se | '-' ___ Linux-audio-dev mailing list Linux-audio-dev@lists.linuxaudio.org http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev
Re: [LAD] "Open midi-keyboard"
On Saturday 26 September 2009 14:20:50, Ralf Mardorf wrote:
[...]
> I don't think that there will be a lot of receiving devices that can handle release velocity and poly pressure.

Uhm... Roland JV-1080 (and probably most of the JV family) and Kawai K4r? I've actually used release velocity a fair bit on the former (eliminates the need for multiple versions of the same patch in many cases), and I *think* the latter understands it too; don't remember actually using it...

Very handy for pads and strings where this allows you to control the (usually rather slow) attack and release times individually.

//David ___ Linux-audio-dev mailing list Linux-audio-dev@lists.linuxaudio.org http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev
Re: [LAD] "Open midi-keyboard"
On Saturday 26 September 2009 13:35:04, Fons Adriaensen wrote:
> On Fri, Sep 25, 2009 at 05:07:26PM +0200, Albin Stigo wrote:
[...]
> > My questions are: Is there an interest in this sort of thing..? What features would you like to see, what are you missing from off-the-shelf products?
>
> I'd be much interested *iff* it would also generate velocity for key release. This is provided in the MIDI format but very few keyboards implement it.

+1!!!

(I have a Fatar Studiologic SL-880 that I've thought of modding one way or another, and I do miss release velocity...)

//David ___ Linux-audio-dev mailing list Linux-audio-dev@lists.linuxaudio.org http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev
Re: [LAD] Tux Paint for music?
On Thursday 07 May 2009, David Olofson wrote:
> On Thursday 07 May 2009, Pedro Lopez-Cabanillas wrote:
> [...]
> > Using SDL sound only (not SDL graphics) but you may want to try LMMS:
> >
> > http://lmms.sourceforge.net/
>
> Doesn't exactly look like the ultimate tool for an 8 year old beginner...! ;-)

...then again, it took me like ten seconds to fire this up and get some noise out of it...! :-D

-- //David Olofson - Programmer, Composer, Open Source Advocate .--- http://olofson.net - Games, SDL examples ---. |http://zeespace.net - 2.5D rendering engine | | http://audiality.org - Music/audio engine | | http://eel.olofson.net - Real time scripting | '-- http://www.reologica.se - Rheology instrumentation --' ___ Linux-audio-dev mailing list Linux-audio-dev@lists.linuxaudio.org http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev
Re: [LAD] Tux Paint for music?
On Thursday 07 May 2009, Pedro Lopez-Cabanillas wrote:
[...]
> Using SDL sound only (not SDL graphics) but you may want to try LMMS:
>
> http://lmms.sourceforge.net/

Doesn't exactly look like the ultimate tool for an 8 year old beginner...! ;-)

That definitely seems worth checking out for other reasons, though! (Still looking for something to fill my sequencing needs without getting on my nerves.)

Thanks!

-- //David Olofson - Programmer, Composer, Open Source Advocate .--- http://olofson.net - Games, SDL examples ---. |http://zeespace.net - 2.5D rendering engine | | http://audiality.org - Music/audio engine | | http://eel.olofson.net - Real time scripting | '-- http://www.reologica.se - Rheology instrumentation --' ___ Linux-audio-dev mailing list Linux-audio-dev@lists.linuxaudio.org http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev
Re: [LAD] Tux Paint for music?
On Thursday 07 May 2009, Pelle Nilsson wrote:
> Paul Coccoli writes:
> > Well, since you're thinking SDL, you should consider using a gamepad as the primary input device. I understand it is a somewhat popular interface for kids these days.
>
> Yes, in particular it would be great fun to have some toy like that running on my GP2X. :)

I wasn't really thinking that far, but yes, proper support for handheld devices would be nice. :-)

Apart from a pure fun toy, it might actually serve as a nice and quick pocket size tool for turning ideas into music, that you can transfer to a proper sequencer later. (Of course, there'll be a stand-alone version of the IFFT synth engine, in case you dial in some nice sounds.)

-- //David Olofson - Programmer, Composer, Open Source Advocate .--- http://olofson.net - Games, SDL examples ---. |http://zeespace.net - 2.5D rendering engine | | http://audiality.org - Music/audio engine | | http://eel.olofson.net - Real time scripting | '-- http://www.reologica.se - Rheology instrumentation --' ___ Linux-audio-dev mailing list Linux-audio-dev@lists.linuxaudio.org http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev
Re: [LAD] Tux Paint for music?
On Wednesday 06 May 2009, Paul Coccoli wrote:
[...]
> > Ideas?
>
> Well, since you're thinking SDL, you should consider using a gamepad as the primary input device. I understand it is a somewhat popular interface for kids these days.

I don't know about primary, but... Well, I suppose it should be possible to support such devices if designing with them in mind.

> > I'll probably use EEL for all high level code, over a C engine. EEL is probably not the most sensible choice for a Free/Open Source project, but I'm using EEL for various stuff myself (mostly work related), and it could use some more pilot projects to guide future development.
>
> I'm not familiar with EEL.

Only one or two guys are, apart from me... :-D It's a VM based high level language, much like Lua, designed for real time applications.

> Maybe use fluidsynth so you can have soundfont support? A whole world of cheesy keyboard sounds at your fingertips.

Could do that too, but I'd like to focus on IFFT synthesis (massively additive synthesis implemented over IFFT, more specifically) as the "standard" sound device. More tweakable, more fun, and extremely compact instrument definitions. :-)

-- //David Olofson - Programmer, Composer, Open Source Advocate .--- http://olofson.net - Games, SDL examples ---. |http://zeespace.net - 2.5D rendering engine | | http://audiality.org - Music/audio engine | | http://eel.olofson.net - Real time scripting | '-- http://www.reologica.se - Rheology instrumentation --' ___ Linux-audio-dev mailing list Linux-audio-dev@lists.linuxaudio.org http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev
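The core idea behind "additive synthesis implemented over IFFT" can be sketched with numpy: write each partial's amplitude and phase into a frequency bin, then inverse-FFT one block of output. This is a deliberately simplified toy (not Audiality's actual engine): a real implementation would window and overlap-add successive blocks, and spread partials that don't fall exactly on bin centers across neighboring bins; this version is only exact for bin-centered partials:

```python
import numpy as np

def ifft_block(partials, block_size=1024, sample_rate=44100):
    """Render one block of additive synthesis via inverse real FFT.
    `partials` is a list of (frequency_hz, amplitude, phase) tuples.
    Frequencies are rounded to the nearest FFT bin, so this toy
    version is only exact for bin-centered partials."""
    spectrum = np.zeros(block_size // 2 + 1, dtype=complex)
    bin_hz = sample_rate / block_size
    for freq, amp, phase in partials:
        k = int(round(freq / bin_hz))
        # Scale so a partial of amplitude `amp` comes out at `amp`
        # after irfft (rfft of A*cos(...) has magnitude A*N/2 at bin k):
        spectrum[k] += amp * (block_size / 2) * np.exp(1j * phase)
    return np.fft.irfft(spectrum, n=block_size)

# One block of a ~431 Hz tone (bin 10) plus its second harmonic (bin 20):
block = ifft_block([(430.66, 1.0, 0.0), (861.33, 0.5, 0.0)])
```

The appeal is that cost scales with FFT size rather than with partial count, which is what makes "massively additive" synthesis with hundreds of partials per voice practical.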
Re: [LAD] Tux Paint for music?
On Wednesday 06 May 2009, Ralf Mardorf wrote:
> Dave Phillips wrote:
> > David Olofson wrote:
> > > [snip]
> > > In short: Tux Paint for music! :-)
> > > Is there something like this already out there?
> >
> > http://wiki.laptop.org/go/TamTam
> >
> > Perhaps?
> >
> > Best, dp
>
> Hi Dave, hi David :)
>
> it looks like TuxPaint for music :).

Doesn't seem all that different from what I have in mind! :-)

> I'm short in time, resp. need to check my audio Linux with other applications, but I tried to get TamTam in passing. I couldn't find a package for any Linux distro and I couldn't find a valid link to the source code. If you know what I might have overlooked in passing, a valid link to the source code or a package for any Linux distro, please post a link.

I'm on Gentoo, but the Sugar overlay is masked on x86_64 (which I'm using). I'll see if I can try it some other way...

-- //David Olofson - Programmer, Composer, Open Source Advocate .--- http://olofson.net - Games, SDL examples ---. |http://zeespace.net - 2.5D rendering engine | | http://audiality.org - Music/audio engine | | http://eel.olofson.net - Real time scripting | '-- http://www.reologica.se - Rheology instrumentation --' ___ Linux-audio-dev mailing list Linux-audio-dev@lists.linuxaudio.org http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev
[LAD] Tux Paint for music?
(Also posted over at the SDL mailing list.)

Hi!

My son is playing around with my little SDL based drum machine, DT-42 again. He seems to be having fun, but I suppose he'd be better off with something more straight-forward (DT-42 is more like a MOD tracker than a conventional drum machine), and something with more obvious ways of creating melodies... At least, that's what I'd like! :-D

This brings up some thoughts I've been having for years now: A really simple, yet somewhat useful and educational music toy. An integrated synth/sampler/sequencer, possibly with audio recording facilities down the road... Sort of like a tracker (Amiga MOD style), but with a more visual GUI. Probably some sort of piano roll. A bunch of nice sounds (I'm thinking IFFT synthesis) with some pre-wired intuitive timbre controls. Maybe a library of drum patterns... Preferably SDL based and portable to all sorts of computers and devices.

In short: Tux Paint for music! :-)

Is there something like this already out there? Any interest in this sort of stuff? Ideas?

I'll probably use EEL for all high level code, over a C engine. EEL is probably not the most sensible choice for a Free/Open Source project, but I'm using EEL for various stuff myself (mostly work related), and it could use some more pilot projects to guide future development.

URLs:
Tux Paint: http://www.tuxpaint.org/
DT-42: http://olofson.net/mixed.html
EEL: http://eel.olofson.net/

-- //David Olofson - Programmer, Composer, Open Source Advocate .--- http://olofson.net - Games, SDL examples ---. |http://zeespace.net - 2.5D rendering engine | | http://audiality.org - Music/audio engine | | http://eel.olofson.net - Real time scripting | '-- http://www.reologica.se - Rheology instrumentation --' ___ Linux-audio-dev mailing list Linux-audio-dev@lists.linuxaudio.org http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev
Re: [LAD] basic in-game audio synthesis api?
On Saturday 04 April 2009, james morris wrote:
> Taken a quick look. It looks too low-level DSP for me. I was actually thinking something along the lines of Audiality (but couldn't recall its name).

As of now, Audiality is mostly off-line modular synthesis along with a basic sample player (DAHDSR envelopes, FX sends etc) and a few real time "effects". So far, it's really been focused on creating sounds from "nothing" (ie no samples), though it can import, play and process sampled sounds as well.

> Is Audiality dead?

Yes and no. The most current release is probably the "unnamed" sound engine of Kobo Deluxe, though I'm just keeping it alive as I'm still using it - no real development. (As of 0.5.1, all sounds are pure modular synthesis - no samples.)

http://kobodeluxe.com/
http://www.audiality.org/

The plan is a major rewrite, using EEL for scripting (setup, real time control processing etc) over a properly modular DSP engine (possibly usable as a stand-alone library, without EEL), including modular "massively additive" IFFT synthesis.

http://eel.olofson.net/

The goal is to make it all modular, all the way from control data through audio outputs, and optionally off-line/real time, so you can have a "module" adapt to available resources, project requirements, user preferences etc. Think evolved MOD format, including proper sound effect support. You load up a module, which exports a bunch of controls and triggers defined by the module author. Like connecting a studio sampler to the MIDI port and having the "sound guy" program away on that - only you actually ship with that thing included with the game.

When? When I get around to it... :-)

> Anything else similar?

Couldn't FMOD or some other popular "game sound engine" do this? I know most of them have sample playback (obviously) and effects, but I don't know how flexible they are when it comes to wiring things together...
-- //David Olofson - Programmer, Composer, Open Source Advocate .--- http://olofson.net - Games, SDL examples ---. |http://zeespace.net - 2.5D rendering engine | | http://audiality.org - Music/audio engine | | http://eel.olofson.net - Real time scripting | '-- http://www.reologica.se - Rheology instrumentation --' ___ Linux-audio-dev mailing list Linux-audio-dev@lists.linuxaudio.org http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev
Re: [LAD] Car engine sound emulation for future electric cars. ideas ?
On Saturday 02 August 2008, Fons Adriaensen wrote:
[...]
> > I think it'll need at least two parameters: rpm and throttle position.
>
> Yes. In fact it's three - road speed determines tire and aerodynamic noise - but you can probably ignore that here.

Right; as long as we're talking about cars with wheels, we won't have to fake that part. ;-)

[...]
> One could also question why it should be 'combustion engine noise'. If it's just to make other users of the road aware of the car's presence and speed, it could as well be the sound of horseshoes on a hard surface. Probably even better, as this is impulsive and provides better localisation.

I was actually going to suggest some kind of pure synthetic indicator sound - but there is this "What's that weird noise!?" factor. Horseshoes might be a good idea though (at least it's a real world sound), but I'm still worried that it might be more confusing than helpful.

Either way, my personal opinion on this is that electric cars should act and sound like electric cars. They're not totally silent anyway; at least not the ones I've heard so far. Combustion engines, as implemented in your average car, just sound boring and annoying - maybe even more so to people like me, who actually enjoy the sound of a properly breathing engine. Most of that won't be missed if it just goes away...

//David Olofson - Programmer, Composer, Open Source Advocate .--- http://olofson.net - Games, SDL examples ---. |http://zeespace.net - 2.5D rendering engine | | http://audiality.org - Music/audio engine | | http://eel.olofson.net - Real time scripting | '-- http://www.reologica.se - Rheology instrumentation --' ___ Linux-audio-dev mailing list Linux-audio-dev@lists.linuxaudio.org http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev
Re: [LAD] Car engine sound emulation for future electric cars. Ideas?
Hi there! Long time no see, etc... :-) On Saturday 02 August 2008, Benno Senoner wrote: [...] > The sound generator should in theory take only one input variable, > the motor's RPM (which can expressed as 0..100%) and then generate > a sound with a frequency proportional to the RPM. I think it'll need at least two parameters: rpm and throttle position. Even at a fixed rpm, an engine produces different (and usually much more) sound when you "step on it". Extremely tuned engines (long intake/exhaust overlap for maximum breathing at high rpms) also tend to sound pretty aggressive when engine braking in a certain rpm range. There is another problem, though: Electric cars tend to have very few gears (commonly only one!), as they don't really need any, thanks to the extremely wide power band of electric motors. This makes it pretty hard to come up with a realistic combustion engine sound, as it's not just the sound that's missing; it's the entire driving style! I suppose you could just fake it by simulating shifts based on the speed and torque, but obviously, the sound cannot match the driving exactly. Well... Unless you simulate the whole thing; gears, clutch, narrow powerband and all - which, of course, would be entirely possible to do. :-) (And here I've spent countless hours mating a real Ford Duratec V6 engine with a Honda S2000 6-speed gearbox with an aluminum flywheel and a cerametallic clutch in between... Oh, well. ;-) As to the actual sound synthesis, I think the best bet would be a combination of methods. For a realistic sound, you *will* need multisampling (unless you go for physical modelling), because most of what makes up the characteristic sound of a properly built engine (including intake and exhaust) is resonances that stay at fixed frequencies, or change only slightly with the exhaust gas temperature. (Temperature changes the speed of sound, which becomes a pretty significant factor with the temperatures we're talking about here.
That's part of the reason why it's so hard to calculate optimal headers, BTW...) I've been thinking one might simulate an engine by modelling the headers and exhaust pipes as a number of resonant filters, and exciting that model with noise bursts similar to what you hear from some old aircraft engines, which tend to have very, very short "stacks". You'd run the noise bursts as granular synthesis based on rpm, while tuning the exhaust resonances based on simulated exhaust gas temperature; that is, basically, higher frequencies when more power is produced. Oh, don't forget that you'll need to decide how many cylinders you want, and give each one a unique sound (slight differences in amplitude and exhaust resonances; smaller differences for finely tuned engines), or it'll sound like a one-cylinder engine revving insanely high! :-) Also, typical American V8 engines (with 90° cross-plane cranks, as opposed to the flat 180° crank you'd find in a Ferrari V8) are pretty special, as they don't fire in a steady "every other bank" pattern, like most other V engines do. They sort of run as if each bank had two small cylinders and one large cylinder. You *can* make them run "properly", with a smooth, clean sound, but that requires a classic "180° exhaust system" (long tubes all around the engine; no good for high revs), or you need to turn the heads around so you can build a miniature 180° system between the heads. The latter can be seen on some old F1 cars, but I don't actually know where 180° systems are used. Low rev offroad applications...? //David Olofson
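The resonant-filter-plus-noise-burst idea above can be sketched in a few lines. This is a hypothetical illustration, not code from the thread; every constant (resonance frequency, Q, burst length, the per-cylinder gain spread) is made up for the example:

```python
import math
import random

def engine_grain_synth(rpm, n_cylinders=4, sr=48000, seconds=0.1,
                       resonance_hz=180.0, q=8.0, seed=1):
    """Granular 'engine' sketch: one short noise burst per cylinder firing
    (4-stroke, so each cylinder fires every other revolution), pushed
    through a single fixed two-pole 'exhaust' resonator."""
    rng = random.Random(seed)
    fires_per_sec = rpm / 60.0 * n_cylinders / 2.0
    period = max(1, int(sr / fires_per_sec))   # samples between firings
    burst_len = max(4, period // 8)
    # Slight per-cylinder level differences, so it doesn't sound like a
    # single cylinder revving insanely high.
    cyl_gain = [1.0 + 0.1 * rng.uniform(-1.0, 1.0)
                for _ in range(n_cylinders)]
    # Two-pole resonator at a fixed frequency (the "resonances that stay
    # at fixed frequencies" from the post).
    r = math.exp(-math.pi * resonance_hz / (q * sr))
    w = 2.0 * math.pi * resonance_hz / sr
    a1, a2 = -2.0 * r * math.cos(w), r * r
    y1 = y2 = 0.0
    out = []
    for n in range(int(sr * seconds)):
        cyl = (n // period) % n_cylinders
        excite = (rng.uniform(-1.0, 1.0) * cyl_gain[cyl]
                  if n % period < burst_len else 0.0)
        y = excite - a1 * y1 - a2 * y2
        y2, y1 = y1, y
        out.append(y)
    return out
```

In a fuller version you would retune `resonance_hz` from the simulated exhaust gas temperature (higher frequencies at higher power), and use a bank of such resonators rather than one.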
Re: [LAD] OT: C Library for mounting, processing data
On Sunday 30 March 2008, Frank Barknecht wrote: > Hallo, > Patrick Shirkey hat gesagt: // Patrick Shirkey wrote: > > > Yep, that's also kind of funny that google has almost replaced man > > as a method for finding information about how to use a tool to > > code with. > > And then it gets really funny and recursive if man pages turn up in > the google results, like for "mount" or "ls" ;) They do! :-) So, next we'll be seeing man pages refer to Google - and we're in recursive hell! :-D //David Olofson
Re: [LAD] "enhanced event port" LV2 extension proposal
On Saturday 01 December 2007, Dave Robillard wrote: [...non audio time timestamps...] > All I ask for is a few measley bits :) That I can point to widely > accepted standards (one of which is very, very close in domain to > this event stuff) that use timestamps of this sort is telling.. Sure, I have no problem with that. I just want to make sure we're not confusing this with types of event communication that just don't fit the model of this particular event system. I mean, it's kind of pointless to pollute *this* event system with features that you need a completely separate LV2 extension to actually use. :-) (Well, unless that other LV2 extension can use this one as a transport layer in some sensible way, maybe.) > Anyway, making the frames part 16 bits screws up parity with Jack > MIDI. We already have a uint32_t frame timestamp. We want > fractional, so stick a fractional part in there, voila. The host > just sets the fractional part to 0 from Jack, things can use the > fractional bit (or not) as they please. > > Simple. Yes, that makes sense to me. It seems like the general case (even when making use of sub-sample timing) favors separate integer and fractional parts - and in that case, why not just use an int for each? //David Olofson
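The "an int for each" layout being discussed might look like the sketch below. Field names and `SUBFRAME_BITS` are illustrative, not taken from any actual LV2 header:

```python
from dataclasses import dataclass

SUBFRAME_BITS = 32   # assumed resolution of the fractional part

@dataclass
class Event:
    frames: int          # integer sample-frame offset into the buffer
    subframes: int = 0   # fraction of a frame, in 1/2**SUBFRAME_BITS units

def event_time(ev):
    """Full time in (possibly fractional) frames. A host that doesn't
    care about sub-sample timing just sets subframes to 0 and reads
    frames, exactly as suggested for events coming from Jack MIDI."""
    return ev.frames + ev.subframes / 2.0 ** SUBFRAME_BITS
```

Plain consumers never touch `subframes`; only things like granular synthesis would ever read it.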
Re: [LAD] "enhanced event port" LV2 extension proposal
On Saturday 01 December 2007, Dave Robillard wrote: [...] > Taking a step back, it would be nice to have these events (being > generic) able to use something other than frame timestamps, for > future dispatching/scheduled type event systems. [...] I'm not sure about the scope of this LV2 event system, but the idea of using timestamps related to anything other than audio time (ie buffers, sample frames etc) seems to be a violation of the idea that events are essentially structured control data or similar - ie anything that's "locked" to the audio stream. The only valid reason I can see for using an unrelated timebase for timestamps is when the timestamps can only be translated to actual (audio) time by the receiving plugin. This is an interesting concept, but how to use it properly in the context of an event system where the event transport is based on the "one buffer per buffer cycle" idea? Sure; you can just send events and assume that the receiver will queue them internally as needed, but how does the sender know how far ahead it needs to be to avoid events arriving late? (If it knows the audio/timestamp relation, why use non audio related timestamps in the first place?) I just don't see how this can work; not in a real time system. Either you need to deal with exactly one buffer at a time - and then you may as well use audio based timestamps at all times - or you need some sort of random access event system or something. [...] > (This may sound a bit esoteric, but to do something like Max right, > you need absolute time stamps). What does Max do that requires this, and how does it actually work? I'm probably missing the point here... [...] //David Olofson
Re: [LAD] "enhanced event port" LV2 extension proposal
On Friday 30 November 2007, Krzysztof Foltman wrote: [...several points that I totally agree with...] > If you use integers, perhaps the timestamps should be stored as > delta values. That would seem to add complexity with little gain, though I haven't really thought hard about that... It seems more straightforward to just use sample frame offsets when sending; you just grab the loop counter/sample index. However, in the specific case of my "instant dispatch" architecture, you'd need to look at the last event in the queue to calculate the delta - but then again, you need to touch that event anyway, to set the 'next' field... (Linked lists.) No showstopper issues either way, I think. When receiving, OTOH, deltas would be brilliant! You'd just process events until you get one with a non-zero delta - and then you process the number of sample frames indicated by that delta. (Obviously, the end-of-buffer stop condition must be dealt with somewhere. Adding a dummy "stop" event scheduled for right after the buffer would eliminate the per-audio-fragment check for "fragment_frames > remaining_buffer_frames".) > Perhaps fractional parts could be just stored in events that demand > fractional timing (ie. grain start event), removing that part from > generic protocol. That's another idea I might steal! ;-) I'm not sure, but it seems that you'd normally not want to drive a sub-sample timestamped input from an integer timestamped output or vice versa. An output intended for generating grain timing would be concerned about generating events at the exact right times, whereas a normal control output would be value oriented. This may not seem to matter much at first, but it makes all the difference in the world if you consider event processors. With pure values, you might want to add extra events or even regenerate the signal completely, but this would break down when controlling something that relies on event timing.
Might be worth considering even in non modular synth environments, as you might want to edit these events in a sequencer. This is starting to sound like highly experimental stuff, though. :-) > Perhaps we're still overlooking something. I'd want to try actually implementing some different, sensible plugins using this before I really decide what makes sense and what doesn't. Granular synthesis is about the only application I can think of right now that *really* needs sub-sample accurate timing, so that's the scenario I'm considering, obviously - along with all the normal code that doesn't need or want to mess with anything below sample frames. //David Olofson
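The delta-timestamp receive loop described above, including the dummy "stop" event trick, can be sketched as follows. All names and the event layout are invented for illustration:

```python
def run_buffer(events, buffer_frames, process_audio, handle_event):
    """Sketch of delta-timestamp receiving: handle events until one
    carries a non-zero delta, render that many frames, repeat. A dummy
    'stop' event scheduled right after the buffer removes the per-
    fragment "fragment_frames > remaining_buffer_frames" check.

    events: list of (delta_frames, payload) in buffer order."""
    STOP = object()
    sentinel_delta = buffer_frames - sum(d for d, _ in events)
    todo = events + [(sentinel_delta, STOP)]
    rendered = 0
    for delta, ev in todo:
        if delta:                 # render audio up to this event's time
            process_audio(delta)
            rendered += delta
        if ev is STOP:            # dummy event: buffer is done
            break
        handle_event(ev)
    return rendered
```

The sender-side cost the post mentions (peeking at the last queued event to compute the delta) doesn't appear here; this is purely the receiver's inner loop.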
Re: [LAD] "enhanced event port" LV2 extension proposal
On Friday 30 November 2007, Dave Robillard wrote: [...] > The current version of LV2 MIDI just uses double. All the precision > you could ask for, or an insane range if you'd prefer. It's a bit > big maybe, but hey, why not? > > We could use float I guess to save a bit of space, but I definitely > prefer floating point. Fixed point is just a PITA, modern CPUs are > much faster at FP anyway, why bother? Well, normally, you'd use the integer part for splitting the "sample loop" internally (that is, you need to calculate loop counts from it), and you'd use the fractional part alone to determine the offset within the sample frame. It might be hard to avoid the weak spot of modern CPUs that is FP<->integer conversions. I don't know if this is a real performance issue, though. Besides, you probably have to do some integer->FP conversions to make use of the fractional part of integer timestamps in FP DSP code - but then again, that impacts only the plugins that actually use it. Others just shift the fraction bits out and use the resulting integer sample frame offset. //David Olofson
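The split described here (integer part for sample indexing, fractional part for the sub-frame offset) is the classic fixed-point phase accumulator. A minimal sketch with an assumed 16-bit fraction; a consumer that doesn't care about sub-frame timing would simply shift the fraction bits out:

```python
def resample_linear(src, ratio, n_out, frac_bits=16):
    """Fixed-point phase accumulator: the integer part of 'phase' indexes
    the source sample, the fractional part interpolates between samples.
    Caller must ensure the phase never runs past len(src) - 2."""
    one = 1 << frac_bits
    step = int(ratio * one)          # playback ratio in fixed point
    phase, out = 0, []
    for _ in range(n_out):
        i = phase >> frac_bits               # integer sample frame
        frac = (phase & (one - 1)) / one     # sub-frame offset, 0..1
        out.append(src[i] + (src[i + 1] - src[i]) * frac)
        phase += step
    return out
```

Note the `phase >> frac_bits` and `& (one - 1)` operations are exactly the int-side work the post says integer timestamps would impose; the `/ one` is the integer->FP conversion needed to use the fraction in FP DSP code.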
Re: [LAD] "enhanced event port" LV2 extension proposal
On Friday 30 November 2007, Dave Robillard wrote: [...] > > Sounds like there's a fundamental difference there, then. I'm > > using a model where a port is nothing more than something that > > deals with a value of some sort. There are no channels, voices, > > different events or anything "inside" a port - just a value. An > > output port can "operate" that value of a compatible input port. > > LV2 ports can contain /anything/. It's a void*, completely 100% > opaque in every way. There might be 'channels' or 'voices' or > a 'value' or whatever in there, it's just data. Well... That's really what I have as well, looking at the actual specification. Sorry for just causing confusion here... :-/ There are no explicit limitations to what A2 ports can deal in. It's just that there is no host level concept of channels, voices or anything like that, whatsoever, below the port level. There is 2D addressing *above* the port level, for telling plugins what ports you're talking about, and that's all there is to it, as far as the host is concerned. You could use some port type for "raw" MIDI (all channels on one port), so you can route that around the graph, much like physical MIDI cables. (I'll most probably use that for MIDI<->control mappers and the like in plugin form. The host design is pretty much modelled after the microkernel OS idea; plugins are the processes, ports are the interface to IPC etc. So, I don't want to force I/O code and whatnot into the actual host, but rather provide that as plugins.) That's as far as you get with those MIDI (or any other multichannel) ports, though. If you want to do modular synth style wiring (control by control), you need other types of ports - like ramped control event ports - if the host is to be of any assistance in wiring them. [...] > > Yes, obviously. I don't quite see what you think I'm trying to say > > here. :-) > [snip] > > Me neither :) In fact, I'm not even sure myself now... That might have something to do with it.
;-) > There are of course infinite ways to do 'events' for plugins (and an > infinite number of interpretations of what 'events' means). Nice > thing about LV2 is you can do them all. Something like your > non-flat event stuff (with queueing and linked lists and dispatching > and such, rather than plain old directly connected buffers) may find > a place in LV2 as well - Right; dispatching and stuff... I think this is about where the confusion starts. How is this supposed to be done here? Are LV2 "enhanced event ports" opaque, all-in-one, somewhat like the "raw" MIDI I'm talking about above, or how are LV2 hosts supposed to deal with them? I've been confusing this with the way I address ports in A2, which doesn't really have anything to do with the API. Again, there are just ports (abstract objects) that can be connected (through plugin callbacks), and that's all we see of it from the host side. Now, the "standard" event model I'll use for most plugins will share queues for simplicity and performance - so events will need to be marked somehow (internal index to something, usually), so the plugin knows which event goes where. This, however, is stuff that only plugins actually using this event model need to deal with. You can't see this on the plugin API level - and some plugins may in fact use some other event system with just a flat buffer for each port->port connection, for that matter. :-) > may have to for certain message-based > (ahem) "programming" modular stuff ala Max. What we have now is the > sample accurate hard realtime sort of 'events' (ala Jack MIDI). > > Havn't quite figured out the bridging of those two worlds yet We are talking "abuse of real time event system for passing of asynchronous messages" or something along those lines...? //David Olofson
Re: [LAD] "enhanced event port" LV2 extension proposal
On Friday 30 November 2007, Dave Robillard wrote: [...] > I do agree we should not be adding crufty features to support > massive buffers, if that's what you mean. It's easier to just split > the cycle anyway. Yes, that's exactly what I mean. Sure, one *could* have a use for really huge buffers (say, running large FFTs without intermediate buffering), but to me, that seems too far out to justify having everyone deal with 32:32 event timestamps for that reason alone. [...cache footprint, buffers etc...] > A clever host can just use the same, say, 2 buffers (stereo audio), I'm assuming any serious host implementation does that, but that doesn't help when some plugins are using more than 1-2 audio inputs. Even so, it shouldn't really be an issue unless huge buffers - like >=65536 samples - are used. //David Olofson
Re: [LAD] "enhanced event port" LV2 extension proposal
On Friday 30 November 2007, Dave Robillard wrote: > > That's why I'm using a Port as the smallest "connection unit", > > much like LADSPA ports, so there is no need for an event type > > field of any kind at all, let alone a URI. > > Ports /are/ the smallest "connection unit". But ports can /contain/ > events, and if we want multiple kinds of events in a single port, > then the events themselves need a type field. Sounds like there's a fundamental difference there, then. I'm using a model where a port is nothing more than something that deals with a value of some sort. There are no channels, voices, different events or anything "inside" a port - just a value. An output port can "operate" that value of a compatible input port. Of course, that "value" could be anything, but I'm not explicitly supporting that on the API level. If plugins want to use MIDI messages, they're on their own when it comes to mapping of channels, CCs etc. That stuff is really beyond the scope of my project, as one wouldn't be able to configure and control such things normally. > > The data in the events *could* be MIDI or whatever (the host > > doesn't even have to understand any of it), but normally, in the > > case of Audiality 2, it'll be modular synth style ramped control > > events. That is, one port controls exactly one value - just like > > in LADSPA, only using timestamped events with ramping info instead > > of one value per buffer. > > The host might not have to (though in practise it usually does), but > other plugins certainly do. You can't process events if you don't > even know what they are. Yes, obviously. I don't quite see what you think I'm trying to say here. :-) > > Extensibility is a non-issue on this level. > > OK, the event extension doesn't define your ramped control events, > so you're not allowed to use them, ever, period. > > ... looks like extensibility is an issue at this level, eh? ;) Right, but that's mostly about Audiality 2 anyway. 
There, if I for some reason started with control events without ramping, I'd add another "control events v2" port type. Whether that type happens to be a superset of the first one doesn't really matter, as they're still not compatible. Where it makes sense, one can provide converters to/from other types, but to the host (the low level machinery directly dealing with plugin graphs, that is), those are just ordinary plugins with only one input port and one output port. > > What you do if you want > > more stuff is just grab another URI for a new event based > > protocol, and you get to start over with a fresh event struct to > > use in whatever way you like. (In fact, as it is, the host doesn't > > even have to know you'll be using events. It just provides a LIFO > > pool of events for any plugins that might need it.) > > Sounds like you're thinking too hard. Nah. I'm just in the middle of another project, and the Audiality 2 code isn't in a state where I could post that without just adding to the confusion. And, I think we might have a terminology impedance mismatch. :-) > "Events" here are just a bunch of bytes in a flat buffer. Mine are implemented as linked lists of small memory blocks, for various reasons. (I've had a working implementation for years, so I'll stick with that for now. Not saying it's the best or most efficient way of doing it, but I have yet to figure out how to bend flat buffers around my event routing model - or the other way around.) I did "hardwire" fixed point timestamps as those are closely related to the whole deal with sample frames, buffers etc - but the data area is indeed just a bunch of raw bytes. > There is definitely no protocol here. Please, please don't > say "protocol". That way lies painful digressed conversations, > trust me. I'm open to alternative terminology. :-) What I'm talking about is just the name of "whatever goes on between connected ports."
I don't want the term to be too specific, as it also covers LADSPA style audio buffers, shared buffers (which can contain function pointers) and whatever else plugins might use to communicate. //David Olofson
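The connection rule described in this exchange - compare protocol URIs once, at connect time, never in the audio thread - reduces to a few lines. The `Port` shape and the URIs below are hypothetical, not from any real Audiality 2 or LV2 header:

```python
class Port:
    """Minimal model of a port: a name, a protocol URI, and a direction.
    Everything else (event semantics, data fields) is the plugins'
    business, not the host's."""
    def __init__(self, name, protocol_uri, is_input):
        self.name = name
        self.protocol = protocol_uri
        self.is_input = is_input

def can_connect(out_port, in_port):
    """Host-side check: output to input, same protocol URI. This string
    comparison happens once per connection, so no strcmp of URIs ever
    runs in the audio thread."""
    return (not out_port.is_input and in_port.is_input
            and out_port.protocol == in_port.protocol)
```

Incompatible types would be bridged by ordinary converter plugins with one input and one output port, as the message above describes.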
Re: [LAD] "enhanced event port" LV2 extension proposal
On Thursday 29 November 2007, Dave Robillard wrote: [...] > Same with LV2 ports; works perfectly for port types. Problem is, > sticking a URI in each /event/ is far too bloated/slow. That's why I'm using a Port as the smallest "connection unit", much like LADSPA ports, so there is no need for an event type field of any kind at all, let alone a URI. The data in the events *could* be MIDI or whatever (the host doesn't even have to understand any of it), but normally, in the case of Audiality 2, it'll be modular synth style ramped control events. That is, one port controls exactly one value - just like in LADSPA, only using timestamped events with ramping info instead of one value per buffer. Extensibility is a non-issue on this level. What you do if you want more stuff is just grab another URI for a new event based protocol, and you get to start over with a fresh event struct to use in whatever way you like. (In fact, as it is, the host doesn't even have to know you'll be using events. It just provides a LIFO pool of events for any plugins that might need it.) //David Olofson
Re: [LAD] "enhanced event port" LV2 extension proposal
On Thursday 29 November 2007, Dave Robillard wrote: [...] > Well, sure, but big data is big data. In the typical case plugin > buffers are much smaller than the cache [...] Of course, but that's exactly what I'm talking about - large buffers, and why it doesn't make sense to support them. :-) If you're using 65536 samples per buffer, it just takes a plugin with four audio inputs and you're up to 1 MB of intermediate buffers. Even if that does fit in the cache, in a real life situation, with other threads working, most of it will be cold again every time the audio thread starts. So, your processing speed is potentially capped at the memory bandwidth throughout the buffer cycle, or at least until you start reusing buffers in the graph. And what is supposed to be gained by this...? I don't see why a plugin API of this type should support nonsense like that at all, and thus, it shouldn't affect event timestamps either - but well, now it's there, and there isn't really any Right Thing(TM) to do here, I guess. > crunching away on plain old audio here is definitely CPU bound (with > properly written RT safe host+plugins anyway). Last time I looked into this, a reasonably optimized resampler with cubic interpolation and some ramped parameters was memory bound even on a lowly P-III CPU, at least with integer processing. (Haven't actually tested this on my AMD64...) I think floating point should be as fast or faster in most cases, at least on P-III CPUs and better - and with SIMD, you may get another 2x-4x higher throughput at that. Could be way off here, though. Do you have benchmark figures? //David Olofson
Re: [LAD] "enhanced event port" LV2 extension proposal
On Wednesday 28 November 2007, Dave Robillard wrote: [...] > The only problem that needs to be handled is how to get the type in > there. I would like to find a good solution to this problem that's > as extensible as URIs but doesn't actually stick a URI in the event > struct (there are a few other future extensions that have the same > problem. strcmp of URIs in the audio thread is, as you say, > completely out of the question, but so is handing out a flat numeric > space. This is /the/ problem that needs solving here, and I'm > desperately trying to guide the conversation in a direction that > will get it solved nicely ;) I don't know if this is applicable here, but for Audiality 2 I'm dealing with this on the connection level. Each control is a port like any other, meaning it has a name, a protocol URI and a few other parameters the host needs in order to know what can and cannot be connected. If two ports have the same URI, they can be connected, and that's it, basically. Event semantics ("structured stream", "commands" etc) and data fields are left to the plugins that implement the ports, so the host doesn't even need to know what the plugins are talking about. (This is a "direct connection" model; data is not normally piped through the host.) On the physical level, I still have ports share event buffers (or rather, queues in this case) so plugins don't have to sort/merge or poll umpteen queues all the time. "What event goes where" is decided by means of filling an address field with an opaque cookie value that the plugin generates upon connection. The cookie can be ignored if there's one queue per port, or it can be a fully specified plugin-wide port index if the plugin uses a single event queue, or anything in between. Multiple queues...? Yes, A2 plugins can use multiple queues when that suits the implementation better. (Multiple inner loops, rather than running the whole synth, all voices, or whatever, one sample at a time.)
Thus, a plugin doesn't have to mess around with the events to get them to the right places in the right order. It just creates one queue per voice/strip/section/whatever loop and hands the right queues out when connections are made. This means an "event target" also needs to contain a queue pointer. Of course, one could just use one queue per plugin and use only cookie addressing, but I decided to allow multiple queues to eliminate most of the event dispatching complexity you'll otherwise have in any non-trivial plugin. It seems to be a simple and efficient solution, but I could be missing something, of course. That remains to be seen when I have some more serious code running. :-) //David Olofson
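A toy model of the queue-plus-cookie addressing described above. All names are invented for illustration, and a real implementation would of course use preallocated, RT-safe queues rather than Python lists:

```python
class Plugin:
    """The receiving plugin hands out an (event queue, cookie) target at
    connect time: the queue selects the voice/strip/inner loop, and the
    cookie (opaque to the sender) says which port inside it the events
    are for. The sender then never sorts, merges, or dispatches."""
    def __init__(self, n_voices):
        self.queues = [[] for _ in range(n_voices)]

    def connect(self, voice, port_index):
        # The cookie here is just the plugin's internal port index;
        # anything from "ignored" to a plugin-wide index works.
        return (self.queues[voice], port_index)

def send(target, timestamp, data):
    """Sender side: append to whatever queue the target names, stamping
    the event with the receiver's cookie."""
    queue, cookie = target
    queue.append((timestamp, cookie, data))
```

The receiver's per-voice inner loop then drains exactly one queue, already in order, using the cookie only to pick the control within that voice.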
Re: [LAD] "enhanced event port" LV2 extension proposal
On Wednesday 28 November 2007, Dave Robillard wrote: [...] > > Obviously, this effect is less visible with non-trivial plugins - > > but even how many plugins in a "normal" graph are actually CPU > > bound? In my experience, the effect is very pronounced (several > > times higher CPU load with 512 samples/buffer than with 32) when > > using a bunch of sampleplayer voices (cubic interpolation + > > oversampling) and some simple effects. You have to do some pretty > > serious processing to go CPU bound on a modern PC... > > Not really true for linear scans through memory, which reading > plugin buffers usually is. This is the ideal cache situation, and > you can definitely get full utilization sweeping through things much > larger than the cache this way (this is exactly what the cache is > designed to do, and it does it well; try it). I did, and that's what I'm drawing my conclusions from. ;-) No cache controller logic in the world can avoid the bottleneck that is memory bandwidth. If your data has been kicked from the cache, it needs to get back in, and the only time you're not taking a hit from that is when you're doing sequential access *and* your code is CPU bound, rather than memory bound. > When you're jumping all over the place things do go to hell fast, > but luckily we're not. Right, that's not the issue here, but I'm talking about "cold" memory and raw bandwidth. //David Olofson
Re: [LAD] "enhanced event port" LV2 extension proposal
On Wednesday 28 November 2007, Krzysztof Foltman wrote: [...] > - or introducing a "65536 samples milestone" kind of event similar > to "clear" message in LZ compression format, separating events from > different 65536-sample "eras" :) Why would you need to do this? Timestamps in Audiality 0.1.x are 16 bit and based on a continuously running, wrapping timer. Audiality 2, VST, DSSI and others use offsets from the first sample in the current buffer. Either way works just fine as long as the timestamps cover the buffers safely. Now, this is assuming that we consider events real time data, just like audio. That is, plugins are not allowed to send events for future buffers, and they'll only ever see events for the current buffer. I don't see the point in supporting anything else, unless you go all the way and provide a random access sequencer API, where plugins can just browse around as they wish. > If the plugin does not implement this extension, it cannot handle > buffers of more than 65536 samples - and that should be perfectly > fine in most cases. Well, it's kind of nasty that the base API supports larger buffers if the events don't, but IMNSHO, the mistake is in supporting >65536 sample buffers *at all*... > Hell, max buffer size in Buzz was 256 samples, pitiful by > today's standards, and it was still quite efficient. I'd say it's efficient *because* of this. It may not matter much on a quiescent system with a CPU with megabytes of cache, but if you have serious GUI stuff going on - or a game engine - your audio thread will wake up and find 100% cold memory every time it fires! In that situation, every byte of the graph's footprint will have to be fetched from cold memory at least once per engine cycle. Of course, if your graph footprint is larger than the cache, things get even worse, as you'll have cache misses over and over until the audio thread is done with the buffer. 
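The buffer-relative timestamp scheme described above (as in Audiality 2, VST and DSSI) can be sketched as a slice-rendering loop. All names here are invented for illustration; the point is that every event offset falls inside the current buffer, so no "era" or "milestone" events are needed:

```c
/* Illustrative sketch (hypothetical names): render a buffer in
 * slices, applying each event at its buffer-relative frame offset.
 * Assumes events are sorted and all offsets are < frames, since
 * events are real time data for the current buffer only. */
typedef struct Event {
    unsigned frame;   /* offset from the first frame of this buffer */
    float value;      /* new gain value, for this toy example */
} Event;

static unsigned run(float *out, unsigned frames,
                    const Event *ev, unsigned nev, float *gain)
{
    unsigned i = 0, e = 0, rendered = 0;
    while (i < frames) {
        /* Render up to the next event, or to the end of the buffer. */
        unsigned end = (e < nev) ? ev[e].frame : frames;
        while (i < end) {
            out[i++] = *gain;
            ++rendered;
        }
        if (e < nev)
            *gain = ev[e++].value;  /* apply event, then continue */
    }
    return rendered;
}
```

A host delivering events for future buffers would break the `ev[e].frame < frames` assumption, which is exactly why the text argues plugins should only ever see events for the current buffer.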
Re: [LAD] "enhanced event port" LV2 extension proposal
On Wednesday 28 November 2007, Krzysztof Foltman wrote: [...] > I don't think it's a serious problem. Huge processing buffers are > not very useful in practice. [...] Actually, they're directly harmful most of the time! For any graph with more than one plugin, and with plugins that use internal buffers of the I/O buffer size, large buffers mean you'll get a gigantic cache footprint. It gets worse in "high" latency real time situations (games, media players etc), where you have other threads fighting for the cache as well. Obviously, this effect is less visible with non-trivial plugins - but even so, how many plugins in a "normal" graph are actually CPU bound? In my experience, the effect is very pronounced (several times higher CPU load with 512 samples/buffer than with 32) when using a bunch of sampleplayer voices (cubic interpolation + oversampling) and some simple effects. You have to do some pretty serious processing to go CPU bound on a modern PC...
Re: [LAD] [OT] questions re: cross-compiling
On Thursday 15 November 2007, Dave Phillips wrote: > Greetings: > > Simple questions, probably no simple answers : > > Can I compile Audacity for Windows using a Linux tool-chain ? Yes, that's how I made the Win32 binaries for Kobo Deluxe, DT-42 etc. Some libraries (depending on build tools, mostly) may need some "help" to install in the cross-compiler environment, but widely used cross platform libs (SDL, GTK+ etc) usually cross-compile out of the box and/or come with pre-built packages for cross-compiling. Before considering building a cross compiler of your own (which is not terribly hard these days, but still), check what's available for cross-compiling on your distro! The Linux->Win32 case is a rather popular one, but some distros have cross compiler packages for other target platforms as well. [...] > If I can't compile it directly in Linux can I do it in Wine ? If > so, how ? That should work too, but doesn't seem to make much sense if you're already running the "home" platform of the GNU tools. :-) > And last: > > If I can only do it native Windows, what do I need in the way of > compiler etc. ? Cygwin is probably the easiest way to get started with that, whether you're using Wine or Windows: http://www.cygwin.com/
Re: [LAD] Terminology problem
On Wednesday 14 November 2007, Phil Frost wrote: [...] > > Now, what do I call the first dimension, *before* specifying what > > channel I'm talking about? How do I address "the Volume CCs of all > > 16 MIDI channels", or "the PFL buttons of all mixer strips?" I'd > > like a short, logical, non-confusing name for this, but I can't > > seem to think of one. > > Parameter? Maybe... I considered Control as well, but to me, both of those sound more like single instances of something (just like Ports), rather than the Class/Type style (ie "Voice Pitch Controls") term I'm looking for. *Some* plugin API or something somewhere must have something similar to this, I'd think, but I can't remember seeing one...
[LAD] Terminology problem
Not sure if this is a hard problem, or if I'm just being extra stupid today... Assume we have a 2D addressing scheme for addressing Ports on a plugin. A Port here can be a connection that sends or receives a single mono audio stream, control data for a single parameter, or something like that. The first dimension is similar to MIDI CCs, or the different types of controls in a mixer strip. The coordinate indicates what kind or group of control/port/whatever we're talking about. Examples: "Master volume control" (probably just one of these) and "channel pitch controls" (one per channel, obviously). On a (normal) studio mixer, we'd be talking about a horizontal row of controls, all of the same kind. The second dimension is similar to MIDI channels, synth voices, or mixer channels, depending on context. I'm calling all this "Channel", as that's the least domain specific name I can think of that still makes sense. Basically, when you have multiple identical internal objects, this is how you address the instances. Now, what do I call the first dimension, *before* specifying what channel I'm talking about? How do I address "the Volume CCs of all 16 MIDI channels", or "the PFL buttons of all mixer strips?" I'd like a short, logical, non-confusing name for this, but I can't seem to think of one.
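The 2D scheme above is easy to picture as a tiny C type. Everything here is hypothetical illustration (the names `Address`, `kind`, `ANY_CHANNEL` and `addr_match` are made up): the first dimension selects *what* kind of control, the second selects *which* instance, and "all channels" addressing is just a wildcard in the second coordinate.

```c
/* Hypothetical sketch of the 2D port addressing described above. */
typedef struct Address {
    unsigned kind;     /* the unnamed first dimension: which control */
    unsigned channel;  /* which instance: MIDI channel, voice, strip... */
} Address;

enum { ANY_CHANNEL = ~0u };  /* wildcard: "all channels/strips" */

/* Does 'a' fall under 'pattern'? "The Volume controls of all 16
 * channels" would be a pattern with channel == ANY_CHANNEL. */
static int addr_match(Address pattern, Address a)
{
    return pattern.kind == a.kind &&
           (pattern.channel == ANY_CHANNEL ||
            pattern.channel == a.channel);
}
```

This doesn't answer the terminology question, but it shows why a name for the first dimension is needed: it is the only part of the address that is always specified.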
Re: [LAD] LV2 format stuff
On Wednesday 14 November 2007, Krzysztof Foltman wrote: > David Olofson wrote: > > I would think that users might want some way of "upgrading" their > > project files to use new plugin versions without just manually > > ripping out and replacing plugins, but even without some API help, > > I'd rather not see hosts trying to do this automatically... > Well, a common solution is to store plugin version identifier (it > could even be a sequence number assigned by plugin author) in the > song. Then, the plugin is able to convert at least the current > parameter values (but not, say, automation tracks) on song load. > > It doesn't solve *all* the compatibility problems, but can solve the > most immediate one, I think. Provided plugins are identified by URIs, and the same URI implies 100% compatibility, how do you actually find the new version of the plugin? Then again, in that case, we're really talking about a brand new plugin, but it seems to me that there is some useful gray zone here; plugins that are mostly compatible with their predecessors. New major versions, if you like. Provide a version history in the form of an array of URIs, so hosts can find and deal with this if desired? Just brainstorming a little here... > > (suggesting a newer, partially compatible version of the plugin if > > there is one), but again, no silent automatic upgrades, please. > > Too much risk of the new version not working as expected. > > > Automatic conversion worked with VST and Buzz. Well, considering we're still talking about 100% compatibility (the new version has the same unique ID), it *should* work - but in reality, it all comes down to the quality of the plugins, or rather, how well they actually maintain this claimed compatibility. > But, warning the user about possible incompatibility because of > newer version is a good idea. Yes... Just putting a warning message in some log or something could be a very useful "debugging" tool. 
If it doesn't sound right, you start by having a look at that log. > Maybe a plugin should be able to override it if it's absolutely > certain that no compatibility problems may arise, but that may cause > problems :) Right; everyone *thinks* their bug-fixed versions are 101% compatible with the old versions - so next thing, hosts start overriding the override feature. :-D [...] > I love the idea of fixed point 16:16 timestamp (assuming the time > would be relative to current buffer start, not some absolute time). Yep, that's what I had in mind. (Absolute time definitely belongs in some optional dedicated timeline interface.) > Most plugins would just shift timestamps by 16 bits and compare them > to the loop iterator :) Sounds practical. Exactly. And besides, even when you do use the fraction part, you'll normally *only* be interested in the fractional part. Assuming you're implementing sample accurate timing first (why bother with sub-sample otherwise?), you're already at the very sample the event should "land" in, so you just want to know how much to nudge that initial oscillator phase, or whatever you need to do. [...] > I bet most plugins wouldn't support fractional part of timestamps, > and those that would, could report it as a separate feature, for use > in granular synthesis-aware hosts :) Yes, I'm reaching too far ahead > here, but you kind of asked for it :) So, a different interface for control events that just happen to have fractional timestamps? Well, it does the job as far as dedicated granular synth "plugin packs" go, but then you can't mix these ports with other control ports. I was kind of thinking truly modular synthesis here... :-) > > Other than that, I'm not sure it has much value outside of > > marketing... Any other real uses, anyone? > > > Can't think of any. Events for true oscillator "hard sync", perhaps > (phase reset with subsample precision). Yeah, that actually sounds like an interesting application. 
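The 16:16 fixed point timestamp idea discussed above is compact enough to show directly. This is a sketch with invented names (`Timestamp`, `ts_make`, etc.), not any actual extension's API: the integer sample offset lives in the high 16 bits, the subsample fraction in the low 16, so a plugin that ignores subsample accuracy just shifts right and compares against its loop iterator.

```c
#include <stdint.h>

/* Sketch of 16:16 fixed point event timestamps (invented names). */
typedef uint32_t Timestamp;

static inline Timestamp ts_make(unsigned frame, unsigned frac16)
{
    return ((Timestamp)frame << 16) | (frac16 & 0xffffu);
}

/* Sample-accurate plugins: compare this against the loop iterator. */
static inline unsigned ts_frame(Timestamp t)
{
    return t >> 16;
}

/* Subsample-aware plugins: how far into the sample period the event
 * lands - e.g. how much to nudge the initial oscillator phase. */
static inline double ts_fraction(Timestamp t)
{
    return (t & 0xffffu) / 65536.0;
}
```

Note how this matches the point made above: once you are at the right sample, only the fractional part is interesting.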
Just a moment ago, I realized it can be used for things like "multilooping", skipping into samples and the like, similar to the "sampleoffset" command found in some old trackers. That sounds like modular synth stuff again, though. (Implementing advanced looping effects as separate plugins, instead of building all features you can think of into the sampler, only to still forget half of the ones you actually want.) That is, it's probably going into Audiality 2, but it may not make sense in LV2. [...timeline/transport...] > A separate port type (which would probably be implicitly > auto-connected by most hosts) would perhaps be nice for that, just > so that things aren't scattered too much. Although pla
Re: [LAD] LV2 realtime safe memory pool extension
On Tuesday 13 November 2007, Lars Luthman wrote: > On Tue, 2007-11-13 at 15:52 +, Krzysztof Foltman wrote: > > One more thing - is there any way to communicate lack of data on > > an audio port? For example, a synth with no note plugin could > > communicate the fact that it produced a buffer full of zeros to > > the following plugins, so they don't process those zeros > > unnecessarily? > > No. It sounds very similar to the input-parameter-has-changed > thing Yes, they're very similar as long as control events and audio buffers alike are only "one per run() call". With timestamped events, they start to drift apart, I think. > - maybe both could be solved by defining a new port class > whose buffer pointer points to a bitmask large enough to hold one > bit for each plugin port, and using that to indicate when something > has "happened" at a port (where the definition of a "happening" > depends on the port class) ? Similarly, when using timestamped control events, both features can be implemented that way - or rather, control events *are* "control changed" notifications (no extra logic needed), and some standardized control could be used to provide info on whether or not there is audio on audio ports. However, it's not just about flags, unless all plugins are to be *required* to support silence. If you're sending to an input that doesn't care about silence, that input will need a zeroed out buffer, whereas an input that understands silence won't touch the buffer if it's known to be silent. > There also are extensions in development (I think) that only call > plugin callbacks if there actually is relevant input, but I think > that was more for message passing and things like that. That's a whole lot more complicated than it may sound at first. Whether or not a plugin needs to run can depend on a lot more than just whether or not it has input. 
Pretty much every effect (even filters) has a tail of some sort, so even in the simplest cases, you'd still have to ask the plugin when it's safe to *stop* calling it. Then you might get away with just waiting for input before starting it again, but I'm not sure that covers everything... Besides, I don't think it buys us much having this level of "intelligence" in the API. All you can save is a function call, but to do that, the host needs to check a bunch of ports for data. I think the only sensible solution is to make it possible for plugins to generate and receive silent buffers, and let the plugins deal with that optimally - or not at all. How about something like this: * A "Buffer" is an object with some flags, a read-only (host managed) pointer to an actual audio buffer, and a pointer that is managed by the connected output port. * Buffers can be marked "SILENT" somehow. * Output ports that support this should set the SILENT flag and point the Buffer at a host provided silent buffer or equivalent, when the output is silent. When there is sound, the flag is removed and the pointer is reset to the value of that read-only pointer. * Input ports that care about this can look for the SILENT flag. > > Sure, it's not trivial to set that flag correctly in plugins that > > deal with any kind of internal state (especially complex one like > > in reverb), but it worked very well for Buzz. The end result was > > the ability to use very complex processing networks - as long as > > not all instruments produced sound all the time. > > This definitely sounds very useful. It is, but the decision should go with the state - that is, inside the plugin. Simpler, cleaner and more efficient, I think. The API (or host, as applicable) should only provide means for plugins to communicate the required information. It appears that this would suggest moving complexity into plugins, but if this is to do much good, I think that's actually the easier way... 
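The Buffer proposal in the bullet list above can be sketched as follows. All names are hypothetical (this is not an actual LV2 extension): the host owns a shared zeroed buffer, outputs that support silence point at it and raise a flag, and inputs that don't understand silence still see valid zeroed data.

```c
/* Minimal sketch of the proposed silence-aware Buffer (invented
 * names). 'audio' is the read-only, host managed actual buffer;
 * 'data' is managed by the connected output port. */
#define BUF_SILENT 1u
#define FRAMES 64

typedef struct Buffer {
    unsigned flags;
    const float *audio;  /* host managed actual audio buffer */
    const float *data;   /* what connected inputs read from */
} Buffer;

/* Host provided, always-zero buffer shared by all silent outputs. */
static const float host_silence[FRAMES];

static void output_set_silent(Buffer *b)
{
    b->flags |= BUF_SILENT;
    b->data = host_silence;  /* inputs that ignore SILENT read zeros */
}

static void output_set_active(Buffer *b)
{
    b->flags &= ~BUF_SILENT;
    b->data = b->audio;      /* back to the real buffer */
}
```

Silence-aware inputs check `flags & BUF_SILENT` and skip processing; naive inputs just read `data` and get zeros, so nothing is *required* of plugins.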
Besides, people spend countless hours optimising code, writing SIMD code etc. Why not spend a fraction of that time implementing a relatively trivial optimization like this? [...]
Re: [LAD] LV2 realtime safe memory pool extension
tionship too, but I'm not totally sure about the details yet. I'm probably going to try a 2D addressing approach; some ports may have multiple connections wired to different abstract instances of things (mixer voices, synth voices...) in the plugin. The words "dynamic voice allocation" keep turning up - but I think it's just an illusion. Strictly speaking, MIDI doesn't have that! Each note number is effectively also a virtual voice ID, and there are only 128 of them per channel, implicitly preallocated as soon as you have a MIDI channel ready for input. Is that (not being able to allocate hundreds or thousands of voices at any time, real time safe) an actual restriction to anyone...? A sequencer would generally have some fixed upper limit to the number of voices it can control, and with the exception of live recording, it can even figure out exactly how many voices it needs to control at once, maximum, for the currently loaded performance. I don't see a real problem here.
Re: [LAD] LV2 realtime safe memory pool extension
> > > allocation/deallocation calls? > > > [heresy mode off] > > > > Not quite sure what you mean... Sounds dangerous, though. ;-) > > Nothing very dangerous actually. I just meant that you could use > some more "non standard" parameters to use with allocator functions > to do some "trick" :-) Well, I suppose so, but it seems like a different way of adding some extra information to the memory block. Basically, you offload it to plugins/applications instead of hiding it in the allocator internals - which doesn't seem like a big win to me. ;-) [...] > > I don't like the idea of an extra pointer per block for small > > blocks - > > but if you deal with large numbers of small blocks (for events and > > the like), you should probably use a dedicated manager for that > > anyway. For example, in Audiality I use a per-host LIFO stack of > > blocks, which means no memory overhead at all, as "free()" doesn't > > need to know anything about the block to throw it back on the > > stack. > > Nice solution. I guess, on the top of my ignorance, that those > blocks are fixed size, right? Yep, 32 bytes, IIRC. (Or was it 16 bytes, before it supported 64 bit platforms...?) The fixed size is why you get away with a single LIFO stack and no overhead for allocated blocks. (Well, there *are* ways of doing that for mixed size blocks too, if you use some pointer arithmetic - but then it gets *really* tricky, if at all possible, to resize the pool dynamically.) > (ehm... at first glance Audiality seems quite interesting for my > project - http://naspro.sourceforge.net -, can I contact you > privately about that? or maybe just send me an e-mail) Not quite sure in what way it could be of use here, but sure... :-)
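The per-host LIFO stack of fixed size blocks described above is worth seeing in code, because the zero-overhead property is not obvious. This is a sketch along the lines of what the text describes for Audiality, with invented names; it is not the actual Audiality source. While a block is allocated, none of its bytes are spent on bookkeeping; the `next` pointer only exists while the block sits on the free stack.

```c
#include <stddef.h>

/* Sketch of a LIFO pool of fixed size blocks (invented names).
 * free() needs to know nothing about a block to push it back. */
#define BLOCK_SIZE 32
#define POOL_BLOCKS 256

typedef union Block {
    union Block *next;          /* valid only while on the free stack */
    char payload[BLOCK_SIZE];   /* the caller's 32 bytes */
} Block;

typedef struct Pool {
    Block blocks[POOL_BLOCKS];
    Block *top;                 /* head of the free stack */
} Pool;

static void pool_init(Pool *p)
{
    int i;
    p->top = NULL;
    for (i = 0; i < POOL_BLOCKS; ++i) {
        p->blocks[i].next = p->top;
        p->top = &p->blocks[i];
    }
}

static void *pool_alloc(Pool *p)
{
    Block *b = p->top;
    if (b)
        p->top = b->next;       /* pop */
    return b;                   /* NULL if the pool is exhausted */
}

static void pool_free(Pool *p, void *mem)
{
    Block *b = (Block *)mem;
    b->next = p->top;           /* push - no metadata needed */
    p->top = b;
}
```

Both operations are a couple of pointer moves, which is what makes this attractive for event allocation in a real time audio thread.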
Re: [LAD] LV2 realtime safe memory pool extension
> > Yes, that probably does make sense after all, considering that > > it's quite likely that a lot of plugins will deal only with a > > small number of different block sizes, and the fact that a "real" > > RT allocator (even a really good one) has quite a bit more > > overhead than a simple LIFO pool. > > Fine :-) However I would like to hear something from some plugin > developer too. Yeah. Designing around actual use cases usually results in more sensible solutions.
Re: [LAD] LV2 realtime safe memory pool extension
On Friday 09 November 2007, Stefano D'Angelo wrote: [...] > > Well, not really; you could just wrap the memory manager, adding > > a 'manager' field to every chunk. Then you can just throw in > > another TLSF manager instance when you need more memory. > > The 'manager' field will point the free() wrapper to the correct > > manager instance. > > Argh! Nice, eh? ;-) [...] > > Maybe hack TLSF to take a "maximum address span" init argument, so > > you can extend the pool as needed up to that limit? You'd just > > allocate new blocks with malloc() and instruct the hacked TLSF to > > extend to the end of that block, while pre-allocating (throwing > > away) any holes caused by unrelated malloc()s. > > Maybe I just don't understand, but however doesn't this mean, in > practice, allocating all of the memory right from the start? No, you'd just construct the manager's internal structures as if you were going to allocate a pool of the maximum size - but you actually give it a smaller initial amount of memory. Provided the allocator actually *works* without a fully populated pool (I'm not sure about that...), this would allow new memory blocks to added to the pool later, as long as they're within the address range defined by the start of the pool and the "maximum address span" that the allocator was initialized for. > > > And even if you have some background thread doing it when it's > > > needed, I think it's very hard to do that while plugins are running, > > > and so allocating and modifying on that memory. > > > > Yes, that's why you can't realloc() the pool. You really have to > > add new blocks of memory to it. > > Mmmm... in my opinion someone could contact the TLSF authors and see > what can be done. Yeah, or just read the code (again), for starters. (I started hacking an RT memory manager for EEL, then found and studied TLSF a bit, but that must have been one or two years ago... 
Got away with a small RTAI kernel driver + "firm" real time scripting for that application, and then I've been working on other stuff.) > I was thinking that when the memory is "near to saturation", the > allocator could malloc() a new pool whose size is equal to that of > the first pool (maybe using a background thread) so that the bitmap > for the new pool is actually the same. As I understand it, the bitmap represents the current state of the pool, so if you allocate a new pool, you'd need a new bitmap to go with it, or weird things will happen. > Now the address resolving algorithm > should know in some way that the first pool is full for blocks of > that size or bigger, so the address should be resolved in the second > pool, and so on. Wiring additional pools to the out-of-memory condition indeed avoids overhead when dealing with the initial pool, but it multiplies overhead when you need to deal with additional pools instead. Also, how do you know where a free()d block belongs? Checking the address against each pool isn't exactly efficient, but perhaps no disaster for a small number of pools... Basically, without somehow piggy-backing on the existing logic of TLSF, I have a hard time seeing how you could get away with less overhead than the "wrapped manager" solution I first suggested. (That "only" adds one pointer per allocated block, and an extra level of indirection to use the right manager. Not great, but at least it's simple, lets you add pools of any size, and scales to any number of pools.) > This is just a thought... I never really wrote or worked on an > allocator :-) I did, but only enough to realize that coming up with something really nice and efficient is a lot of work. :-) > Anyway, getting back to the initial question, I think that we should > have two separate extensions, one for RT fixed size allocators and > one for "real" RT-safe allocators. 
Yes, that probably does make sense after all, considering that it's quite likely that a lot of plugins will deal only with a small number of different block sizes, and the fact that a "real" RT allocator (even a really good one) has quite a bit more overhead than a simple LIFO pool.
Re: [LAD] LV2 realtime safe memory pool extension
On Friday 09 November 2007, Stefano D'Angelo wrote: [...] > > Yes, that's exactly my point, but I think it needs to deal with > > arbitrary size chunks to actually be able to serve that purpose in > > real applications. > > IIRC the TLSF allocator can do that. Yes, that's what it does - which is why I suggested it. ;-) [...] > > Wouldn't it be more useful with some interface that allows plugins > > to request memory in a non-blocking manner, and if instant > > allocation fails, have the host notify the plugin when the memory > > is available? > > Basically, a malloc() call that means "If I can't have this memory > > right away, please expand the pool so I can have it later!" > > Mmmm.. I guess it's not easy to resize the pool in a rt-safe way. Well, not really; you could just wrap the memory manager, adding a 'manager' field to every chunk. Then you can just throw in another TLSF manager instance when you need more memory. The 'manager' field will point the free() wrapper to the correct manager instance. However, a nice, clean, efficient solution might be a bit harder to come up with, I think. IIRC, the current TLSF implementations scale various internal structures and stuff to fit the pool size, so you can't just bump the "brk limit" later. Maybe hack TLSF to take a "maximum address span" init argument, so you can extend the pool as needed up to that limit? You'd just allocate new blocks with malloc() and instruct the hacked TLSF to extend to the end of that block, while pre-allocating (throwing away) any holes caused by unrelated malloc()s. > And even if you have some background thread doing it when it's > needed, I think it's very hard to do that while plugins are running, > and so allocating and modifying on that memory. Yes, that's why you can't realloc() the pool. You really have to add new blocks of memory to it.
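The "wrapped manager" idea above - a 'manager' field on every chunk so free() can route the block back without searching pools - can be sketched like this. Everything is hypothetical illustration; the `Manager` callbacks stand in for real TLSF instances and are backed by plain `malloc`/`free` here just so the sketch is self-contained.

```c
#include <stdlib.h>

/* Sketch of wrapping a memory manager so each chunk remembers its
 * owner (invented names; the callbacks stand in for TLSF pools). */
typedef struct Manager {
    void *(*alloc)(struct Manager *m, size_t size);
    void (*release)(struct Manager *m, void *mem);
} Manager;

typedef struct Header {
    Manager *manager;   /* the one extra pointer per allocated block */
} Header;

static void *rt_malloc(Manager *m, size_t size)
{
    Header *h = m->alloc(m, sizeof(Header) + size);
    if (!h)
        return NULL;
    h->manager = m;     /* remember which pool this came from */
    return h + 1;       /* hand the caller the bytes after the header */
}

static void rt_free(void *mem)
{
    Header *h = (Header *)mem - 1;
    h->manager->release(h->manager, h);  /* route back to the owner */
}

/* Stand-in backend for the sketch: a "manager" backed by malloc(). */
static void *sys_alloc(Manager *m, size_t size)
{
    (void)m;
    return malloc(size);
}

static void sys_release(Manager *m, void *mem)
{
    (void)m;
    free(mem);
}
```

When a pool runs dry, the host just instantiates another manager; allocations from it free correctly because each block carries its owner.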
Re: [LAD] LV2 realtime safe memory pool extension
Seems like the post I'm replying to below isn't getting through to the list. I got only the CC - which, BTW, I would normally not need at all, as I'm on the list. :-) On Thursday 08 November 2007, Nedko Arnaudov wrote: [...] > > What I meant was, if all the host provides is a pool of uniformly > > sized chunks that are allocated when the plugin is initialized, > > there doesn't seem to be much point in implementing it on the host > > side. > > The naïve host side implementation would add exactly nothing, > > compared to plugins just allocating their own pools during > > initialization. > > There is point when you have several dynparam plugins that all use > the dynparam helper library requesting same sized chunks. Yes... Point taken. Also, there is likely some probability of various other plugins using "common" chunk sizes, or sizes somehow related to the audio buffer size. Then again, something that uses real time scripting would probably not fit into such patterns at all - or maybe it would? Depends on the implementation of the language, and how it's being used... > > A proper real time manager, with arbitrary chunk sizes, would be > > more motivated, as it adds functionality that plugins can't > > implement internally; namely a shared memory pool. > > I'm not against it but I dont see use of it for lv2 zyn plugins / > lv2dynparam / zynjacku. Except for enum value strings that I plan to > make fixed (runtime) max size. And of course, if there's no suitably licensed implementation that works out of the box, I can see why you wouldn't want to go there right now... [...] > > It's really only entirely ROM based synths (or "small fixed sample > > set", in the case of software), virtual analog synths and the like > > that *can* implement real time safe "program change" at all. > > One of ultimate goals of zynjacku is to make this happen with zyn > plugins. Patch loading involves lv2dynparam communication that is > realtime safe. 
It may take some time during which > synth "parameters" > will be in intermediate state, but this will depend on cpu power and > preallocation size. Things that user has control of. Well, yes; if it's only a matter of doing some calculations and initializations "fast enough", it makes sense to go all the way and make it properly real time safe. > >> and for lv2dynparam plugin library internal > >> operation too. > > > > It needs to "regroup" internally as a response to certain > > parameter changes? > > no idea what you mean by regroup, Sorry; I was thinking modifications of processing graphs (adding/removing units), reallocation of delay buffers and that sort of stuff. > for each parameter, group or > message there are internal fixed size structures. Ok. We're talking "pool of event structs", basically. [...] > Point is whether arbitrary and fixed chunk allocators be in one > extension. I tend to think that they should be separate because > algorithms behind them are quite different and host may choose to > implement only one of them. Well, yes; pools of fixed size chunks are trivial to implement, and very fast and cache effective - whereas a "real" memory manager is not. Even a host that *does* implement the latter might take advantage of knowing when a plugin really just wants a pool of fixed size chunks, even if the systems are somehow connected behind the scenes. > Also most lock free algorithms are patented so this can be point of > host supporting only one feature set. You shouldn't need any lock-free constructs, unless I'm missing something... Is it a requirement that real time safe allocations can be done in other threads than the real time audio thread? I would think that it's sufficient if other threads can make non RT safe allocations. Then you can just have the corresponding call use malloc() and mark the blocks so that the host's RT safe free() will know how to handle these blocks. 
(They'd probably have to be passed to a background thread for destruction. A simple - AFAIK, non patented - single-writer, single-reader lock-free FIFO could be used for that.)

RT reallocations of such "non RT" blocks would just not work, or you'd have to grab a block from the RT allocator and copy the data, even when the original block could theoretically be extended.

[...]
> The point is how you should define that feature A depends on feature
> B. LV2 in its current state, to my knowledge, does not treat this as
> a bug.

Right... It does have to be defined somewhere, no matter what. Probably not a good idea to just leave it to documentation and (the as o
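For the curious, the deferred-destruction scheme described above might look roughly like this: the RT thread pushes blocks onto a single-writer, single-reader FIFO, and a background thread pops them and calls the non-RT-safe free(). This is only a sketch under those assumptions - all names are made up, and a production version would need real memory barriers on weakly ordered CPUs, which plain volatile does not provide.

```c
#include <stdlib.h>

#define DEFER_SLOTS 256   /* power of two; must exceed worst-case backlog */

typedef struct {
    void *slot[DEFER_SLOTS];
    volatile unsigned read;    /* advanced only by the background thread */
    volatile unsigned write;   /* advanced only by the RT thread */
} defer_fifo;

/* RT thread: O(1), never blocks. Returns 0 if the FIFO is full, in
   which case the caller must hold on to the block and retry later. */
static int defer_free(defer_fifo *f, void *block)
{
    unsigned w = f->write;
    if (((w + 1) & (DEFER_SLOTS - 1)) == f->read)
        return 0;                      /* full */
    f->slot[w] = block;
    f->write = (w + 1) & (DEFER_SLOTS - 1);
    return 1;
}

/* Background thread: actually release the queued blocks with free(). */
static void defer_collect(defer_fifo *f)
{
    while (f->read != f->write) {
        unsigned r = f->read;
        free(f->slot[r]);
        f->read = (r + 1) & (DEFER_SLOTS - 1);
    }
}
```

Since each index is written by exactly one thread, no compare-and-swap (and hence none of the patented lock-free machinery mentioned above) is needed.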
Re: [LAD] LV2 realtime safe memory pool extension
On Thursday 08 November 2007, Krzysztof Foltman wrote:
[...]
> It would also facilitate sharing memory pools between plugins. On
> the other hand, it might sometimes make sense to have separate pools
> per plugin, to improve cache locality.

Unless you're running the plugins on different cores or CPUs, wouldn't separate pools just hurt, no matter what...? Cache locality is the very reason why you *don't* want statically allocated per-plugin buffers for audio transfer and stuff like that. Do you have a specific example where per-plugin pools would improve things?

> But if my opinion counts, please provide a LGPL-ed reference
> implementation (based on TLSF or anything else that works :) ) and
> a test suite as well :) So that host authors can simply use them to
> avoid bugs and duplicated effort.

Yeah, that's why I suggested TLSF. :-) Now, there are two issues with the TLSF implementations I know of:

1) They don't support resizing the pool on the fly.
2) They don't support 64 bit environments.

The latter should be easy to fix, I think (use sizeof(void *) for alignment, basically), but resizing the pool might require some redesign of the algorithm.

I'll need a hard RT memory manager that deals with both of these issues for my EEL scripting engine, so I'm rather interested in this particular issue, whether or not it goes into an LV2 extension.

[...]

//David Olofson - Programmer, Composer, Open Source Advocate

.---  http://olofson.net - Games, SDL examples  ---.
|    http://zeespace.net - 2.5D rendering engine   |
|   http://audiality.org - Music/audio engine      |
|  http://eel.olofson.net - Real time scripting    |
'-- http://www.reologica.se - Rheology instrumentation --'

___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev
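The sizeof(void *) alignment fix suggested above amounts to the following common idiom (macro names made up for illustration): derive the alignment from the pointer size instead of hardcoding 4, so size rounding works on both 32 and 64 bit targets.

```c
#include <stddef.h>

#define RT_ALIGN       sizeof(void *)
/* Round an allocation size up to the next multiple of RT_ALIGN.
   Works because RT_ALIGN is a power of two on all sane platforms. */
#define RT_ALIGN_UP(n) ((((size_t)(n)) + RT_ALIGN - 1) & ~(RT_ALIGN - 1))
```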
Re: [LAD] LV2 realtime safe memory pool extension
On Thursday 08 November 2007, Stefano D'Angelo wrote:
[...]
> I'm not a plugin developer and I have nothing to do with LV2, but
> anyway I think it can be a useful thing, and that it is good to have
> one "standard protocol" for this in LV2, instead of letting plugins
> rely on external libraries or, even worse, include their own rt-safe
> memory allocator.

Yes, that's exactly my point, but I think it needs to deal with arbitrary size chunks to actually be able to serve that purpose in real applications. To avoid plugins effectively allocating private pools (just as if they'd implemented it internally), a host would have to use a proper real time memory manager - and then, why not just make that available directly to plugins?

> However, I see the atomic and sleepy versions of allocate but only
> one deallocate, why?

Because it's never needed. When you free a memory block, you're really just saying "I don't need this any more", and you don't care if/when the host does anything about it.

BTW, are the blocking allocation calls intended for background threads...? Wouldn't it be more useful with some interface that allows plugins to request memory in a non-blocking manner, and if instant allocation fails, have the host notify the plugin when the memory is available? Basically, a malloc() call that means "If I can't have this memory right away, please expand the pool so I can have it later!"

The alternative would be to use the blocking version only in "background" threads, but unless you need background threads anyway, to do actual work, this is just moving complexity into plugins for no gain. If a plugin wants some memory to play around with "in the near future", it shouldn't have to implement its own asynchronous memory management just to avoid stalling the host's audio thread.

//David Olofson - Programmer, Composer, Open Source Advocate

.---  http://olofson.net - Games, SDL examples  ---.
|    http://zeespace.net - 2.5D rendering engine   |
|   http://audiality.org - Music/audio engine      |
|  http://eel.olofson.net - Real time scripting    |
'-- http://www.reologica.se - Rheology instrumentation --'

___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev
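The "allocate now, or notify me later" call suggested above could be sketched as an interface along these lines. Nothing here is a real LV2 API - every name is hypothetical. The toy host below just falls back to plain malloc() (so it is NOT actually RT safe) purely so the shape can be exercised; a real host would grow its RT pool from a background thread and invoke the callback once the memory is available.

```c
#include <stdlib.h>
#include <stddef.h>

/* Called by the host (possibly later, from a non-RT context) when a
   previously failed request can finally be satisfied. */
typedef void (*rtmem_ready_cb)(void *user_data, void *block);

typedef struct {
    /* Try to allocate from the RT pool. On success, return the block
       immediately. On failure, return NULL and promise to call 'cb'
       once the pool has been expanded by 'size' bytes. */
    void *(*alloc_or_notify)(size_t size, rtmem_ready_cb cb,
                             void *user_data);
    void (*rt_free)(void *block);
} rtmem_interface;

/* Toy host implementation: always succeeds immediately via malloc(). */
static void *toy_alloc(size_t size, rtmem_ready_cb cb, void *user_data)
{
    (void)cb;
    (void)user_data;    /* never deferred in this toy version */
    return malloc(size);
}

static void toy_free(void *block)
{
    free(block);
}

static const rtmem_interface toy_host = { toy_alloc, toy_free };
```

The point of the callback is that the plugin never has to block or poll: it asks once, and either gets the memory now or is told when it exists.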
Re: [LAD] LV2 realtime safe memory pool extension
On Thursday 08 November 2007, Nedko Arnaudov wrote:
[...]
> The point is that some plugins need *realtime safe* memory
> allocation.

Well, yes; that part is quite obvious. What I meant was, if all the host provides is a pool of uniformly sized chunks that are allocated when the plugin is initialized, there doesn't seem to be much point in implementing it on the host side. The naïve host side implementation would add exactly nothing, compared to plugins just allocating their own pools during initialization.

A proper real time manager, with arbitrary chunk sizes, would be more motivated, as it adds functionality that plugins can't implement internally; namely a shared memory pool.

> I need this functionality for the lv2zynadd plugin (like arbitrary
> voice count allocation)

The "standard" solution is to pre-allocate and pre-initialize voice structures for a fixed maximum polyphony. Obviously, this becomes troublesome with modular synths and the like, where the "voice structure" doesn't have a fixed size. One solution is to allocate the voice pool as a response to "program change" events. Of course, this makes the "program change" operation non real time safe, but it usually is anyway, due to samples being loaded from disk and stuff. Many systems, both hardware and software, are based entirely on the idea that patch loading is part of setup/initialization, as that is often the only practical solution.

It's really only entirely ROM based synths (or "small fixed sample set", in the case of software), virtual analog synths and the like that *can* implement real time safe "program change" at all.

> and for lv2dynparam plugin library internal
> operation too.

It needs to "regroup" internally as a response to certain parameter changes?

> There were rumors in #lad that such functionality may be useful
> without the lv2dynparam extension.

Well, yes; real time safe dynamic memory management can make life a lot easier for some types of plugins, and/or reduce memory requirements by having a shared pool. However, I think it needs to be more generic than just a pool of fixed size chunks for the "shared pool" part to be viable.

[...]
> That reminds me that LV2 may need a way to specify optional features
> that interdepend. I.e. features (extensions) A and B are both
> optional, but if A is provided, B is required. Of course a plugin can
> check this explicitly on instantiate and refuse to use the feature,
> but I'm not sure how vital such an approach is.

Haven't really thought about this... Isn't it just a matter of plugins and hosts listing *all* extensions? I mean, if you provide or ask for this feature A, but not feature B, you have a bug. An automatic validation tool would trap this right away - but of course, someone/something has to tell the *tool* about these extension interdependencies...

//David Olofson - Programmer, Composer, Open Source Advocate

.---  http://olofson.net - Games, SDL examples  ---.
|    http://zeespace.net - 2.5D rendering engine   |
|   http://audiality.org - Music/audio engine      |
|  http://eel.olofson.net - Real time scripting    |
'-- http://www.reologica.se - Rheology instrumentation --'

___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev
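The "standard" fixed-maximum-polyphony pre-allocation mentioned above can be sketched in a few lines of C. The names and the per-voice state are made up for illustration: all voice structures exist from instantiation onward, so a note-on merely claims an inactive slot and never allocates in the audio thread.

```c
#include <string.h>

#define MAX_VOICES 64    /* fixed maximum polyphony, chosen at init time */

typedef struct {
    int active;
    int note;
    float phase;         /* illustrative per-voice DSP state */
} voice;

typedef struct {
    voice voices[MAX_VOICES];   /* all voice state allocated up front */
} synth;

/* Non-RT: done once, at instantiation. */
static void synth_init(synth *s)
{
    memset(s->voices, 0, sizeof(s->voices));
}

/* RT safe "allocation": claim the first inactive slot, O(MAX_VOICES). */
static voice *voice_on(synth *s, int note)
{
    int i;
    for (i = 0; i < MAX_VOICES; ++i) {
        if (!s->voices[i].active) {
            s->voices[i].active = 1;
            s->voices[i].note = note;
            s->voices[i].phase = 0.0f;
            return &s->voices[i];
        }
    }
    return NULL;   /* polyphony exhausted; a real synth would steal a voice */
}

static void voice_off(voice *v)
{
    v->active = 0;
}
```

This is exactly why the scheme breaks down for modular synths: here sizeof(voice) is a compile time constant, which is the assumption that patch-dependent voice structures violate.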
Re: [LAD] LV2 realtime safe memory pool extension
On Monday 05 November 2007, Nedko Arnaudov wrote:
> The purpose of this LV2 extension is to standardize realtime safe
> allocations in a plugin. The plan is for the host to provide
> functions that the plugin can call. Pointers to the functions are
> provided through a host feature. Only memory pool (fixed chunk size)
> functionality is defined.
>
> Attached is an early draft of the header. Doxygen documentation
> generated from it is available at:
>
> http://svn.gna.org/viewcvs/*checkout*/lv2dynparam/website/doxygen/lv2__rtmempool_8h.html
>
> Any comments are welcome. In particular about whether a general
> purpose allocator (arbitrary memory chunk size) should be part of
> this same extension.

I'm not quite sure I see the point of having hosts provide this level of functionality. A pool of fixed size chunks is trivial to implement on the plugin side. The only obvious advantage I see is the potential transparent host side sharing of memory across plugins - but this gets tricky to implement when plugins request different chunk sizes. Sure, the host can split and merge chunks and stuff as needed, but then you may as well go all the way, as you'll need a real memory manager behind the scenes anyway.

Something like TLSF for a reference implementation, maybe?

http://tlsf.baisoku.org/

//David Olofson - Programmer, Composer, Open Source Advocate

.---  http://olofson.net - Games, SDL examples  ---.
|    http://zeespace.net - 2.5D rendering engine   |
|   http://audiality.org - Music/audio engine      |
|  http://eel.olofson.net - Real time scripting    |
'-- http://www.reologica.se - Rheology instrumentation --'

___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev
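To illustrate the "trivial to implement on the plugin side" claim above: a fixed-size chunk pool is just a free list threaded through the chunks themselves. All names here (rtpool etc.) are made up for illustration and are not part of any LV2 extension; the point is that alloc and free are O(1) with no syscalls, so they are safe in the audio thread once the pool has been created.

```c
#include <stdlib.h>
#include <stddef.h>

typedef struct rtpool {
    void *free_list;     /* singly linked list threaded through free chunks */
    char *storage;
    size_t chunk_size;
} rtpool;

/* Pre-allocate 'count' chunks of 'chunk_size' bytes. Non-RT; init time. */
static int rtpool_init(rtpool *p, size_t chunk_size, size_t count)
{
    size_t i;
    if (chunk_size < sizeof(void *))
        chunk_size = sizeof(void *);        /* room for the list link */
    p->chunk_size = chunk_size;
    p->storage = malloc(chunk_size * count);
    if (!p->storage)
        return -1;
    p->free_list = NULL;
    for (i = 0; i < count; ++i) {           /* thread chunks onto the list */
        void *chunk = p->storage + i * chunk_size;
        *(void **)chunk = p->free_list;
        p->free_list = chunk;
    }
    return 0;
}

/* O(1), no syscalls: safe in the audio thread. NULL if pool exhausted. */
static void *rtpool_alloc(rtpool *p)
{
    void *chunk = p->free_list;
    if (chunk)
        p->free_list = *(void **)chunk;
    return chunk;
}

static void rtpool_free(rtpool *p, void *chunk)
{
    *(void **)chunk = p->free_list;
    p->free_list = chunk;
}
```

Note that this version is single-thread only; the sharing-across-plugins case discussed above is precisely where it stops being trivial.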