Re: [linux-audio-dev] Apple's distributed audio processing technology

2004-09-30 Thread Martijn Sipkema
> > > No details about the protocols used. Gigabit ethernet is recommended for
> > > "the best performance". 
> > > 
> > Mmm ... It says:
> > 
> > The protocol is independent of the network hardware, thanks 
> > to the use of the standard TCP.
> > 
> > 
> > So, tcp/ip? 
> 
> More likely is mLAN over TCP. It mentions it needs OS X 10.3, and Apple
> has their mLAN stuff in that. Also, it says it could work over firewire.
> Did not take them long to start using it!

You wouldn't actually need something like mLAN, since this is only distributed
audio processing and you don't need word clock sync or anything; just send
a bunch of samples to another computer and receive the processed samples
back some time later. All the information needed for processing should be
available from the packet sent for processing and from the setup/configuration
done before processing is started.
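
A rough sketch of the idea in C (the little block header is my invention for
illustration only; nothing is known about Apple's actual wire format):

    #include <stdint.h>
    #include <sys/socket.h>

    struct block_hdr {
        uint32_t id;      /* sequence number, to match blocks up again */
        uint32_t frames;  /* number of sample frames that follow */
    };

    /* Ship one block to a worker host and read the processed block
       back; assumes the reply block has the same size, and skips
       byte order and partial send/recv handling. */
    int process_remote(int sock, uint32_t id, float *buf, uint32_t frames)
    {
        struct block_hdr h = { id, frames };

        if (send(sock, &h, sizeof h, 0) < 0 ||
            send(sock, buf, frames * sizeof *buf, 0) < 0)
            return -1;
        /* ...sometime later the processed samples come back... */
        if (recv(sock, &h, sizeof h, MSG_WAITALL) < 0 ||
            recv(sock, buf, frames * sizeof *buf, MSG_WAITALL) < 0)
            return -1;
        return 0;
    }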

--ms




Re: [linux-audio-dev] a question re: the MIDI spec

2004-09-14 Thread Martijn Sipkema
> > > > It's not unusable, but IIRC it can get to several ms of jitter.
> > >
> > > Why is that? The USB iso clock is every ms IIRC, so naively you
> > > would expect the maximum jitter to be just under 1ms (if the bus was
> > > saturated by audio transfers), and less in proportion to the degree of
> > > saturation.
> 
> USB transfers always wait for the beginning of the next frame, so the
> maximum jitter is never less than 1 ms, even on an otherwise free bus.
> 
> > Yes, one would expect that (if there are no other bulk transfers), but
> > somehow this does not seem to be the case:
> >
> > http://www-2.cs.cmu.edu/~eli/papers/icmc01-midiwave.pdf
> 
> These measurements include the jitter added by the drivers and by the
> high-quality realtime-capable (yeah :-) Windows 98 scheduler.
> 
> I did some similar measurements under Linux, and it seems the jitter
> isn't bigger than the expected 1 or 2 ms (2 because MIDI through
> involves two USB transfers).

Well, I still think they probably should have used interrupt or isochronous
transfer mode, but 2 ms jitter is quite usable (although a < 1 ms jitter would
have been better and possible even with USB I think).

--ms




Re: [linux-audio-dev] a question re: the MIDI spec

2004-09-11 Thread Martijn Sipkema
> > [...]
> > > > > > The problem here is that class compliant devices suffer bad timing
> > > > > > because they use bulk transfers for MIDI data. The standard for
> > > > > > MIDI over FireWire is much better.
> > [...]
> > > > > Is the timing really that bad?  I don't even think a firewire 8x8
> > > > > rackmount MIDI interface exists, so my options are kinda limited. :/
> > > >
> > > > Timing is especially bad when there is other data being transferred on
> > > 
> > > "Especially bad" is still pretty vague. What might look bad on paper might be 
> > > acceptable in context... 
> > 
> > It's not unusable, but IIRC it can get to several ms of jitter.
> 
> Why is that? The USB iso clock is every ms IIRC, so naively you
> would expect the maximum jitter to be just under 1ms (if the bus was
> saturated by audio transfers), and less in proportion to the degree of
> saturation.

Yes, one would expect that (if there are no other bulk transfers), but
somehow this does not seem to be the case:

http://www-2.cs.cmu.edu/~eli/papers/icmc01-midiwave.pdf

--ms




Re: [linux-audio-dev] a question re: the MIDI spec

2004-09-11 Thread Martijn Sipkema
[...]
> > MIDI streams need a reliable transport with guaranteed bandwidth. If
> > USB can't provide this, then it is not really suitable for MIDI, but I'm not
> > saying it is unusable, just that it may perform worse than traditional
> > serial multiport MIDI interfaces.
> 
> USB can provide this just fine, as long as you don't share host
> controllers between MIDI and other devices.  Common sense, really.  I
> would think it would work fine even with multiple MIDI devices on the
> same bus, as long as you don't expect to run audio and MIDI over the
> same wire and have it work.

I don't think it's common sense to expect MIDI timing to degrade when
using audio on the same bus, especially when a single device combines
these features. With FireWire it _is_ possible to have both audio and
MIDI (both use isochronous transfers) on the same bus without it
hurting MIDI timing. USB audio would probably work with bulk
transfers too if the bus wasn't used for anything else, so why did they
choose isochronous transfers for audio? I don't see why an audio stream
should be less reliable than a MIDI stream.

--ms




Re: [linux-audio-dev] a question re: the MIDI spec

2004-09-10 Thread Martijn Sipkema
> > The problem here is that class compliant devices suffer bad timing
> > because they use bulk transfers for MIDI data. The standard for
> > MIDI over FireWire is much better.
> 
> I don't agree on the subject that USB bulk transfers cause bad MIDI timing.  
> Of course, you can't use the same USB host controller at a time with a MIDI 
> interface and some other device like a CD writer and expect both good MIDI  
> timing and fast CD burning. If you can reserve a host controller exclusively 
> for your USB MIDI device, you will get pretty good results, most of the time.
[...]
> - Isochronous transfers send or receive data streams in realtime with 
> guaranteed bus bandwidth but without any reliability.
[...]
> MIDI streams need to be reliable (a single byte lost isn't acceptable), so 
> Isochronous is not an option. Interrupt or Bulk transfers are very similar: 
> they use only the available bandwidth at each moment, so there can be 
> unwanted delays and timing problems. Some manufacturers' proprietary 
> protocols include a timestamp with each USB MIDI packet to enhance the time 
> accuracy, but this can be done either in bulk or interrupt transfers.

MIDI streams need a reliable transport with guaranteed bandwidth. If
USB can't provide this, then it is not really suitable for MIDI, but I'm not
saying it is unusable, just that it may perform worse than traditional
serial multiport MIDI interfaces.

--ms




Re: [linux-audio-dev] a question re: the MIDI spec

2004-09-10 Thread Martijn Sipkema
[...]
> > > > The problem here is that class compliant devices suffer bad timing
> > > > because they use bulk transfers for MIDI data. The standard for
> > > > MIDI over FireWire is much better.
[...]
> > > Is the timing really that bad?  I don't even think a firewire 8x8
> > > rackmount MIDI interface exists, so my options are kinda limited. :/
> >
> > Timing is especially bad when there is other data being transferred on
> 
> "Especially bad" is still pretty vague. What might look bad on paper might be 
> acceptable in context... 

It's not unusable, but IIRC it can get to several ms of jitter.

> > the same USB bus, as is the case with combined audio/midi interfaces.
> 
> Perhaps, but midi takes a lot less bandwidth than audio so how much worse 
> could it get? It sounds like it wouldn't be a problem if you were 
> overdubbing, but potentially in a live recording/performance if you are using 
> the audio ins for a vocal mic or whatnot.

I'm by no means an expert on this, but I think MIDI taking less bandwidth than
audio is not really relevant; it's the audio data being isochronous and using
a lot of bandwidth that causes the MIDI timing to suffer.

--ms






Re: [linux-audio-dev] a question re: the MIDI spec

2004-09-10 Thread Martijn Sipkema
> On Fri, 2004-09-10 at 07:49, Martijn Sipkema wrote:
> > >> [...] the USB specification. And it even appears like some vendors
> > >> are (finally!) starting to follow suit:
> > >>
> > >>http://midiman.com/products/en_us/KeystationPro88-main.html
> > >>
> > >>- "USB class compliant-no drivers required for
> > >>  Windows XP or Mac OS X"
> > >
> > > M-Audio started following suit only after they hung their engineers
> > > with a USB cable and bought Evolution who had always made
> > > class-compliant devices.
> >  
> > The problem here is that class compliant devices suffer bad timing
> > because they use bulk transfers for MIDI data. The standard for
> > MIDI over FireWire is much better.
> 
> Hmm.. I'm just about to drop $400 on a USB MIDI interface (Edirol
> UM-880), so that's not something I want to hear!
> 
> Is the timing really that bad?  I don't even think a firewire 8x8
> rackmount MIDI interface exists, so my options are kinda limited. :/

Timing is especially bad when there is other data being transferred on
the same USB bus, as is the case with combined audio/midi interfaces.

There are several USB interfaces that don't use the standard
MIDI-over-USB protocol, but I don't think information about these
protocols is available.

Perhaps there are interfaces that support both the standard protocol and
one with better timing...

--ms




Re: [linux-audio-dev] a question re: the MIDI spec

2004-09-10 Thread Martijn Sipkema
>> [...] the USB specification. And it even appears like some vendors
>> are (finally!) starting to follow suit:
>>
>>http://midiman.com/products/en_us/KeystationPro88-main.html
>>
>>- "USB class compliant-no drivers required for
>>  Windows XP or Mac OS X"
>
> M-Audio started following suit only after they hung their engineers
> with a USB cable and bought Evolution who had always made
> class-compliant devices.
 
The problem here is that class compliant devices suffer bad timing
because they use bulk transfers for MIDI data. The standard for
MIDI over FireWire is much better.

--ms




Re: [linux-audio-dev] Audio synchronization, MIDI API

2004-08-14 Thread Martijn Sipkema
From: "Steve Harris" <[EMAIL PROTECTED]>
> On Sat, Aug 14, 2004 at 10:07:06PM +0200, Benno Senoner wrote:
> > >UDP also has unbounded transit time. In practice it's OK if you don't want
> > >low latencies (just use RTP), but for low latency you really need one of
> > >the non-IP ethernet protocols that can be reliably used for audio.
> > 
> > I don't think raw ethernet will buy us anything over using UDP. These 
> > few usecs less simply won't matter.
> > (but with ethernet you would have the disadvantage that you lose
> > routability)
> > On a 100Mbit network the round trip latency between hosts is about 
> > 100usecs so the one way latency of MIDI would be
> > about half of that, and that's from a MIDI point of view instantaneous
> > because over a serial MIDI cable transmitting
> > a NOTE ON event (3 bytes) takes about 1.1 msec, which is 20 times slower
> > than transmitting it over an ethernet cable.
> 
> No, the roundtrip latency is *at least* 100usecs (or whatever), the hardware
> will keep re-transmitting until the packets get through.
> 
> In practice people don't really demand hard realtime and it will be OK, but
> the maximum time taken to transmit a UDP packet is unbounded, it uses
> exponential backoff IIRC. 

It is only unbounded if the network can't provide it, and in that case you would
lose the Ethernet frame, which might be difficult to handle for something
like MIDI. Losing packets is not really hard real time either...

--ms





Re: [linux-audio-dev] Re: desktop and multimedia as an afterthought?

2004-07-14 Thread Martijn Sipkema
From: "Bill Huey (hui)" <[EMAIL PROTECTED]>
> On Tue, Jul 13, 2004 at 11:44:59PM +0100, Martijn Sipkema wrote:
> [...]
> > The worst case latency is the one that counts and that is the contended case. If
> > you could guarantee no contention then the worst case latency would be the
> > very fast uncontended case, but I doubt there are many (any?) examples of this in
> > practice. There are valid uses of mutexes with priority inheritance/ceiling 
> > protocol;
> > the people making the POSIX standard aren't stupid...
> 
> There are cases where you have to use priority inheritance, but the schemes that are
> typically used either have a kind of exhaustive analysis backing them or use a simple
> late detection scheme. In a general purpose OS, the latter is useful for various kinds
> of overload cases. But if your system is constantly using that specific case, then it's
> a sign the contention in the kernel must *also* be a problem under SMP conditions. The
> constant use of priority inheritance overloads the scheduler, puts pressure on the
> cache and does other negative things that hurt CPU local performance of the system.
> 
> The reason I mention this is because of Linux's hand-crafted way of dealing
> with this. These are basically contention problems expressed in a different manner.
> The traditional Linux method is the correct method of dealing with this in a general
> purpose OS. This also applies to application structure as well. The use of these
> mechanisms needs to be thought out before application.

To be honest, I don't understand a word of what you are saying here. Could you
give an example of a ``contention problem'' and how it should be solved?

> > > > It is often heard in the Linux audio community that mutexes are not realtime
> > > > safe and a lock-free ringbuffer should be used instead. Using such a lock-free
> > > > ringbuffer requires non-standard atomic integer operations and does not
> > > > guarantee memory synchronization (and should probably not perform
> > > > significantly better than a decent mutex implementation) and is thus not
> > > > portable.
> > > 
> > > It's to decouple the system from various time related problems with jitter.
> > > It's critical to use this since the nature of Linux is so temporally coarse
> > > that these techniques must be used to "smooth" over latency problems in the
> > > Linux kernel.
> 
> > Either use mutexes or POSIX message queues... the latter are also not
> > intended for realtime use under Linux (though they are meant for it in
> > POSIX), since the Linux implementation doesn't allocate all memory on creation.
>  
> The nature these kind of applications push into a very demanding space where
> typical methodologies surrounding the use of threads goes out the window. Pushing
> both the IO and CPU resources of a kernel is the common case and often you have to
> roll your own APIs, synchronization mechanisms to deal with these problem. Simple
> Posix API and traditional mutexes are a bit too narrow in scope to solve these
> cross system concurrency problems. It's not trivial stuff at all and can span
> from loosely to tightly coupled systems, yes, all for pro-audio/video.
> 
> Posix and friends in these cases simply aren't good enough to cut it.

I find this a little abstract. Sure, there might be areas where POSIX doesn't supply
all the needed tools, e.g. one might want some scheduling policy especially for
audio, but to say that POSIX isn't good enough without providing much
explanation...

--ms




Re: [linux-audio-dev] Re: desktop and multimedia as an afterthought?

2004-07-14 Thread Martijn Sipkema
From: "Bill Huey (hui)" <[EMAIL PROTECTED]>
> On Tue, Jul 13, 2004 at 01:09:28PM +0100, Martijn Sipkema wrote:
> > [...]
> > > Please double-check that there are no priority inversion problems and that
> > > the application is correctly setting the scheduling policy and that it is
> > > mlocking everything appropriately.
> > 
> > I don't think it is currently possible to have cooperating threads with
> > different priorities without priority inversion when using a mutex to
> > serialize access to shared data; and using a mutex is in fact the only portable
> > way to do that...
> > 
> > Thus, the fact that Linux does not support protocols to prevent priority
> > inversion (please correct me if I am wrong) kind of suggests that supporting
> > realtime applications is not considered very important.
> 
> Any use of an explicit or implied blocking mutex across threads with differing
> priorities can result in priority inversion problems. The real problem, however,
> is contention. If you get rid of the contention in a certain critical section,
> you then also get rid of latency in the system. They are one and the same problem.

The worst case latency is the one that counts and that is the contended case. If
you could guarantee no contention then the worst case latency would be the
very fast uncontended case, but I doubt there are many (any?) examples of this in
practice. There are valid uses of mutexes with priority inheritance/ceiling protocol;
the people making the POSIX standard aren't stupid...

> > It is often heard in the Linux audio community that mutexes are not realtime
> > safe and a lock-free ringbuffer should be used instead. Using such a lock-free
> > ringbuffer requires non-standard atomic integer operations and does not
> > guarantee memory synchronization (and should probably not perform
> > significantly better than a decent mutex implementation) and is thus not
> > portable.
> 
> It's to decouple the system from various time related problems with jitter.
> It's critical to use this since the nature of Linux is so temporally coarse
> that these techniques must be used to "smooth" over latency problems in the
> Linux kernel.

Either use mutexes or POSIX message queues... the latter are also not
intended for realtime use under Linux (though they are meant for it in
POSIX), since the Linux implementation doesn't allocate all memory on creation.
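
For reference, POSIX intends all queue resources to be fixed when the queue
is created, which is what should make mq_send() safe from a realtime thread;
a sketch (the queue name, depth and event struct are arbitrary):

    #include <mqueue.h>
    #include <fcntl.h>

    struct ev { int type; int data; };  /* hypothetical event record */

    /* Depth and message size are fixed here at creation time, so a
       conforming implementation need not allocate anything on send;
       whether Linux actually honours that is exactly the problem. */
    static mqd_t make_rt_queue(void)
    {
        struct mq_attr attr = { .mq_maxmsg = 64,
                                .mq_msgsize = sizeof(struct ev) };
        return mq_open("/events", O_CREAT | O_RDWR | O_NONBLOCK,
                       0600, &attr);
    }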

> I personally would love to see these audio applications run on a first-class
> basis under Linux. Unfortunately, that won't happen until it gets near real
> time support prevasively through the kernel just like in SGI's IRIX. Multimedia
> applications really need to be under a hard real time system with special
> scheduler support so that CPU resources, IO channels can be throttled.
> 
> The techniques Linux media folks are using now are basically a coarse hack
> to get things basically working. This won't change unless some fundamental
> concurrency issues (moving to a preemptive kernel with interrupt threads, etc..)
> change in Linux. Scattering preemption points manually over 2.6 is starting to
> look unmanageable from all of the stack traces I've been reading in these latency
> related threads.

Improving the mutex and mqueue implementations to better support realtime
use would be a significant step, I think, and would make Linux quite suitable
for realtime audio use.

--ms




Re: [linux-audio-dev] Re: desktop and multimedia as an afterthought?

2004-07-13 Thread Martijn Sipkema
From: "Paul Davis" <[EMAIL PROTECTED]>
> >Hmm, I've just recently learned about the Priority Ceiling Protocol,
> >an extension to the Priority Inheritance Protocol, which explicitly prevents
> >deadlocks. And I've learned about both in an RTOS course, so I'm a little
> >surprised by your statement about them not being useful for RT purposes :-)
> 
> solving priority inversion generally means that a low priority thread
> cannot delay the execution of a high priority thread. but in practice,
> what it tends to mean is that a low priority, lock-holding thread
> temporarily runs with high priority until it releases the lock, and
> then high priority thread can continue.
> 
> if there are no bounds on the operations the low priority thread
> carries out while holding the lock, then having it inherit the higher
> priority doesn't really guarantee anything except that everything runs
> as fast as possible. there are no deadlocks, but the low priority
> thread still executes *all* of its code before releasing the lock.
> 
> the real problem with such a design is that lock-based synchronization
> was being used between two threads, one of which is subject to RT
> constraints and one is not. that is what has to be solved, and this is
> an application design issue rather than something that can be solved
> using general methods.

The time that the high priority thread has to wait is bounded (in a well
designed application) and can be calculated; it equals the execution time
of the critical section in the low priority thread plus time for the context
switches. It is unbounded in the case of priority inversion.

In general all synchronization may block participating threads for some
short period; the need for synchronizing means that there are some things
that threads are not allowed to do simultaneously. (The lock-free ringbuffer
is a special case since the only synchronizing that is done, i.e. read/write
access to an int, is done by the processor; I'm not convinced that this will
work on all architectures.) It is not a problem to block in a realtime thread
as long as the blocking is bounded.
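
On an implementation that supports the POSIX priority inheritance option,
setting this up is just (a sketch):

    #include <pthread.h>

    /* A mutex using the priority inheritance protocol: a low priority
       thread holding it runs at the priority of the highest priority
       thread blocked on it, so the wait is bounded by the critical
       section plus the context switches, as described above. */
    static int init_pi_mutex(pthread_mutex_t *m)
    {
        pthread_mutexattr_t a;
        int r;

        pthread_mutexattr_init(&a);
        pthread_mutexattr_setprotocol(&a, PTHREAD_PRIO_INHERIT);
        r = pthread_mutex_init(m, &a);
        pthread_mutexattr_destroy(&a);
        return r;
    }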

--ms






Re: [linux-audio-dev] Re: desktop and multimedia as an afterthought?

2004-07-13 Thread Martijn Sipkema
From: "Christian Henz" <[EMAIL PROTECTED]>
> On Tue, Jul 13, 2004 at 10:55:48AM -0400, Paul Davis wrote:
> > >Thus, the fact that Linux does not support protocols to prevent priority
> > >inversion (please correct me if I am wrong) kind of suggests that supporting
> > >realtime applications is not considered very important.
> > 
> > we went through this (you and i in particular) right here on LAD a
> > year or so ago. while i might agree with you about the priority given
> > to RT-ish apps, my recollection of the end of that discussion is that
> > priority inheritance is neither necessary nor sufficient to allow
> > adequate RT performance. priority inversion generally can be factored
> > out through application redesign, and the protocols i've seen to
> > address it are not useful for RT purposes - they just help deadlock.
> > 
> 
> Hmm, I've just recently learned about the Priority Ceiling Protocol,
> an extension to the Priority Inheritance Protocol, which explicitly prevents
> deadlocks. And I've learned about both in an RTOS course, so I'm a little
> surprised by your statement about them not being useful for RT purposes :-)

They are meant for realtime use and are part of the POSIX realtime
extensions, so I disagree with Paul. That's not uncommon, though we
have agreed on some things in the past. :)

--ms




Re: [linux-audio-dev] Re: desktop and multimedia as an afterthought?

2004-07-13 Thread Martijn Sipkema
From: "Paul Davis" <[EMAIL PROTECTED]>
> >Thus, the fact that Linux does not support protocols to prevent priority
> >inversion (please correct me if I am wrong) kind of suggests that supporting
> >realtime applications is not considered very important.
> 
> we went through this (you and i in particular) right here on LAD a
> year or so ago. while i might agree with you about the priority given
> to RT-ish apps, my recollection of the end of that discussion is that
> priority inheritance is neither necessary nor sufficient to allow
> adequate RT performance.

I don't recall that that is what was concluded.

Priority inheritance or some other protocol against priority inversion _is_
needed for realtime applications that have threads with different priorities
accessing common data. One could raise the priority of the low priority
thread before taking the mutex and restore it afterwards (as in priority
ceiling), but I doubt that's optimal.
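
(POSIX does define exactly that as a mutex protocol, so it need not be done
by hand; a sketch, assuming the implementation supports the option:)

    #include <pthread.h>

    /* Priority ceiling ("protect") protocol: any thread that locks the
       mutex immediately runs at the ceiling priority, so there is no
       manual raising/restoring of priorities around each lock. */
    static int init_ceiling_mutex(pthread_mutex_t *m, int ceiling)
    {
        pthread_mutexattr_t a;
        int r;

        pthread_mutexattr_init(&a);
        pthread_mutexattr_setprotocol(&a, PTHREAD_PRIO_PROTECT);
        pthread_mutexattr_setprioceiling(&a, ceiling);
        r = pthread_mutex_init(m, &a);
        pthread_mutexattr_destroy(&a);
        return r;
    }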

> priority inversion generally can be factored
> out through application redesign, and the protocols i've seen to
> address it are not useful for RT purposes - they just help deadlock.

For cases where some form of message passing does not work you
will need shared data with a mutex to serialize access and you _will_
need to prevent priority inversion. Mutexes are part of the POSIX
realtime threading extensions; you can use a semaphore when
priority inversion is not an issue.

IMHO it is the lack of a mutex implementation with priority ceiling
or inheritance, and the stories about relying on either being a design
problem, that have caused the Linux audio community to not use
mutexes and to declare them non-RT safe, while in fact they are
required according to POSIX to synchronize memory between
cooperating threads.

--ms





Re: [linux-audio-dev] Re: desktop and multimedia as an afterthought?

2004-07-13 Thread Martijn Sipkema

Benno Senoner wrote:
> Martijn Sipkema wrote:
> >It is often heard in the Linux audio community that mutexes are not realtime
> >safe and a lock-free ringbuffer should be used instead. Using such a lock-free
> >ringbuffer requires non-standard atomic integer operations and does not
> >guarantee memory synchronization (and should probably not perform
> >significantly better than a decent mutex implementation) and is thus not
> >portable.
> 
> Why not portable? On x86 you have guaranteed atomicity of 32-bit
> read/writes, and using the read_ptr / write_ptr
> approach guarantees that you will never get bad values for the
> ringbuffer pointers. The worst that could happen
> is that the reader reads an "old" value and thus gets a bit less
> available read space from the ringbuffer, but
> given the asynchronous nature of multithreaded apps this is completely
> meaningless.

Indeed, for such a ringbuffer atomicity (as in test-and-set) is not needed,
but there needs to be memory synchronization. According to POSIX, you
can not read/write to a memory location from different threads without
using a function to synchronize memory:

"Applications shall ensure that access to any memory location by more than
one thread of control (threads or processes) is restricted such that no thread
of control can read or modify a memory location while another thread of
control may be modifying it. Such access is restricted using functions that
synchronize thread execution and also synchronize memory with respect to
other threads."

Thus, the lock-free ringbuffer is not portable. (What if the reader gets an
"old" value that is never updated?)

> (the audio thread does not know/care when then disk thread writes data 
> into the ringbuffer)
> PPC guarantees 32-bit atomicity too, so the atomic macros we are using in
> LinuxSampler simply translate to a load or a store on
> both x86 and PPC, and if future CPUs with non-atomic access arise, just
> add the macro in atomic.h.
> I would not call that "non portable".

I think there are architectures where supporting this might not be trivial, but
I'm no expert. That's why I'd always use mutexes for this...

> posix mutexes are not portable because they don't work on win32 so in
> general applications with a certain complexity always require some
> platform dependent code (either OS or CPU dependent) to
> be portable on that platform.

By portable I mean that it will work on any POSIX compliant operating
system.

[...]
> About performance, nothing can beat lock free ringbuffers because it's 
> simply a few machine instructions that access
> a read_ptr, write_ptr and return the address of the object you want to 
> read/write.

I did not say a mutex would beat lock free ringbuffers, I said that using a
mutex instead shouldn't cause a significant performance penalty for typical
use in a realtime audio application.

--ms





Re: [linux-audio-dev] Re: desktop and multimedia as an afterthought?

2004-07-13 Thread Martijn Sipkema
[...]
> Please double-check that there are no priority inversion problems and that
> the application is correctly setting the scheduling policy and that it is
> mlocking everything appropriately.

I don't think it is currently possible to have cooperating threads with
different priorities without priority inversion when using a mutex to
serialize access to shared data; and using a mutex is in fact the only portable
way to do that...

Thus, the fact that Linux does not support protocols to prevent priority
inversion (please correct me if I am wrong) kind of suggests that supporting
realtime applications is not considered very important.

It is often heard in the Linux audio community that mutexes are not realtime
safe and a lock-free ringbuffer should be used instead. Using such a lock-free
ringbuffer requires non-standard atomic integer operations and does not
guarantee memory synchronization (and should probably not perform
significantly better than a decent mutex implementation) and is thus not
portable.

--ms




Re: [linux-audio-dev] Buffer size settings - Mac/Windows

2004-06-13 Thread Martijn Sipkema
[...]
> interestingly, the design of ASIO only allows 2 interrupts per
> hardware buffer. ALSA is much more flexible in handling this kind of
> thing. 

A huge mistake of ASIO IMHO. On the Audiowerk8 for example,
running 3 interrupts per buffer allows using the input DMA interrupt
only; this interrupt will always occur after the output DMA interrupt
because of buffering. When using only two interrupts per buffer
one would have to wait for the input interrupt also, but be finished
before the output interrupt, hurting performance. The complexity of
the driver also goes up. Due to interrupt latency, very small xruns
are impossible to detect with only two interrupts.

So for this kind of hardware (buffer in host memory, small DMA
FIFOs), I'd always use three interrupts per buffer, and thus
ASIO sucks :)

--ms




Re: [linux-audio-dev] [OT] marketing hype

2004-06-11 Thread Martijn Sipkema
[...]
> > Sorry Fons, but define acceptable! Please!
> 
> I will define as non-acceptable the implication:
> 
>Paul uses a text based mail client
>=>
>this explains why his GUI designs are cluttered.
> 
> It would be acceptable and in this context even funny with a :-),
> but I didn't see that.

While I do in general find Marek's posts irritating, as in long
and pointless, and thus end up mostly not reading them, I don't think
there is anything wrong with the above.

--ms




Re: [linux-audio-dev] Two or more monitors)Re:Project:modularsynth editor

2004-01-20 Thread Martijn Sipkema
> [...]
> > > Right, but resolution is just a matter of RAMDAC parameters. All
> > > I want is a 3856x1536 framebuffer with one RAMDAC displaying a
> > > 2048x1536 window and the other displaying a 1808x1356 window. I
> > > don't care about one tiny MB of VRAM being invisible.
> >
> > True, this should be possible as long as the pixel format is the
> > same; I misunderstood before, this is indeed a driver limitation.
>
> And it has to be the same pixel format if it's the same (wide) buffer.
> :-)

I know SGI hardware supports multiple pixel formats in one framebuffer,
but I'm not sure it is possible to render to more than one visual at a time;
I doubt it.

> > > BTW, the 8800 is limited to 2048x2048 for OpenGL contexts, but
> > > that seems to be per context, and I'm not interested in
> > > stretching a single context over both screens anyway. (I'm not
> > > interested in stretching *anything* over both screens; just
> > > moving windows across them, which is not possible with
> > > independent desktops.)
> >
> > That is a problem since there is no way an application can know
> > about this limitation and I don't think X has a way of returning
> > "window too large" on a window config request.
>
> The driver is supposed to just clip at 2048x2048. Some tester
> concluded it does on Windoze, but I haven't tried it on the Linux
> drivers. (Though they even have the same bugs as the Windows drivers,
> so I'd guess they're pretty closely related...)

Perhaps, with a performance penalty, these very large windows could
be rendered in multiple passes...

--ms






Re: [linux-audio-dev] Two or more monitors) Re:Project:modularsynth editor

2004-01-20 Thread Martijn Sipkema
[...]
> > > There is another mode, where a single buffer forms a big desktop,
> > > of which each RAMDAC displays a part. Seems like stupid driver
> > > limitations restrict this mode to using the same resolution for
> > > both heads, but I'm not sure.
> >
> > It is to be expected that a single rendering context has only one
> > framebuffer configuration. "Consumer" hardware does not have a per
> > pixel framebuffer configuration stored in the framebuffer together
> > with the color (and possibly clipping) data; a rendering operation
> > expects a single config.
> 
> Right, but resolution is just a matter of RAMDAC parameters. All I 
> want is a 3856x1536 framebuffer with one RAMDAC displaying a 
> 2048x1536 window and the other displaying a 1808x1356 window. I don't 
> care about one tiny MB of VRAM being invisible.

True, this should be possible as long as the pixel format is the same; I
misunderstood before, this is indeed a driver limitation.

> BTW, the 8800 is limited to 2048x2048 for OpenGL contexts, but that 
> seems to be per context, and I'm not interested in stretching a 
> single context over both screens anyway. (I'm not interested in 
> stretching *anything* over both screens; just moving windows across 
> them, which is not possible with independent desktops.)

That is a problem since there is no way an application can know about
this limitation and I don't think X has a way of returning "window too
large" on a window config request.

--ms







Re: [linux-audio-dev] Two or more monitors) Re: Project: modularsynth editor

2004-01-19 Thread Martijn Sipkema
[...]
> I just remembered; the Matrox G100/200/400 drivers and/or hardware has
> problems with multiple OpenGL contexts. Matrox have known about it
> for ages, but seem to ignore the problem. This is part of the reason
> why I gave up on my G400.

Most likely a driver problem (or elsewhere in the software). I don't think
there's _that_ much "context" on a G400 anyway...

> That said, IIRC, you *can* still get multiple windows (different
> processes) accelerated. If so, why not windows on different
> desktops...?

It should not be a problem as far as I can tell really...
I'm not an expert though...

I do think we should use OpenGL more. That way the problems that
may exist will eventually get fixed :)
And besides, it _is_ a very nice graphics API.

--ms







Re: [linux-audio-dev] Two or more monitors) Re: Project:modularsynth editor

2004-01-19 Thread Martijn Sipkema
[...]
> > IIRC Matrox cards have a way of making a single framebuffer (with
> > xinerama hints) that appears on two monitors. That way you should
> > get 3d accel on both displays.
> 
> ATI has something similar, but their drivers don't seem to work with 
> Xinerama the normal way. It (sort of) works though, and as long as I 
> don't try to throw extra cards in the mix, I can have two independent 
> desktops (separate buffers, separate pixel formats etc) and still run 
> OpenGL on both.

Most (all?) hardware will support this (and all should).

> There is another mode, where a single buffer forms a big desktop, of 
> which each RAMDAC displays a part. Seems like stupid driver 
> limitations restrict this mode to using the same resolution for both 
> heads, but I'm not sure.

It is to be expected that a single rendering context has only one
framebuffer configuration. "Consumer" hardware does not have a per
pixel framebuffer configuration stored in the framebuffer together with
the color (and possibly clipping) data; a rendering operation expects
a single config.

--ms






Re: [linux-audio-dev] Two or more monitors) Re: Project:modularsynth editor

2004-01-19 Thread Martijn Sipkema
> > [...]
> > > Xinerama _does_ support open GL, at least with my matrox card, I can have
> > > openGL on one monitor of the two. That is a limitation of the card
> > > hardware, AFAIK, not of X.
> >
> > I doubt this is a hardware limitation. The hardware just renders to AGP or
> > local memory. I may be wrong though...
>
> IIRC Matrox cards have a way of making a single framebuffer (with xinerama
> hints) that appears on two monitors. That way you should get 3d accel on
> both displays.

Using a single framebuffer for both outputs should be possible by setting the
correct pitch (add the width of the other screen as padding) and start address,
I think. This would require both screens to have the same framebuffer
configuration, i.e. pixel format.
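
In numbers (just the arithmetic, not any particular driver's interface), for
the 2048x1536 + 1808x1356 case discussed before:

    unsigned bpp    = 4;                /* bytes/pixel, same for both heads */
    unsigned w0     = 2048, w1 = 1808;  /* widths of the two screens */
    unsigned pitch  = (w0 + w1) * bpp;  /* bytes per scanline of the buffer */
    unsigned start0 = 0;                /* head 0 scans out from x = 0 */
    unsigned start1 = w0 * bpp;         /* head 1 starts w0 pixels in */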

--ms






Re: [linux-audio-dev] Two or more monitors) Re: Project: modularsynth editor

2004-01-19 Thread Martijn Sipkema

[...]
> Xinerama _does_ support open GL, at least with my matrox card, I can have
> openGL on one monitor of the two. That is a limitation of the card
> hardware, AFAIK, not of X.

I doubt this is a hardware limitation. The hardware just renders to AGP or
local memory. I may be wrong though...

--ms







Re: [linux-audio-dev] Re: gQ for Linux

2003-08-14 Thread Martijn Sipkema
[...]
> What is 'ALport' and 'ALconfig', and where are they
> defined?

Those are part of the SGI audio library and I wouldn't expect them
to be available under Linux.

--ms





Re: [linux-audio-dev] Direct Stream Digital / Pulse Density Modulation musing/questions

2003-07-28 Thread Martijn Sipkema
[...]
> Conventional PCM techniques are unable to reproduce high frequencies
> correctly. And the explanation is very simple.

Actually a correct explanation isn't that simple. Yours is much _too_ simple.
Theoretically a 20 kHz bandlimited signal can be represented _exactly_ as a
40 kHz PCM stream. In order to not have to use a very steep lowpass filter in
the DAC it is better to use a somewhat higher sampling frequency. 48 kHz
should be enough most of the time.
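
For the record, that is just the sampling theorem: with sample period
T = 1/fs, the ideal reconstruction that the DAC's lowpass filter
approximates is

    x(t) = sum_n x[n] * sinc((t - n*T) / T)

which is exact for any signal bandlimited below fs/2, so a 13 kHz sine is
represented just as exactly as an 11025 Hz one.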

> If you record a sound at 44.1
> kS/s, you get a theoretical frequency response of 0 - 22050 Hz. BUT to
> describe frequencies from 11025 to 22050 Hz, you can only play with a
> 4-sample long period.

These are interpolated using a lowpass filter.

> A 22050 Hz sine could be really accurate (one sample up, one sample down
> every 1/22050th second), and so is 11025. But intermediary frequencies
> introduce temporal aliasing, some metallic feeling due to temporal
> quantization. This is inherent to the very low sampling rate (96 kHz is just
> a bit better, but no miracle), which is unable to describe waveforms at high
> frequencies.
>
> Bad high frequencies temporal definition means bad transients. Anyone can
> notice it when he _actually_ hear and compare PCM and DSD.
>
> Stop speculative talking and try to get some real demo...

This has been done and it is very hard if not impossible to hear the
difference. 24-bit 96 kHz is most likely better than DSD, _especially_ at
high frequencies. The demos for SACD often compare normal stereo PCM to
surround on a SACD...

Actually, DSD suffers deteriorating sample resolution at higher frequencies,
whereas for PCM the resolution doesn't depend on the frequency. Also, signal
processing will have to be done in PCM. A SACD player with an EQ has
DSD -> PCM and PCM -> DSD converters, and these conversions are not
lossless.

The DACs and monitors you use probably make a lot more difference than
44.1 kHz PCM or 96 kHz PCM or DSD.

There are also people who buy very expensive oxygen free cables and claim
to hear the difference. There are thousands of religious people who _know_
God exists...

DSD is a bad thing IMHO.

--ms






Re: [linux-audio-dev] kernel 2.6

2003-07-25 Thread Martijn Sipkema
> On Thursday 24 July 2003 13:46, Michael Ost wrote:
> > Is there SCHED_FIFO style priority available in the new kernel, with its
> > new threading model? Realtime audio processing doesn't share the CPU
> > very well. The ear can pick out even the slightest glitches or delays.
> > So for Linux to be usable for audio applications or embedded audio
> > devices it needs something like SCHED_FIFO.
> >
>
> It is a posix standard, so it is unlikely to go away :)

It is optional and 95% of the applications don't need it. It is doomed :)

--ms




Re: [linux-audio-dev] new realtime scheduling policy

2003-03-18 Thread Martijn Sipkema
>  so i've tried to make a new scheduling policy for linux.  i've
> called it SCHED_USERFIFO.  the intent is basically to allow a process
> ask for x amount of processor time out of every y jiffies.  (the user
> part is in the hope that the administrator can set rlimits on the
> percentage of time requested and allow non-privileged users
> to use this policy without being able to completely hang the box).
> 
> it works just like SCHED_FIFO as long as the process doesn't take
> more than the amount of time it asked for.  if it does try to take
> more time, it is allowed to be preempted until the period is over.

This is somewhat similar to SCHED_SPORADIC.


On a related note, I found this paper:
http://marte.unican.es/appsched-proposal.pdf

I'm not sure how much work it would be to implement this for Linux,
but it would certainly be nice to have. An EDF scheduler would be
ideal for asynchronous audio processing (large fft blocks, etc.).
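
For comparison, POSIX SCHED_SPORADIC expresses much the same "x time out
of every y" idea through extra sched_param fields; a sketch, assuming
_POSIX_SPORADIC_SERVER is supported (Linux does not provide it):

    #include <sched.h>

    /* Run at sched_priority for up to sched_ss_init_budget out of
       every sched_ss_repl_period, dropping to sched_ss_low_priority
       once the budget is used up. */
    struct sched_param sp = {
        .sched_priority        = 50,
        .sched_ss_low_priority = 0,
        .sched_ss_repl_period  = { 0, 10000000 },  /* y = 10 ms */
        .sched_ss_init_budget  = { 0, 2000000 },   /* x = 2 ms */
        .sched_ss_max_repl     = 4,
    };
    /* sched_setscheduler(0, SCHED_SPORADIC, &sp); */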


--ms





Re: [linux-audio-dev] midi 'resolution'

2003-03-17 Thread Martijn Sipkema
> In a system/application that receives external midi data of any kind,
> is there anything one can assume about _when_ some midi data is received?
> 
> i mean, with audio data, you have the buffer size of the dac/adc, which 
> (together with sampling rate) enforces some kind of "global clock" 
> impulse/trigger in your system.
> 
> Is there anything similar with midi data?

Apart from the fact that a MIDI byte takes 320 usec to transmit, there is
not much else known.
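
(That 320 usec is just the wire format:

    31250 bit/s, 10 bits per byte (1 start + 8 data + 1 stop)
    => 10 / 31250 s = 320 us per byte
    => a 3-byte note-on occupies ~0.96 ms of cable time

so the cable itself only localizes an event to within a millisecond or so.)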

--ms





Re: [linux-audio-dev] Linux Alsa Audio over 1394 - a Thesis

2003-02-26 Thread Martijn Sipkema
> > Because I think ALSA does too much in the kernel (and it is not
> > well documented either).
> 
> Wait a minute, why do you say that?

Because:
- I think ALSA is not that well documented.
- I'd rather see a combination of a device specific kernel driver and
a user-space driver than a common kernel interface.

> ALSA seems to do a lot less in kernel 
> space than OSS (a lot has been moved to alsa-lib), and also much code is 
> commonly shared between drivers, which is very nice.

I'm not comparing ALSA to OSS. And having user-space drivers
doesn't prevent code sharing. I just don't like the common device file
interface.

--ms






Re: [linux-audio-dev] Linux Alsa Audio over 1394 - a Thesis

2003-02-26 Thread Martijn Sipkema
[...]
> >I don't think an application should ask for a certain number of frames
> >to wakeup. The driver should dictate when to wake up. This is the way
> >my Audiowerk8 JACK driver works and it would get a lot more
> >complicated if I had to add support for user-space wake up at
> >arbitrary intervals.
>
> thats because you decided to write a simple driver that didn't fit
> into the ALSA scheme.

Because I think ALSA does too much in the kernel (and it is not
well documented either).

> the ALSA scheme deliberately offers such
> capabilities, partly to handle devices like USB audio interfaces. if
> you had written your driver as part of ALSA, it would not have to have
> support for this - the "midlevel" ALSA code takes care of that stuff.

One of the reasons I did not write an ALSA driver is because it supports
all this.

> something has to specify when to wake a task blocked on I/O to the
> device. you can wake it up every interrupt, every N seconds, or every
> N frames (with frames-per-interrupt level resolution). ALSA allows you
> to do any of these. which one is most appropriate, and whether it


This makes ALSA unnecessarily complicated and puts too much in the
kernel IMHO.

> should be set by a "control" application (common on windows/macos) or
> by the application using the device driver is subject to reasonable
> disagreement by sensible people.

I think the buffer size should be set by a "control" application, or just read
from a file by the user-space driver, or possibly even set at module loading.

> >> and the interrupts occur at 420,
> >> 840 and 1260 frames, then we should be woken up on the third
> >> interrupt, process 1024 frames of data, and go back to sleep.
> >
> >This will not perform well since the available processing time per
> >sample will fluctuate.
>
> agreed. but by the same argument, if the variability in the block size
> was too great, we would also see too much variation in
> cycles-per-frame due to processing overhead per interrupt, which will
> also kill us.
>
> so the question seems to be: how much variation is acceptable, and
> what should be responsible for handling it? a device which interrupted
> at random intervals would simply not work; one that interrupts at 420
> frames +/- 5 frames might be OK. should the h/w driver hide the
> variation, or should each application be willing to deal with it?

A decent device will not have more than a few % variance, maybe up
to 10-15% when using varispeed, but that's unavoidable (constant size
callbacks will then differ in available processing time).

> most applications would have no problem, but there is an interesting
> class of applications for whom it poses a real problem that i think
> requires a common solution. i'm not sure what that solution should be.

Using asynchronous processing is a solution. An EDF scheduler would
be nice for this.

--ms







Re: [linux-audio-dev] Linux Alsa Audio over 1394 - a Thesis

2003-02-26 Thread Martijn Sipkema
> > Well, I'll shut up about it. I still think it is a mistake. I haven't heard
> > any convincing (to me) arguments why an application should not handle
> > variable sized callbacks.
>
> Because it makes certain types of processing viable which are not really
> viable in variable block systems (eg. LADSPA, VST). Have a look at a
> phase vocoder implementation in LADSPA (e.g.
> http://plugin.org.uk/ladspa-swh/pitch_scale_1193.xml) or VST and see how
> nasty and inefficient they are.

If I understand that code correctly, then you wait for 'FFT frame size'
samples to be available and then process that entire FFT frame. This will
introduce a variable amount of processing time per sample and will not work
for large FFT frames. Adding an extra FFT frame delay and processing
asynchronously would solve this. I'm not saying this is easy, but I don't
think an algorithm like this should rely on a callback being one (or more)
FFT frames long.
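
A rough sketch of the kind of decoupling I mean (mono, no overlap; the
process_frame() call stands in for the FFT work and could equally be handed
off to a worker thread running one frame behind):

    #include <string.h>

    #define FFT_SIZE 1024

    void process_frame(const float *frame);  /* the actual FFT work */

    static float in_buf[FFT_SIZE];
    static unsigned filled;

    /* Collect input into FFT_SIZE frames no matter how many frames
       each callback delivers; a completed frame triggers processing. */
    void callback(const float *in, unsigned nframes)
    {
        while (nframes > 0) {
            unsigned n = FFT_SIZE - filled;
            if (n > nframes)
                n = nframes;
            memcpy(in_buf + filled, in, n * sizeof *in);
            filled += n;
            in += n;
            nframes -= n;
            if (filled == FFT_SIZE) {
                process_frame(in_buf);
                filled = 0;
            }
        }
    }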

> Conversly we haven't heard any convincing arguments about why we should
> have variable block sizes ;) I don't think that allowing (some?) USB
> devices to run with less latency counteracts the cost to block processing
> algorithms.

I think that is at least as valid an argument as a possible increase in
performance for some algorithms on some hardware.

> I dont know what EASI xfer is.

EASI is a hardware abstraction framework from Emagic. It was meant to
be an open alternative to ASIO. It didn't make it, and now that Emagic
has been acquired by Apple it is no longer supported by Emagic I
guess, as I cannot find anything about it on their site anymore.

http://www.sipkema-digital.com/~msipkema/EASI_99may25.pdf

--ms





Re: [linux-audio-dev] Linux Alsa Audio over 1394 - a Thesis

2003-02-26 Thread Martijn Sipkema
> >any convincing (to me) arguments why an application should not handle
> >variable sized callbacks. VST process() is variable size I think as are EASI
> >xfer callbacks, but clearly JACK needs constant callbacks and there is nothing
> >I can do about that...
>
> as i understand it, VST is only variable to allow for automation. And
> if you follow the discussion here about XAP and elsewhere about PTAF,
> you will see that many people consider this a mistake that comes from
> not using "events" in the correct way.

I agree. Events should be timestamped. But maybe that's not the only reason.
Certainly EASI has variable size callbacks because this is what some
hardware delivers.

> i feel that it should be the job of ALSA to handle period sizes. if it
> doesn't do a good job, it should be fixed. if we ask for a wakeup
> every time 1024 frames are available,

I don't think an application should ask for a certain number of frames
to wakeup. The driver should dictate when to wake up. This is the way
my Audiowerk8 JACK driver works and it would get a lot more
complicated if I had to add support for user-space wake up at
arbitrary intervals.

> and the interrupts occur at 420,
> 840 and 1260 frames, then we should be woken up on the third
> interrupt, process 1024 frames of data, and go back to sleep.

This will not perform well since the available processing time per
sample will fluctuate.

> the h/w
> driver should handle this, not JACK. the latency behaviour will be
> just as requested by the user.

IMHO JACK should be able to handle drivers that generate interrupts
with a variable number of available frames by allowing non-const callbacks.
There is no way to only allow const callbacks without adding either large
latency or hurting performance for drivers that don't generate interrupts
at a constant number of available frames. It seems some soundcards, USB
and possibly FireWire audio are all better served with non-const callbacks.
And I still have not seen any convincing arguments that non-const callbacks
are a problem for JACK client applications.

--ms







Re: [linux-audio-dev] Linux Alsa Audio over 1394 - a Thesis

2003-02-26 Thread Martijn Sipkema
>  > > According to the mLAN spec you need a buffer of around ~250us (depending
>  > > on format) to collate the packets.
>  >
>  > Still there is no guarantee that 10 packets always have exactly the same
>  > number of samples. You say the mLAN spec says you need a buffer of
>  > around ~250us. Note that it doesn't say a buffer of a number of frames.
>  > The bottom line is these packets are sent at regular time intervals, not
>  > at a fixed number of frames and thus JACK should support this by
>  > allowing non-const (frames) callbacks IMHO.
>
> As was previously pointed out several times, this is not JACK's
> job.

Well, I think it is. And I've mentioned it a couple of times also.

> The driver should assemble the data into fixed size blocks.

Why?

> This will not introduce any signifcant latency, unless the periods
> are nearly the same, in which case the latency could double.

This will always introduce a fairly large latency unless you are
willing to let the processing time per sample vary, and thus be
able to do significantly less processing.

> The model you propose may be fine when you have *one* HW interface and

Which is the common case. When using more than one interface
then there needs to be buffering. When syncing audio to video there
needs to be buffering also. This should be done in the application
such as in OpenML ( http://www.khronos.org ).

> *one* application, but it does not scale without introducing  a lot
> of complexity.

It has nothing to do with one or more applications. Non-const size
(frames) callbacks work just as well with more applications (using JACK).

I've made my point, several times. Nobody thinks I'm right, so I'll
shut up about it. I still think it is a mistake...

--ms





Re: [linux-audio-dev] Linux Alsa Audio over 1394 - a Thesis

2003-02-26 Thread Martijn Sipkema
> On Wed, Feb 26, 2003 at 12:38:38 +0100, Martijn Sipkema wrote:
> > Still there is no guarantee that 10 packets always have exactly the same
> > number of samples. You say the mLAN spec says you need a buffer of
> > around ~250us. Note that it doesn't say a buffer of a number of frames.
> > The bottom line is these packets are sent at regular time intervals, not
> > at a fixed number of frames and thus JACK should support this by
> > allowing non-const (frames) callbacks IMHO.
>
> Why? Surely its much easier to wait until you have n samples and then send
> them round. The extra 250us of latency is hardly punishing.
>
> You must do that where you have a soundcard<->mLAN bridge in any case, in
> order to sync the graphs.
>
> IMHO if jack makes things hard for app developers by forcing them to deal
> with odd sized data blocks then its not doing its job. As we have
> discussed on the jack list there are a number of situations where you cant
> reliably or efficiently handle variable block sizes.

Well, I'll shut up about it. I still think it is a mistake. I haven't heard any
convincing (to me) arguments why an application should not handle variable
sized callbacks. VST process() is variable size I think as are EASI xfer
callbacks, but clearly JACK needs constant callbacks and there is nothing
I can do about that...

--ms






Re: [linux-audio-dev] Linux Alsa Audio over 1394 - a Thesis

2003-02-26 Thread Martijn Sipkema
[...]
> The bottom level packets are sent at fixed time intervals (obviously,
> corresponding to the frame clock of the bus), but these packets are tiny
> and you get millions of them per second. A useful packet of audio data
> will be made up of a bunch of these.
> 
> According to the mLAN spec you need a buffer of around ~250us (depending
> on format) to collate the packets.

Still there is no guarantee that 10 packets always have exactly the same
number of samples. You say the mLAN spec says you need a buffer of
around ~250us. Note that it doesn't say a buffer of a number of frames.
The bottom line is these packets are sent at regular time intervals, not
at a fixed number of frames and thus JACK should support this by
allowing non-const (frames) callbacks IMHO.

--ms








Re: [linux-audio-dev] Linux Alsa Audio over 1394 - a Thesis

2003-02-26 Thread Martijn Sipkema
> > I'm not sure, but it seems the audio transport over FireWire does not
> > deliver a constant number of frames per packet. Does this mean that
> > JACK cannot support FireWire audio without extra buffering?
> 
> ISO packets are a fixed size, so there will be a constant number of
> frames per packet.

No, I don't think so. The packets are a fixed size and they are sent at
a fixed interval which means the number of samples per packet will
differ by one. That's what it says in the paper. And that is what JACK
won't support properly because it is considered a 'broken' design.

--ms





Re: [linux-audio-dev] Linux Alsa Audio over 1394 - a Thesis

2003-02-25 Thread Martijn Sipkema
> Some folks here might find this of interest. I do...
> 
> http://www.cs.ru.ac.za/research/g99s2711/thesis/thesis-final.pdf

I'm not sure, but it seems the audio transport over FireWire does not
deliver a constant number of frames per packet. Does this mean that
JACK cannot support FireWire audio without extra buffering?

--ms





Re: [linux-audio-dev] BruteFIR + jack = crackling noise?

2003-02-25 Thread Martijn Sipkema
> >IMHO the hardware should dictate the blocksize. For most audio
> >interfaces this will be constant. For some it is not.
> 
> the claim is that designs in which it is not constant are at least
> less than optimal and at worst, just stupid.

Well, I disagree. I don't think it is a stupid design.

> >> Anyway, if you have several HW interfaces
> >> each using their own blocksize or interrupt frequency then this is the
> >> only sane option.
> >
> >This is another issue. JACK does not support more than one device,
> >nor should it. If an application wants to use more than one device, the
> >extra buffering and possibly synchronisation is up to the application.
> 
> actually, this is not the model at all. the model is that JACK doesn't
> interact with hardware, but with more abstract devices, such as those
> represented by ALSA PCM devices. if such a device maps to multiple
> audio interfaces, JACK doesn't care, but its up to alsa-lib to make
> this work, not JACK and not JACK clients. 
> 
> the same is true of any other app using the ALSA API: you do not need
> any extra code to deal with the specifics of what kind of "thing" an
> ALSA PCM device is: it could be a connection to a JACK server, to a
> network connection, to just the s/pdif ports of a multichannel
> interface, to 4 cards ganged together and clocked via word clock. the
> app just doesn't have to care.

To say it another way: JACK only supports one 'interrupt' source. This
can apparently be several devices made to look like one by ALSA, but
this does not change the fact that JACK itself only supports one driver.

> >> If this means an extra buffering layer, then so be it. That's the
> >> price you pay for sloppy HW design.
> >
> >I really don't think that hardware that doesn't provide a constant
> >number of samples per interrupt is sloppy HW design and I think
> 
> the number of samples per second is constant (except for when changing
> rates or doing something truly wierd like word-clock driven
> varispeed). to keep CPU load even, one wants a constant number of
> interrupts per second. ergo, the number of samples per interrupt
> should be constant.

That is correct. Indeed some devices allow varispeed. This means that
you'll have to have a processing reserve for high speed. Having interrupts
at constant time or frames doesn't really matter. You are correct in that
well designed hardware will provide interrupts at regular intervals.

> look, the point is just this: USB was never designed to handle audio,
> let alone multichannel audio. its been stitched together, patched up
> and made to work, but it doesn't work very well. the CPU wants to
> process samples in chunks of the same size to equalize the load,
> otherwise you have a situation that is impossible to rely on (e.g. you
> are running close the CPU's available cycles at a given
> blocksize. suddenly you get a run of "small" blocks. boom!) now,
> obviously, if the variation is small (e.g. 1-4 samples) it may not
> matter so much, but its not clear that a design that allows variations
> of 1-4 samples can exclude the 1024-4096 case either.

I'm not advocating USB audio. (I'd rather see Firewire being used for
audio more.) And you are correct in saying that the variation in frames
per callback needs to be small. But there are audio interfaces that
do exactly that: they provide nearly constant frames per callback.
And I don't see any reason why JACK should not properly support
such hardware.

> yes, you can move audio over USB. the question is not whether you can,
> but whether you should, and my feeling is that professional or
> semi-professional users should avoid it completely, regardless of what
> Yamaha, Tascam, Edirol and others who want to provide *cheap*
> connectivity to home studio users say in the advertisements.

And thus JACK will not support USB audio devices properly? And
perhaps other hardware that is 'broken'? I think this is a mistake.

Also, I think it would be good algorithm design to make processing
independent of the number of frames per callback/process.
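
As a hedged illustration (not JACK's API; names are hypothetical), a
processing function written this way simply loops over whatever block it is
handed:

#include <cstddef>

// A gain stage that makes no assumption about the block size: it
// processes exactly the nframes it is handed, so it works whether the
// driver delivers 64, 1024 or a varying number of frames per callback.
void process(const float* in, float* out, std::size_t nframes, float gain)
{
    for (std::size_t i = 0; i < nframes; ++i)
        out[i] = in[i] * gain;
}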

--ms





Re: [linux-audio-dev] BruteFIR + jack = crackling noise?

2003-02-25 Thread Martijn Sipkema
[...]
> I'm a newbie to LAD, but I have some years of experience of developing
> and using a system similar to JACK for routing blocks of samples to
> DSP modules of a digital satellite control receiver and transmitter
> system running on Solaris (we are talking about some megasamples per
> second here).
> 
> IMHO, the only sensible way for a system like JACK to operate is by
> using a constant blocksize.

IMHO the hardware should dictate the blocksize. For most audio
interfaces this will be constant. For some it is not.

> Anyway, if you have several HW interfaces
> each using their own blocksize or interrupt frequency then this is the
> only sane option.

This is another issue. JACK doesn't support more than one device,
nor should it. If an application wants to use more than one device, the
extra buffering and possibly synchronisation is up to the application.

> If this means an extra buffering layer, then so be it. That's the
> price you pay for sloppy HW design.

I really don't think that hardware that doesn't provide a constant
number of samples per interrupt is sloppy HW design and I think
the audio framework and applications should handle this without
adding unnecessary buffering.

> BTW, can JACK handle several HW interface using different blocksizes
> at a time (assuming sample frequencies are coherent) ?

No, but you could run several instances of JACK.

--ms





Re: [linux-audio-dev] BruteFIR + jack = crackling noise?

2003-02-24 Thread Martijn Sipkema
[...]
> Instead I would suggest a built in poll mode in JACK for audio hardware
> with strange period sizes. Although not the most beautiful solution, it
> will work reliably and will only be needed for the type of hardware
> that cannot provide interrupt rates which is a power of two.

I'm not a JACK developer so my opinion may not really count here, but
I really think it would be a bad decision to cripple support for all
hardware that is not able to provide constant-size, power-of-two (frames)
periods.

And it surely wouldn't be bad for clients not to rely on a particular
callback size. I don't know how CoreAudio handles this. I'm sure EASI did
not have constant-size callbacks and VST does not either, IIRC. (ASIO
probably does, but ASIO also forces double buffering, I think...)

--ms







Re: [linux-audio-dev] BruteFIR + jack = crackling noise?

2003-02-24 Thread Martijn Sipkema
> > [...]
> > > > Perhaps you would reconsider having JACK use constant (frames)
> > > > callbacks?
> > >
> > > I think a better solution might be to buffer up enough samples so that
> > > jackd can provide a constant number of frames.
> >
> > I don't think that is a better solution. JACK should be close to the
> > hardware and deliver what the hardware is capable of. If a client needs
> > constant (frames) period, it can do the buffering itself.
>
> Not without making the cpu load unpredictable.

Is that so? And why is it that JACK can and a client cannot?

> I think its a bad idea to make life hard for (some) clients because of a few
> bad hardware designs.

Hardly a few. And not even necessarily bad.

--ms






Re: [linux-audio-dev] BruteFIR + jack = crackling noise?

2003-02-24 Thread Martijn Sipkema
[...]
> > Perhaps you would reconsider having JACK use constant (frames)
> > callbacks?
> 
> I think a better solution might be to buffer up enough samples so that
> jackd can provide a constant number of frames.

I don't think that is a better solution. JACK should be close to the
hardware and deliver what the hardware is capable of. If a client needs
constant (frames) period, it can do the buffering itself.

The buffering adds latency and complexity. I thought the idea of the
callback based API was that the client has to process the data
available when it is available. For some hardware this might be
constant and for others it is not. As long as the time to process is
roughly the same as the time span the callback represents, a high CPU
load should be possible.

I'm not convinced that support for common hardware should suffer
just to make block processing easier in the clients.

--ms






Re: [linux-audio-dev] BruteFIR + jack = crackling noise?

2003-02-24 Thread Martijn Sipkema
[...]
> many USB audio interfaces work in a fundamentally different way than
> other audio interfaces. rather than sending an "interrupt" to the host
> after processing 2^N frames, they send an interrupt every N
> msecs. 

And JACK doesn't support this because it needs a constant size
(frames) period. IMHO the audio interface should dictate the period
size and it need not be constant.

Perhaps you would reconsider having JACK use constant (frames)
callbacks?

[...]
> in closing, let me say that the fact that there are lots of USB audio
> interfaces appearing shouldn't fool anyone into thinking that these
> are suitable for professional audio work. the basic design of USB is
> terrible for this stuff, and i would personally avoid USB audio
> interfaces like the plague.

For good performance the period should be constant, but it doesn't
really matter whether it is constant in frames or constant in time with a
possibly very small difference in frames. USB may have more problems, I
don't know really, but the period size is not really a big problem, or
at least it shouldn't be.

> the old joke "eat shit - 600 billion
> flies can't be wrong" seems very to much to apply here, IMHO.

You are probably right. This is also the case, or perhaps even more
so, with (the standard) MIDI over USB.

--ms





Re: [linux-audio-dev] crusoe ll problems

2003-02-21 Thread Martijn Sipkema
> I'm trying to run a low latency kernel and audio applications on a
> crusoe processor laptop.
> 
> Yes, I'm crazy. 

You might want to take a look at the following:
http://www.mindcontrol.org/~hplus/aes-2001

In slide 5/6 three different processors are compared and the 400 MHz
Transmeta Crusoe is said to have a 1000 us interrupt latency, whereas
a 600 MHz Intel Celeron processor has an 80 us interrupt latency,
both running BeIA/BeOS. This is probably related to code morphing.
Maybe more recent Crusoes aren't affected.

--ms






Re: [linux-audio-dev] MIDI Clock and ALSA seq

2003-02-17 Thread Martijn Sipkema
[...]
> Isn't the pulse of MIDI clock defined by the BPM, that's how all
> instruments synced to MIDI clock work...

24 MIDI clock messages are to be sent per quarter note.
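
At 120 BPM, for example, that works out to one clock message roughly every
20.8 ms; a quick sketch of the arithmetic:

// Interval between MIDI clock messages at a given tempo:
// 24 clocks per quarter note, bpm quarter notes per minute.
double midi_clock_period_ms(double bpm)
{
    return 60000.0 / (bpm * 24.0); // e.g. 120 BPM -> ~20.8 ms per clock
}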

--ms







Re: [linux-audio-dev] (OT) C++ flame war

2003-02-06 Thread Martijn Sipkema
[...]
> If you don't know *every* detail of
> a struct, you can't create an instance of one, because you don't know
> it's *size*.

And the offset of its members.

[...]
> So, the basic problem is that it's not the constructor that allocates
> memory for the instance; it's the code generated by the 'new'
> operator.
>
> There are ways around this, of course. For example, you can "wrap" the
> constructor of a class in a static member of the class, that performs
> the instantiation and returns a pointer. The "fake" constructor would
> actually give you an instance of a "secret" derived class that
> contains the implementation.

When dynamically linking C++ classes that implement an interface you can
export create/destroy functions that return/take a pointer to an instance.
Note that you should not use virtual destructors instead of a destroy
function, since the delete called might not match the new from the
dynamically loaded code.
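
A minimal sketch of that create/destroy pattern; the names are hypothetical,
only the extern "C" factory idea is taken from the paragraph above:

// plugin.h -- shared between host and plugin
class Effect {                       // abstract interface, no data members
public:
    virtual void process(float* buf, int n) = 0;
protected:
    ~Effect() {}                     // non-public: the host must call
};                                   // destroy_effect(), never delete

extern "C" Effect* create_effect();           // resolved with dlsym()
extern "C" void destroy_effect(Effect* e);

// plugin.cc -- inside the dynamically loaded module
class Gain : public Effect {
public:
    void process(float* buf, int n) override
    { for (int i = 0; i < n; ++i) buf[i] *= 0.5f; }
};

extern "C" Effect* create_effect() { return new Gain; }
extern "C" void destroy_effect(Effect* e) { delete static_cast<Gain*>(e); }

This way new and delete both run inside the loaded module, so they are
guaranteed to match.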

> BTW, I believe this is about the way normal constructors work in
> Delpih/Object Pascal - which is why you cannot have static instances
> at all. No matter what you type, what you get is always a pointer.

One of the best things about C++ is having auto storage classes, since
it makes resource management much easier/safer. I really don't like
Delphi/Object Pascal at all. I'm not sure, but I believe with Java it
is also not possible to create class instances on the stack, and Java has a
garbage collector (which is not always convenient for RT tasks).

--ms






Re: [linux-audio-dev] (OT) C++ flame war

2003-02-06 Thread Martijn Sipkema
[...]
> >No, it requires a pure virtual class per distinct interface (abstract
> >class). And I don't see why this would not scale.
>
> you should try writing ardour :)

It might be me who won't scale :) I know writing large applications is
not easy.

> >A friend is just like a member function, i.e. it can access the
> >class' private data, but it is not in the class' scope and the
> >function is not invoked on
>
> i know what a friend *is* - i was trying to convey what i want a
> friend to be in the best of all possible worlds.

And I was trying to say that C++ has other features for this and that
friend is not the way to go.

[...]
> >Correct. But perhaps you are misusing the friend concept. Are these
> >friend classes so closely related that they cannot use some public
> >interface?
>
> as i said above, the interfaces are not public. they are intended to
> be restricted just to the specified other classes.

This means that all these classes are very tightly coupled.

> >> none of this helps with the problem erik identified.
> >
> >The problem Erik identified was that one could not seperate the
> >interface from the implementation in C++. I then said this can be
> >done using an abstract class, i.e. an interface.
>
> i would accept that this is true for relatively simply objects. i just
> don't think it scales well. the current ardour editor has about 7000
> lines of code, the object itself has about 800 members (either data or
> functions). splitting this up is a non-trivial exercise.
>
> nevertheless, it could truly be worth doing. the compilation
> dependencies right now are horrendous. changing editor.h requires me
> to wait about 10 minutes for a recompile :)

I think if you want fewer compilation dependencies, then the
access control you say you'd like to see in C++ won't help. You'd
really need abstract classes for that.

This probably isn't easy, but I think it may, besides improving the
compile time, make the code more readable. Also, perhaps a better
separation from the user interface code would be possible.

[...]
> >It doesn't work that way. You cannot create an class instance without
> >its full declaration. I don't see the problem of having the private part
>
> i don't believe that this is true. the actual code executed by
> constructor is the only thing that needs to know the full
> declaration. the one exception is creating objects on the heap, where
> we need a size so that we can allocate memory, and then run the
> constructor using the returned address. i'd be happy with a language
> where
>
> x = new Foo;
>
> really translates to:
>
>x = Foo::constructor (malloc (Foo::sizeof()));
>
> where ::sizeof() is a built-in compiler-generated virtual.

I really don't think it is that simple. In general the layout of a class in
memory is dependent on the declaration. There may be only a
private constructor. The location of public member variables may
change. No, I think the only way to have a clean separation of interface
and implementation is using an abstract class.

> >> note that the private stuff would be in a file that looked just like
> >> the class declaration, but had no public (and protected?)
> >> declarations. in other words, the class declaration it includes is
> >> implicitly 100% private, but it can include header files that are
> >> necessary for the private declarations.
> >
> >I don't see what this would solve and I don't think this is even
possible.
> >Changing the private part will break binary compatibility (without
> >changing the public header).
>
> the dependency part is easily solved by the compile system. given the
> new preprocessor directives it would require anyway, you'd just
> generate dependencies that included the private part. then if the
> private part changes, even though its never read by the users of the
> public interface, they are recompiled too.

How would I know to recompile code if none of the header files it uses
have changed?

Apart from the interface/implementation issue, the fine-grained access
control for classes based on the calling function/class name that you
propose seems to me not without its problems.

--ms






Re: [linux-audio-dev] (OT) C++ flame war

2003-02-06 Thread Martijn Sipkema
[...]
> i love C++. i think its one of the best things ever. but i happen to
> agree with Erik. the solution proposed by martijn doesn't scale well,
> and doesn't really address the issue in a comprehensive way. it
> requires one pure virtual class per distinct set of private members,
> for a start.

No, it requires a pure virtual class per distinct interface (abstract
class). And I don't see why this would not scale.

> the kinds of problems i have with C++ stem from the fact that you
> cannot mark a section of protected members as accessible from only a
> particular set of friends. the only way to say "only class Foo can
> access these member (functions)" is to create a separate class, make
> Foo a friend of that class, and then inherit from it.

This really is a different problem.

A friend is just like a member function, i.e. it can access the class'
private data, but it is not in the class' scope and the function is not
invoked on an object. Declaring a class A a friend of another class B is
saying that all member functions of A have access to the private part of
class B. This should only be done to express closely connected concepts.

When possible, other classes should interact using a class' public
interface(s).

> this gets really messy really soon. the editor object in Ardour
> contains many distinct sets of functionality that i would really like
> to partition.

So, derive from various abstract classes that provide interfaces
for these sets of functionality.

> i could create a set of discrete classes that cover each
> aspect of the functionality, and then do MI from all of them. the
> problem is that is each one of these aspects *internally* needs to
> know about the others, which now means that they each have to be
> friends of each other. so what's the point? i'd end up with something
> ludicrous like:
>
>   class EditorFoo {
>  ...
>  protected:
> friend class EditorThis;
> friend class EditorThat;
> friend class EditorTheOther;
> ...
> friend class ObjectThatInteractsWithEditorFoo;
> ...
>  };

Here EditorFoo is not an abstract class.

>
> ...
>
> class Editor : public EditorFoo, EditorThis, EditorThat,
>EditorTheOther  {
>  }
>
> which is really no help at all.

class A { // some interface
public:
    virtual void a() = 0;
};

class B { // another interface
public:
    virtual void b() = 0;
};

class editor : public A, public B { // the editor provides both interfaces
public:
    void a();
    void b();
};

and now a class C that wants to use the functionality of the editor that is
exported by interface A can use it like:

void f(A& i) {
    i.a();
}

without depending on the implementation.

Note that your problem is still different, as with using friends in such a
manner the classes using the editor are dependent on its implementation
and also don't use only the editor's public part.

> the other alternative is to use HAS-A
> instead of IS-A, and its even worse. we end up with lots of code like:
>
> editor->foo->...
> editor->that->...
> editor->theother->...

This is an option.

> all i'd really like it to be able to say:
>
>class Foo {
>   protected:
>   /* scope A */
>   friend class Bar;
>   ...
>   protected:
>   /* scope B */
>   friend class Baz;
>   ...
>};
>
> such that Bar can only access stuff within "scope A" and Baz can only
> access stuff in "scope B". that is, access control keywords
> ("private", "protected", "public" define access scopes). right now, a
> friend class declared anywhere within the class declaration is a
> friend, period.

Correct. But perhaps you are misusing the friend concept. Are these
friend classes so closely related that they cannot use some public
interface?

> none of this helps with the problem erik identified.

The problem Erik identified was that one could not seperate the
interface from the implementation in C++. I then said this can be
done using an abstract class, i.e. an interface.

> i would really
> love a C++ that did this:
>
>  class Foo {
> public:
> ... stuff ...
> private:
>
>  };
>
> and then in other files, you would do:
>
> #use "foo.h"
>
> or
>
> #implementation "foo.h"
>
> or something like that. the first one would just bring in the public
> declarations, and its what you'd use when you're calling or creating
> Foo objects. the second one would be used in the definitions for the
> member functions of Foo, and hence would almost certainly be limited
> to "foo.cc".

It doesn't work that way. You cannot create a class instance without
its full declaration. I don't see the problem of having the private part
of a class in the header. If you want to be separated from the
implementation, use an abstract class. But then you cannot create
a class instance, you can only use the provided interface on an already
created instance (but you can delete it when there is a virtual destructor
in the interface).

Re: [linux-audio-dev] (OT) C++ flame war

2003-02-05 Thread Martijn Sipkema
> > You are not forced to define the private data members and functions at the
> > same time as the public ones in C++. The way to handle this is to put the
> > public interface in a pure virtual class:
>
> In my opinion (please note that this IS an opinion) the method you propose
> is at least as ugly as any other way of keeping a class's private data
> members private. IMO, using C and doing

I don't see what makes the method I propose, and that most C++ programmers
use, ugly. It is much cleaner and more explicit than your method below.

> typedef void Object ;
>
> Object * Object_new (/* parameters */) ;

You cannot create an Object on the stack. With C++ classes you can. This
makes resource management much easier.
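
A small illustration of the difference, assuming the resource is a file
handle (names hypothetical):

#include <cstdio>

// C++ class with automatic (stack) storage: the destructor runs when
// the object goes out of scope, on every exit path, including exceptions.
class File {
public:
    explicit File(const char* path) : f_(std::fopen(path, "r")) {}
    ~File() { if (f_) std::fclose(f_); }   // cleanup is automatic
    bool ok() const { return f_ != nullptr; }
    File(const File&) = delete;            // non-copyable
    File& operator=(const File&) = delete;
private:
    std::FILE* f_;
};

void use()
{
    File f("data.txt");   // no File_new()/File_delete() pair to forget
    if (!f.ok()) return;  // early return still closes the file
    // ... read from f ...
}                         // destructor runs here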

> int method_1 (Object *object, /* parameters */) ;

This is not type safe. I suppose this function method_1 then calls another
function via a pointer stored somewhere in *object?

> void Object_delete (void) ;

void Object_delete(Object *object); ?

> and then using a struct (return a pointer to it from Object_new()) in the
> implementation file is neater and works better.

I think it is less flexible, less readable and error prone.

> If you think C++ is great, then you are entitled to your opinion. In my
> experience, the C++ boosters are far pushier with their form of religion
> than the people who prefer C to C++.

Hmm. I tend to think it's the other way round. (the standard "C++ is slow",
"C++ isn't really OO", etc. arguments are quite often seen.)

> > I think most of the arguments in the article are not valid,
>
> Most of the arguments or just the ones relating to C++? I did talk about
> stuff other than C++ in that article.

I meant only the ones related to C++.

[...]
>2) OO can be done in Standard C.

Sure. But the language does not support it, it merely enables its use.

>3) Some people (me included) might prefer doing OO programing in C
>   rather than C++.

It is not up to me to tell you what language to use. Since in your
article you talked about how you think C is a better language, I gave my
point of view.

--ms











Re: [linux-audio-dev] (OT) C++ flame war

2003-02-05 Thread Martijn Sipkema
[...]
> for an on-topic rant, see Erik de Castro Lopo's interview on mstation.org:
>
>   http://mstation.org/erikdecl.php
>
> where he discusses the OO design of libsndfile and libsamplerate
> (surely two of the most rock-solid audio libraries ever!)

From this article:

"I also think that there is a real problem with the way classes are defined
in C++ and this is a problem which really does impact design. The problem is
that you are forced to define the private data members and functions at the
same time as the public ones. This is OK for the trivial C++ examples you
see in textbooks but as soon as you REALLY want to hide the private
information you end up defining the class to have a single private void
pointer which gets something allocated to it in the implementation. This is
basically the same thing you do when doing OO programming in C, so where is
the benefit of C++?"

--

You are not forced to define the private data members and functions at the
same time as the public ones in C++. The way to handle this is to put the
public interface in a pure virtual class:

class animal
{
public:
    // returns the number of legs the animal uses when moving around
    virtual int legs() = 0;
};

and then derive from that interface:

class dog : public animal
{
public:
    int legs() { return 4; }
};


I think most of the arguments in the article are not valid, but I guess I
really shouldn't be making my point here on this list.

It can't hurt to also hear some positive comments on C++ occasionally,
though...

--ms







Re: [linux-audio-dev] newest audio server for Linux (yep, yet another)

2003-02-05 Thread Martijn Sipkema
[...]
>   :-) I have exactly the same problem with templates, it pretends to be
> dynamic while it's just statically generated (=similar to preprocessor,
> which I guess is your point)

I think the C++ STL is great and a perfect example of the power of
templates. Much better than GLib, and it's standard. Using stack-allocated
containers is exception safe and makes constructing complicated objects
much cleaner, without having to use lots of gotos as when writing a Linux
kernel driver in C.
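
A brief sketch of that contrast; the function and its failure mode are
invented for illustration:

#include <stdexcept>
#include <string>
#include <vector>

// Building a compound object from stack-allocated containers: if
// push_back() or the throw below fires, everything already constructed
// is destroyed automatically, with no goto cleanup chain.
std::vector<std::string> load_names(bool fail_midway)
{
    std::vector<std::string> names;
    names.push_back("alpha");
    if (fail_midway)
        throw std::runtime_error("parse error"); // 'names' still freed
    names.push_back("beta");
    return names;
}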

I hope we'll see more and more use of C++ in free software. C++
kernel programming would be nice...

--ms






Re: [admin] hold it... Re: [linux-audio-dev] New software synthesizer for Linux (free under GPL v. 2)

2002-09-26 Thread Martijn Sipkema

[...]
> hold it, guys. (i know, i sometimes can't resist, too.)

I was just able to resist this time...

> please stop this thread and respect anybody's choice of "license" or
> whatever conditions they might offer their software under. if you don't
> like it, don't use it.

I think the question as to whether the "I wish to know that my work will be
used to praise the name of Jesus" is part of the license is still valid. The
software would be non-free if it is, otherwise it would just be bad taste IMHO.

--martijn






[linux-audio-dev] Low level Audiowerk8 driver for Linux (2.4)

2002-09-03 Thread Martijn Sipkema

Hi,

I've written a low level driver, i.e. not ALSA/OSS but a device
specific interface, for the Emagic Audiowerk8 audio card.
I still need to implement I2C (for setting the sample rate) and
switching the input (analog/digital), and the buffer size is currently
fixed (set at compile time) at 128 frames.

The interface as it is now is ioctls for START, STOP, XFER and
mmap()'ed buffers. XFER blocks until the device switches buffer half
(double buffered) and returns the position (in frames, since START)
for writing, so xruns can be detected.
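
For illustration, a client loop against such an interface might look roughly
like this; the device path and ioctl request numbers are invented, only the
START/STOP/XFER semantics and the mmap()'ed double buffer come from the
description above:

#include <cstddef>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

int run(unsigned frames_per_half, unsigned channels)
{
    int fd = open("/dev/aw8", O_RDWR);              // hypothetical node
    if (fd < 0)
        return -1;

    std::size_t half = frames_per_half * channels * sizeof(int);
    void* buf = mmap(0, 2 * half, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (buf == MAP_FAILED)
        return -1;

    ioctl(fd, /* START */ 0);
    long expected = 0;
    for (int i = 0; i < 1000; ++i) {
        // XFER blocks until the device switches buffer halves and returns
        // the write position in frames since START; a jump bigger than one
        // half reveals an xrun.
        long pos = ioctl(fd, /* XFER */ 1);
        if (pos > expected + (long)frames_per_half) {
            // xrun detected: resynchronise or report
        }
        expected = pos;
        // ... fill the half of buf the device is not currently playing ...
    }
    ioctl(fd, /* STOP */ 2);
    munmap(buf, 2 * half);
    close(fd);
    return 0;
}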

I've also written an application that outputs a (loud) test signal.

I'd appreciate any comments. I'd especially like to hear if you have
an Audiowerk8 and the driver (doesn't) work for you. Also I have
not been able to test the driver on a platform other than x86.

I'll probably try writing a JACK driver next so the device can actually
be used. With ALSA not being documented I have no idea where to
start with writing an ALSA driver (and I really don't like having a
common kernel interface for different hardware anyway. This would
really complicate the driver, whereas with the device specific kernel
driver much can be done in user-space).

The driver is available at:
http://www.sipkema-digital.com/~msipkema/

--martijn






Re: [Alsa-devel] Re: [linux-audio-dev] midi events in jack callback / ALSA Sequencer

2002-08-22 Thread Martijn Sipkema

> Why is it important to keep the API simple, shouldn't it be functional in
> the first place and make the API usage simple?

Who says a simple API can't be functional?

> Anyway (IMHO), there should really be an API which combines audio and MIDI
> playback, recording and timing of events and makes it possible to keep MIDI
> in sync with audio (ALSA sequencer API seems to support only tick and time
> synchronization modes, no audio clock sync mode). Professional API's like
> ASIO and VST (and DirectMusic) synchronizes everything to audio clock
> (sample position). Even when MTC is used the audio clock is the main clock
> source, digital I/O or world clock is used to sync the audio clock to the
> audio clock of the external device. There is no need for other sync modes
> than audio (_maybe the tick mode could be useful if ALSA takes care of
> sending MIDI clock, SPP, MTC etc. data to external devices). Sequencer
> application can easily convert sample position from the beginning of the
> song to measures / beats, time etc. whatever is needed when the time
> signature, tempo and sample rate is known - reading the sample position from
> the sound card is no problem at all!

Well, I disagree. I think using UST is better. You might want to use MIDI
without audio. Also MIDI hardware that supports scheduling will not support
scheduling to the audio position. Using UST just means mapping different
clocks to one common clock.

> , PCI chipset's DMA registers have usually such current play position
> register, or if not, the sample position can be calculated by measuring the
> time between IRQ's and calculating the time between current time and the
> time when last interrupt was received.

The current sample position does not tell me anything about when some
earlier sample occurred or when a later sample will occur. In your example
you already use two UST/MSC values to estimate the current MSC. Although it
is possible on some hardware to read the current sample position (MSC), it
is not possible for the OS (or other hardware) to schedule on it, and so
you'll have to convert to another clock at some point.

--martijn








Re: [Alsa-devel] Re: [linux-audio-dev] midi events in jack callback / ALSA Sequencer

2002-08-22 Thread Martijn Sipkema

> > I don't want to support tempo (MIDI clock) scheduling in my MIDI API. This
> > could be better handled in the application itself. Also, when slaved to
> > MIDI clock it is no longer possible to send messages ahead of time, and not
> > supporting this in the API makes that clear to the application programmer.
>
> The concept of the public available alsa timing queues allow outboard
> applications to schedule events in a tempo locked fashion. Think of
> applications like drum machines, appegiators, tempo delay etc. Of course
> you might be using one big monolithic (cubase?) sequencer application
> which does do everything you need

No, I like the concept of a lot of small applications. And I want those
to be able to sync (slave) to MIDI clock.

> Compare it to an external (outboard) drum machine you're using since
> programming a pattern in it so user friendly (hence you end up being more
> creative), and sync that from your favorite sequencer which has the tempo
> map for your song.

For this, the sequencer is master and just sends the MIDI messages including
MIDI clock to a scheduled UST queue. Timing will be as good as is possible
with a MIDI clock slaved external instrument.

> Of course all of this could be done without a queue which supports tempo
> scheduling, but then you'll need to emit MIDI Clock events and rely on
> immediate processing.

Isn't that what will eventually happen using a tempo queue also?

> In case a soft synth (or other potentially high
> latency device) is triggered from the MIDI Clock you loose the ability to
> correct for this.

I don't see how having a MIDI clock scheduled queue will help. When slaving
to MIDI clock you cannot know when a future beat will occur, so a software
synth slaved to MIDI clock will behave just like a software synth in real
time, i.e. it cannot compensate for its own latency. However, if the master
sends the MIDI clock messages ahead of time, and for a sequencer application
this is exactly what it would do, then the software synth can compensate for
its latency and might even interpolate between clocks. This would not be
possible having a tempo based queue, since the USTs of the messages in that
queue are not known in advance, right?

> > > leaves room for events not defined in midi spec).
> >
> > ...I'm not sure that is a good idea. What kind of events?
>
> eg.
>
> - system events like announcements of topology changes

I'm thinking of handling these out-of-band.

> - (N)RPNs as a 14 bit value instead of 2x 7bit

This I would like to solve by having the possibility of sending a short
sequence of MIDI messages that are guaranteed not to be interleaved, e.g. 4
messages: two for setting the current (N)RPN and two for setting its value.

> - SMF like meta data

Is it really necessary to support karaoke :)

> - controls for current sequencer queue: tempo, position, etc.

These aren't needed when a tempo queue isn't used.

I think it is important to keep the API as simple as possible.

--martijn






Re: [linux-audio-dev] midi events in jack callback / ALSA Sequencer

2002-08-20 Thread Martijn Sipkema

[...]
> Within ALSA we have two priority queues, one for tick (bar,beat) scheduled
> events, and one for clock (ns) scheduled events.

As MIDI uses MIDI tick messages for time based sync and MIDI clock messages
for tempo based sync I kind of feel the ALSA sequencer naming is a little
confusing :)

> In case of immediate
> scheduling the priority queue is bypassed and the event submitted in the
> receiver's fifo (which would be your immediate queue).
>
> Due to potential blocking at the receivers you'll need a fifo for every
> destination.

Correct, that's what I've been calling queues.

> Reason for having 2 priority queues with different reference is to cope with
> tempo/signature changes while remaining in sync. The clock and tick priority
> queues are in fact parallel in ALSA.

I don't want to support tempo (MIDI clock) scheduling in my MIDI API. This
could be better handled in the application itself. Also, when slaved to MIDI
clock it is no longer possible to send messages ahead of time, and not
supporting this in the API makes that clear to the application programmer.

> Since especially for soft synths (but also for some -undocumented!- USB
> midi interfaces, like Emagic AMT)

Yes, I've repeatedly asked Emagic for documentation on their AMT protocol
without success. :(

> the events need to be scheduled ahead (sort of a pre-delay, say 10ms or
> more) to let the device/softsynth handle the micro scheduling, it would
> seem a good idea to handle this at the clock based queue. Since variable
> predelay in ms would be not quite friendly to the tick based queue
> (different units), it might make sense to have the tick based queue send
> events into the clock based queue instead of immediate delivery).

I'd rather see MIDI clock sync handled in the application. This also
keeps the API cleaner.

[...]
> A good reason for applications to use (UST/ALSA) scheduling instead of
> taking care of micro scheduling itself and using rawmidi interfaces is
> better support for softsynths to trigger at the right spot in the buffer,
> and for the upcoming smart (eg. USB) midi devices.

Or even MWPP.

[...]
> You can't overcome the limits of the MIDI physical line if that's your
> target transport. However when sending events to soft- or onboard synths
> these limits are different (typically less of an issue).
>
> When using events instead of midi bytes the merging is a no brainer

I was planning on doing that, but even then there are issues with, for
example, (N)RPNs.

> leaves room for events not defined in midi spec).

...I'm not sure that is a good idea. What kind of events?

--martijn






Re: [linux-audio-dev] midi events in jack callback (was: Reborn)

2002-08-19 Thread Martijn Sipkema

> >MIDI through and any other 'immediate' type MIDI messages do
> >not need to be scheduled, they can be written to the interface immediately.
>
> Yes, they could. It would however necessitate different input routes
> for 'immediate' and 'queued' events to the MIDI output handler.

The MIDI I/O API I am working on has 'scheduled' and 'immediate' queues. I
don't think there is a way around this unless 'immediate' messages are not
used at all, and that is clearly not an option.
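
Purely as a sketch of the idea, not the actual API: a scheduled queue
ordered by UST next to an immediate FIFO might look like this (all names
hypothetical):

#include <cstdint>
#include <deque>
#include <queue>
#include <vector>

struct MidiMessage {
    uint64_t ust;                       // UST deadline, ignored if immediate
    std::vector<unsigned char> bytes;   // raw bytes of one MIDI message
};

struct UstOrder {                       // earliest deadline first
    bool operator()(const MidiMessage& a, const MidiMessage& b) const
    { return a.ust > b.ust; }
};

class MidiOutPort {
public:
    void write_immediate(const MidiMessage& m) { immediate_.push_back(m); }
    void write_scheduled(const MidiMessage& m) { scheduled_.push(m); }

    // Called by the scheduler thread: immediate messages drain first,
    // then any scheduled message whose UST deadline has passed.
    bool next_due(uint64_t now, MidiMessage& out) {
        if (!immediate_.empty()) {
            out = immediate_.front(); immediate_.pop_front(); return true;
        }
        if (!scheduled_.empty() && scheduled_.top().ust <= now) {
            out = scheduled_.top(); scheduled_.pop(); return true;
        }
        return false;
    }
private:
    std::deque<MidiMessage> immediate_;
    std::priority_queue<MidiMessage, std::vector<MidiMessage>, UstOrder> scheduled_;
};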

> This
> would not help make things simpler. It would also mean that a Sysex
> or pitch/CC burst routed through can delay MIDI clocks because of the
> limited bandwidth on the MIDI wire.

Sysex can hurt timing for other events, but that's MIDI. MIDI clock (any
MIDI realtime message) can interleave other messages. And yes, merging
MIDI streams is not easy.

> Thinking about it -- it's hypothetical because we don't have them in
> Linux yet -- I believe a decent MIDI out handler using a firm timer
> would be an order of magnitude more complicated than one based on the
> RTC. Have you coded one yet?

Yes, and it is not that complex I think. Note that this would only have to
be done in a driver process or a user-space sequencer application, and not
for every client application.

I'll try to get a version of my MIDI I/O API/framework ready, but it will
probably still take me some time to get finished.

--martijn






Re: [linux-audio-dev] midi events in jack callback (was: Reborn)

2002-08-19 Thread Martijn Sipkema

[...]
> >User space MIDI scheduling should run at high rt priority. If scheduling
> >MIDI events is not done at a higher priority than the audio processing
> >then it will in general suffer jitter at the size of the audio interrupt
> >period.
>
> Jitter amounting to the length of time the audio cycle takes to
> compute, that is (which will obviously be less than the audio irq
> period in a usable configuration).

Yes, the audio interrupt period represents the worst case. But with large
audio buffers and a jitter of say 70% of the interrupt period size this is
still a very, very large jitter and completely unacceptable (IMHO).

> Another reason to run MIDI behind audio: task switching and the cache
> invalidation it causes. If audio processing is interrupted 1024 times
> per second, audio performance *will* degrade, even more if many Jack
> clients (= separate processes) are involved.
>
> Not sure about the impact though. We definitely need some testing data
> here.

I think that the impact will not be very high. Also, that is the price one
has to pay for decent MIDI timing. Clearly, running MIDI at a lower priority
than audio will result in bad timing.

> >Using the RTC is not necessary when using firm-timers.
>
> If you're doing software MIDI through, you'll have to cancel the
> timer you've just set, which is bad. It gets worse when you have
> user functions that generate MIDI events at random. In this case
> I think the use of firm timers is not adequate.

MIDI through and any other 'immediate' type MIDI messages do
not need to be scheduled, they can be written to the interface immediately.

> >Also, when doing hard real-time audio/MIDI, it is not possible to use
> >more than about 70% of the available CPU time and still get reliable
> >results.
> >
> >Would a library for requesting the preferred scheduling policy and priority
> >for a certain task be a good idea to improve cooperation between
> >applications?
>
> I'd leave it to the user, but I tend to expect a user to be savvy.

More likely it will be left to the developers of various applications, and
cooperation between these applications might suffer as a result. Instead, a
library for priorities could support the user in defining his own priorities
for various tasks, so a user would be able to choose whether he wants to use
a high or low priority for MIDI scheduling.

--martijn






Re: [linux-audio-dev] Re: [Alsa-devel] Forcing an absolute timestamp for every midi event.

2002-08-18 Thread Martijn Sipkema

[...]
> Is there already a commonly available UST on linux? To my knowledge the
> only thing that comes close is the (cpu specific) cycle counter.

No, not yet. I think we should try to get hard- or firm-timers and POSIX
CLOCK_MONOTONIC into the Linux kernel.

--martijn







[linux-audio-dev] Re: [Alsa-devel] Forcing an absolute timestamp for every midi event.

2002-08-18 Thread Martijn Sipkema

> Hi! I wanted to ask, how about forcing
> an absolute timestamp for _every_ midi event?
> I think this would be great for softsynths,
> so they dont need to work with root/schedfifo/lowlatency
> to have a decent timing. Not allways you are willing
> to process midi at the lowest latency possible.
> I say because you dont really need all that if you
> sequence in the computer and control softsynths and maybe
> some external device. 
> This way, the softsynth gets the event with the timestamp,
> gets the current time, substracts the audio delay (latency) to that
> and just mixes internally in smaller blocks processing each 
> event in the right time.

I'm not sure what you mean, but below is what I think would be the
best approach for audio/MIDI I/O.



UST = unadjusted system time
MSC = media stream count

see also:

http://www.lurkertech.com/lg/time/intro.html

(synchronous) Audio I/O API
- Callback based.
- All buffers in a callback are of the same size.
- All frames with a particular MSC occur at the same time. (I don't think
 OpenML requires this, EASI does have this with its 'position')
- The audio callback provides an MSC value for every buffer corresponding
 to the first frame in the buffer.
- The (constant) latency (in frames) between an input and an output can be
 seen from the difference between their MSC values.
- The audio callback provides an UST value for every input buffer
 corresponding to the end of that buffer/start of the next buffer.
- The UST value for the start/end of an output buffer can be estimated.

MIDI I/O API
- MIDI messages are received with a UST stamp.
- Timestamps are measured at the start of the message on the wire.
- MIDI messages are either sent immediately or scheduled to UST.
- MIDI messages must have monotonically increasing timestamps if
 scheduled.

For a software synthesizer the MIDI messages received at some UST
can be mapped to a corresponding MSC value and then rendered at
a constant offset from this MSC, most likely the audio I/O latency.
(The audio I/O latency may not be the same for all inputs/outputs.
Also, since MIDI messages always arrive late, a small extra latency
should be introduced in this case by increasing the MIDI message's
UST by a constant value, to compensate for the MIDI interface and
scheduling latency, so messages arrive a little early instead of late
and jitter is reduced for MIDI messages arriving near buffer
boundaries.)
MIDI -> audio output latency would then be slightly higher (say 2ms,
noting that transmitting a note-on message already takes 3*320usec)
than audio I/O latency.
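
A minimal sketch of that mapping, assuming the audio callback supplies one
(UST, MSC) pair per buffer as listed above; the names, units and parameters
are illustrative, not part of any real API:

#include <cstdint>

// One (UST, MSC) correspondence point reported by the audio callback.
struct TimePair {
    int64_t ust_ns;   // UST of the first frame of the buffer, in ns
    int64_t msc;      // MSC (frame count) of that same frame
};

// Map a MIDI message's UST stamp to the frame (MSC) at which a softsynth
// should render it: convert UST to frames relative to the reference
// point, then add a constant offset covering the audio I/O latency plus
// a small safety margin for MIDI interface and scheduling jitter.
int64_t render_msc(int64_t midi_ust_ns, const TimePair& ref,
                   double sample_rate, int64_t latency_frames,
                   int64_t safety_frames)
{
    int64_t delta_frames =
        (int64_t)((midi_ust_ns - ref.ust_ns) * sample_rate / 1e9);
    return ref.msc + delta_frames + latency_frames + safety_frames;
}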



--martijn






Re: [linux-audio-dev] Locking Midi to Audio

2002-08-17 Thread Martijn Sipkema

[...]
> i just want to note my happiness at reading a post from martijn with
> which i agree 100% !! who says there is no such thing as progress ? :))

Indeed Paul, I'd agree you've made some real progress here :)

--martijn







Re: [linux-audio-dev] Locking Midi to Audio

2002-08-17 Thread Martijn Sipkema

> This is an idea I had some time ago and simply have not had the time to
> explore.
>
> Nowadays few people would want to do Midi without doing audio at the same
> time. This potentially leads to a great simplification in the handling of
> Midi.
>
> Why not lock the Midi processing to the audio processing? If the buffer
> sizes for the audio are small, say 128 samples or less at 48kHz sampling
> rate then all Midi messages can be processed at that granularity without
> impacting the Midi timing too much. Midi can then be read and written via
> raw midi ports instead of /dev/sequencer style devices.

There might still be people who would like to use MIDI without audio. Also,
this makes MIDI dependent on a specific audio API/implementation.

> Main benefits :
>
> 1) With all the Midi processing being done with the audio, the two
>can't fall out of sync without the whole system stalling.

Audio/MIDI sync can be done just as well using UST/MSC.

> 2) It removes the need for a separate process for Midi which is
>competeing against the audio thread for the CPU

This too should not be a problem if the system is well designed, i.e. all
processes have suitable priorities depending on their rate and processing
time. MIDI scheduling should run at a very high priority (higher than audio
processing) and with a very short processing time.

> 3) Raw midi devices are far easier to interact with than /dev/sequencer

That may be, but that can be solved by different means.

> 4) Once the RT Midi handling process is removed from the system, it
>may be possible to reduce the buffer sizes still further.

I'd rather have decent MIDI timing with a slightly higher audio latency. Has
anyone tested how a well designed user-space MIDI scheduler thread
impacts audio latency?

> I realise that some people are going to complain about this screwing up
> Midi timing. Well with audio processing done in 128 sample blocks at
> 48kHz sampling rate, Midi is processed ever 2.666 milliseconds. At 64
> sample blocks its 1.33 milliseconds which is real difficult to complain
> about.

Not all hardware may be running at 64 frames/period. Even so I would like
to see MIDI jitter at around 500usec instead of 1.3msec or more. There
are good reasons to have larger audio buffers when low latency is not
required.

--martijn






Re: [linux-audio-dev] midi events in jack callback (was: Reborn)

2002-08-16 Thread Martijn Sipkema

> So we need something which handles the timing like the DirectMusic(tm) in
> the Linux kernel.

I would prefer not to have this in the kernel. If the kernel provides
accurate scheduling and CLOCK_MONOTONIC then I think this can and should
be done from user-space. A driver should be able to read CLOCK_MONOTONIC
from the kernel for timestamping, though.

--martijn







Re: [linux-audio-dev] midi events in jack callback (was: Reborn)

2002-08-16 Thread Martijn Sipkema


> I find that for sending MIDI to an external device, "resolution = RTC
> Hz" works very well. It is a problem that a realtime audio thread
> 'suffocates' a RTC thread if low-latency is required, and only one
> processor available. It's very hard to find a clean solution in this
> case, but firm timers obviously do not address this particular
> problem.

User space MIDI scheduling should run at high rt priority. If scheduling
MIDI events is not done at a higher priority than the audio processing
then it will in general suffer jitter at the size of the audio interrupt
period.
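
For illustration, such a thread could be given a real-time priority above
the audio threads with the standard POSIX calls; the priority value here is
arbitrary and the scheduler body is stubbed:

#include <pthread.h>
#include <sched.h>

// The scheduling loop itself: wait for the next UST deadline and write
// the due MIDI bytes to the interface. Stubbed here.
static void* midi_scheduler(void*) { return 0; }

// Spawn the MIDI scheduler under SCHED_FIFO at a priority above the audio
// thread, so a pending MIDI deadline preempts audio processing. The value
// 80 is arbitrary; it only needs to exceed the audio thread's priority.
int start_midi_thread(pthread_t* tid)
{
    pthread_attr_t attr;
    struct sched_param param;
    param.sched_priority = 80;

    pthread_attr_init(&attr);
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
    pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
    pthread_attr_setschedparam(&attr, &param);
    return pthread_create(tid, &attr, midi_scheduler, 0);
}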

Using the RTC is not necessary when using firm-timers.

Also, when doing hard real-time audio/MIDI, it is not possible to use
more than about 70% of the available CPU time and still get reliable
results.

Would a library for requesting the preferred scheduling policy and priority
for a certain task be a good idea to improve cooperation between
applications?

--martijn








Re: [linux-audio-dev] midi events in jack callback (was: Reborn)

2002-08-16 Thread Martijn Sipkema

> >Haven't written anything using MIDI and JACK (or LADSPA), but would it be
> >possible to have such a system as with Cubase where the softsynths are
> >plugins which receive time-stamped MIDI events (the time-stamp is an offset
> >from the block beginning in samples).

Either this (use audio sample clock) or use a common clock, i.e. have a UST
timestamp for every MIDI message and audio buffer (audio frame time can then
be estimated). IMHO the latter is better.

> >The MIDI-through events that come into sequencer from the
> >external MIDI (in) port always have an offset of zero so the synth renders
> >the data starting from the first sample in that block.

This is not the right approach as it will cause jitter.

[...]
> there is no way to
> accurately schedule anything under Linux with hard-timers unless by
> accurately you mean either "resolution = HZ" or "resolution = RTC Hz"
> or "resolution = audio interrupt frequency".

I agree, however HZ=1000 should be usable for MIDI even without patches
for improved scheduling. I believe HZ=1000 is in 2.5?

--martijn






Re: [linux-audio-dev] Jack and block-based algorithms (was: Reborn)

2002-08-14 Thread Martijn Sipkema

> How does the pull model work with block-based algorithms that cannot
> provide any output until it has read a block on the input, and thus
> inherently has a lower bound on delay?
>
> I'm considering a redesign of I/O handling in BruteFIR to add Jack
> support (I/O is currently select()-based), but since it is processes in
> blocks, perhaps it is not feasible?

If an algorithm requires blocks of a certain size, it should buffer such
blocks itself IMHO. This then increases latency by at least one such block,
but more realistically two blocks: one for buffering and one for processing
the block (async). Please correct me if I'm wrong about this.
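
A sketch of such client-side buffering, assuming a fixed internal block size
and an arbitrary nframes per callback; the one-block output delay is the
extra latency discussed above, and run_block() stands in for the real
algorithm:

#include <cstddef>
#include <cstring>
#include <vector>

class BlockAdapter {
public:
    explicit BlockAdapter(std::size_t block)
        : block_(block), in_(block), out_(block, 0.0f), fill_(0) {}

    // Accepts any nframes; emits silence for the first block_ frames,
    // then the processed data exactly block_ frames late.
    void process(const float* in, float* out, std::size_t nframes) {
        for (std::size_t i = 0; i < nframes; ++i) {
            in_[fill_] = in[i];
            out[i] = out_[fill_];          // emit previously computed block
            if (++fill_ == block_) {
                run_block(in_.data(), out_.data(), block_); // fixed size
                fill_ = 0;
            }
        }
    }
private:
    void run_block(const float* in, float* out, std::size_t n) {
        std::memcpy(out, in, n * sizeof(float)); // placeholder algorithm
    }
    std::size_t block_;
    std::vector<float> in_, out_;
    std::size_t fill_;
};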

How do VST plugins handle this?

--martijn






Re: [linux-audio-dev] App intercomunication issues, some views.

2002-07-25 Thread Martijn Sipkema

> >on every callback. If a node does internal buffering that should not affect
> >the MSCs.
> 
> right, because there isn't really an MSC anywhere. as you noted, the
> global jack transport time isn't really equivalent. nothing in the
> main JACK API says anything about doing anything except handling a
> buffer corresponding to "now" (+/- latency). the transport API couples
> "now" with a notion of a significant frame position.

I would like to see per buffer (input and output) at least a UST/MSC
pair + transport position, and have node latencies other than 0 handled
separately from hardware latency.

[...]
> >If you use a hardware compressor with a high delay there is also no way
> >to compensate. It would be nice to have a way to compensate, but JACK is
> 
> VST and most DAW's compensate. Its not up to JACK to do it, but a
> client that wants to do this needs the right information.

I agree. But this is then a separate issue from hardware latency.

> >not the right place for this I think. Anyway, if it is in JACK, it still
> >would have to be at a higher level than the basic input/output latency,
> >i.e. without taking extra node latency into account.
> 
> there isn't any difference in JACK. each port has its own latency
> figure associated with it (zero by default). it doesn't matter whether
> the port represents a physical input/output connector or
> not. jack_port_get_total_latency() traverses the connection graph from
> a given port to a terminal point, collecting latency information along
> the way.

But as you indicated the latency then depends on the route. Should JACK use
internal buffering to make every route have the same (longest) latency? This
will add complexity.

> >There is still the problem of getting an accurate estimate of system
> >time of some audio frame. This is needed to calculate at what time
> >to output a MIDI message. 
> 
> I still don't see why you need this. If I queue an event by saying
> "play this in 0.56msecs", how whatever i queued the event with goes
> about delivering it on time is an internal implementation
> detail. there are several mechanisms available, some better than others.

The meaning of "play this in 0.56msecs" changes with the time of the
request. This will create jitter that would not exist when using UST
timestamped buffers and MIDI output using absolute UST timestamps.

> If I say "play this at time T", then yes, some kind of UST is
> needed. This is much harder to do than the relative method I described.

But, IMHO, it is the only correct way.

> >   I think this approach is better then just
> >counting on being scheduled just after the audio hardware interrupt.
> >This is certainly not the case with somewhat large audio buffers and
> >multiple nodes.
> 
> sure, i don't think the audio clock is ever going to be suitable for
> MIDI scheduling. you need the firm timers or KURT patches or a Gravis
> Ultrasound interface.

I agree.

> then you just do all scheduling with a relative offset from now. the
> largest times will be on the order a few hundred msecs, and the common
> case will be more like 1-5msecs.

Define now. When is that? At the start of the cycle? No; since
there are several nodes, 'now' might be quite some time from the cycle
start when a somewhat larger (say 200ms) audio buffer is used, and there
is no way to know this without timestamping the buffers. That is why UST
is needed.

>  delta_till_emit = event.time - transport_position + 
>jack_port_get_total_latency (relevant_port);
> 

Again, we may already be way past the cycle start time here and yet you
calculate the delta time as if we are exactly at the cycle start time.



--martijn








Re: [linux-audio-dev] App intercomunication issues, some views.

2002-07-24 Thread Martijn Sipkema

[...]
> 
> consider:            node B
>                     /        \
> ALSA PCM -> node A            node D -> ALSA PCM
>                     \        /
>                       node C
> 
> what is the latency for output of data from node A ? it depends on
> what happens at node B, node C and node D. if node B and node C differ
> in their effect on latency, there is no single correct answer to the
> question. 

Handling this kind of latency is a different story. Perhaps this shouldn't
even be in JACK since, as you pointed out, there is no right way of handling
it. This could perhaps be handled on a higher level, if at all.

JACK basically still is:

input buffer -> JACK graph -> output buffer

on every callback. If a node does internal buffering that should not affect
the MSCs.

[...]
> >Transport time is in frames, right? And there is a transport time available
> >for input buffers and output buffers?
> 
> No. Its computable by using jack_port_get_total_latency(). buffers
> don't come "with" timestamps. the transport time indicates the frame
> time at the start of the cycle. it may move forward, backwards, etc.

cycle? You got me lost...

[...]
> >The current MSC isn't global. The MSC is different for input and output
> >buffers.
> 
> I know. I meant that the equivalent of MSC in JACK *is* global.

Well, then it isn't really equivalent, is it? :)

[...]
> OK, for a delay line, that's true. but for other things .. try saying
> that to authors and users of VST plugins, where setInitialDelay() is
> critical for not messing up the results of applying various effects to
> different tracks. Adding a compressor, for example, typically shifts
> the output by a few msecs, which the user does not want. The output
> latency is clearly not equivalent to the input latency in the general
> case.

If you use a hardware compressor with a high delay there is also no way
to compensate. It would be nice to have a way to compensate, but JACK is
not the right place for this I think. Anyway, if it is in JACK, it still
would have to be at a higher level than the basic input/output latency,
i.e. without taking extra node latency into account.

> >So, instead of using UST throughout you use a relative time in the API,
> >which then has to be immediately converted to some absolute time to still
> >make any sense later. Also using UST is more accurate.
> >
> >const struct timespec PERIOD;
> >
> >for (;;) {
> >    nanosleep(&PERIOD, NULL);
> >}
> >
> >is less accurate (will drift) than
> >
> >struct timespec t;
> >clock_gettime(CLOCK_MONOTONIC, &t);
> >
> >for (;;) {
> >    t += PERIOD; // i know you can't actually do this with struct timespec...
> >    clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &t, NULL);
> >}
> 
> i agree with you on this. the problem is that i don't think that any
> real application works like this - instead, the MIDI clock needs to
> sync to the current audio transport frame, in which case we have:
> 
> --
>  // compute the delta until the MIDI data should be delivered by
>  // checking the current transport time and then:
> 
> const struct timespec t;
> clock_gettime (CLOCK_WHATEVER, &t);
> t += delta;
> clock_nanosleep (CLOCK_WHATEVER, TIMER_ABSTIME, &t, NULL);  
> --
> 
> we could do that with nanosleep() and the effect would be
> indistinguishable. we are constantly re-syncing to the transport time
> every time we iterate, thus removing the drift issue from
> consideration. 

There is still the problem of getting an accurate estimate of the system
time of some audio frame. This is needed to calculate at what time
to output a MIDI message. I think this approach is better than just
counting on being scheduled just after the audio hardware interrupt.
This is certainly not the case with somewhat large audio buffers and
multiple nodes.

> >If I tag a MIDI message with a absolute time then the MIDI implementation
> >can at a later time still determine when the message is to be performed.
> >How can this be done without an absolute stamp?
> 
> it can't. the question is whether the user-space API should be using
> absolute or relative stamps.

I think it should. What would be a good reason not to? I can think of
several good reasons why I would want to use absolute time.

[...]
> i didn't suggest getting rid of snd_rawmidi_write(). i meant adding
> the "..with_delay()" function. snd_rawmidi_write() would still be
> available for immediate delivery. MIDI thru should not really be done
> in software, but if it has to be, thats already possible with the
> existing ALSA rawmidi API.

I don't think there are any alternatives for doing MIDI through in software.
And having both a snd_rawmidi_write() and a snd_rawmidi_write_with_delay()
really isn't that trivial. How can a correct MIDI stream be guaranteed?
rawmidi doesn't operate on MIDI messages and th

Re: [linux-audio-dev] App intercomunication issues, some views.

2002-07-24 Thread Martijn Sipkema

> >If I use an absolute sleep there is basically no difference. The drift
> >will be the same, but instead of scheduling events from 'now' I can
> >specify the exact time. So a callback would then be like:
> >
> >- get the UST and MSC for the first frame of the current buffer for input
> 
> MSC implies timestamped buffers.

MSC would be useful even without timestamped buffers I think. It enables
the application to know the input->output frame latency and whether
overflow/underflow occurred.

> as i've indicated, i think this is a
> bad design. Defining the semantics of MSC in a processing graph is
> hard (for some of the same reasons that jack_port_get_total_latency()
> is hard to implement).

Why? On an audio card interrupt, buffers traverse the entire graph, right?
I.e. for every 'node' its process() function is called for accepting audio
data from that interrupt and producing audio data for the buffer available
for writing on that interrupt, right?

> in a system supporting low latency well, the buffer you are working
> should be considered to be related to "now" as closely as
> possible. the only gap between "now" and when it was actually
> collected or will be delivered to the connectors on the interface is
> defined by the latency of the input or output path. either way, the
> application should consider itself to be working against the "now"
> deadline.

With two applications doing audio output (40ms buffer, 2 periods), using on
average 50% CPU time, the delay between the hardware interrupt and the
callback will be about 5ms for one of those applications.

> but anyway, this is irrelevant, because MSC is not the timebase to use
> for this - you need to use transport time.

Transport time is in frames, right? And there is a transport time available
for input buffers and output buffers?

> >- get the MSC for the first frame of the current buffer for output and
> >estimate the UST for that frame.
> >- calculate the UST values for the MIDI events that are to occur during the
> >output buffer.
> >- schedule the MIDI events (the API uses UST)
> 
> i see no particular difference between what you've outlined and what i
> described, with the exception that the current "MSC" is a global
> property, and doesn't belong to buffers. its the transport time of the
> system.

The current MSC isn't global. The MSC is different for input and output
buffers.

> >This has two advantages:
> >
> >- since you get UST for the input buffer you have a better estimation of
> >when the output buffer will be performed.
> 
> you're making assumptions that the output path from
> the node matches the input path. this isn't true in a general
> system. the output latency can be totally different from the input
> latency.

I did not make that assumption I think.

> imagine an FX processor taking input from an ALSA PCM source
> but delivering it another FX processor running a delay line or similar
> effect before it goes back to an ALSA PCM sink.

That latency is intended in the effect and has nothing to do with the MSC.

> >- the MIDI messages will be queued and thus will need an absolute timestamp.
> 
> they don't need an absolute timestamp to be applied in user-space:
> they just need a non-adjustable tag that indicates when they should be
> delivered. obviously, at some point, this has to be converted to an
> absolute time, but that doesn't need to be part of the API. "deliver
> this in 1msec" versus "deliver this at time T"  - the latter requires
> UST, the former just requires something with the semantics of nanosleep.

So, instead of using UST throughout you use a relative time in the API,
which then has to be immediately converted to some absolute time to still
make any sense later. Also using UST is more accurate.

const struct timespec PERIOD = { 0, 10000000 };   /* e.g. 10 ms */

for (;;) {
    /* relative sleep: each period is measured from whenever we
       actually woke up, so timing errors accumulate */
    nanosleep(&PERIOD, NULL);
}

is less accurate (will drift) than

struct timespec t;
clock_gettime(CLOCK_MONOTONIC, &t);

for (;;) {
    /* advance the absolute deadline by one period, normalizing
       the nanosecond field of the struct timespec */
    t.tv_nsec += PERIOD.tv_nsec;
    if (t.tv_nsec >= 1000000000L) {
        t.tv_nsec -= 1000000000L;
        t.tv_sec++;
    }
    t.tv_sec += PERIOD.tv_sec;
    clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &t, NULL);
}

A MIDI scheduler thread would look somewhat like this and would be better off
using absolute times. If the UST is accurate and messages aren't scheduled
too far ahead, then this will IMHO be the best approach.
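
A sketch of that scheduler thread, assuming UST is implemented as
CLOCK_MONOTONIC (which Linux doesn't support yet) and a queue_pop_earliest()
and midi_transmit() that I am inventing here:

#include <time.h>

struct midi_event {
    struct timespec ust;      /* absolute UST at which to transmit */
    unsigned char data[3];
};

extern struct midi_event *queue_pop_earliest(void);  /* blocks if empty */
extern void midi_transmit(const struct midi_event *ev);

void *midi_scheduler(void *arg)
{
    (void)arg;
    for (;;) {
        struct midi_event *ev = queue_pop_earliest();
        /* absolute sleep until the event's timestamp; no drift
           accumulates from one event to the next (a real version
           would also wake up when an earlier event gets queued) */
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &ev->ust, NULL);
        midi_transmit(ev);
    }
    return NULL;
}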

A common 'wall clock' is needed to compare the times of events from different
media. UST provides this.

> >Using MSC for every buffer, the latency is the difference between output
> >MSC and input MSC.
> 
> as indicated above, this isn't generally true.

I do not understand this. JACK should be able to know the latency.

> >> now, the truth is that you can do this either way: you can use an
> >> absolute current time, and schedule based on that plus the delta, or
> >> you can just schedule based on the delta.
> >
> >But how can this be done in another thread at a later time?
> 
> thats an implementation issue, mostly for something like the ALSA midi
> layer.

If I tag a

Re: [linux-audio-dev] App intercomunication issues, some views.

2002-07-24 Thread Martijn Sipkema

> nanosleep isn't based on time-of-day, which is what is subject to
> adjustment. nanosleep uses the schedule_timeout, which is based on
> jiffies, which i believe are monotonic.

I'm not sure how nanosleep() is supposed to handle clock adjustment
but I agree it would probably not change its behaviour. nanosleep()
does sleep on CLOCK_REALTIME.

> i believe that relative nanosleep is better than absolute sleep for
> the simple reason that its how you would avoid drift in
> practice. consider a JACK callback:
> 
> process (jack_nframes_t nframes) 
> {
>   jack_transport_info_t now;
>   
>   /* find out the transport time of the first audio frame
>  we are going to deal with
>*/
> 
>   jack_get_transport_info (client, &now);
> 
>   /* get the set of MIDI events that need to be
>  delivered during the period now.position to
>  now.position + nframes
>  */
> 
>   event_list = get_pending_midi_events 
>   (now.position, now.position + nframes);
> 
>   foreach event in event_list {
>   queue_midi_event (event);
> }
> 
>   ... anything else ...
>   }
>   
> now, what is queue_midi_event() going to do? if you schedule the MIDI
> data for delivery based on an absolute time, you're suddenly dealing
> with long term drift again. instead, you just schedule it for an
> offset from "now", typically within the next couple of msecs. this
> way, the drift is then limited to whatever variance there is between
> the rate of the clock used to deliver the MIDI data and the audio
> clock, which over that time period (as you noted) will be very, very,
> very small.

If I use an absolute sleep there is basically no difference. The drift
will be the same, but instead of scheduling events from 'now' I can 
specify the exact time. So a callback would then be like:

- get the UST and MSC for the first frame of the current buffer for input
- get the MSC for the first frame of the current buffer for output and
estimate the UST for that frame.
- calculate the UST values for the MIDI events that are to occur during the
output buffer.
- schedule the MIDI events (the API uses UST)

This has two advantages:

- since you get UST for the input buffer you have a better estimation of
when the output buffer will be performed.
- the MIDI messages will be queued and thus will need an absolute timestamp.

> of course, the above simple example doesn't take audio latency into
> account, but thats left as an exercise for the reader :)

Using MSC for every buffer, the latency is the difference between output
MSC and input MSC.
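
Per callback that is just, with invented names, something like:

/* msc_out/msc_in: MSC of the first frame of this callback's output
   and input buffers; rate in frames per second */
double io_latency_seconds(long long msc_out, long long msc_in, double rate)
{
    return (double)(msc_out - msc_in) / rate;
}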

> now, the truth is that you can do this either way: you can use an
> absolute current time, and schedule based on that plus the delta, or
> you can just schedule based on the delta.

But how can this be done in another thread at a later time?

> either way will work, but
> the second one works right now without any extra POSIX clock support.

Also, an added problem with relative sleeps is that they can be less accurate
because you can get preempted before nanosleep(), although in a JACK
callback this is not very likely.

--martijn








Re: [linux-audio-dev] App intercomunication issues, some views.

2002-07-24 Thread Martijn Sipkema

> >[...]
> >> UST can be used for timestamping, but thats sort of useless, since the
> >> timestamps need to reflect audio time (see below).
> >
> >I'd like to have both a frame count (MSC) and a corresponding system time
> >(UST) for each buffer (the first frame). That way I can predict when (UST)
> >a certain performance time (MSC) will occur and use this to schedule MIDI,
> >i.e. through a MIDI API also supporting UST.
> 
> but you also need "transport time". frame count time is generally
> irrelevant. transport time is non-monotonic.




> >> >But JACK doesn't provide timestamps, or does it?
> >> 
> >> it doesn't timestamp buffers, because i firmly believe that to be an
> >> incorrect design for streamed data.
> >
> >Why is this an incorrect design? I don't understand.
> 
> because its based on prequeuing data at the driver level

at the API level, not at the driver level.

>, which (1)
> destroys latency

I agree. The way OpenML works, it is harder to get low latency audio.
For video I think it can provide low latency.

> and (2) puts more complexity into the driver. 

That depends on the implementation. Certainly the complexity is not in the
kernel driver, but in the user space part.

> its my belief that if you have an essentially real-time streaming
> hardware interface, then the abstraction exported to the application
> should reflect this reality, even if it hides the complexity of
> controlling the hardware. creating an API that lets you queue up
> things to be rendered at arbitrary times certainly seems useful for
> certain classes of application, but to me, its clearly a high level
> API and should live "above" an API that forces the programmer to deal
> with the reality of the hardware model.

I agree that certainly for audio the OpenML API is fairly high level.

> >[...]
> >> CLOCK_MONOTONIC doesn't change the scheduling resolution of the
> >> kernel. its not useful, therefore, in helping with this problem.
> >
> >Not useful right now. CLOCK_MONOTONIC scheduling resolution will get
> >better I hope. 
> 
> How can it? UST cannot be the clock that is used for scheduling ...

Why not?

> >For MIDI output this resolution is of importance whether
> >you use a UST/MSC approach or not.
> >Is the clock resolution for Linux in clock_gettime() also 10ms right now?
> 
> I don't know anybody who uses this call to do timing. clock_gettime()
> could have much better resolution, since it can use the same timebase
> as gettimeofday(), which is based (these days) on the cycle counter.

Then that will give the same result as a clock_gettime() using CLOCK_REALTIME.
There is nothing wrong with clock_gettime/clock_nanosleep, they are the modern
POSIX clocks and I think they are best for RT applications.

> >What is the correct clock to use for timestamping if not CLOCK_MONOTONIC?
> 
> there isn't one. thats why i agree with you that UST will be a
> valuable addition to linux. i just don't agree on the scope of its
> contribution. 

Then what is the difference between an accurate CLOCK_MONOTONIC and UST?
I know a UST isn't CLOCK_MONOTONIC, but CLOCK_MONOTONIC can be the UST,
given it is accurate enough, right?

[...]
> >No, and I've tried firm timers patch and it performs great. But it doesn't
> >add CLOCK_MONOTONIC IIRC, and thus using CLOCK_REALTIME you still run the
> >risk of having the clock adjusted.
> 
> don't use either. setitimer, or nanosleep, use the kernel jiffies
> value, and when i last looked, this is monotonic. i could be wrong
> about that, however.

I would rather use an absolute sleep using CLOCK_MONOTONIC and clock_nanosleep
than a relative nanosleep. I'm not sure how nanosleep is supposed to behave
when the system clock is adjusted.

--martijn








Re: [linux-audio-dev] App intercomunication issues, some views.

2002-07-24 Thread Martijn Sipkema

[...]
> if you find the link for the
> ex-SGI video API developer's comment on the dmSDK, i think you may
> also see some serious grounds for concern about using this API. i'm
> sorry i don't have it around right now.

is it http://www.lurkertech.com/linuxvideoio.html ?

this is about the older SGI audio and video APIs, not about dmSDK I
think. The way I read this article it is not against but in favor
of dmSDK/OpenML.

--martijn







Re: [linux-audio-dev] UST

2002-07-24 Thread Martijn Sipkema

> their stuff has never been widely (if at all) used for low latency
> real time processing of audio.
[...]
> ...it doesn't get used this way
> because (1) their hardware costs too much (2) their API for audio
> doesn't encourage it in any way (3) their API for digital media in
> general is confused and confusing.

I agree that the DM audio API does not seem suitable for RT audio. It
is more oriented towards video and is suitable for RT video I think,
because with video the frame size is known.

> i or someone else posted a link
> here a few months ago from someone who worked in the DM group at SGI
> which described the problems that have beset their Video API. a
> similar set of problems exists for the Audio side of DM, IMHO.

This article described the problems of the old video API and concluded
that a different approach was needed. The approach described was dmSDK,
at least that is the way I read it.

> SGI's definition of "realtime" is great when discussing their OS, but
> it more or less fades into the background when using the DM API and
> UST. 
> 
> >> SGI tried to solve the problem of the Unix read/write API mismatch with
> >> realtime streaming media by adding timestamps to data. CoreAudio solves it
> >> by getting rid of read/write, and acknowledging the inherent time-basis of
> >> the whole thing, but for some reason keeps timestamps around without using
> >> them in many cases. JACK follows CoreAudio's
> >> lead, but gets rid of the timestamps.

CoreAudio MIDI uses timestamps also. You'll need some relation between audio
time and the time for MIDI timestamps to be able to use these timestamps.

> >Timestamps are needed in hard-realtime and parallel-computing systems,
> 
> timestamps are not needed to deal with streaming media when it is
> handled without pre-queueing.

There is always some pre-queueing unless you handle audio per frame.

And how to handle MIDI? Have the application schedule itself? I think a
common timebase for timestamping/scheduling (in the API) would be better.

> Pre-queuing destroys latency, and so is
> best left to a higher level API and/or the application itself.

If you only pre-queue one buffer you basically have ASIO...


--martijn







Re: [linux-audio-dev] App intercomunication issues, some views.

2002-07-24 Thread Martijn Sipkema

[...]
> Its worth noting that SGI's "DM" API has never really taken
> off, and there are lots of reasons why, some technical, some
> political.

Perhaps. See http://www.khronos.org/ for where SGI's dmSDK might
still be going. I think this API might be good for video. So maybe
it is not that good for low latency audio. It still would be nice
to be able to use it together with the other Linux APIs and all that
would be required for that would be the use of UST in all APIs.

> the most fundamental problem with SGI's approach to audio+video is
> that its not based on the desirability of achieving low latency for
> realtime processing and/or monitoring. its centered on the playback of
> existing, edited, ready-to-view material. the whole section at the end
> of that page about "pre-queuing" the data makes this point very clear.

I agree that the API does not seem to be designed with realtime
processing/monitoring in mind. For video I do think the API is capable
of that, for audio probably not. I still think it is a nice API and
perhaps there is not one API for all...


--martijn







Re: [linux-audio-dev] App intercomunication issues, some views.

2002-07-24 Thread Martijn Sipkema

[...]
> UST can be used for timestamping, but thats sort of useless, since the
> timestamps need to reflect audio time (see below).

I'd like to have both a frame count (MSC) and a corresponding system time
(UST) for each buffer (the first frame). That way I can predict when (UST)
a certain performance time (MSC) will occur and use this to schedule MIDI,
i.e. through a MIDI API also supporting UST.

> UST cannot (on my
> understanding) be used for scheduling.

That's correct. The application doesn't need to. But in the case of a MIDI
API that accepts UST stamped messages and transmits them at the correct time,
the implementation might use scheduling on UST directly or it might create
a mapping between UST and the system clock used for scheduling. With UST
implemented as CLOCK_MONOTONIC the implementation can actually schedule on
that.

> >But JACK doesn't provide timestamps, or does it?
> 
> it doesn't timestamp buffers, because i firmly believe that to be an
> incorrect design for streamed data.

Why is this an incorrect design? I don't understand.

> however, it does provide (if a
> client volunteers to do this) a time base. see jack/transport.h. the
> API is not in the main header because we don't want to frighten
> users. most applications don't need this stuff at all.

I see it is possible to get the current frame position? Is it not possible
to get the position of the first frame on every callback?

[...]
> CLOCK_MONOTONIC doesn't change the scheduling resolution of the
> kernel. its not useful, therefore, in helping with this problem.

Not useful right now. CLOCK_MONOTONIC scheduling resolution will get
better I hope. For MIDI output this resolution is of importance whether
you use a UST/MSC approach or not.
Is the clock resolution for Linux in clock_gettime() also 10ms right now?
What is the correct clock to use for timestamping if not CLOCK_MONOTONIC?
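
At least the reported granularity can be queried at runtime; a small
sketch (note this is the clock's reported resolution, not the scheduling
latency, and CLOCK_MONOTONIC would be queried the same way once Linux
supports it):

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec res;
    /* ask the system what granularity it claims for this clock */
    if (clock_getres(CLOCK_REALTIME, &res) == 0)
        printf("CLOCK_REALTIME resolution: %ld ns\n", res.tv_nsec);
    return 0;
}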

> what
> you need is an API that says "do this at time T", and have some
> good expectation that the jitter on "T" is very very small (right now,
> its generally huge).

I agree. But the question is, in what timebase should time T be given if
not UST?

> the \"firm timers\" patch helps with this immensely (and before it, the
> KURT patches did the same thing). they don\'t involve a change to libc.

No, and I've tried firm timers patch and it performs great. But it doesn't
add CLOCK_MONOTONIC IIRC, and thus using CLOCK_REALTIME you still run the
risk of having the clock adjusted.


--martijn








Re: [linux-audio-dev] App intercomunication issues, some views.

2002-07-23 Thread Martijn Sipkema

[...]
> UST = Unadjusted System Time

I believe this is a good introduction to UST/MSC:

http://www.lurkertech.com/lg/time/intro.html


--martijn








Re: [linux-audio-dev] App intercomunication issues, some views.

2002-07-23 Thread Martijn Sipkema

[...]
> UST = Unadjusted System Time
> 
> I haven't seen any implementations of UST where you could specify a
> different source of the clock tick than the system clock/cycle timer.

Well, no. Is this needed? The UST should just be an accurate unadjusted
clock that can be used for timestamping/scheduling events.

> >audio by scheduling it on UST. An application doing JACK audio output
> >and MIDI output would most likely estimate the UST for the output buffer
> >using the UST of the input buffer and schedule MIDI messages for that
> >buffer in the callback also. So this then looks much like your proposal.
> >But there is also still the ability to send MIDI messages for
> >immediate transmission.
> 
> Well, thats actually what we have in JACK already, if the client uses
> the ALSA sequencer in another (non-JACK) thread. its precisely the
> design i have in my head for Ardour, for example.

But JACK doesn't provide timestamps, or does it?

> >Using UST would also enable syncing to video or some other media
> >stream without it all residing in the same API.
> 
> Something has to connect the UST to the applications, and I have not
> seen anything in the definition of UST that would put UST in user
> space.

I don't really understand. For a POSIX system, UST is CLOCK_MONOTONIC. Now
I know Linux does not yet support this, but it will eventually. Apparently
adding CLOCK_MONOTONIC to libc will change its ABI.
If CLOCK_MONOTONIC is accurate enough, then it can be used to sync
audio/midi/video by associating the performance time (e.g. with audio as
the master, MSC) with the UST.

So instead of having a MIDI API that lets you schedule MIDI messages on a
dozen different timebases, you can only schedule to UST. The application
will know the relation between the audio time (frames) and UST, and although
both will drift this doesn't really matter for the short period MIDI messages
are scheduled ahead (i.e. << 1s, probably more like 100ms or less).

Or am I missing something?

--martijn











Re: [linux-audio-dev] App intercomunication issues, some views.

2002-07-23 Thread Martijn Sipkema

> >Does that mean that MIDI output can only be done from a callback? 
> 
> No, it would mean that MIDI is only actually delivered to a timing
> layer during the callback. Just as with the ALSA sequencer and with
> audio under JACK, the application can queue up MIDI at any time, but
> its only delivered at specific points in time. Obviously, pre-queuing
> leads to potential latency problems (e.g. if you queue a MIDI volume
> change 1 second ahead of time, then the user alters it during that 1
> second, you've got problems).

The only problem I have with this is latency. For applications that
only do MIDI output this is fine. For a software synth, only taking
MIDI input, there is also no extra latency, since you already need it
to avoid jitter.

For a complex MIDI application that does MIDI input -> MIDI output,
this adds latency. I am working at the moment on a low level MIDI I/O
API and a daemon for IPC routing. This will support sending either
immediate or scheduled MIDI messages. It will probably take some time
still to get a working version. All scheduling is done to UST
(although Linux does not support this yet).

[...]
> >Why not have a seperate API/daemon for MIDI and
> >have it and JACK both use the same UST timestamps?
> 
> you can't use any timestamp unless its derived from the clock master,
> which UST by definition almost never is. the clock on your PC doesn't
> run in sync with an audio interface, and will lead to jitter and drift
> if used for general timing.

The mapping between UST and audio time (frames) is continuously updated.
There is no need for the UST to be the master clock. If JACK would
provide on every process() callback a UST time for the first frame
of the (input) buffer, then MIDI could be very accurately synced to JACK
audio by scheduling it on UST. An application doing JACK audio output
and MIDI output would most likely estimate the UST for the output buffer
using the UST of the input buffer and schedule MIDI messages for that
buffer in the callback also. So this then looks much like your proposal.
But there is also still the ability to send MIDI messages for
immediate transmission.

Using UST would also enable syncing to video or some other media
stream without it all residing in the same API.
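
A sketch of the continuously updated mapping, refreshed on every process()
callback from the (UST, frame) pair of the input buffer; the smoothing here
is a crude invented example, not a recipe:

static struct {
    long long ust;    /* ns; zero until the first callback */
    long long frame;
    double rate;      /* estimated frames per nanosecond; initialise
                         to the nominal sample rate / 1e9 */
} map;

void update_mapping(long long buf_ust, long long buf_frame)
{
    if (map.ust != 0) {
        double r = (double)(buf_frame - map.frame)
                 / (double)(buf_ust - map.ust);
        map.rate = 0.9 * map.rate + 0.1 * r;   /* low-pass the estimate */
    }
    map.ust = buf_ust;
    map.frame = buf_frame;
}

long long frame_to_ust(long long frame)
{
    return map.ust + (long long)((double)(frame - map.frame) / map.rate);
}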


--martijn










Re: [linux-audio-dev] App intercomunication issues, some views.

2002-07-23 Thread Martijn Sipkema

On 23.07.2002 at 15:35:58, Paul Davis <[EMAIL PROTECTED]> wrote:

> >On Tue, Jul 23, 2002 at 07:48:45 -0400, Paul Davis wrote:
> >> the question is, however, to what extent is it worth it. the reason
> >> JACK exists is because there was nothing like it available for moving
> >> audio data around. this isn't true of the MIDI side of things, where
> >
> >If you actually want to deal with raw MIDI (you'd be mad, but...) then its
> >OK, as the maximum amount of data per jack unit time is pretty small, but
> >I agree, it's better dealt with via the alsa api.
> 
> well, what i was thinking was something more like this:
> 
>   struct jack_midi_buffer_t {
>unsigned int event_cnt;
>WhateverTheALSASequencerEventTypeIsCalled events[0];
>   };
> 
> a JACK client that handles MIDI as well would do something like this
> in its process() callback:
> 
>   jack_midi_buffer_t* buf = (jack_midi_buffer_t*) 
> jack_port_get_buffer (midi_in_port, nframes);
>   
>   for (n = 0; n < buf->event_cnt; ++n) {
>process_midi_event (buf->events[n]);
>   }
> 
> a real client would probably look at the timestamps in the events too.

Does that mean that MIDI output can only be done from a callback? Is the
callback the same one as for the audio hardware? MIDI being sporadic just
doesn't seem to fit in JACK. Why not have a separate API/daemon for MIDI and
have it and JACK both use the same UST timestamps?

--martijn








Re: [linux-audio-dev] App metadata intercomunication protocol..

2002-07-23 Thread Martijn Sipkema

> Yep, think of 0-127 ranges for controller data :(
> That is too coarse;

MIDI provides 14bit controller resolution by having controller
pairs. That should be enough for controllers since most sliders/knobs
on hardware have much less than that.
Pitch bend is 14bit also, although there is a lot of hardware that
only uses the MSB.
For what MIDI is meant for it isn't that bad actually. A higher
transmission rate would solve most problems users are having. As long
as MIDI is only used for IPC you can easily have a higher transmission
rate.
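
For reference, a sketch of sending one 14bit value with such a pair:
controllers 0-31 carry the MSB and 32-63 the matching LSB, per the MIDI
spec; midi_send_byte() is assumed to exist:

extern void midi_send_byte(unsigned char b);

/* send controller cc (0-31) with a 14bit value on channel ch (0-15) */
void send_cc14(unsigned char ch, unsigned char cc, unsigned int value)
{
    midi_send_byte(0xB0 | ch);             /* control change status */
    midi_send_byte(cc);                    /* MSB controller number */
    midi_send_byte((value >> 7) & 0x7F);   /* upper 7 bits */
    midi_send_byte(0xB0 | ch);             /* repeated for clarity; running
                                              status could omit this */
    midi_send_byte(cc + 32);               /* paired LSB controller */
    midi_send_byte(value & 0x7F);          /* lower 7 bits */
}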

--martijn






Re: [linux-audio-dev] LADPSA v1.1 SDK Provisional Release

2002-07-12 Thread Martijn Sipkema

> >Well, it's not _that_ important, but there are a few good reasons...
> >
> >1) The LADSPA API was not designed for ABI changes (most notably the
> >   interface version is not exported by plugins). This means that 
> >   old plugins that you didn't remember to delete/recompile can 
> >   cause segfaults in hosts. And unfortunately when you get a seg.fault,
> >   you probably manage to try at least n+1 other things, send 
> >   bug reports, drive the host developers insane, etc, etc before 
> >   you notice that you had an old plugin lying around. ;)
> 
> So, one vote for adding the version
> to the API ?

I would like to add that old LADSPA plugins can be easily identified
because they lack the 'version' symbol, so there really is no segfault
problem as far as I can see.

--martijn








Re: [linux-audio-dev] priority inversion & inheritance

2002-07-11 Thread Martijn Sipkema

> Let's say we have threads A (SCHED_FIFO/50) and B (SCHED_FIFO/90). If both
> threads are runnable, B will always be selected and it will run until
> (quoting the man page) "... either it is blocked by an I/O request, it is
> preempted by a higher priority process, or it calls sched_yield".
> 
> But if B blocks on a mutex currently owned by A, B will then become 
> blocked and A will be selected to run. A can now run until it unlocks 
> the mutex, B becomes unblocked and once the next scheduling decision 
> occurs, B will again be chosen to run.
> 
> But it's true that if we have more threads the situation gets complex
> pretty quickly. Let's say we have C (SCHED_FIFO/70) that holds a resource
> needed by B, and is currently blocked by a long i/o operation. Then all
> three threads will be stuck for an unbounded time.
> 
> Do correct me if this seems wrong to you.

This is correct. Even if thread C is running in another application
and doesn't share any resources with A and B, it will still be able to
preempt A (whilst A holds the mutex) and thus block B indefinitely. This
is called priority inversion. In any realistic environment there will
always be a thread C.

This is why I said the threads should be the same priority.

--martijn






Re: [linux-audio-dev] priority inversion & inheritance

2002-07-11 Thread Martijn Sipkema

> >> For instance if you have
> >> a mixer element in the signal graph, it is just easier if all the inputs
> >> deliver the same amount of data at every iteration. 
> > Hmm, why? I can see that it is a requirement that at every iteration there
> > is the same data available at the input(s) as is requested on the output(s).
> > I don't see what makes a mixer that much easier to implement if the amount
> > of data to process is the same for every iteration.
> 
> If all JACK inputs have x samples of audio and non-JACK input y samples,
> 'x != y' and you need to mix to a JACK output with x samples of space, you
> have a problem. Redesign is needed to make sure that this never happens.

Oh, you mean a mixer that doesn't only mix jack ports?
Is that a common situation?

> If using a large FIFO between a non-rt and rt threads, then the best 
> solution is to make the FIFO nonblocking using atomic operations. This is
> an essential technique when making robust user-space audio apps. Real-life 
> implementations can be found from ardour (disk butler subsystem), 
> ecasound (classes AUDIO_IO_BUFFERED_PROXY and AUDIO_IO_PROXY_SERVER) 
> and EVO (formerly known as linuxsampler).

Actually, using mutexes (or better, spinlocks, as Victor noted) is a portable
way to build an atomic operation. POSIX does not offer atomic_add stuff
(yet) AFAIK.
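
A minimal single-reader/single-writer ring buffer sketch of the kind meant
here. It works because each index is written by exactly one thread; memory
ordering subtleties on SMP are glossed over:

#define RB_SIZE 4096   /* power of two; indices start at zero */

typedef struct {
    float buf[RB_SIZE];
    volatile unsigned int head;   /* read index, written only by reader */
    volatile unsigned int tail;   /* write index, written only by writer */
} ringbuf;

/* returns 0 when full; the writer never blocks */
int rb_write(ringbuf *rb, float x)
{
    unsigned int next = (rb->tail + 1) & (RB_SIZE - 1);
    if (next == rb->head)
        return 0;
    rb->buf[rb->tail] = x;
    rb->tail = next;
    return 1;
}

/* returns 0 when empty; the reader never blocks either */
int rb_read(ringbuf *rb, float *x)
{
    if (rb->head == rb->tail)
        return 0;
    *x = rb->buf[rb->head];
    rb->head = (rb->head + 1) & (RB_SIZE - 1);
    return 1;
}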

> >> other subsystems block without deterministic worst-case bounds. No amount
> >> of priority (given by priority inheritance) will save your butt if the
> >> disk head is physically in the wrong place when you need it. On a
> > When the disk is not able to supply the samples in time, then there is a
> > problem :)
> > Using a fast disk and buffering will normally be sufficient.
> 
> Aa, but that is a different issue. The question is about response time, 
> not bandwidth.

When using a large enough buffer, the question is again only bandwidth.

> For instance ecasound's current disk i/o subsystem (run 
> in a non-rt thread) sometimes stalls for multiple seconds (!) on my 
> machine (two IDE-disks on the same bus), but still the audio processing 
> keeps on going without xruns. The disk i/o system just has to buffer 
> huge amounts of data to cover even the longest delays. Of course if you 
> are really running out of disk i/o capacity, then fancy locking mechanisms 
> won't save you.

That's right.

> With full-blown priority inheritance and mutual exclusion between the 
> threads, the rt-thread would then block for seconds in the above 
> example and who knows about the worst-case upper bound!

I don't fully understand this.

> >> The correct solution is to partition your audio code into real-time
> >> capable and non-realtime parts and make sure that the non-real-time part
> >> is never ever able to block the real-time part. In essence this very close
> > As I see it, it isn't a problem when the non-realtime part blocks the
> > real-time part, as long as there is a worst case bounded block time for it.
> 
> But as it is, theoretically speaking the worst-case time is 'infinity'. ;)

That is a long time. :)
I think even Linux can do better than that for worst case latency.

[...]
> I admit, this is a real problem. If the sampling_rate/interrupt_period is
> fractional, the only way for a JACK driver to keep up is to set 
> JACK's buffersize to ceil(srate/iperiod) and then alternate between 
> process(nframes) and process(nframes-1).

It is also (almost) impossible to know what 'nframes' will be. So determining
an upper bound and just using what is available is the best solution for
this kind of hardware.

> Ok, I guess here's the first real case against const-nframe.

I thought I had already mentioned this several times :)

> On the other hand,
> at least with ALSA you'd be in trouble anyway as ALSA will wake your
> driver only when period_count samples are available. If you set
> period_count to floor(srate/iperiod) you will be woken up on every
> interrupt but you will slowly fall behind and eventually issue two
> process() calls per iteration (as you described).

Perhaps this could be changed/added in ALSA?

[...]
> So like Paul said, do we need to support these soundcards...? For 
> JACK-style operation both the above scenario are really, really bad.

I don't have hardware that behaves like this :)
But still, the yamahas are common hardware. Is the korg card the 1212?
I think it would still be nice to support these.

> >> Not a problem as there's no 2^x limitation.
> > Isn't there? for FFT?
> 
> The trick is that with majority of available soundcards the user is able 
> to set period_count 2^x samples. JACK clients using FFT are free to raise 
> on error if a non 2^x buffersize is active. This is pretty good situation 
> from both developer and user point of view.

I would like it if the application just used a larger latency for the FFT but
still worked.

For the general case FFT (if I am right) the latency is at least
the FFT size + hardware buffer size (a

Re: [linux-audio-dev] priority inversion & inheritance

2002-07-11 Thread Martijn Sipkema

> >> The two threads must run with SCHED_FIFO as they both need to complete
> >> their cycle before the next soundcard interrupt.
> > And even if they both run SCHED_FIFO, they should then also run at the same
> > priority.
> 
> Not needed. The SCHED_FIFO protects against from other tasks taking the
> CPU. If we have two SCHED_FIFO threads communicating with a mutex
> protected queue, different priorities are not a problem as for priority
> inversion to happen, one of them needs to block on the shared mutex. And
> if the thread is blocked, it then isn't in the run queue, and thus cannot
> get the CPU.

Not true as far as I know. FIFO only means that the thread won't be preempted
after a certain time in case a thread of the same priority is ready to run.
AFAIK all threads running either SCHED_FIFO or SCHED_RR will always be
preempted by higher priority SCHED_* threads. Please correct me if I am wrong
in this.
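
For reference, putting a thread into SCHED_FIFO at some priority is just
(root privileges required):

#include <pthread.h>
#include <sched.h>

/* sketch: promote the calling thread to SCHED_FIFO at the given priority */
int make_fifo(int priority)
{
    struct sched_param sp;
    sp.sched_priority = priority;
    return pthread_setschedparam(pthread_self(), SCHED_FIFO, &sp);
}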

--martijn






Re: [linux-audio-dev] priority inversion & inheritance

2002-07-11 Thread Martijn Sipkema

> On Thu, Jul 11, 2002 at 04:31:18PM +0200, Martijn Sipkema wrote:
> > When implementing a FIFO that is read by a low priority thread and written
> > by a higher priority thread (SCHED_FIFO) that is not allowed to block for
> > more than a short, bounded time when writing the FIFO: if access to the FIFO
> > is controlled using a mutex without some means to prevent priority inversion,
> > the high priority thread can block indefinitely on the mutex.
> 
> There are better solutions. For example, in RTLinux, fifos shared
> between Linux (non-rt) processes and RT threads are asymmetric: the
> RT thread never blocks, the non-RT thread blocks. In many cases
> it is best to optimize the data operations and perform them under
> a spin_lock with interrupts disabled. In RTLinux pthread_spin_lock
> disables irqs and, in SMP also sets the lock
>   pthread_spin_lock(&myq.spin);
>   myq.tail->next = new;
>   new->next = 0;
>   myq.tail = new;
>   if (!myq.head) myq.head = new;
>   pthread_spin_unlock(&myq.spin);
> 
> Worst case delay for higher priority task is small and easily
> calculated.

Yes, this is better.

> > > And, yes, it does make the system slower even when not using it, for
> > > several reasons - mentioned in the paper. As one example, all your
> > > wait queues need to be atomically re-orderable.
> > 
> > I know too little of the implementation to verify this, but if it is true,
> > then that is a good argument against priority inheritance.
> 
> Think about what happens as you pass inherited priority down a chain
> of blocked tasks: 
>   if High blocks on m1 which is held by T1 which is blocked
>   on m2  down to m_n.
> 
> High must promote the priority of every task T1 .. Tn and each one
> is on a wait queue that may be priority ordered!

Actually, if Linux had priority ceiling and spinlocks, I think I might be able
to do without priority inheritance :)
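
POSIX does define the ceiling protocol (_POSIX_THREAD_PRIO_PROTECT); on a
system that implemented it, a ceiling mutex would be set up roughly like
this (Linux doesn't provide it at the moment):

#include <pthread.h>

/* sketch: a mutex using the priority ceiling protocol; any thread
   holding it runs at priority 80 for the duration of the hold */
int init_ceiling_mutex(pthread_mutex_t *m)
{
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_PROTECT);
    pthread_mutexattr_setprioceiling(&attr, 80);
    return pthread_mutex_init(m, &attr);
}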


--martijn








Re: [linux-audio-dev] priority inversion & inheritance

2002-07-11 Thread Martijn Sipkema

> I must have explained things quite poorly in the article you said you
> read. Having a live scheduler allows you to _not_ understand all the
> complex interactions between blocking operations in your system because
> the liveness means that eventually whatever thread you are waiting for 
> will proceed. A priority driven Real-time scheduler is not live so if
> you do not understand all blocking relationships, your system may die.

I understand deadlocks and that when using lots of threads/mutexes this
gets hard to analyse. I'm not in favor of just using priority inheritance
to get away with bad software design.

> I have no idea whether you should be using RTLinux, but it is absolutely
> correct that Linux is not suitable for hard realtime use.

Not right now and not without kernel patches. But it should be possible
to get a worst case for scheduling latency with Linux, perhaps not a very
good one right now.

> I've yet to see an example where it was both needed and effective. 
> Perhaps you can give me one.

When implementing a FIFO that is read by a low priority thread and written
by a higher priority thread (SCHED_FIFO) that is not allowed to block for
more than a short, bounded time when writing the FIFO: if access to the FIFO
is controlled using a mutex without some means to prevent priority inversion,
the high priority thread can block indefinitely on the mutex.
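
The POSIX option for this is _POSIX_THREAD_PRIO_INHERIT; where implemented,
the FIFO's mutex would be created along these lines:

#include <pthread.h>

/* sketch: a priority-inheritance mutex, so the low priority reader is
   boosted to the writer's priority while the writer blocks on the lock */
int init_pi_mutex(pthread_mutex_t *m)
{
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
    return pthread_mutex_init(m, &attr);
}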

> And, yes, it does make the system slower even when not using it, for
> several reasons - mentioned in the paper. As one example, all your
> wait queues need to be atomically re-orderable.

I know too little of the implementation to verify this, but if it is true,
then that is a good argument against priority inheritance.

--martijn







Re: [linux-audio-dev] priority inversion & inheritance

2002-07-11 Thread Martijn Sipkema

> [constant nframes]
> > But why is this needed? The only valid argument I heard for this is
> > optimization of frequency domain algorithm latency. I suggested a
> > capability interface for JACK as in EASI where it is possible to ask
> > whether nframes is constant. The application must still handle the case
> > where it is not.
> 
> For freq domain stuff its not an optimisation, if the constant CPU time
> per process() requirement is to be met it is a requirement.

But is constant CPU time per process() a requirement? It is only an optimum
IMHO.
When doing large (more than one period) FFTs this will not be possible. So
the requirement is to never use more CPU time in any callback than is available.
This also means that hardware that would use callbacks for which the time
available for process() is much smaller than the time (data) process() is
called for is a bad design. Note that this is not the case for the constant
rate interrupt audio hardware (although it still arguably is a bad design).

> Equally we could add a buffering FIFO to hypothetical cards which produce
> non constant frames per period. I think this would affect far fewer
> people.

Perhaps, but it would affect them all the time. Also this buffer would have
to be fairly large relative to the hardware buffer in order to not have the
samples to process/time to process fluctuate too much.

--martijn








Re: [linux-audio-dev] priority inversion & inheritance

2002-07-11 Thread Martijn Sipkema

> One simple reason, whether a valid design or not, is that there's a lot of
> code that handles audio in constant size blocks.

Ok. I give up. It's clear that most find this more important than the
issue with hardware.

> For instance if you have
> a mixer element in the signal graph, it is just easier if all the inputs
> deliver the same amount of data at every iteration. 

Hmm, why? I can see that it is a requirement that at every iteration there
is the same data available at the input(s) as is requested on the output(s).
I don't see what makes a mixer that much easier to implement if the amount
of data to process is the same for every iteration.

> In ecasound the trickiest part is that the i/o subsystem uses buffer 
> objecs to store audio. Each buffer object contains nframes of audio.
> These audio slots are allocated before real-time processing is started and 
> cannot be resized on-the-fly. 
> 
> But the important point is that for low-latency processing, the design
> described above has no real negative sides and I see no need to change it.

And my problem is that if JACK goes over to constant nframes there will be
no need to change it. Even worse, new applications will also assume
nframes == constant.

> With the current JACK implementation this design delivers optimal results
> both in terms of efficiency and latency... _if_ I ignore the
> non-const-nframes issue.

And you still could, for all the cards that have nframes == const. Just add
some buffering for nframes != const.

> If I want to add correct support for the current API, I either have to a)
> change the engine design (basicly from using ringbuffers of audio blocks
> into ringbuffers of audio samples), which involves making changes to
> majority of the interfaces in ecasound\'s codebase (multiple MBs of code!),
> or b), make a compromise on efficicient&latency and add a intermediary
> buffer between the JACK process() and ecasound engine.

Or, only use the intermediate buffer when nframes != constant, assuming there
will be a way to determine that.

[...]
> >> read/write ops or driven by select/poll. In this case the easiest way to
> >> add JACK support is to put a FIFO between the engine and the process()
> >> callbacks. Although priority inheritance could be used here, it doesn't
> > If the FIFO uses a mutex, it should use some priority inversion prevention
> > mechanism, unless both threads run at the same priority. Otherwise there
> > is a potential unbounded block on the mutex.
> 
> The two threads must run with SCHED_FIFO as they both need to complete
> their cycle before the next soundcard interrupt.

What I meant was a traditional application not running SCHED_FIFO but using
a large FIFO to communicate with JACK's process() callback.
And even if they both run SCHED_FIFO, they should then also run at the same
priority.

> As Linux is not a
> real-time OS (and probably even if it was), priority inheritance would
> only solve half of the problem. Calls to disk i/o, network, user i/o and
> other subsystems block without deterministic worst-case bounds. No amount
> of priority (given by priority inheritance) will save your butt if the
> disk head is physically in the wrong place when you need it. On a
> dedicated system you can reserve a separate disk for the audio i/o or
> prevent other processes from using the disk, but in a GPOS like Linux, it
> is always possible that some other process can affect the kernel
> subsystems (for instance, access a file and cause the disk head to move at 
> the worst possible time).

When the disk is not able to supply the samples in time, then there is a
problem :)
Using a fast disk and buffering will normally be sufficient.

> The correct solution is to partition your audio code into real-time
> capable and non-realtime parts and make sure that the non-real-time part
> is never ever able to block the real-time part. In essence this very close
> to the RT<->non-RT separation advocated by RTLinux, just done on a
> different level (between interrupt-driven SCHED_FIFO code and
> timer-interrupt/scheduler driver SCHED_OTHER code).

As I see it, it isn\'t a problem when the non-realtime part blocks the
real-time part, as long as there is a worst case bounded block time for it.

> > There is hardware that just interrupts at a constant rate. With this hardware
> > the frames that are ready aren't exactly constant. You might assume some value,
> > but if it isn't exactly correct then you'll drift.
> 
> Yes, the interrupt intervals and how much data actually is available when
> the software is woken up are two different things. But as the nframes
> count in any case has an upper bound, you are not free to directly use the
> avail_samples count anyways. And natural choice is to always use the
> period_count. I've posted one alternative approach to this to
> jackit-devel, but at least to me it really didn't seem like a viable
> approach.

It is not that easy. Say period_count is constantly 

Re: [linux-audio-dev] priority inversion & inheritance

2002-07-11 Thread Martijn Sipkema

so, to summarize:

nframes != constant
- (better) support for various (stupid) hardware.

nframes == constant:
1- Easier porting of read/write applications.
2- Optimized latency of FFT; FFT done on period size. This would
also require a power of 2 sized period.


So not having nframes constant gives better support for some
(stupid) hardware. Drawback number 1 is not really a problem I think.
The applications will have to be ported to a callback based API
anyway for optimal performance. There exist solutions that enable
these applications to be ported without too much effort, but with
larger latency.

Only in the case that the (constant) period size is a power of 2 will
it be possible to have one period less FFT latency than the general case.
This is an optimization. If the application supports the case where
nframes != constant and not a power of 2 in size, then it could possibly
optimize FFT latency in the case that nframes happened to be constant
and power of 2 sized. If JACK supported a call to request whether 
nframes == constant, then this would be possible.
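
The call I have in mind would be no more than something like this (entirely
hypothetical, not an existing JACK function, and the FFT helpers are
invented too):

#include <jack/jack.h>

/* hypothetical API: non-zero if the driver guarantees the same nframes
   on every process() callback for the lifetime of jackd */
int jack_nframes_is_constant(jack_client_t *client);

extern void use_period_sized_fft(void);
extern void use_general_fft(void);

/* a client would then pick its FFT strategy once, at activation */
void choose_fft_path(jack_client_t *client, jack_nframes_t nframes)
{
    if (jack_nframes_is_constant(client) && (nframes & (nframes - 1)) == 0)
        use_period_sized_fft();   /* one period less latency */
    else
        use_general_fft();        /* general, conservative path */
}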

ASIO is not that good an example I think. ASIO also assumes double
buffering, which not all hardware might use. In the case of ASIO this
requires extra copying.

It is not my decision to make, and there is something to be said for
either, but I'd choose to stick with a non-constant nframes and possibly
add the interface for requesting nframes constness.


--martijn








Re: [linux-audio-dev] priority inversion & inheritance

2002-07-11 Thread Martijn Sipkema

> >Again, I agree with you. That's also why I am against a constant nframes,
> >because there is hardware that really doesn't want nframes constant.
> 
> such as?

How should I know? :)
I heard some yamaha soundcards generate interrupts at a constant rate not
depending on the sample rate. Perhaps there is even hardware that doesn't
generate interrupts itself?
I am not saying this is good hardware design. Having large fluctuations
in nframes will limit the effective available cpu time for processing
(processing time != time to be processed).
But hardware with small variations in the frames per interrupt might exist
and I don't see that many good reasons to not support it.

--martijn







Re: [linux-audio-dev] priority inversion & inheritance

2002-07-11 Thread Martijn Sipkema

> There're two separate problems here. Constant nframes might be required
> even if the application supports engine iteration from outside sources
> (ie. all audio processing happens inside the process() callback).  
> Ecasound is one example of this.

But why is this needed? The only valid argument I heard for this is
optimization of frequency domain algorithm latency. I suggested a
capability interface for JACK as in EASI where it is possible to ask
whether nframes is constant. The application must still handle the case
where it is not.

> Of course with no locking involved,
> there's no need for priority inheritance.

That's right.

> A different problem is apps that have their own loop based on blocking
> read/write ops or driven by select/poll. In this case the easiest way to
> add JACK support is to put a FIFO between the engine and the process()
> callbacks. Although priority inheritance could be used here, it doesn't
> buy as much.

If the FIFO uses a mutex, it should use some priority inversion prevention
mechanism, unless both threads run at the same priority. Otherwise there
is a potential unbounded block on the mutex.

> First of all, due to the extra context switch involved, this approach just
> is not efficient. Secondly, the application loop must still be
> deterministic (to avoid missing deadlines caused by blocking on paging or
> disk i/o) and there's thus no reason to run it without SCHED_FIFO. In
> other words priority inheritance is not needed.

If the threads run at the same priority, then priority inheritance is not
needed.

> > Again, I agree with you. That's also why I am against a constant nframes,
> > because there is hardware that really doesn't want nframes constant.
> 
> What hardware? You've mentioned this quite a few times on jackit-devel,
> but so far without examples. A/D or D/A hardware always runs at a constant
> speed (at least I haven't heard about variable rate sampling).

This is not true. There is hardware that runs with variable sample rate. This
can be used for sync to tape. (The hardware I know that supports this does
generate interrupts on a constant number of samples though)

> Either the
> software continuously polls the audio hw buffers (not really an option) or
> is waken up periodically using interrupts.

There is hardware that just interrupts at a constant rate. With this hardware
the frames that are ready aren't exactly constant. You might assume some value,
but if it isn't exactly correct then you'll drift.

> And nframes should be equal to
> the period size. 

This hardware doesn't have a period size in samples, but one based on time. And
so I think it is wrong to have the driver export an interface that is not used
by the hardware.

And besides that, I'm pretty sure there is hardware that doesn't use power of
2 sized periods. Should that be a requirement too?

--martijn







Re: [linux-audio-dev] priority inversion & inheritance

2002-07-11 Thread Martijn Sipkema

> >An example situation for using priority inheritance
> >might be porting a read/write audio i/o application to a
> >callback based interface without too much effort. This can't in the
> >general case be done without adding latency, if there is no blocking
> >allowed in the callback function. But why not block? It takes time
> 
> because the main reason for doing this is that the r/w app expects to
> be able to generate N1 samples at a time, whereas the callback wants
> only N2, and N1>N2. so when you block-to-get-r/w-thread-to-run, you
> run the r/w-thread for long enough to generate N1 samples. this
> instantly violates real-time constraints based on N2.

That's true. This hasn't got to do with using a mutex in the process()
callback though.

> if the code was willing to generate any number of samples at a time,
> and write them in any sized chunk, then there would be no need for
> such a hack. 

I agree with you that the only real solution is to rewrite the application.
Still, as others said that was not going to happen, the mentioned solution
will work as long as the hardware period size isn't too different from the
number of samples the application processes per read/write. I mentioned
this before because I did not think these (read/write) applications are
a good reason to have the process() callback use a constant nframes (during
jackd lifetime).

> it therefore doesn't solve the underlying problem:
> 
>* a device driver API that was written to hide the real-time 
>nature of the device behind the Unix open/read/write/close model
>* an application that used the API.
>* a desire to run the application in a low latency environment.

The latency of these applications can never be lower than N1 (from above)
anyway. But in some of these applications N1 might be set very small at
compile time. N1 < N2 isn't really a problem.

It's best to just rewrite these applications to support a callback based
audio i/o api natively though. There will always be other uses of
priority inheritance :)

> its the API that is central here. you've got a device that absolutely
> wants a certain amount of data read and/or written every N
> msecs. creating an API that allows the app to pretend this is not true
> is the root of the problem, IMHO.

Again, I agree with you. That's also why I am against a constant nframes,
because there is hardware that really doesn't want nframes constant.

--martijn







Re: [linux-audio-dev] priority inversion & inheritance

2002-07-11 Thread Martijn Sipkema

> Linux is not real-time, it has a scheduler that, generally, makes sure
> eventually everyone gets to run.  I think people often underestimate
> how useful a "live" scheduler is and how limited a real-time priority
> scheduler is.

I agree. That's why it is needed. And for realtime threads SCHED_FIFO/RR
is used. I fail to see what this has to do with priority inheritance. Are
you stating that Linux isn\'t suitable for any realtime use anyway and that
I should in fact be using RTLinux?

> > If the article is saying the programmer should be protected from misuse
> > of priority inheritance by not supplying it, isn't that like Pascal? (I
> > never liked Pascal).
> 
> I think any use of priority inheritance is a misuse, and I am against
> making the rest of the OS slow in order to provide it.

Well, I disagree. You have been stating this for years (and all this time,
without a good reason). Now I still do not see a valid reason in your
article.

Apparently other people did see valid use for
priority inheritance (me included). Does implementing priority inheritance
make the operating system slower even when not using it?

--martijn








Re: [linux-audio-dev] priority inversion & inheritance

2002-07-10 Thread Martijn Sipkema

> victor yodaiken wrote a nice article on why priority inheritance as a
> way of handling priority inversion isn't a particularly attractive
> solution.

I read this article, but I am not convinced. The only argument against
using priority inheritance is that it is supposed to have poor
performance. If that is true, then perhaps using priority ceiling is
better. For audio applications this would probably be easy, since they
are not that complex.

An example situation for using priority inheritance
might be porting a read/write audio i/o application to a
callback based interface without too much effort. This can't in the
general case be done without adding latency, if there is no blocking
allowed in the callback function. But why not block? It takes time,
but it has a worst case bounded execution time. Yet only if there is
no priority inversion. And the only way to prevent that, without
knowing the priority of the callback function, is using priority
inheritance. (of course the read/write thread might want to be running
high priority SCHED_FIFO already in which case a normal mutex would
work also).

I think the article has some good examples of when not to use priority
inheritance, but there are still situations where using it is ok.

What do we have now to handle priority inversion in Linux? We haven't
got priority ceiling or priority inheritance. Should we set a high
priority just before acquiring a mutex that is used by a high priority
thread and lower the priority after it is released? Is this as efficient
as priority ceiling?

Is any design where a high and a low priority thread share access to some
resource flawed?

If the article is saying the programmer should be protected from misuse
of priority inheritance by not supplying it, isn't that like Pascal? (I
never liked Pascal).

--martijn










Re: [linux-audio-dev] apply buys emagic

2002-07-01 Thread Martijn Sipkema

>Apple acquired Emagic.
> 
>no Windows version of any Logic after Sept 30
> 
>What implies that to us? Any guess?
> 
> haven't seen any other news about it yet.

I saw it on http://www.heise.de
I sure hope that Emagic will now be willing to release the specifications
of their AMT protocol for MIDI interfaces.

Logic is IMHO the best MIDI sequencer and for pc users that
aren't ready to go over to mac, perhaps this makes switching to Linux
easier. Alas there isn't a sequencer in the league of Logic for Linux
yet, and there is still VST/Cakewalk for Windows.

PS

Personally I don't like all these big companies acquiring smaller ones.

--martijn






Re: [linux-audio-dev] RFC: API for audio across network - inter-host audio routing

2002-06-14 Thread Martijn Sipkema

> Going back to the issue of latency, it should be pointed out that while 
> it might not be a big deal if your softsynth takes 25 ms to trigger, 

It is unless you only use it with a sequencer.

> latency on the PCI bus is a big problem.  If you can't get data from 
> your HD (or RAM)

From memory I think. (is it possible for an audio card to read directly
from the hard disk??).

> to your soundcard fast enough
>, you _will_ hear dropouts 
> in the audio.  The more tracks, the higher the sampling rate, and the 
> longer the wordlength, the greater the problem becomes.  I think this is 
> what people mostly are worried about when they talk about latency 
> problems.

Yes, the PCI bus adds some extra latency. But that's not a problem.
The bandwidth I think will become a problem, but only with very large
numbers of channels at high sample rates.

>A 25 ms dropout in the audiostream is quite noticeable and 
> annoying.  The discussions (which I did contribute to) on latency in 
> acoustic instruments touched on the subject that trained performers of 
> those instruments have learned to compensate for the inevitable delay 
> between articulation and sound.  Bus latency, however, is a completely 
> different story

If the bus has sufficient bandwidth there should be no dropouts. 25ms is
an extremely long period for the PCI bus. I'm not sure what the normal
buffer size for PCI transfers is, but it is on the order of a few hundred
usec worth of samples; in the case of the audiowerk8 (philips saa7146),
a 24 Dword buffer per DMA channel (8 bytes per audio frame), i.e. 96 bytes
or 12 frames, roughly 270 usec at 44.1 kHz.
--martijn






Re: [linux-audio-dev] RFC: API for audio across network - inter-host audio routing

2002-06-14 Thread Martijn Sipkema

> if i read this correctly, it's about latency wrt _another_player_. all
> trained ensemble musicians are easily able to compensate for the rather
> long delays that occur on normal stages.  not *hearing_oneself_in_time*
> is a completely different thing. if i try to groove on a softsynth, 10
> ms response time feels ugly on the verge of unusable (provided my
> calculations and measurements on latency are correct), and i'm not even
> very good.

Well, I just did some guitar playing through my boss gx700 with various
delay settings. Indeed 10ms delay is noticeable, but I wouldn't go as
far as ugly/unusable. You could just feel there was a slight delay. But
really anything below 10ms is realtime. I don't think a 5ms delay is
a problem for playing any instrument. I don't know if the gx700 (or
effect units in general) has an additional delay from adc/dac or the
way it processes.

> let a drummer play an electronic trigger that does not make any sound by
> itself, and feed him a triggered synth drum over his headphones with 5
> ms latency, and he will kill you. his/her body control will be off to
> hell in a handbasket if the actual motion and the sound are not totally
> in sync, which means the drumtrack will be garbage and the drummer will
> suffer from increased muscular strain.

This already happens with 5ms??

> as some people mentioned, some instruments have a long "natural"
> latency, so the players have learnt to compensate, and the latency is
> part of the "feel". but then, this is not true for most percussive or
> plucked instruments.

Indeed when playing pads on a synth it would be hard to even notice
a 40ms delay.

I once read somewhere that some digital mixers have a 6ms delay.

--martijn







Re: [linux-audio-dev] RFC: API for audio across network - inter-host audio routing

2002-06-13 Thread Martijn Sipkema

> How about the 1.0-1.5 ms latencies that everybody tries to obtain (or already
> has) in both Linux/Win world? That always made me wonder if this isn't just
> hype like the 192 kHz issue.
>
> I'm not a professional musician, but a 25 ms latency makes me more than
> happy.

I would say that for playing a software synthesizer in realtime a latency of
10ms or less is ok. I'm not sure what latency most hardware synthesizers
have, but it will probably not be much better, sometimes even more than
10ms. The transmission of a noteon/off MIDI command alone already uses about
1ms (3 bytes * 320usec = 960usec, no running status).


--martijn






Re: [linux-audio-dev] RFC: API for audio across network - inter-host audio routing

2002-06-13 Thread Martijn Sipkema

> You certainly can't play an instrument with 10ms
> latency.

in 10ms sound travels somewhat more than 3 meters.
that's why I use nearfield monitors :)

--martijn





