Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
David Woodhouse wrote:
> On Thu, 1 Feb 2001, Pavel Machek wrote:
> > I thought that Vtech Helio folks already have XIP supported...
>
> Plenty of people are doing XIP of the kernel. I'm not aware of anyone
> doing XIP of userspace pages.

uClinux does XIP (read-only) for userspace programs in the Dragonball
port. Of course it's a different executable format than Linux, so there
are some hooks for it.

--
Joe
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
Please read the FAQ at http://www.tux.org/lkml/
Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
On Thu, 1 Feb 2001, Pavel Machek wrote:
> I thought that Vtech Helio folks already have XIP supported...

Plenty of people are doing XIP of the kernel. I'm not aware of anyone
doing XIP of userspace pages.

--
dwmw2
Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
Hi!

> > I wasn't thinking of running the kernel XIP from writable, but even
> > trying to do that from the filesystem is a mess. If you're going to be
> > that way about it...
>
> Heh. I am. Read-only XIP is going to be doable, but writable XIP means
> that any time you start to write to the flash chip, you have to find all
> the

I thought that Vtech Helio folks already have XIP supported...

Pavel
--
I'm [EMAIL PROTECTED] "In my country we have almost anarchy and I don't
care." Panos Katsaloulis describing me w.r.t. patents at [EMAIL PROTECTED]
Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
Hi!

> > There has been surprisingly little discussion here about the
> > desirability of a preemptible kernel.
>
> And I think that is a very interesting topic... (certainly more
> interesting than hotmail's firewalling policy ;o)
>
> Alright, so suppose I dream up an application which I think really
> really needs preemption (linux heart pacemaker project? ;o) I'm just not
> convinced that linux would ever be the correct codebase to start with.
> The fundamental design of every driver in the system presumes that there
> is no preemption.

Nonsense. SMP plus an SMM BIOS is *very* similar to a preemptible kernel.
SMP means that you can run two pieces of code in the kernel at the same
time. With a preemptible kernel "the same time" has rather bigger
granularity, but that's a minor difference. And an SMI BIOS means that the
CPU can be stopped for an arbitrary time while it does its housekeeping.
(Going suspend-to-disk?)

> A recent example I came across is in the MTD code which invokes the
> erase algorithm for CFI memory. This algorithm spews a command sequence
> to the flash chips followed by a list of sectors to erase. Following
> each sector address, the chip will wait for 50usec for another address,
> after which timeout it begins the erase cycle. With a RTLinux-style

With an SMM BIOS, this is already br0ken.

> So what is the solution in the preemption case? Should we re-write every
> driver to handle the preemption? Do we need a cli_yes_i_mean_it() for

You can disable SMM interrupts, AFAIK.

Pavel
--
I'm [EMAIL PROTECTED] "In my country we have almost anarchy and I don't
care." Panos Katsaloulis describing me w.r.t. patents at [EMAIL PROTECTED]
Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
Joe deBlaquiere wrote:
~snip~
> The logical answer is run with HZ=1 so you get 100us intervals,
> right ;o).

Let's not assume we need the overhead of HZ=1 to get 100us alarm/timer
resolution. How about a timer that ticks when we need the next tick...

> On systems with multiple hardware timers you could kick off a
> single event at 200us, couldn't you? I've done that before with the
> extra timer assigned exclusively to a resource.

With the right hardware resource, one high-res counter can give you all
the various tick resolutions you need. BTDT on HPRT.

George

> It's not a giant time slice, but at least you feel like you're
> allowing something to happen, right?
>
> --
> dwmw2
Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
[EMAIL PROTECTED] said:
> A recent example I came across is in the MTD code which invokes the
> erase algorithm for CFI memory. This algorithm spews a command
> sequence to the flash chips followed by a list of sectors to erase.
> Following each sector address, the chip will wait for 50usec for
> another address, after which timeout it begins the erase cycle. With
> a RTLinux-style approach the driver is eventually going to fail to
> issue the command in time.

That code is within spin_lock_bh(), isn't it? So with the current
preemption approach, it's not going to get interrupted except by a real
interrupt, which hopefully won't take too long anyway.

spin_lock_bh() is used because eventually we're intending to stop the
erase routine from waiting for completion, and make it poll for
completion from a timer routine. We need protection against concurrent
access to the chip from that timer routine. But perhaps we could be
using spin_lock_irq() to prevent us from being interrupted and failing
to meet the timing requirements for subsequent commands to the chip, if
IRQ handlers really do take too long.

--
dwmw2
Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
Andrew Morton wrote:
> There has been surprisingly little discussion here about the
> desirability of a preemptible kernel.

And I think that is a very interesting topic... (certainly more
interesting than hotmail's firewalling policy ;o)

Alright, so suppose I dream up an application which I think really really
needs preemption (linux heart pacemaker project? ;o) I'm just not
convinced that linux would ever be the correct codebase to start with.
The fundamental design of every driver in the system presumes that there
is no preemption.

A recent example I came across is in the MTD code which invokes the erase
algorithm for CFI memory. This algorithm spews a command sequence to the
flash chips followed by a list of sectors to erase. Following each sector
address, the chip will wait for 50usec for another address, after which
timeout it begins the erase cycle. With a RTLinux-style approach the
driver is eventually going to fail to issue the command in time. There
isn't any logic to detect and correct the preemption case, so it just
gets confused and thinks the erase failed. Ergo, RTLinux and MTD are
mutually exclusive. (I should probably note that I do not intend this as
an indictment of RTLinux or MTD, but just as an example of why preemption
breaks the Linux driver model.)

So what is the solution in the preemption case? Should we re-write every
driver to handle the preemption? Do we need a cli_yes_i_mean_it() for the
cases where disabling interrupts is _absolutely_ required? Do we push
drivers like MTD down into preemptable-Linux? Do we push all drivers
down?

In the meantime, fixing the few places where the kernel spends an
extended period of time performing a task makes sense to me. If you're
going to be busy for a while, it is 'courteous' to allow the scheduler a
chance to give some time to other threads. Of course it's hard to know
when to draw the line.

So now I am starting to wonder about what needs to be profiled. Is there
a mechanism in place now to measure the time spent with interrupts off,
for instance? I know this has to have been quantified to some extent,
right?

--
Joe deBlaquiere
Red Hat, Inc.
307 Wynn Drive
Huntsville AL, 35805
voice : (256)-704-9200
fax : (256)-837-3839
Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
Bill Huey wrote:
> Andrew Morton's patch uses < 10 rescheduling points (maybe less from
> memory)

err... It grew. More like 50 now reiserfs is in there. That's counting
real instances - it's not counting ones which are expanded multiple times
as "1". It could be brought down to 20-25 with good results.

It seems to have a 1/x distribution - double the reschedule count, halve
the latency. We're currently doing 300-400 usecs. I think a
1.5-millisecond @ 500MHz kernel would be a good, maintainable solution
and a sensible compromise.
Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
[EMAIL PROTECTED] wrote:
> ...
> I suggest that you get your hearing checked. I'm fully in favor of
> sensible low latency Linux. I believe however that low latency in
> Linux will
> A. be "soft realtime", close to deadline most of the time.
> B. millisecond level on present hardware
> C. Best implemented by careful algorithm design instead of
> "stuff the kernel with resched points" and hope for the best.

Point C would be nice, but I don't believe it will happen because of
a) the sheer number of problem areas, b) the complexity of fixing them
this way and c) the low level of motivation to make Linux perform well
in this area.

Main problem areas are the icache, dcache, pagecache, buffer cache, slab
manager, filemap and filesystems. That's a lot of cantankerous cats to
herd.

In many cases it just doesn't make sense. If we need to unmap 10,000
pages, well, we need to unmap 10,000 pages. The only algorithmic
redesign we can do here is to free them in 500-page blobs. That's silly
because we're unbatching work which can be usefully batched. You're much
better off unbatching the work *on demand* rather than by prior
decision. And the best way of doing that is, yup, by peeking at
current->need_resched, or by preempting the kernel.

There has been surprisingly little discussion here about the
desirability of a preemptible kernel.

> Nice marketing line, but it is not working code.

Guys, please don't.
Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
On Sun, Jan 28, 2001 at 06:14:28AM -0700, [EMAIL PROTECTED] wrote:
> > Yes, I most emphatically do disagree with Victor! IRIX is used for
> > mission-critical audio applications - recording as well as playback -
> > and
>
> And it has bloat, it's famously buggy, it is impossible to maintain, ...

However, that doesn't fault its concepts and its original goals. This
kind of stuff is often more of an implementation and bad-abstraction
issue than a matter of faulty design and end goals.

> > used. I will be very happy when Linux is as good in all these areas,
> > and I'm working hard to achieve this goal with negligible impact on
> > the current Linux "sweet-spot" applications such as web serving.
>
> As stated previously: I think this is a proven improbability and I have
> not seen any code or designs from you to show otherwise.

Andrew Morton's patch uses < 10 rescheduling points (maybe less, from
memory), in controlled, focused and logical places. It's certainly not an
unmaintainable mammoth, unlike previous attempts, since Riel (many
thanks) has massively cleaned up the VM layer by using more reasonable
algorithms, etc...

> I suggest that you get your hearing checked. I'm fully in favor of
> sensible low latency Linux. I believe however that low latency in
> Linux will
> A. be "soft realtime", close to deadline most of the time.

Which is very good and maintainable with Andrew's patches.

> B. millisecond level on present hardware

Also very good and usable for many applications, short of writing
dedicated code on specialized DSP cards.

> C. Best implemented by careful algorithm design instead of
> "stuff the kernel with resched points" and hope for the best.

Algorithms? Which ones? The VM layer, the scheduler? It seems there's
enough in the Linux kernel to start doing interesting stuff, assuming
there's a large enough media crowd willing to do the userspace
programming.

> > for low-latency tasks. RTLinux is not Linux, it is a separate
> > environment with a separate, limited set of APIs. You can't run XMMS,
> > or any other existing Linux audio app in RTLinux. I want a low-latency
> > Linux, not just another RTOS living parasitically alongside Linux.
>
> Nice marketing line, but it is not working code.

Meaning what? How does that response answer his criticism?

bill
Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
On Sun, Jan 21, 2001 at 06:21:05PM -0800, Nigel Gamble wrote:
> Yes, I most emphatically do disagree with Victor! IRIX is used for
> mission-critical audio applications - recording as well as playback -
> and other low-latency applications. The same OS scales to large numbers
> of CPUs. And it has the best desktop interactive response of any OS I've

And it has bloat, it's famously buggy, it is impossible to maintain, ...

> used. I will be very happy when Linux is as good in all these areas,
> and I'm working hard to achieve this goal with negligible impact on the
> current Linux "sweet-spot" applications such as web serving.

As stated previously: I think this is a proven improbability and I have
not seen any code or designs from you to show otherwise.

> I agree. I'm not wedded to any particular design - I just want a
> low-latency Linux by whatever is the best way of achieving that.
> However, I am hearing Victor say that we shouldn't try to make Linux
> itself low-latency, we should just use his so-called "RTLinux"
> environment

I suggest that you get your hearing checked. I'm fully in favor of
sensible low latency Linux. I believe however that low latency in Linux
will
A. be "soft realtime", close to deadline most of the time.
B. millisecond level on present hardware
C. Best implemented by careful algorithm design instead of
"stuff the kernel with resched points" and hope for the best.

RTLinux's main focus is hard realtime: a few microseconds here and there
are critical for us and for the applications we target. For consumer
audio, this is overkill and vanilla Linux should be able to provide
services reasonably well. But ...

> for low-latency tasks. RTLinux is not Linux, it is a separate
> environment with a separate, limited set of APIs. You can't run XMMS,
> or any other existing Linux audio app in RTLinux. I want a low-latency
> Linux, not just another RTOS living parasitically alongside Linux.

Nice marketing line, but it is not working code.

--
Victor Yodaiken
Finite State Machine Labs: The RTLinux Company.
www.fsmlabs.com www.rtlinux.com
Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
Hi!

> > And making the kernel preemptive might be the best way to do that
> > (and I'm saying "might"...).
>
> Keep in mind that Ken Thompson & Dennis Ritchie did not decide on a
> non-preemptive strategy for UNIX because they were unaware of such
> methods or because they were stupid. And when Rob Pike designed a new
> "unix", Plan 9, note that it has a non-preemptive kernel, and the core
> Linux designers have rejected preemptive kernels too. Now it is
> certainly possible that things have changed and/or all these folks are
> just plain wrong. But I wouldn't bet too much on it.

Wrong. It was Linus who suggested how to do a preemptive kernel nicely. I
guess he counts as a core Linux designer ;-).

Pavel
--
I'm [EMAIL PROTECTED] "In my country we have almost anarchy and I don't
care." Panos Katsaloulis describing me w.r.t. patents at [EMAIL PROTECTED]
Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
Nigel Gamble wrote:
> Yes, I most emphatically do disagree with Victor! IRIX is used for
> mission-critical audio applications - recording as well as playback -
> and other low-latency applications. The same OS scales to large numbers
> of CPUs. And it has the best desktop interactive response of any OS
> I've used. I will be very happy when Linux is as good in all these
> areas, and I'm working hard to achieve this goal with negligible impact
> on the current Linux "sweet-spot" applications such as web serving.

I have to agree - when I worked at the University of California, a number
of us had SGI Indys in our offices. The desktop was lightning fast, and
the graphics were awesome. This is no news to anybody, since SGI is known
for graphics.

The big surprise, however, came when we were trying to find the best NFS
server platform and benchmarked the SGI just for fun - as it turns out, a
little Indy workstation blew away all the other platforms, including
some rather large, expensive SPARC boxes, as an NFS server. So Irix
clearly showed the best of both worlds - great latency and great
throughput.

I guess what I'm saying is, there are a lot of proven concepts in Irix
which work well in real-life situations - don't throw out the baby with
the bath water.

jjs
Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
On Sun, 21 Jan 2001, Paul Barton-Davis wrote:
> >Let me just point out that Victor has his own commercial axe to grind
> >in his continual bad-mouthing of IRIX, the internals of which he knows
> >nothing about.
>
> 1) do you actually disagree with victor ?

Yes, I most emphatically do disagree with Victor! IRIX is used for
mission-critical audio applications - recording as well as playback - and
other low-latency applications. The same OS scales to large numbers of
CPUs. And it has the best desktop interactive response of any OS I've
used. I will be very happy when Linux is as good in all these areas, and
I'm working hard to achieve this goal with negligible impact on the
current Linux "sweet-spot" applications such as web serving.

> this discussion has the hallmarks of turning into a personal
> bash-fest, which is really pointless. what is *not* pointless is a
> considered discussion about the merits of the IRIX "RT" approach over
> possible approaches that Linux might take which are dissimilar to the
> IRIX one. on the other hand, as Victor said, a large part of that
> discussion ultimately comes down to a design style rather than hard
> factual or logical reasoning.

I agree. I'm not wedded to any particular design - I just want a
low-latency Linux by whatever is the best way of achieving that. However,
I am hearing Victor say that we shouldn't try to make Linux itself
low-latency, we should just use his so-called "RTLinux" environment for
low-latency tasks. RTLinux is not Linux, it is a separate environment
with a separate, limited set of APIs. You can't run XMMS, or any other
existing Linux audio app, in RTLinux. I want a low-latency Linux, not
just another RTOS living parasitically alongside Linux.

Nigel Gamble    [EMAIL PROTECTED]
Mountain View, CA, USA.    http://www.nrg.org/
Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
>Let me just point out that Victor has his own commercial axe to grind in
>his continual bad-mouthing of IRIX, the internals of which he knows
>nothing about.

1) do you actually disagree with victor ?

2) victor is not the only person who has expressed this opinion. the most
prolific irix critic seems to be larry mcvoy, who certainly claims to
know quite a bit about the internals.

this discussion has the hallmarks of turning into a personal bash-fest,
which is really pointless. what is *not* pointless is a considered
discussion about the merits of the IRIX "RT" approach over possible
approaches that Linux might take which are dissimilar to the IRIX one. on
the other hand, as Victor said, a large part of that discussion
ultimately comes down to a design style rather than hard factual or
logical reasoning.

Paul Davis <[EMAIL PROTECTED]>
Bala Cynwyd, PA, USA
Linux Audio Systems 610-667-4807

hybrid rather than pure; compromising rather than clean; distorted rather
than straightforward; ambiguous rather than articulated; both-and rather
than either-or; the difficult unity of inclusion rather than the easy
unity of exclusion. Robert Venturi
Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
On Sat, 20 Jan 2001 [EMAIL PROTECTED] wrote:
> Let me just point out that Nigel (I think) has previously stated that
> the purpose of this approach is to bring the stunning success of
> IRIX style "RT" to Linux. Since some of us believe that IRIX is a
> virtual handbook of OS errors, it really comes down to a design style.
> I think that simplicity and "does the main job well" wins every time
> over "really cool algorithms" and "does everything badly". Others
> disagree.

Let me just point out that Victor has his own commercial axe to grind in
his continual bad-mouthing of IRIX, the internals of which he knows
nothing about.

Nigel Gamble    [EMAIL PROTECTED]
Mountain View, CA, USA.    http://www.nrg.org/
Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
On Fri, Jan 12, 2001 at 07:45:43PM -0700, Jay Ts wrote:
> Andrew Morton wrote:
> >
> > Jay Ts wrote:
> > >
> > > Now about the only thing left is to get it included
> > > in the standard kernel. Do you think Linus Torvalds is more likely
> > > to accept these patches than Ingo's? I sure hope this one works out.
> >
> > We (or "he") need to decide up-front that Linux is to become
> > a low latency kernel. Then we need to decide the best way of
> > doing that.
> >
> > Making the kernel internally preemptive is probably the best way of
> > doing this. But it's a *big* task
>
> Ouch. Yes, I agree that the ideal path is for Linus and the other
> kernel developers and ... well, just about everyone ... is to create
> a long-range strategy and 'roadmap' that includes support for
> low-latency.
>
> And making the kernel preemptive might be the best way to do that
> (and I'm saying "might"...).

Keep in mind that Ken Thompson & Dennis Ritchie did not decide on a
non-preemptive strategy for UNIX because they were unaware of such
methods or because they were stupid. And when Rob Pike designed a new
"unix", Plan 9, note that it has a non-preemptive kernel, and the core
Linux designers have rejected preemptive kernels too. Now it is certainly
possible that things have changed and/or all these folks are just plain
wrong. But I wouldn't bet too much on it.

--
Victor Yodaiken
Finite State Machine Labs: The RTLinux Company.
www.fsmlabs.com www.rtlinux.com
Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
Let me just point out that Nigel (I think) has previously stated that the
purpose of this approach is to bring the stunning success of IRIX style
"RT" to Linux. Since some of us believe that IRIX is a virtual handbook
of OS errors, it really comes down to a design style. I think that
simplicity and "does the main job well" wins every time over "really
cool algorithms" and "does everything badly". Others disagree.

On Sat, Jan 13, 2001 at 12:30:46AM +1100, Andrew Morton wrote:
> Nigel Gamble wrote:
> >
> > Spinlocks should not be held for lots of time. This adversely affects
> > SMP scalability as well as latency. That's why MontaVista's kernel
> > preemption patch uses sleeping mutex locks instead of spinlocks for
> > the long held locks.
>
> Nigel,
>
> what worries me about this is the Apache-flock-serialisation saga.
>
> Back in -test8, kumon@fujitsu demonstrated that changing this:
>
>     lock_kernel()
>     down(sem)
>     ...
>     up(sem)
>     unlock_kernel()
>
> into this:
>
>     down(sem)
>     ...
>     up(sem)
>
> had the effect of *decreasing* Apache's maximum connection rate
> on an 8-way from ~5,000 connections/sec to ~2,000 conn/sec.
>
> That's downright scary.
>
> Obviously, the critical section was very quick, and the CPUs were
> passing through this section at a great rate.
>
> How can we be sure that converting spinlocks to semaphores
> won't do the same thing? Perhaps for workloads which we
> aren't testing?
>
> So this needs to be done with caution.
>
> As davem points out, now we know where the problems are
> occurring, a good next step is to redesign some of those
> parts of the VM and buffercache. I don't think this will
> be too hard, but they have to *want* to change :)
>
> Some of those algorithms are approximately O(N^2), for huge
> values of N.

--
Victor Yodaiken
Finite State Machine Labs: The RTLinux Company.
www.fsmlabs.com www.rtlinux.com
Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
On Sat, Jan 13, 2001 at 12:01:04PM +1100, Andrew Morton wrote:
> Tim Wright wrote:
> [...]
> > p_lock(lock);
> > retry:
> > ...
> > if (condition where we need to sleep) {
> >     p_sema_v_lock(sema, lock);
> >     /* we got woken up */
> >     p_lock(lock);
> >     goto retry;
> > }
> > ...
>
> That's an interesting concept. How could this actually be used
> to protect a particular resource? Do all users of that
> resource have to claim both the lock and the semaphore before
> they may access it?

Ahh, I thought I might have been a tad terse in my explanation. No, the
idea here is that the spinlock guards access to the data structure we're
concerned about. The sort of code I was thinking about would be where we
need to allocate a data structure. We attempt to grab it from the
freelist, and if successful, everything is fine. Otherwise, we need to
sleep waiting for some resources to be freed up, so we atomically drop
the lock and sleep on the allocation semaphore. The freeing-up path is
also protected by the same lock, and would do something like 'if (there
are sleepers) wake(sleepers)'. This wakes up the sleeper, who grabs the
spinlock and retries the alloc. The result is no races, but we don't
spin or hold the lock for a long time.

It doesn't have to be an allocation. The same idea works for e.g.
protecting access to "buffer cache" (not necessarily Linux) data, and
then atomically releasing the lock and sleeping while waiting for an I/O
to happen.

> There are a number of locks (such as pagecache_lock) which in the
> great majority of cases are held for a short period, but are
> occasionally held for a long period. So these locks are not
> a performance problem, they are not a scalability problem but
> they *are* a worst-case-latency problem.

Understood. Whether the above metaphor works depends on whether or not
the "holding for a long time" case fits this pattern; at this stage, I'm
not sufficiently familiar with the Linux VM code. I'm in the process of
rectifying that problem :-)

Regards,
Tim

--
Tim Wright - [EMAIL PROTECTED] or [EMAIL PROTECTED] or [EMAIL PROTECTED]
IBM Linux Technology Center, Beaverton, Oregon
"Nobody ever said I was charming, they said "Rimmer, you're a git!"" RD VI
Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
"David S. Miller" wrote:
>
> Nigel Gamble writes:
> > That's why MontaVista's kernel preemption patch uses sleeping mutex
> > locks instead of spinlocks for the long held locks.
>
> Anyone who uses sleeping mutex locks is asking for trouble. Priority
> inversion is an issue I dearly hope we never have to deal with in the
> Linux kernel, and sleeping SMP mutex locks lead to exactly this kind
> of problem.

Exactly why we are going to use priority-inheritance mutexes. This
handles the inversion nicely.

George
video drivers hog pci bus ? [was:[linux-audio-dev] low-latency scheduling patch for 2.4.0]
[alsa folks, i'd appreciate a comment on this thread from linux-audio-dev] hello everyone ! in a post related to his latest low-latency patch, andrew morton gave a pointer to http://www.zefiro.com/vgakills.txt , which addresses the problem of dropped samples due to agressive video drivers hogging the pci bus with retry attempts to optimize benchmark results while producing a "zipper" noise, e.g. when moving windows around with the mouse while playing a soundfile. some may have tried fiddling with the "pci retry" option in the XF86Config (see the linux audio quality howto by paul winkler at http://www.linuxdj.com/audio/quality for details). i recall some people having reported mysterious l/r swaps w/ alsa drivers on some cards, and iirc, most of these reports were not easily reproduced and explained. the zefiro paper states that the zefiro cards would swap channels occasionally under the circumstances mentioned. it sounds probable to me that all drivers using interleaved data would suffer from this problem. can some more experienced people comment on this ? is my assumption correct that the bus hogging behaviour is affected by the pci_retry option ? btw: the text only mentions pci video cards. will agp cards also clog the pci bus ? please give some detail in your answers - i would like to include this in the linux-audio-dev faq and resources pages. (so chances are you will only have to answer this once :) sorry if this has been dealt with before, i seem to have trouble to follow all my mailing lists... regards, jörn Andrew Morton wrote: > > > > > > A patch against kernel 2.4.0 final which provides low-latency > > > scheduling is at > > > > > > http://www.uow.edu.au/~andrewm/linux/schedlat.html#downloads > > > > > > Some notes: > > > > > > - Worst-case scheduling latency with *very* intense workloads is now > > > 0.8 milliseconds on a 500MHz uniprocessor. > Neither, I think. > > We can't apply some patch and say "there; it's low-latency". 
> > We (or "he") need to decide up-front that Linux is to become > a low latency kernel. Then we need to decide the best way of > doing that. > > Making the kernel internally preemptive is probably the best way of > doing this. But it's a *big* task to which must beard-scratching must > be put. It goes way beyond the preemptive-kernel patches which have > thus far been proposed. > > I could propose a simple patch for 2.4 (say, the ten most-needed > scheduling points). This would get us down to maybe 5-10 milliesconds > under heavy load (10-20x improvement). > > That would probably be a great and sufficient improvement for > the HA heartbeat monitoring apps, the database TP monitors, > the QuakeIII players and, of course, people who are only > interested in audio record and playback - I'd need advice > from the audio experts for that. > > I hope that one or more of the desktop-oriented Linux distributors > discover that hosing HTML out of gigE ports is not really the > One True Appplication of Linux, and that they decide to offer > a low-latency kernel for the other 99.99% of Linux users. > > > > Well it's extremely nice to see NFS included at least. I was really > > worried about that one. What about Samba? (Keeping in mind that > > serious "professional" musicians will likely have their Linux systems > > networked to a Windows box, at least until they have all the necessary > > tools on Linux. > > > > - If you care about latency, be *very* cautious about upgrading to > > > XFree86 4.x. I'll cover this issue in a separate email, copied > > > to the XFree team. > > I haven't gathered the energy to send it. > > The basic problem with many video cards is this: > > Video adapters have on-board command FIFOs. They also > have a "FIFO has spare room" control bit. > > If you write to the FIFO when there is no spare room, > the damned thing busies the PCI bus until there *is* > room. This can be up to twenty *milliseconds*. 
> > This will screw up realtime operating systems, > will cause network receive overruns, will screw > up isochronous protocols such as USB and 1394 > and will of course screw up scheduling latency. > > In xfree3 it was OK - the drivers polled the "spare room" > bit before writing. But in xfree4 the drivers are starting > to take advantage of this misfeature. I am told that > a significant number of people are backing out xfree4 > upgrades because of this. For audio. > > The manufacturers got caught out by the trade press > in '98 and '99 and they added registry flags to their > drivers to turn off this obnoxious behaviour. > > What needs to happen is for the xfree guys to add a > control flag to XF86Config for this. I believe they > have - it's called `PCIRetry'. > > I believe PCIRetry defaults to `off'. This is bad. > It should default to `on'. > > You can read about this minor scandal at the following > URLs: > > http://www.zefiro.com/vgakills.txt > http://www.zdnet.com/pcmag/news/tre
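For anyone looking for the knob both posts mention, it lives in the Device section of XF86Config. This is a minimal sketch only - the exact spelling (`pci_retry' vs `PCIRetry'), its default, and whether a given driver honors it at all vary by driver and XFree86 version, so treat it as illustrative and check your driver's documentation:

```
Section "Device"
    Identifier "Card0"
    # Ask the driver to poll the FIFO "spare room" bit instead of
    # letting the card stall the PCI bus with retries (driver-dependent):
    Option "pci_retry"
EndSection
```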
Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
Andrew Morton wrote: > > Jay Ts wrote: > > > > Now about the only thing left is to get it included > > in the standard kernel. Do you think Linus Torvalds is more likely > > to accept these patches than Ingo's? I sure hope this one works out. > > We (or "he") need to decide up-front that Linux is to become > a low latency kernel. Then we need to decide the best way of > doing that. > > Making the kernel internally preemptive is probably the best way of > doing this. But it's a *big* task Ouch. Yes, I agree that the ideal path is for Linus and the other kernel developers and ... well, just about everyone ... to create a long-range strategy and 'roadmap' that includes support for low-latency. And making the kernel preemptive might be the best way to do that (and I'm saying "might"...). But all that can take years, if it happens at all, and we may have a short-term approach that will satisfy almost everyone, at least for now, and maybe even allow for the development and maybe even (?) commercial distribution ("shrink wrap") of audio software for Linux. (Er, assuming that the ALSA drivers become the standard audio drivers. Mustn't forget that.) As for actually desiring a preemptive kernel, I'm not a complete expert in this area, but I will say that no one has ever managed to explain to me why the extra complexity is vital, necessary, or just worth the bother. Sure, it would help with the implementation and OS support of the multithreaded and realtime code that I'm developing. So far, I haven't run into any major limitations yet related to lack of a preemptive kernel, but maybe I will later. (?) > I could propose a simple patch for 2.4 (say, the ten most-needed > scheduling points). This would get us down to maybe 5-10 milliseconds > under heavy load (10-20x improvement). 5-10 ms wouldn't be great, but would at least be better than nothing. It would be a good start, perhaps, especially if it were understood that things will get better later on. 
As with the development of SMP support for Linux. > That would probably be a great and sufficient improvement for [...] > people who are only interested in audio record and playback - I'd need advice > from the audio experts for that. Well, call me an audio expert, then. :) What sort of advice do you want? You can send your comments to the LAD (linux audio development) mailing list, and there are a bunch of smart audio/music programmers who I'm pretty sure will be happy to comment. One thing I'd like to say is that simple recording and playback of audio is hardly the complete picture! Try recording and playback of *many* channels of audio, while at the same time running multiple software synthesizers and effects plugins, and recording and playing back MIDI sequences. And other things, too. One thing I ask of anyone who's developing Linux is to please think in an open-ended manner regarding audio/music. This is really still a pretty new and immature field, and the software (when the Real Stuff gets to Linux, that is) will be happy to absorb whatever hardware resources are thrown at it for years to come. > I hope that one or more of the desktop-oriented Linux distributors > discover that hosing HTML out of gigE ports is not really the > One True Application of Linux, I agree approximately 110.111%. :) Really, I find servers to be pretty boring. "Linux is supposed to be fun", right? :) > > What's the prob with XF86 4.0? > [snipped longish explanation] > So, we need to talk to the xfree team. > > Whoops! I accidentally Cc'ed them :-) Thank you. A low-latency kernel would be meaningless if the X server creates delays of 20ms! This just plain needs to be fixed. - Jay Ts [EMAIL PROTECTED] - To unsubscribe from this list: send the line "unsubscribe linux-kernel" in the body of a message to [EMAIL PROTECTED] Please read the FAQ at http://www.tux.org/lkml/
Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
Tim Wright wrote: > > Hmmm... > if [the critical section] is very quick, and is guaranteed not to sleep, then a semaphore > is the wrong way to protect it. A spinlock is the correct choice. If it's > always slow, and can sleep, then a semaphore makes more sense, although if > it's highly contended, you're going to serialize and throughput will die. > At that point, you need to redesign :-) > If it's mostly quick but occasionally needs to sleep, I don't know what the > correct idiom would be in Linux. DYNIX/ptx has the concept of atomically > releasing a spinlock and going to sleep on a semaphore, and that would be > the solution there e.g. > > p_lock(lock); > retry: > ... > if (condition where we need to sleep) { > p_sema_v_lock(sema, lock); > /* we got woken up */ > p_lock(lock); > goto retry; > } > ... That's an interesting concept. How could this actually be used to protect a particular resource? Do all users of that resource have to claim both the lock and the semaphore before they may access it? There are a number of locks (such as pagecache_lock) which in the great majority of cases are held for a short period, but are occasionally held for a long period. So these locks are not a performance problem, they are not a scalability problem but they *are* a worst-case-latency problem. > > I'm stating the obvious here, and re-iterating what you said, and that is that > we need to carefully pick the correct primitive for the job. Unless there's > something very unusual in the Linux implementation that I've missed, a > spinlock is a "cheaper" method of protecting a short critical section, and > should be chosen. 
> > I know the BKL is semantically a little unusual (the automatic release on > sleep stuff), but even so, isn't > > lock_kernel() > down(sem) > > up(sem) > unlock_kernel() > > actually equivalent to > > lock_kernel() > > unlock_kernel() > > If so, it's no great surprise that performance dropped given that we replaced > a spinlock (albeit one guarding somewhat more than the critical section) with > a semaphore. Yes.
Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
Andrew Morton wrote: > > Nigel Gamble wrote: > > > > Spinlocks should not be held for lots of time. This adversely affects > > SMP scalability as well as latency. That's why MontaVista's kernel > > preemption patch uses sleeping mutex locks instead of spinlocks for the > > long held locks. > > Nigel, > > what worries me about this is the Apache-flock-serialisation saga. > > Back in -test8, kumon@fujitsu demonstrated that changing this: > > lock_kernel() > down(sem) > > up(sem) > unlock_kernel() > > into this: > > down(sem) > > up(sem) > > had the effect of *decreasing* Apache's maximum connection rate > on an 8-way from ~5,000 connections/sec to ~2,000 conn/sec. > > That's downright scary. > > Obviously, [the critical section] was very quick, and the CPUs were passing through > this section at a great rate. If [the critical section] was that fast, maybe the down/up should have been a spinlock too. But what if it is changed to: BKL_enter_mutex() down(sem) up(sem) BKL_exit_mutex() > > How can we be sure that converting spinlocks to semaphores > won't do the same thing? Perhaps for workloads which we > aren't testing? The key is to keep the fast stuff on the spinlock and the slow stuff on the mutex. Otherwise you WILL eat up the cpu with the overhead. > > So this needs to be done with caution. > > As davem points out, now we know where the problems are > occurring, a good next step is to redesign some of those > parts of the VM and buffercache. I don't think this will > be too hard, but they have to *want* to change :) They will *want* to change if they pop up due to other work :) > > Some of those algorithms are approximately O(N^2), for huge > values of N.
Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
On Sat, 13 Jan 2001, Andrew Morton wrote: > Nigel Gamble wrote: > > Spinlocks should not be held for lots of time. This adversely affects > > SMP scalability as well as latency. That's why MontaVista's kernel > > preemption patch uses sleeping mutex locks instead of spinlocks for the > > long held locks. > > Nigel, > > what worries me about this is the Apache-flock-serialisation saga. > > Back in -test8, kumon@fujitsu demonstrated that changing this: > > lock_kernel() > down(sem) > > up(sem) > unlock_kernel() > > into this: > > down(sem) > > up(sem) > > had the effect of *decreasing* Apache's maximum connection rate > on an 8-way from ~5,000 connections/sec to ~2,000 conn/sec. > > That's downright scary. > > Obviously, [the critical section] was very quick, and the CPUs were passing through > this section at a great rate. Yes, this demonstrates that spinlocks are preferable to sleep locks for short sections. However, it looks to me like the implementation of up() may be partly to blame. It looks to me as if it tends to prefer to context switch to the woken up process, instead of continuing to run the current process. Surrounding the semaphore with the BKL has the effect of enforcing the latter behavior, because the semaphore itself will never have any waiters. > How can we be sure that converting spinlocks to semaphores > won't do the same thing? Perhaps for workloads which we > aren't testing? > > So this needs to be done with caution. > > As davem points out, now we know where the problems are > occurring, a good next step is to redesign some of those > parts of the VM and buffercache. I don't think this will > be too hard, but they have to *want* to change :) Yes, wherever the code can be redesigned to avoid long held locks, that would definitely be my preferred solution. I think everyone would be happy if we could end up with a maintainable solution using only spinlocks that are held for no longer than a couple of hundred microseconds. 
Nigel Gamble [EMAIL PROTECTED] Mountain View, CA, USA. http://www.nrg.org/
Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
On Fri, 12 Jan 2001, Tim Wright wrote: > On Sat, Jan 13, 2001 at 12:30:46AM +1100, Andrew Morton wrote: > > what worries me about this is the Apache-flock-serialisation saga. > > > > Back in -test8, kumon@fujitsu demonstrated that changing this: > > > > lock_kernel() > > down(sem) > > > > up(sem) > > unlock_kernel() > > > > into this: > > > > down(sem) > > > > up(sem) > > > > had the effect of *decreasing* Apache's maximum connection rate > > on an 8-way from ~5,000 connections/sec to ~2,000 conn/sec. > > > > That's downright scary. > > > > Obviously, [the critical section] was very quick, and the CPUs were passing through > > this section at a great rate. > > > > How can we be sure that converting spinlocks to semaphores > > won't do the same thing? Perhaps for workloads which we > > aren't testing? > > > > So this needs to be done with caution. > > > > Hmmm... > if [the critical section] is very quick, and is guaranteed not to sleep, then a semaphore > is the wrong way to protect it. A spinlock is the correct choice. If it's > always slow, and can sleep, then a semaphore makes more sense, although if > it's highly contended, you're going to serialize and throughput will die. > At that point, you need to redesign :-) > If it's mostly quick but occasionally needs to sleep, I don't know what the > correct idiom would be in Linux. DYNIX/ptx has the concept of atomically > releasing a spinlock and going to sleep on a semaphore, and that would be > the solution there e.g. > > p_lock(lock); > retry: > ... > if (condition where we need to sleep) { > p_sema_v_lock(sema, lock); > /* we got woken up */ > p_lock(lock); > goto retry; > } > ... > > I'm stating the obvious here, and re-iterating what you said, and that is that > we need to carefully pick the correct primitive for the job. Unless there's > something very unusual in the Linux implementation that I've missed, a > spinlock is a "cheaper" method of protecting a short critical section, and > should be chosen. 
> > I know the BKL is semantically a little unusual (the automatic release on > sleep stuff), but even so, isn't > > lock_kernel() > down(sem) > > up(sem) > unlock_kernel() > > actually equivalent to > > lock_kernel() > > unlock_kernel() > > If so, it's no great surprise that performance dropped given that we replaced > a spinlock (albeit one guarding somewhat more than the critical section) with > a semaphore. > > Tim > > -- > Tim Wright - [EMAIL PROTECTED] or [EMAIL PROTECTED] or [EMAIL PROTECTED] > IBM Linux Technology Center, Beaverton, Oregon > "Nobody ever said I was charming, they said "Rimmer, you're a git!"" RD VI Nigel Gamble [EMAIL PROTECTED] Mountain View, CA, USA. http://www.nrg.org/
Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
On Sat, Jan 13, 2001 at 12:30:46AM +1100, Andrew Morton wrote: > what worries me about this is the Apache-flock-serialisation saga. > > Back in -test8, kumon@fujitsu demonstrated that changing this: > > lock_kernel() > down(sem) > > up(sem) > unlock_kernel() > > into this: > > down(sem) > > up(sem) > > had the effect of *decreasing* Apache's maximum connection rate > on an 8-way from ~5,000 connections/sec to ~2,000 conn/sec. > > That's downright scary. > > Obviously, [the critical section] was very quick, and the CPUs were passing through > this section at a great rate. > > How can we be sure that converting spinlocks to semaphores > won't do the same thing? Perhaps for workloads which we > aren't testing? > > So this needs to be done with caution. > Hmmm... if [the critical section] is very quick, and is guaranteed not to sleep, then a semaphore is the wrong way to protect it. A spinlock is the correct choice. If it's always slow, and can sleep, then a semaphore makes more sense, although if it's highly contended, you're going to serialize and throughput will die. At that point, you need to redesign :-) If it's mostly quick but occasionally needs to sleep, I don't know what the correct idiom would be in Linux. DYNIX/ptx has the concept of atomically releasing a spinlock and going to sleep on a semaphore, and that would be the solution there e.g. p_lock(lock); retry: ... if (condition where we need to sleep) { p_sema_v_lock(sema, lock); /* we got woken up */ p_lock(lock); goto retry; } ... I'm stating the obvious here, and re-iterating what you said, and that is that we need to carefully pick the correct primitive for the job. Unless there's something very unusual in the Linux implementation that I've missed, a spinlock is a "cheaper" method of protecting a short critical section, and should be chosen. 
I know the BKL is semantically a little unusual (the automatic release on sleep stuff), but even so, isn't lock_kernel() down(sem) up(sem) unlock_kernel() actually equivalent to lock_kernel() unlock_kernel() If so, it's no great surprise that performance dropped given that we replaced a spinlock (albeit one guarding somewhat more than the critical section) with a semaphore. Tim -- Tim Wright - [EMAIL PROTECTED] or [EMAIL PROTECTED] or [EMAIL PROTECTED] IBM Linux Technology Center, Beaverton, Oregon "Nobody ever said I was charming, they said "Rimmer, you're a git!"" RD VI
Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
Nigel Gamble wrote: > > Spinlocks should not be held for lots of time. This adversely affects > SMP scalability as well as latency. That's why MontaVista's kernel > preemption patch uses sleeping mutex locks instead of spinlocks for the > long held locks. Nigel, what worries me about this is the Apache-flock-serialisation saga. Back in -test8, kumon@fujitsu demonstrated that changing this: lock_kernel() down(sem) up(sem) unlock_kernel() into this: down(sem) up(sem) had the effect of *decreasing* Apache's maximum connection rate on an 8-way from ~5,000 connections/sec to ~2,000 conn/sec. That's downright scary. Obviously, [the critical section] was very quick, and the CPUs were passing through this section at a great rate. How can we be sure that converting spinlocks to semaphores won't do the same thing? Perhaps for workloads which we aren't testing? So this needs to be done with caution. As davem points out, now we know where the problems are occurring, a good next step is to redesign some of those parts of the VM and buffercache. I don't think this will be too hard, but they have to *want* to change :) Some of those algorithms are approximately O(N^2), for huge values of N.
Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
"David S. Miller" wrote: > > ... > Bug: In the tcp_minisock.c changes, if you bail out of the loop > early (ie. max_killed=1) you do not decrement tcp_tw_count > by killed, which corrupts the state of the TIME_WAIT socket > reaper. The fix is simple, just duplicate the tcp_tw_count > decrement into the "if (max_killed)" code block. Well that was moderately stupid. Thanks. It doesn't seem to cause problems in practice though. Maybe in the longer term... I believe the tcp_minisucks.c code needs redoing irrespective of latency stuff. It can spend several hundred milliseconds in a timer handler, which is rather unsociable. There are a number of moderately complex ways of smoothing out its behaviour, but I'm inclined to just punt the whole thing up to process context via schedule_task(). We'll see...
Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
Nigel Gamble writes: > That's why MontaVista's kernel preemption patch uses sleeping mutex > locks instead of spinlocks for the long held locks. Anyone who uses sleeping mutex locks is asking for trouble. Priority inversion is an issue I dearly hope we never have to deal with in the Linux kernel, and sleeping SMP mutex locks lead to exactly this kind of problem. Later, David S. Miller [EMAIL PROTECTED]
Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
On Wed, 10 Jan 2001, David S. Miller wrote: > Opinion: Personally, I think the approach in Andrew's patch > is the way to go. > > Not because it can give the absolute best results. > But rather, it is because it says "here is where a lot > of time is spent". > > This has two huge benefits: > 1) It tells us where possible algorithmic improvements may > be possible. In some cases we may be able to improve the > code to the point where the pre-emption points are no > longer necessary and can thus be removed. This is definitely an important goal. But lock-metering code in a fully preemptible kernel can also identify spots where algorithmic improvements are most important. > 2) It affects only code which can burn a lot of cpu without > scheduling. Compare this to schemes which make the kernel > fully pre-emptable, causing _EVERYONE_ to pay the price of > low-latency. If we were to later find algorithmic > improvements to the high-latency pieces of code, we > couldn't then just "undo" support for pre-emption because > dependencies will have swept across the whole kernel > already. > > Pre-emption, by itself, also doesn't help in situations > where lots of time is spent while holding spinlocks. > There are several other operating systems which support > pre-emption where you will find hard coded calls to the > scheduler in time-consuming code. Heh, it's almost like, > "what's the frigging point of pre-emption then if you > still have to manually check in some spots?" Spinlocks should not be held for lots of time. This adversely affects SMP scalability as well as latency. That's why MontaVista's kernel preemption patch uses sleeping mutex locks instead of spinlocks for the long held locks. In a fully preemptible kernel that is implemented correctly, you won't find any hard-coded calls to the scheduler in time consuming code. 
The scheduler should only be called in response to an interrupt (IO or timeout) when we know that a higher priority process has been made runnable, or when the running process sleeps (voluntarily or when it has to wait for something) or exits. This is the case in both of the fully preemptible kernels which I've worked on (IRIX and REAL/IX). Nigel Gamble [EMAIL PROTECTED] Mountain View, CA, USA. http://www.nrg.org/
Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
"David S. Miller" wrote: > 2) It affects only code which can burn a lot of cpu without > scheduling. Compare this to schemes which make the kernel > fully pre-emptable, causing _EVERYONE_ to pay the price of > low-latency Is there necessarily a price? Kernel preemption can make io-bound code go faster by allowing a blocked task to start running again immediately on io completion. As things are now, the task will have to wait for whatever might be happening in the kernel to complete. -- Daniel
Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
Just some commentary and a bug report on your patch Andrew: Opinion: Personally, I think the approach in Andrew's patch is the way to go. Not because it can give the absolute best results. But rather, it is because it says "here is where a lot of time is spent". This has two huge benefits: 1) It tells us where possible algorithmic improvements may be possible. In some cases we may be able to improve the code to the point where the pre-emption points are no longer necessary and can thus be removed. 2) It affects only code which can burn a lot of cpu without scheduling. Compare this to schemes which make the kernel fully pre-emptable, causing _EVERYONE_ to pay the price of low-latency. If we were to later find algorithmic improvements to the high-latency pieces of code, we couldn't then just "undo" support for pre-emption because dependencies will have swept across the whole kernel already. Pre-emption, by itself, also doesn't help in situations where lots of time is spent while holding spinlocks. There are several other operating systems which support pre-emption where you will find hard coded calls to the scheduler in time-consuming code. Heh, it's almost like, "what's the frigging point of pre-emption then if you still have to manually check in some spots?" Bug: In the tcp_minisock.c changes, if you bail out of the loop early (ie. max_killed=1) you do not decrement tcp_tw_count by killed, which corrupts the state of the TIME_WAIT socket reaper. The fix is simple, just duplicate the tcp_tw_count decrement into the "if (max_killed)" code block. Later, David S. Miller [EMAIL PROTECTED]
Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
> The darn thing disables intrs on its own for quite some time with some of > the more aggressive drivers. We saw our 20us latencies under RTLinux go up > a lot with some of those drivers. It isn't disabling interrupts. It's stalling the PCI bus. It's nasty tricks by card vendors, apparently to get good benchmark numbers.
Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
Jay Ts wrote: > > > A patch against kernel 2.4.0 final which provides low-latency > > scheduling is at > > > > http://www.uow.edu.au/~andrewm/linux/schedlat.html#downloads > > > > Some notes: > > > > - Worst-case scheduling latency with *very* intense workloads is now > > 0.8 milliseconds on a 500MHz uniprocessor. > > Wow! That's super. Now about the only thing left is to get it included > in the standard kernel. Do you think Linus Torvalds is more likely > to accept these patches than Ingo's? I sure hope this one works out. Neither, I think. We can't apply some patch and say "there; it's low-latency". We (or "he") need to decide up-front that Linux is to become a low latency kernel. Then we need to decide the best way of doing that. Making the kernel internally preemptive is probably the best way of doing this. But it's a *big* task to which much beard-scratching must be put. It goes way beyond the preemptive-kernel patches which have thus far been proposed. I could propose a simple patch for 2.4 (say, the ten most-needed scheduling points). This would get us down to maybe 5-10 milliseconds under heavy load (10-20x improvement). That would probably be a great and sufficient improvement for the HA heartbeat monitoring apps, the database TP monitors, the QuakeIII players and, of course, people who are only interested in audio record and playback - I'd need advice from the audio experts for that. I hope that one or more of the desktop-oriented Linux distributors discover that hosing HTML out of gigE ports is not really the One True Application of Linux, and that they decide to offer a low-latency kernel for the other 99.99% of Linux users. > > This is one to > > three orders of magnitude better than BeOS, MacOS and the Windowses. > > ** salivates ** > > > - Low latency will probably only be achieved when using the ext2 and > > NFS filesystems. > > Well it's extremely nice to see NFS included at least. I was really > worried about that one. What about Samba? 
(Keeping in mind that > serious "professional" musicians will likely have their Linux systems > networked to a Windows box, at least until they have all the necessary > tools on Linux.) I would expect the smbfs client code to be OK. Will test - thanks. > > - If you care about latency, be *very* cautious about upgrading to > > XFree86 4.x. I'll cover this issue in a separate email, copied > > to the XFree team. > > Did that email pass by me unnoticed? What's the prob with XF86 4.0? I haven't gathered the energy to send it. The basic problem with many video cards is this: Video adapters have on-board command FIFOs. They also have a "FIFO has spare room" control bit. If you write to the FIFO when there is no spare room, the damned thing busies the PCI bus until there *is* room. This can be up to twenty *milliseconds*. This will screw up realtime operating systems, will cause network receive overruns, will screw up isochronous protocols such as USB and 1394 and will of course screw up scheduling latency. In xfree3 it was OK - the drivers polled the "spare room" bit before writing. But in xfree4 the drivers are starting to take advantage of this misfeature. I am told that a significant number of people are backing out xfree4 upgrades because of this. For audio. The manufacturers got caught out by the trade press in '98 and '99 and they added registry flags to their drivers to turn off this obnoxious behaviour. What needs to happen is for the xfree guys to add a control flag to XF86Config for this. I believe they have - it's called `PCIRetry'. I believe PCIRetry defaults to `off'. This is bad. It should default to `on'. You can read about this minor scandal at the following URLs: http://www.zefiro.com/vgakills.txt http://www.zdnet.com/pcmag/news/trends/t980619a.htm http://www.research.microsoft.com/~mbj/papers/tr-98-29.html So, we need to talk to the xfree team. Whoops! 
I accidentally Cc'ed them :-)
Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
} > - If you care about latency, be *very* cautious about upgrading to } > XFree86 4.x. I'll cover this issue in a separate email, copied } > to the XFree team. } } Did that email pass by me unnoticed? What's the prob with XF86 4.0? The darn thing disables intrs on its own for quite some time with some of the more aggressive drivers. We saw our 20us latencies under RTLinux go up a lot with some of those drivers.
Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
> A patch against kernel 2.4.0 final which provides low-latency > scheduling is at > > http://www.uow.edu.au/~andrewm/linux/schedlat.html#downloads > > Some notes: > > - Worst-case scheduling latency with *very* intense workloads is now > 0.8 milliseconds on a 500MHz uniprocessor. Wow! That's super. Now about the only thing left is to get it included in the standard kernel. Do you think Linus Torvalds is more likely to accept these patches than Ingo's? I sure hope this one works out. > This is one to > three orders of magnitude better than BeOS, MacOS and the Windowses. ** salivates ** > - Low latency will probably only be achieved when using the ext2 and > NFS filesystems. Well it's extremely nice to see NFS included at least. I was really worried about that one. What about Samba? (Keeping in mind that serious "professional" musicians will likely have their Linux systems networked to a Windows box, at least until they have all the necessary tools on Linux.) > - If you care about latency, be *very* cautious about upgrading to > XFree86 4.x. I'll cover this issue in a separate email, copied > to the XFree team. Did that email pass by me unnoticed? What's the prob with XF86 4.0? - Jay Ts