David Woodhouse wrote:
> On Thu, 1 Feb 2001, Pavel Machek wrote:
>
>
>> I thought that Vtech Helio folks already have XIP supported...
>
>
> Plenty of people are doing XIP of the kernel. I'm not aware of anyone
> doing XIP of userspace pages.
uClinux does XIP (readonly) for userspace pro
On Thu, 1 Feb 2001, Pavel Machek wrote:
> I thought that Vtech Helio folks already have XIP supported...
Plenty of people are doing XIP of the kernel. I'm not aware of anyone
doing XIP of userspace pages.
--
dwmw2
Hi!
> > I wasn't thinking of running the kernel XIP from writable, but even
> > trying to do that from the filesystem is a mess. If you're going to be
> > that way about it...
>
> Heh. I am. Read-only XIP is going to be doable, but writable XIP means that
> any time you start to write to the f
Hi!
> > There has been surprisingly little discussion here about the
> > desirability of a preemptible kernel.
> >
>
> And I think that is a very interesting topic... (certainly more
> interesting than hotmail's firewalling policy ;o)
>
> Alright, so suppose I dream up an application which I t
Joe deBlaquiere wrote:
~snip~
> The logical answer is run with HZ=10000 so you get 100us intervals,
> right ;o).
Let's not assume we need the overhead of HZ=10000 to get 100us
alarm/timer resolution. How about a timer that ticks when we need the
next tick...
On systems with multiple hardwa
[EMAIL PROTECTED] said:
> A recent example I came across is in the MTD code which invokes the
> erase algorithm for CFI memory. This algorithm spews a command
> sequence to the flash chips followed by a list of sectors to erase.
> Following each sector address, the chip will wait for 50usec for
Andrew Morton wrote:
> There has been surprisingly little discussion here about the
> desirability of a preemptible kernel.
>
And I think that is a very interesting topic... (certainly more
interesting than hotmail's firewalling policy ;o)
Alright, so suppose I dream up an application which I
Bill Huey wrote:
>
> Andrew Morton's patch uses < 10 rescheduling points (maybe less from memory)
err... It grew. More like 50 now that reiserfs is in there. That's counting
real instances - it's not counting ones which are expanded multiple times
as "1".
It could be brought down to 20-25 with goo
[EMAIL PROTECTED] wrote:
>
> ...
>
> I suggest that you get your hearing checked. I'm fully in favor of sensible
> low latency Linux. I believe however that low latency in Linux will
> A. be "soft realtime", close to deadline most of the time.
> B. millisecond level on present h
On Sun, Jan 28, 2001 at 06:14:28AM -0700, [EMAIL PROTECTED] wrote:
> > Yes, I most emphatically do disagree with Victor! IRIX is used for
> > mission-critical audio applications - recording as well playback - and
> And it has bloat, it's famously buggy, it is impossible to maintain, ...
However
On Sun, Jan 21, 2001 at 06:21:05PM -0800, Nigel Gamble wrote:
> Yes, I most emphatically do disagree with Victor! IRIX is used for
> mission-critical audio applications - recording as well playback - and
> other low-latency applications. The same OS scales to large numbers of
> CPUs. And it has
Hi!
> > And making the kernel preemptive might be the best way to do that
> > (and I'm saying "might"...).
>
> Keep in mind that Ken Thompson & Dennis Ritchie did not decide on a
> non-preemptive strategy for UNIX because they were unaware of such
> methods or because they were stupid. And whe
Nigel Gamble wrote:
> Yes, I most emphatically do disagree with Victor! IRIX is used for
> mission-critical audio applications - recording as well playback - and
> other low-latency applications. The same OS scales to large numbers of
> CPUs. And it has the best desktop interactive response of
On Sun, 21 Jan 2001, Paul Barton-Davis wrote:
> >Let me just point out that Victor has his own commercial axe to grind in
> >his continual bad-mouthing of IRIX, the internals of which he knows
> >nothing about.
>
> 1) do you actually disagree with victor ?
Yes, I most emphatically do disagree wi
>Let me just point out that Victor has his own commercial axe to grind in
>his continual bad-mouthing of IRIX, the internals of which he knows
>nothing about.
1) do you actually disagree with victor ?
2) victor is not the only person who has expressed this opinion. the
most prolific irix crit
On Sat, 20 Jan 2001 [EMAIL PROTECTED] wrote:
> Let me just point out that Nigel (I think) has previously stated that
> the purpose of this approach is to bring the stunning success of
> IRIX style "RT" to Linux. Since some of us believe that IRIX is a virtual
> handbook of OS errors, it really co
On Fri, Jan 12, 2001 at 07:45:43PM -0700, Jay Ts wrote:
> Andrew Morton wrote:
> >
> > Jay Ts wrote:
> > >
> > > Now about the only thing left is to get it included
> > > in the standard kernel. Do you think Linus Torvalds is more likely
> > > to accept these patches than Ingo's? I sure hope t
Let me just point out that Nigel (I think) has previously stated that
the purpose of this approach is to bring the stunning success of
IRIX style "RT" to Linux. Since some of us believe that IRIX is a virtual
handbook of OS errors, it really comes down to a design style. I think
that simplicity
On Sat, Jan 13, 2001 at 12:01:04PM +1100, Andrew Morton wrote:
> Tim Wright wrote:
[...]
> > p_lock(lock);
> > retry:
> > ...
> > if (condition where we need to sleep) {
> > p_sema_v_lock(sema, lock);
> > /* we got woken up */
> > p_lock(lock);
> > goto retry;
> > }
> > ...
>
> Th
"David S. Miller" wrote:
>
> Nigel Gamble writes:
> > That's why MontaVista's kernel preemption patch uses sleeping mutex
> > locks instead of spinlocks for the long held locks.
>
> Anyone who uses sleeping mutex locks is asking for trouble. Priority
> inversion is an issue I dearly hope we n
Andrew Morton wrote:
>
> Jay Ts wrote:
> >
> > Now about the only thing left is to get it included
> > in the standard kernel. Do you think Linus Torvalds is more likely
> > to accept these patches than Ingo's? I sure hope this one works out.
>
> We (or "he") need to decide up-front that Linu
Tim Wright wrote:
>
> Hmmm...
> if is very quick, and is guaranteed not to sleep, then a semaphore
> is the wrong way to protect it. A spinlock is the correct choice. If it's
> always slow, and can sleep, then a semaphore makes more sense, although if
> it's highly contended, you're going to ser
Andrew Morton wrote:
>
> Nigel Gamble wrote:
> >
> > Spinlocks should not be held for lots of time. This adversely affects
> > SMP scalability as well as latency. That's why MontaVista's kernel
> > preemption patch uses sleeping mutex locks instead of spinlocks for the
> > long held locks.
>
>
On Sat, 13 Jan 2001, Andrew Morton wrote:
> Nigel Gamble wrote:
> > Spinlocks should not be held for lots of time. This adversely affects
> > SMP scalability as well as latency. That's why MontaVista's kernel
> > preemption patch uses sleeping mutex locks instead of spinlocks for the
> > long he
On Fri, 12 Jan 2001, Tim Wright wrote:
> On Sat, Jan 13, 2001 at 12:30:46AM +1100, Andrew Morton wrote:
> > what worries me about this is the Apache-flock-serialisation saga.
> >
> > Back in -test8, kumon@fujitsu demonstrated that changing this:
> >
> > lock_kernel()
> > down(sem)
> >
On Sat, Jan 13, 2001 at 12:30:46AM +1100, Andrew Morton wrote:
> what worries me about this is the Apache-flock-serialisation saga.
>
> Back in -test8, kumon@fujitsu demonstrated that changing this:
>
> lock_kernel()
> down(sem)
>
> up(sem)
> unlock_kernel()
>
> i
Nigel Gamble wrote:
>
> Spinlocks should not be held for lots of time. This adversely affects
> SMP scalability as well as latency. That's why MontaVista's kernel
> preemption patch uses sleeping mutex locks instead of spinlocks for the
> long held locks.
Nigel,
what worries me about this is
"David S. Miller" wrote:
>
> ...
> Bug:In the tcp_minisock.c changes, if you bail out of the loop
> early (ie. max_killed=1) you do not decrement tcp_tw_count
> by killed, which corrupts the state of the TIME_WAIT socket
> reaper. The fix is simple, just duplicate the
Nigel Gamble writes:
> That's why MontaVista's kernel preemption patch uses sleeping mutex
> locks instead of spinlocks for the long held locks.
Anyone who uses sleeping mutex locks is asking for trouble. Priority
inversion is an issue I dearly hope we never have to deal with in the
Linux ker
On Wed, 10 Jan 2001, David S. Miller wrote:
> Opinion: Personally, I think the approach in Andrew's patch
>is the way to go.
>
>Not because it can give the absolute best results.
>But rather, it is because it says "here is where a lot
> of time is spent".
>
>
"David S. Miller" wrote:
> 2) It affects only code which can burn a lot of cpu without
> scheduling. Compare this to schemes which make the kernel
> fully pre-emptable, causing _EVERYONE_ to pay the price of
> low-latency
Is there necessarily a pr
Just some commentary and a bug report on your patch Andrew:
Opinion: Personally, I think the approach in Andrew's patch
is the way to go.
Not because it can give the absolute best results.
But rather, it is because it says "here is where a lot
of time is spen
> The darn thing disables intrs on its own for quite some time with some of
> the more aggressive drivers. We saw our 20us latencies under RTLinux go up
> a lot with some of those drivers.
It isn't disabling interrupts. It's stalling the PCI bus. It's nasty tricks by
card vendors apparently to get
Jay Ts wrote:
>
> > A patch against kernel 2.4.0 final which provides low-latency
> > scheduling is at
> >
> > http://www.uow.edu.au/~andrewm/linux/schedlat.html#downloads
> >
> > Some notes:
> >
> > - Worst-case scheduling latency with *very* intense workloads is now
> > 0.8 milliseconds
} > - If you care about latency, be *very* cautious about upgrading to
} > XFree86 4.x. I'll cover this issue in a separate email, copied
} > to the XFree team.
}
} Did that email pass by me unnoticed? What's the prob with XF86 4.0?
The darn thing disables intrs on its own for quite some t
> A patch against kernel 2.4.0 final which provides low-latency
> scheduling is at
>
> http://www.uow.edu.au/~andrewm/linux/schedlat.html#downloads
>
> Some notes:
>
> - Worst-case scheduling latency with *very* intense workloads is now
> 0.8 milliseconds on a 500MHz uniprocessor.
Wow!