semantics correct, but we also need to be aware of performance in
the non-realtime case.
--
Mike Kravetz [EMAIL PROTECTED]
IBM Linux Technology Center
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
Please read the FAQ at http://www.tux.org/lkml/
threshold value as opposed to 1.
My guess is that the threshold value was changed from 0 to
1 in the 2.4 kernel for better performance with some workload.
Anyone remember what that workload was/is?
approximately equal to the number of CPUs, yet
scheduler performance has gone downhill.
task
wakeups that could potentially be run in parallel (on
separate CPUs with no other serialization in the way),
then you 'might' see some benefit. Those are some big IFs.
I know little about the networking stack or this workload.
Just wanted to explain how this scheduling work 'co
try out some of our scheduler patches
located at:
http://lse.sourceforge.net/scheduling/
I would be interested in your observations.
dule_idle' component of the scheduler. We have developed
a 'token passing' benchmark which attempts to address these issues
(called reflex at the above site). However, I would really like
to get a pointer to a community-accepted workload/benchmark for
these low thread cases.
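For readers who want a feel for what a token-passing benchmark of this kind measures, here is a minimal userspace sketch (my own illustration, not the reflex code from the site above; the function name is hypothetical): two processes bounce a one-byte token over a pair of pipes, so each pass forces a pair of context switches through the scheduler.

```c
#include <unistd.h>
#include <sys/wait.h>

/* Bounce a one-byte token between parent and child `passes` times.
   Each round trip blocks both sides, exercising wakeup/reschedule.
   Returns the number of completed passes, or -1 on setup failure. */
int run_token_passes(int passes)
{
    int to_child[2], to_parent[2];
    if (pipe(to_child) || pipe(to_parent))
        return -1;

    pid_t pid = fork();
    if (pid < 0)
        return -1;

    char token = 't';
    if (pid == 0) {                       /* child: echo the token back */
        for (int i = 0; i < passes; i++) {
            if (read(to_child[0], &token, 1) != 1)
                _exit(1);
            if (write(to_parent[1], &token, 1) != 1)
                _exit(1);
        }
        _exit(0);
    }

    int done = 0;                         /* parent: send and await echo */
    for (int i = 0; i < passes; i++) {
        if (write(to_child[1], &token, 1) != 1)
            break;
        if (read(to_parent[0], &token, 1) != 1)
            break;
        done++;
    }
    waitpid(pid, NULL, 0);
    return done;
}
```

Timing many such passes gives a rough per-wakeup cost even when only a couple of tasks are runnable, which is exactly the low-thread regime discussed here.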
scheduling decisions as contention on the runqueue
locks increases. However, at this point one could argue that
we have moved away from a 'realistic' low task count system load.
> lmbench's lat_ctx for example, and other tools in lmbench trigger various
> scheduler workloads as well.
multi-queue patch I developed, the
scheduler always attempts to make the same global scheduling decisions
as the current scheduler.
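To make that property concrete, here is a hedged sketch (my own illustration, not code from the patch; all names are hypothetical) of what "same global decisions" means with per-CPU queues: the picker still scans every queue, so it selects the same task a single global runqueue would have selected.

```c
#include <stddef.h>

#define NR_CPUS 4
#define QLEN    8

struct task     { int prio; int valid; };
struct runqueue { struct task tasks[QLEN]; };

/* Scan every CPU's queue and return the highest-priority runnable
   task anywhere in the system. Because nothing is skipped, the
   choice matches what one global queue would make; the win is that
   each queue can have its own lock for enqueue/dequeue. */
struct task *pick_global_best(struct runqueue rq[NR_CPUS])
{
    struct task *best = NULL;
    for (int cpu = 0; cpu < NR_CPUS; cpu++)
        for (int i = 0; i < QLEN; i++) {
            struct task *t = &rq[cpu].tasks[i];
            if (t->valid && (!best || t->prio > best->prio))
                best = t;
        }
    return best;
}
```

The trade-off is visible even in the sketch: the decision still touches every queue, so the benefit comes from splitting the lock, not from shrinking the search.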
ons, load balancing algorithms take considerable effort
to get working in a reasonably well-performing manner.
>
> Could you make a port of your thing on recent kernels?
There is a 2.4.2 patch on the web page. I'll put out a 2.4.3 patch
as soon as I get some time.
? OR is the reasoning that in
these cases there is so much 'scheduling' activity that we
should force the reschedule?
1.661
1024    FRC     196.425     6.166
2048    FRC     FRC        23.291
4096    FRC     FRC        47.117
*FRC = failed to reach confidence level
On Fri, Jan 19, 2001 at 01:26:16AM +0100, Andrea Arcangeli wrote:
> On Thu, Jan 18, 2001 at 03:53:11PM -0800, Mike Kravetz wrote:
> > Here are some very preliminary numbers from sched_test_yield
> > (which was previously posted to this (lse-tech) list by Bill
> > Hartner).
y secondary
to reducing lock contention within the scheduler. A co-worker down
the hall just ran the pgbench (a PostgreSQL database) benchmark and saw
contention on the runqueue lock at 57%. Now, I know nothing about this
benchmark, but it will be interesting to see what happens after
applying my patch.
On Fri, Jan 19, 2001 at 02:30:41AM +0100, Andrea Arcangeli wrote:
> On Thu, Jan 18, 2001 at 04:52:25PM -0800, Mike Kravetz wrote:
> > was less than the number of processors. I'll give the tests a try
> > with a smaller number of threads. I'm also open to suggestions
of
running tasks is less than the number of processors.
to tasks that last ran on the
current CPU. In our multi-queue scheduler, tasks on a remote queue
must have high enough priority (to overcome this boost) before being
moved to the local queue.
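The boost being described is the cache-affinity bonus in the 2.4 goodness calculation. A simplified sketch of the idea (my own illustration; the struct and function names are hypothetical, and the penalty value here is illustrative, not the kernel's actual per-architecture constant):

```c
#define PROC_CHANGE_PENALTY 15   /* illustrative value only */

struct sched_task { int counter; int processor; };

/* Simplified goodness-style weight: a task that last ran on
   this_cpu gets a bonus, so a task on a remote queue must beat
   the local candidate by more than the penalty before migrating
   it to this CPU is worthwhile. */
int weight_sketch(const struct sched_task *p, int this_cpu)
{
    int w = p->counter;
    if (p->processor == this_cpu)
        w += PROC_CHANGE_PENALTY;
    return w;
}
```

Under this scheme a remote task with counter 20 still loses to a local task with counter 10 + penalty 15, which is the hysteresis that keeps tasks from bouncing between CPUs and trashing their caches.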
On Thu, Jan 18, 2001 at 05:34:35PM -0800, Mike Kravetz wrote:
> On Fri, Jan 19, 2001 at 02:30:41AM +0100, Andrea Arcangeli wrote:
> > On Thu, Jan 18, 2001 at 04:52:25PM -0800, Mike Kravetz wrote:
> > > was less than the number of processors. I'll give the tests a try
>
On Fri, Jan 19, 2001 at 12:49:21PM -0800, Mike Kravetz showed his lack
of internet slang understanding and wrote:
>
> It was my intention to post IIRC numbers for small thread counts today.
> However, the benchmark (not the system) seems to hang on occasion. This
> occurs on both th
actthreads has to be zero.
Not as currently coded. If two threads try to decrement actthreads
at the same time, there is no guarantee that it will be decremented
twice. That is why you need to put some type of synchronization in
place.
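To make the failure mode concrete: a plain `actthreads--` compiles to a separate load, decrement, and store, and two threads can interleave those so that one decrement is lost. A minimal userspace sketch of the fix (my own illustration; the names are hypothetical) protects the counter with a mutex:

```c
#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int actthreads;

/* Without the lock, two concurrent `actthreads--` operations can
   both read the same old value and store the same result, losing
   one decrement. The mutex makes the read-modify-write atomic. */
static void *worker(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);
    actthreads--;
    pthread_mutex_unlock(&lock);
    return NULL;
}

/* Start n workers, each decrementing once; returns the final
   counter value, which is 0 iff every decrement landed. */
int run_workers(int n)
{
    pthread_t tid[64];
    if (n < 0 || n > 64)
        return -1;
    actthreads = n;
    for (int i = 0; i < n; i++)
        pthread_create(&tid[i], NULL, worker, NULL);
    for (int i = 0; i < n; i++)
        pthread_join(tid[i], NULL);
    return actthreads;
}
```

An atomic decrement primitive would serve equally well here; the point is only that some synchronization must cover the read-modify-write.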
esults in
the not too distant future. Until then, we'll be looking into
optimizations to help out the multi-queue scheduler at low
thread counts.
On Wed, May 09, 2001 at 11:29:22AM -0500, Andrew M. Theurer wrote:
>
> I am evaluating Linux 2.4 SMP scalability, using Netbench(r) as a
> workload with Samba, and I wanted to get some feedback on results so
> far.
Do you have any kernel profile or lock contention data?
problems.
I'm curious, is this behavior by design OR are we just getting
lucky?
Thanks,
--
Mike Kravetz [EMAIL PROTECTED]
IBM Linux Technology Center
15450 SW Koll Parkway
Beaverton, OR 97006-6063 (503)578-3494
http://sourceforge.net/projects/lse
Thanks,
Ragnar,
Are you sure that was line 115? Could it have been line 515?
Also, do you have any Oops data?
Thanks,
On Wed
George,
I can't answer your question. However, have you noticed that this
lock ordering changed in the test11 kernel? The new sequence is:
read_lock_irq(&tasklist_lock);
spin_lock(&runqueue_lock);
Perhaps the person who made this change could provide their reasoning.
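Why a consistent order matters: if one path takes tasklist_lock then runqueue_lock while another takes them in the reverse order, the two can deadlock, each holding the lock the other wants. A minimal userspace sketch of the rule (my own illustration with pthread mutexes standing in for the kernel locks; names are hypothetical):

```c
#include <pthread.h>

static pthread_mutex_t tasklist_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t runqueue_lock = PTHREAD_MUTEX_INITIALIZER;
static int shared;

/* Every code path that needs both locks takes tasklist_lock first
   and runqueue_lock second, mirroring the sequence quoted above.
   If some other path acquired them in the opposite order, two
   threads could each hold one lock and block on the other. */
int touch_both(int delta)
{
    pthread_mutex_lock(&tasklist_lock);
    pthread_mutex_lock(&runqueue_lock);
    shared += delta;
    int now = shared;
    pthread_mutex_unlock(&runqueue_lock);
    pthread_mutex_unlock(&tasklist_lock);
    return now;
}
```

Release order is not what prevents deadlock; only the acquisition order across all paths has to agree.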
An
Scheduling Scalability page is at:
http://lse.sourceforge.net/scheduling/
If you are interested in this work, please join the lse-tech
mailing list at:
http://sourceforge.net/projects/lse
stem. Is
that an accurate statement?
If the above is accurate, then I am wondering what would be a
good scheduler benchmark for these low task count situations.
I could undo the optimizations in sys_sched_yield() (for testing
purposes only!), and run the existing benchmarks. Can anyone
suggest
t case of lock
contention. This was done at the expense of the normal case.
I'm currently working on this situation and expect to have a new
patch out in the not too distant future.
I expect the numbers will get better.