Hi,
On 8/22/05, Peter Williams <[EMAIL PROTECTED]> wrote:
> Michal Piotrowski wrote:
> > [1.] One line summary of the problem:
> > oops when shutting down system
> >
> > [2.] Full description of the problem/report:
> > After kernbenching nicksched (heavy load, make -j128) I just recorded
> > results on
On Sun, 21 Aug 2005 14:44, Michal Piotrowski wrote:
> On 8/21/05, Con Kolivas <[EMAIL PROTECTED]> wrote:
> > Well it will survive all right, but eventually get into swap thrash
> > territory and that's not a meaningful cpu scheduler benchmark.
> >
> > Cheers,
> > Con
>
> Ok. How about make -j? It's
On 8/21/05, Con Kolivas <[EMAIL PROTECTED]> wrote:
> Well it will survive all right, but eventually get into swap thrash territory
> and that's not a meaningful cpu scheduler benchmark.
>
> Cheers,
> Con
>
Ok. How about make -j? It's one of the kernbench test runs; on my box the load
average goes above 1500 ;).
On Sun, 21 Aug 2005 14:16, Michal Piotrowski wrote:
> Hi,
>
> On 8/21/05, Con Kolivas <[EMAIL PROTECTED]> wrote:
> > On Sun, 21 Aug 2005 11:34, Michal Piotrowski wrote:
> > > Hi,
> >
> > Hi
> >
> > > here are kernbench results:
> >
> > Nice to see you using kernbench :)
> >
> > > ./kernbench -M -o
Hi,
On 8/21/05, Con Kolivas <[EMAIL PROTECTED]> wrote:
> On Sun, 21 Aug 2005 11:34, Michal Piotrowski wrote:
> > Hi,
>
> Hi
>
> > here are kernbench results:
>
> Nice to see you using kernbench :)
>
> > ./kernbench -M -o 128
> > [..]
> > Average Optimal -j 128 Load Run:
>
> Was there any reas
On Sun, 21 Aug 2005 11:34, Michal Piotrowski wrote:
> Hi,
Hi
> here are kernbench results:
Nice to see you using kernbench :)
> ./kernbench -M -o 128
> [..]
> Average Optimal -j 128 Load Run:
Was there any reason you chose 128? Optimal usually works out automatically
from kernbench to 4x number of CPUs.
[1.] One line summary of the problem:
oops when shutting down system
[2.] Full description of the problem/report:
After kernbenching nicksched (heavy load, make -j128) I just recorded the
results on CD and shut down the system.
[3.] Keywords (i.e., modules, networking, kernel):
plugsched, nicksched, sysfs, vfs
Hi,
here are kernbench results:
cpusched=ingosched
./kernbench -M -o 128
[..]
Average Optimal -j 128 Load Run:
Elapsed Time 365,4
User Time 620,8
System Time 64,6
Percent CPU 187,2
Context Switches 38296,8
Sleeps 37867
(reboot)
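The kernbench summary above uses a fixed "label value" layout (note the comma
decimal separator in this run), so pulling the figures into a script for
side-by-side scheduler comparisons is straightforward. A minimal Python sketch,
assuming output in the form shown above; the helper name parse_kernbench_block
is illustrative, not part of kernbench itself:

import re

def parse_kernbench_block(text):
    """Parse a kernbench 'Average ... Load Run' block into {metric: value}."""
    metrics = {}
    for line in text.splitlines():
        # Accept both '.' and ',' as the decimal separator; the figures in
        # this thread were pasted with a comma-decimal locale.
        match = re.match(r"\s*([A-Za-z ]+?)\s+([\d.,]+)\s*$", line)
        if match:
            name = match.group(1).strip()
            metrics[name] = float(match.group(2).replace(",", "."))
    return metrics

block = """Elapsed Time 365,4
User Time 620,8
System Time 64,6
Percent CPU 187,2
Context Switches 38296,8
Sleeps 37867"""

print(parse_kernbench_block(block))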
---
On Sat, 2005-08-20 at 10:31 +1000, Con Kolivas wrote:
> It's an X problem and it's being fixed. Get over it, we're not tuning
> the scheduler for a broken app.
>
You're right, this problem seems much, much better in Xorg 6.8.2. I
think the Damage extension might be responsible. There's definite
On Sat, 2005-08-20 at 10:31 +1000, Con Kolivas wrote:
> On Sat, 20 Aug 2005 06:13, Lee Revell wrote:
> >
> > I agree that tweaking the scheduler is probably pointless, as long as X
> > is burning gazillions of CPU cycles redrawing things that don't need to
> > be redrawn.
> >
> > Then again even th
On Sat, 20 Aug 2005 06:13, Lee Revell wrote:
> On Fri, 2005-08-19 at 14:36 +1000, Con Kolivas wrote:
> > On Fri, 19 Aug 2005 02:41 pm, Peter Williams wrote:
> > > Maybe we could use interbench to find a nice value for X that doesn't
> > > destroy Audio and Video? The results that I just posted for
On Fri, 2005-08-19 at 14:36 +1000, Con Kolivas wrote:
> On Fri, 19 Aug 2005 02:41 pm, Peter Williams wrote:
> > Maybe we could use interbench to find a nice value for X that doesn't
> > destroy Audio and Video? The results that I just posted for
> > spa_no_frills with X reniced to -10 suggest that
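Finding a nice value for X empirically, as suggested above, comes down to
renicing the X server and re-running interbench at each level. A minimal
sketch of that loop, assuming interbench is on PATH and the script runs as
root (negative nice values require it); the function name and log file names
are illustrative:

import os
import subprocess

def interbench_at_nice_levels(x_pid, nice_levels=(0, -5, -10)):
    # Renice the given process (e.g. the X server) and capture one
    # interbench run per nice level for later comparison.
    for nice in nice_levels:
        os.setpriority(os.PRIO_PROCESS, x_pid, nice)
        with open("interbench.nice%d.log" % nice, "w") as log:
            subprocess.run(["interbench"], stdout=log, check=True)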
On Fri, 19 Aug 2005 02:41 pm, Peter Williams wrote:
> Con Kolivas wrote:
> > On Fri, 19 Aug 2005 01:28 pm, Lee Revell wrote:
> >>On Fri, 2005-08-19 at 05:09 +0200, Michal Piotrowski wrote:
> >>>Hi,
> >>>here are interbench v0.29 results:
> >>
> >>The X test under simulated "Compile" load looks mos
Con Kolivas wrote:
On Fri, 19 Aug 2005 01:28 pm, Lee Revell wrote:
On Fri, 2005-08-19 at 05:09 +0200, Michal Piotrowski wrote:
Hi,
here are interbench v0.29 results:
The X test under simulated "Compile" load looks most interesting.
Most of the schedulers do quite poorly on this test - onl
Lee Revell wrote:
On Fri, 2005-08-19 at 05:09 +0200, Michal Piotrowski wrote:
Hi,
here are interbench v0.29 results:
The X test under simulated "Compile" load looks most interesting.
Most of the schedulers do quite poorly on this test - only Zaphod with
default max_ia_bonus and max_tpt_bon
On Fri, 19 Aug 2005 01:28 pm, Lee Revell wrote:
> On Fri, 2005-08-19 at 05:09 +0200, Michal Piotrowski wrote:
> > Hi,
> > here are interbench v0.29 results:
>
> The X test under simulated "Compile" load looks most interesting.
>
> Most of the schedulers do quite poorly on this test - only Zaphod w
On Fri, 2005-08-19 at 05:09 +0200, Michal Piotrowski wrote:
> Hi,
> here are interbench v0.29 results:
The X test under simulated "Compile" load looks most interesting.
Most of the schedulers do quite poorly on this test - only Zaphod with
default max_ia_bonus and max_tpt_bonus manages to delive
Hi,
here are interbench v0.29 results:
cpusched=ingosched
Using 1844991 loops per ms, running every load for 30 seconds
Benchmarking kernel 2.6.13-rc6-2 at datestamp 200508181941
--- Benchmarking simulated cpu of Audio in the presence of simulated ---
Load    Latency +/- SD (ms)  Max Latency
On Thu, 18 Aug 2005 09:48 am, Peter Williams wrote:
> Con Kolivas wrote:
> > On Thu, 18 Aug 2005 09:15 am, Peter Williams wrote:
> >>Con Kolivas wrote:
> > He did a make allyesconfig which is a bit different and probably far too
> > i/o bound. By the way a single kernel compile is hardly a reproduc
Con Kolivas wrote:
On Thu, 18 Aug 2005 09:15 am, Peter Williams wrote:
Con Kolivas wrote:
On Wed, 17 Aug 2005 18:10, Peter Williams wrote:
Michal Piotrowski wrote:
Hi,
here are schedulers benchmark (part2):
[bits deleted]
Here's a summary of your output generated using the attached Pyth
On Thu, 18 Aug 2005 09:15 am, Peter Williams wrote:
> Con Kolivas wrote:
> > On Wed, 17 Aug 2005 18:10, Peter Williams wrote:
> >>Michal Piotrowski wrote:
> >>>Hi,
> >>>here are schedulers benchmark (part2):
> >>>[bits deleted]
> >>
> >>Here's a summary of your output generated using the attached P
Con Kolivas wrote:
On Wed, 17 Aug 2005 18:10, Peter Williams wrote:
Michal Piotrowski wrote:
Hi,
here are schedulers benchmark (part2):
[bits deleted]
Here's a summary of your output generated using the attached Python script.
| Build Statistics | Overall Statistics
-
On Thu, 18 Aug 2005 04:04, Michal Piotrowski wrote:
> Hi,
> here are additional staircase scheduler benchmarks.
>
> (make all -j8)
>
> scheduler:
> staircase
>
> sched_compute=1
> real    49m48.619s
> user    77m20.788s
> sys     6m7.653s
Very nice, thank you.
Since you are benchmarking, here is
Hi,
here are additional staircase scheduler benchmarks.
(make all -j8)
scheduler:
staircase
sched_compute=1
schedstat:
version 12
timestamp 4294712019
cpu0 1 0 0 31 0 18994 4568 7407 5903 10267 6976 14426
domain0 3 18574 18398 6 3938 193 4 0 18398 335 285 0 1191 175 0 0 285 4753 4508
75 6843 33
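The /proc/schedstat counters quoted with each result are cumulative since
boot, so what matters for a single benchmark run is the difference between a
snapshot taken just before the build and one taken just after. A minimal
sketch of that idea, making no attempt to decode the individual fields of the
version 12 format (the helper names are illustrative):

def read_schedstat(path="/proc/schedstat"):
    # Raw per-line fields of /proc/schedstat as {label: [ints]};
    # lines with non-numeric fields (e.g. hex cpumasks) are skipped.
    stats = {}
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) > 1 and all(p.isdigit() for p in parts[1:]):
                stats[parts[0]] = [int(p) for p in parts[1:]]
    return stats

def diff_schedstat(before, after):
    # Field-by-field delta for labels present in both snapshots.
    return {label: [a - b for a, b in zip(after[label], before[label])]
            for label in before if label in after}

Taking one snapshot immediately before time make all -j8 and another right
after, then diffing, avoids attributing activity from earlier runs to the
scheduler under test.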
On Wed, 17 Aug 2005 21:23, Michal Piotrowski wrote:
> Hi,
>
> On 8/17/05, Con Kolivas <[EMAIL PROTECTED]> wrote:
> > On Mon, 15 Aug 2005 22:29, Michal Piotrowski wrote:
> > > Hi,
> > > here are my benchmarks (part1):
> >
> > Want to try the staircase cpu scheduler in "compute" mode for the compute
Hi,
On 8/17/05, Peter Williams <[EMAIL PROTECTED]> wrote:
> I was intrigued by the fact that zaphod(d,d) and zaphod(d,0) take longer
> in real time but use less cpu. I was assuming that this meant that some
> other job was getting some cpu but the schedstats data doesn't support
> that. Also it
Hi,
On 8/17/05, Con Kolivas <[EMAIL PROTECTED]> wrote:
> On Mon, 15 Aug 2005 22:29, Michal Piotrowski wrote:
> > Hi,
> > here are my benchmarks (part1):
>
> Want to try the staircase cpu scheduler in "compute" mode for the compute
> intensive workloads?
>
> Thanks,
> Con
>
>
Yes, I'll try int
On Wed, 17 Aug 2005 18:10, Peter Williams wrote:
> Michal Piotrowski wrote:
> > Hi,
> > here are schedulers benchmark (part2):
> > [bits deleted]
>
> Here's a summary of your output generated using the attached Python script.
>
> | Build Statistics | Overall Statistics
>
> ---
Michal Piotrowski wrote:
Hi,
here are schedulers benchmark (part2):
[bits deleted]
Here's a summary of your output generated using the attached Python script.
| Build Statistics | Overall Statistics
---
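The summary script itself is an attachment to the original mail and is not
reproduced in this archive view. As a rough idea of what such a summariser
does, here is a minimal sketch, assuming it is fed the real/user/sys lines
from each time make run; the function and column names are guesses, not Peter
Williams's actual script:

import re

TIME_RE = re.compile(r"^(real|user|sys)\s+(?:(\d+)m)?([\d.]+)s", re.MULTILINE)

def parse_time_output(text):
    # real/user/sys from `time` output, converted to seconds.
    result = {}
    for key, minutes, seconds in TIME_RE.findall(text):
        result[key] = int(minutes or 0) * 60 + float(seconds)
    return result

def summarize(runs):
    # One row of build statistics per scheduler.
    print("%-15s %10s %10s %10s" % ("scheduler", "real", "user", "sys"))
    for name, text in sorted(runs.items()):
        t = parse_time_output(text)
        print("%-15s %10.1f %10.1f %10.1f" % (name, t["real"], t["user"], t["sys"]))

summarize({"ingosched": "real\t51m11.775s\nuser\t77m3.995s\nsys\t6m21.558s\n"})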
On Mon, 15 Aug 2005 22:29, Michal Piotrowski wrote:
> Hi,
> here are my benchmarks (part1):
Want to try the staircase cpu scheduler in "compute" mode for the compute
intensive workloads?
Thanks,
Con
Hi,
here are schedulers benchmark (part2):
II 2.6.12 kernel compilation. (make allyesconfig, time make all -j64)
1
scheduler:
ingosched
schedstat:
version 12
timestamp 4294703525
cpu0 0 0 56 56 169 18916 4327 7006 5153 8279 4999 14589
domain0 3 14286 13960 223 8331 213 41 0 13960 515 361 8 4456 4
Hi,
On 8/16/05, Peter Williams <[EMAIL PROTECTED]> wrote:
> Peter Williams wrote:
> > Michal Piotrowski wrote:
> >
> >> Hi,
> >> here are my benchmarks (part1):
> >
> >
> > Would you mind doing a few extra runs when you do Zaphod with different
> > configuration parameters? Namely:
> >
> > 1. def
Hi,
here are my benchmarks (part1):
I 2.6.12 kernel compilation. (make allyesconfig, time make all -j8)
1 cpusched=ingosched:
ng02:/usr/src/linux-2.6.12# time make all -j8
[..]
real    51m11.775s
user    77m3.995s
sys     6m21.558s
ng02:/usr/src/linux-2.6.12# cat /proc/scheduler
ingosched
ng02:/u
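With plugsched the scheduler in use is reported by /proc/scheduler, as the cat
above shows (the cpusched= lines in these mails record which one each run
used). A tiny helper for tagging each result with the active scheduler; the
function name is illustrative:

def active_scheduler(path="/proc/scheduler"):
    # Returns e.g. 'ingosched' on a plugsched kernel.
    with open(path) as f:
        return f.read().strip()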
This version contains minor bug fixes and improvements to the zaphod
scheduler, including changes to the default configuration parameters that
take into account the results of tests using Con Kolivas's new (and very
useful) interbench benchmark tool.
A patch from Plugsched-5.2.3 to PlugSched-5.